DSP in Instruments and Measurement Technology

Overview Signal processing covers the filtering, transformation, analysis, and processing of signals and the extraction of their characteristic parameters. In electronic instruments and measurement, the spectrum analyzer is the tool most often used to examine a signal's frequency (spectral) characteristics. Before modern computers and related technologies matured, this could only be done with conventional spectrum analyzers built from hard-wired circuitry. As is well known, such traditional analyzers place high demands on design, manufacturing, and components; over a wide frequency range and at high performance levels they are difficult to build and very expensive. Since computers and the digital signal processing (DSP) technology that grew up with them have matured, however, DSP has gradually taken over signal spectrum analysis.

Regarding discrete Fourier transform and digital filtering as signal processing, the transform most directly relevant to spectrum analysis is the Fourier transform (FT). The discrete Fourier transform (DFT) and digital filtering are, as is well known, the core subjects of DSP. Many practical and efficient fast DFT algorithms, i.e., FFT algorithms and software, are available today; their performance depends mainly on the sampling rate (which in practice includes analog-to-digital conversion) and the operating speed of the CPU. Converting a signal (usually an analog signal reflecting continuously varying quantities in the physical world) into digital data that a CPU can process is called "digitization"; it comprises two steps, sampling and quantization, where quantization is what is commonly called analog-to-digital conversion. The required sampling rate depends on the signal being processed: to ensure that the digitized data lose none of the original signal's characteristics, the sampling frequency must be greater than, or at least equal to, twice the signal's cutoff frequency. This is the famous Nyquist sampling theorem (Nyquist rate), and it is easy to prove. As for CPU speed, current microcomputers run at hundreds or even thousands of megahertz. To accelerate operations such as the FFT, Texas Instruments (TI) has long been dedicated to developing and producing dedicated DSP chips; its TMS320 series is well known in the technical community. According to a recent report, the new TMS320C64x runs at up to 600 MHz, and its eight functional units can perform four 16-bit MAC operations or eight 8-bit MAC operations per cycle. A single C64x DSP chip can simultaneously perform one channel of MPEG4 video encoding, one channel of MPEG4 video decoding, and one channel of MPEG2 video decoding, and still has 50% headroom left for multi-channel speech and data encoding. Naturally, other vendors have also developed and produced many special-purpose and general-purpose DSP chips.
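As a minimal illustration of these ideas (not tied to any particular instrument or chip), the following Python sketch digitizes a two-tone test signal at a rate above the Nyquist rate and uses an FFT to recover its spectrum; the tone frequencies and sampling rate are arbitrary values chosen for the example:

```python
import numpy as np

# Test signal: 50 Hz and 120 Hz tones. The highest frequency present is 120 Hz,
# so the Nyquist theorem requires fs > 2 * 120 = 240 Hz; we sample well above that.
fs = 1000.0               # sampling frequency in Hz (illustrative choice)
t = np.arange(0, 1.0, 1.0 / fs)
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# FFT-based spectrum analysis: magnitude spectrum of the sampled signal.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
mag = 2.0 * np.abs(X) / len(x)   # scale so a unit-amplitude sine reads as 1.0

# Report the two strongest spectral lines.
for k in np.argsort(mag)[-2:][::-1]:
    print(f"{freqs[k]:6.1f} Hz  amplitude ~ {mag[k]:.2f}")
```

Run as written, this recovers the two tones at 50 Hz and 120 Hz with amplitudes of about 1.0 and 0.5.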

In the last century, electrical filters developed from passive to active and from analog to digital. High-precision passive filters are very difficult to design and manufacture. Active filters greatly improved filter performance and eased some manufacturing difficulties, but in terms of performance, of integration with other signal processing techniques, and of ease of implementation, the digital filter still comes first. This is, of course, also tied to the development of EDA technology.

A digital filter is a discrete system whose characteristics, or transfer function, are described by difference equations and analyzed by means of the Z-transform. There are two types of digital filters: IIR (infinite impulse response) filters and FIR (finite impulse response) filters. The former is also called a "recursive" filter, the latter a "non-recursive" filter. One determines the difference equation of the system from the signal processing requirements and then designs the filter from that equation. There are likewise two ways to implement a filter. One is purely in software, as an algorithm or software package; the other is in hardware, as a dedicated hard-wired circuit or even a special-purpose or general-purpose chip. Digital filter design methods and mature hardware and software products are readily available, so no more detail is given here.
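To make the difference-equation view concrete, here is a small Python sketch (coefficient values are illustrative, not taken from the article) that applies a first-order recursive (IIR) filter and a non-recursive (FIR) moving-average filter directly from their difference equations:

```python
import numpy as np

def iir_first_order(x, a=0.9):
    """Recursive (IIR) filter: y[n] = a*y[n-1] + (1-a)*x[n].
    A simple one-pole low-pass; 'a' is an illustrative coefficient."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = a * (y[n - 1] if n > 0 else 0.0) + (1.0 - a) * x[n]
    return y

def fir_moving_average(x, taps=5):
    """Non-recursive (FIR) filter: y[n] = (1/M) * sum_{k=0}^{M-1} x[n-k]."""
    h = np.ones(taps) / taps           # impulse response (finite by construction)
    return np.convolve(x, h)[:len(x)]  # direct-form convolution

# Noisy step input: both filters smooth it, one recursively, one by convolution.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * rng.standard_normal(100)
print(iir_first_order(x)[-1], fir_moving_average(x)[-1])
```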

Other orthogonal transforms of signals are also known. The Fourier transform, or Fourier analysis, carries the following meaning:

That is, a signal is synthesized from the sine waves represented by the components of its FT spectrum. In this sense, the set of orthogonal sine functions representing those sinusoids is called the orthogonal basis of the Fourier transform (it can also be written in complex-exponential form). Studies have shown that the sine function is not the only possible basis for an orthogonal transform: any complete orthogonal function system can serve as a basis for analyzing a signal by orthogonal transformation (the sine functions are, naturally, such a complete orthogonal system). We therefore refer to these transforms collectively as "orthogonal transforms." The non-sinusoidal orthogonal functions of greatest practical interest are the Rademacher, Haar, and Walsh functions. For some time the most commonly used has been the Walsh function system, which Walsh obtained in 1923 by completing the Rademacher system. The Walsh functions are a set of rectangular waves taking only the values 1 and -1, which is very convenient for computer operations. Walsh functions can be ordered, or numbered, in three ways: sequency (Walsh), Paley, and Hadamard. Each ordering has its own merits; the Hadamard ordering is the most convenient for fast computation. The transform using Walsh functions in Hadamard ordering is called the Walsh-Hadamard transform (WHT), or simply the Hadamard transform. Discrete orthogonal transforms are usually computed as matrix multiplications, and the Walsh-Hadamard matrix contains only the two elements 1 and -1; moreover, the Hadamard matrix is highly regular and can be generated by a simple recursive rule, so a fast WHT algorithm is easy to implement. Such fast algorithms and their software are now mature, readily available products. Of course, when using this transform one must remember that the spectrum it produces is expressed in sequency rather than frequency.
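As a sketch of how the Hadamard matrix's recursive structure yields a fast algorithm, the following Python function computes an (unnormalized) Walsh-Hadamard transform with the usual butterfly recursion; the input length must be a power of two:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (Hadamard ordering, unnormalized).
    Exploits H_{2n} = [[H_n, H_n], [H_n, -H_n]]: each stage combines pairs
    of values with only additions and subtractions, giving O(N log N) work."""
    a = np.array(x, dtype=float)
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Example: the WHT is its own inverse up to a factor of 1/N.
x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
X = fwht(x)
print(X)
print(fwht(X) / len(x))   # recovers the original sequence
```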

Another common orthogonal transform is the discrete cosine transform (DCT). As noted, the basis functions of the Fourier transform are sinusoids: each component of the spectrum is a sine wave (or complex vector) whose index determines its frequency, the components' amplitudes form the amplitude spectrum, and their phases form the phase spectrum of the signal. That is, the Fourier spectrum of a signal has two parts, an amplitude characteristic and a phase characteristic, or equivalently a cosine component as the real part and a sine component as the imaginary part of a complex vector. In other words, the amplitude spectrum alone does not completely represent the signal; the phase characteristic must be supplied as well. This naturally complicates both the expression and the computation, and it increases the amount of data needed to represent the signal. Studies have shown that if the origin of the signal's coordinates is shifted appropriately, only one of the two components, sine or cosine, remains in the transform result, giving a sine transform or a cosine transform. The discrete cosine transform used in signal processing is obtained by shifting the origin left by half a sampling interval. The DCT has very good information-concentration (energy compaction) properties and efficient fast algorithms, which is why the MPEG standards adopted it as the standard transform for image compression coding.
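The following is a minimal Python sketch of the DCT-II (the form commonly used in image coding), computed directly from its definition so that the half-sample shift in the cosine argument is visible; the test signal is arbitrary:

```python
import numpy as np

def dct2(x):
    """DCT-II of a length-N sequence, straight from the definition:
    X[k] = sum_n x[n] * cos(pi * k * (n + 1/2) / N).
    The (n + 1/2) term is the half-sample shift of the coordinate origin."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (n + 0.5) / N))
                     for k in range(N)])

# A smooth ramp signal: most of its energy falls into the lowest DCT
# coefficients, illustrating the transform's energy compaction.
x = np.linspace(1.0, 0.0, 8)
print(np.round(dct2(x), 3))
```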

At the end of this section, the discrete Karhunen-Loève transform (KLT) is incidentally introduced. The KLT is often called the optimal transform, because filtering and information compression coding based on it have minimal distortion. However, since the basis functions of the KLT are not fixed (they depend on the statistics of the signal) and no fast algorithm has been found so far, it is used only on special occasions.
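For concreteness, here is a brief Python sketch (illustrative data only) of why the KLT has no fixed basis: its basis vectors are the eigenvectors of the signal's covariance matrix, so they must be recomputed whenever the signal statistics change:

```python
import numpy as np

# Illustrative ensemble of correlated signal vectors (random-walk rows).
rng = np.random.default_rng(1)
data = np.cumsum(rng.standard_normal((500, 8)), axis=1)

# KLT basis = eigenvectors of the covariance matrix of the ensemble.
cov = np.cov(data, rowvar=False)
eigvals, basis = np.linalg.eigh(cov)          # columns of 'basis' form the KLT basis
coeffs = (data - data.mean(axis=0)) @ basis   # KLT coefficients of each vector

# The coefficient variances equal the eigenvalues: energy is optimally compacted.
print(np.round(eigvals, 2))
print(np.round(coeffs.var(axis=0, ddof=1), 2))
```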

With respect to wavelet analysis, we note that the objects of all the transforms and analyses described above are stationary, or even periodic, signals. Fourier analysis has its starting point in the Fourier series, whose mathematical definition says that any non-sinusoidal periodic function (signal) can be decomposed into a sum of sine-wave components whose frequencies are integer multiples of its fundamental frequency (plus a constant, or DC, component). The Fourier transform is formed by extending the period of integration to infinity. Indeed, the very concept of frequency is what Fourier introduced in this work, and this way of transforming a thing from one "domain" into another and then analyzing or representing it from a new perspective or scale, which originates precisely with Fourier, has epoch-making significance in the history of science. However, it has also long been recognized that tools such as the Fourier transform can only deal with deterministic stationary signals; abruptly changing, non-stationary signals cannot be analyzed satisfactorily. Moreover, Fourier analysis yields the overall spectrum of a signal but cannot capture its local characteristics. Thus windowed Fourier transforms came into wide use in the 1980s. The windowed Fourier transform is a localized time-frequency analysis method: by means of windowing, it combines the traditional Fourier transform's mapping from the time domain (or spatial domain) to the frequency domain with analysis of a local time segment (or spatial interval) in the frequency domain. The windowed Fourier transform partially solves the problem of analyzing short-duration signals, but it has intrinsic deficiencies: for short high-frequency signals, for example, the window width and sampling interval can be adapted as the frequency increases, but too narrow a window reduces the frequency resolution and is also unsuitable for low-frequency components. This motivated the search for a new transform (analysis) method. Wavelet analysis emerged in this context and was quickly applied and developed.
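Here is a minimal Python sketch of the windowed (short-time) Fourier transform described above, built from numpy primitives; the window length and hop size are illustrative choices, and the trade-off the text mentions appears directly in the window length:

```python
import numpy as np

def stft(x, win_len=64, hop=32):
    """Windowed Fourier transform: slide a Hann window along the signal and
    take the FFT of each segment. Short windows give good time localization
    but coarse frequency resolution, and vice versa."""
    w = np.hanning(win_len)
    frames = [x[i:i + win_len] * w
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.array([np.abs(np.fft.rfft(f)) for f in frames])

# A signal whose frequency jumps halfway through: the STFT localizes the jump,
# which a single whole-signal Fourier spectrum cannot do.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))
S = stft(x)
print(S.shape)                                       # (frames, frequency bins)
print(S.argmax(axis=1)[:5], S.argmax(axis=1)[-5:])   # dominant bin shifts mid-signal
```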

The development of wavelet analysis has been very rapid. Although its roots can be traced back to Hilbert's discussion in 1900 and to the orthonormal basis proposed by Haar in 1910, the main work really began in 1984, when Morlet in France, analyzing the local properties of seismic waves and finding the Fourier transform inadequate for the task, introduced the wavelet concept. Later, Grossmann studied Morlet's signals by means of a system of dilations and translations of a fixed function, which opened the way for the formation of wavelet analysis.

Among the many scientists who have contributed greatly to wavelet analysis, Mallat, whose algorithm was published in 1987, has undoubtedly played an important role in promoting its development. Naturally, many scientific and technical workers in China have also made great contributions to the development of wavelet analysis.

Like the other transforms described above, the wavelet transform has two forms, continuous and discrete. However, since wavelet functions are usually short pulse waves, discretization is relatively easy, and the distinction is sometimes ignored.

In addition to being suited to abruptly changing (or time-varying) non-stationary signals, the wavelet transform has another very useful feature: multiresolution. The so-called multiresolution property of wavelet analysis means that, by using different scale functions, one can easily obtain results at different resolutions. This has found practical application in image signal processing.
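To illustrate the multiresolution idea, here is a small Python sketch of a Haar wavelet decomposition (the simplest wavelet, chosen for clarity, not the scheme any particular product uses): each level splits the signal into a coarser approximation and the detail lost at that scale:

```python
import numpy as np

def haar_dwt(x, levels=2):
    """Multilevel Haar wavelet decomposition. At each level the signal is
    split into a half-length approximation (local averages, the coarser
    'resolution') and detail coefficients (local differences)."""
    approx = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))   # detail at this scale
        approx = (even + odd) / np.sqrt(2)          # coarser approximation
    return approx, details

# A signal with a sharp jump: the jump shows up as a large detail coefficient
# at the fine scale, localizing the discontinuity in time.
x = np.concatenate([np.zeros(7), np.ones(9)])
approx, details = haar_dwt(x, levels=3)
print("approximation:", np.round(approx, 3))
for lev, d in enumerate(details, 1):
    print(f"details level {lev}:", np.round(d, 3))
```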

Wavelet analysis has produced many mature results, including a number of common algorithms, software packages, and hardware devices. For example, the ADV611 chip introduced by Analog Devices for video image encoding/decoding and compression contains wavelet filters and can achieve a compression ratio of 7500:1 with good image quality. There are also many results in instrument and measurement applications; for example, wavelet transforms have been used in the analysis of X-ray spectrum signals, greatly improving the quality of the processed spectral lines. It can be expected that this technology will be further developed and more widely applied.

Conclusion This article has briefly introduced the common signal processing, and especially digital signal processing, technologies of today. These, however, are basically suitable only for processing deterministic signals. There is a large class of signal processing technology called random signal processing, or statistical signal processing. It is most widely used in combating noise and signal pollution, and is also known as signal estimation or signal restoration. Its two most representative techniques are Wiener filtering and Kalman filtering. The former is also called the minimum mean-square error filter and is very effective in recovering a signal from noise. In fact, both were proposed very early, but only with the development of modern computers and digital technology have they found genuine application. We therefore mention them briefly at the close of this article.
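As a brief illustration of the statistical filtering mentioned above (a textbook scalar example, not drawn from the article), the following Python sketch uses a one-dimensional Kalman filter to estimate a constant level from noisy measurements; all noise parameters are invented for the demo:

```python
import numpy as np

# Estimate a constant true level from noisy measurements with a scalar
# Kalman filter (random-walk state model). All parameters are illustrative.
rng = np.random.default_rng(42)
true_level = 5.0
z = true_level + 0.8 * rng.standard_normal(200)   # noisy measurements

q, r = 1e-5, 0.8**2     # process and measurement noise variances (assumed known)
x_hat, p = 0.0, 1.0     # initial state estimate and its variance

for zk in z:
    p = p + q                          # predict: variance grows by process noise
    k = p / (p + r)                    # Kalman gain: trust in the new measurement
    x_hat = x_hat + k * (zk - x_hat)   # update estimate with the innovation
    p = (1.0 - k) * p                  # shrink the estimate's variance

print(f"estimate after {len(z)} measurements: {x_hat:.3f} (true {true_level})")
```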
