
# Understanding of sampled systems

Nyquist's sampling theorem, or more precisely the Nyquist-Shannon theorem, is a fundamental theoretical principle that governs the design of mixed-signal electronic systems.

Modern technology as we know it would not exist without analog-to-digital and digital-to-analog conversion. In fact, these operations have become so common that it seems self-evident that an analog signal can be converted to digital and back to analog without any significant loss of information.

But how do we know this is true? How can sampling be a non-destructive operation when it seems to discard so much of the signal behavior that we observe between individual samples?

How can we take a smooth, continuous analog waveform, digitize it into a handful of discrete values, and then dare to say that the original signal can be restored without loss of information?

## Nyquist-Shannon theorem

This claim is possible because it is consistent with one of the most important principles of modern electrical engineering:

If a system uniformly samples an analog signal at a frequency that exceeds the highest frequency of the signal by at least a factor of two, the original analog signal can be perfectly recovered from the discrete values ​​produced by sampling.

There is much more to say about this theorem, but first let's try to figure out what to call it.

Shannon? Nyquist? Kotelnikov? Whittaker?

I'm not going to decide who deserves the most credit for stating, proving, or explaining the Whittaker-Nyquist-Kotelnikov-Shannon theory of sampling and interpolation. All four of these individuals were prominently involved in some way.

However, it appears that Harry Nyquist's role has been extended beyond its original meaning. For example, in Digital Signal Processing: Fundamentals and Applications by Tan and Jiang, the above principle is identified as the “Shannon sampling theorem”, and in Microelectronic Circuits by Sedra and Smith, I find the following sentence: “The fact that we can perform our processing on a limited number of samples … while ignoring the details of the analog signal between the samples is based on … Shannon's sampling theorem.”

Therefore, we should probably avoid saying “Nyquist's sampling theorem” or “Nyquist sampling theory”. If we have to associate a name with this concept, I suggest including only Shannon, or both Nyquist and Shannon. Indeed, maybe it's time to move on to something more anonymous, such as “fundamental sampling theorem”.

This is a bit confusing, but remember that the sampling theorem stated above is distinct from the Nyquist rate, which will be explained later in the article. I don't think anyone is trying to separate Nyquist from his rate, so we end up with a good compromise: Shannon gets the theorem and Nyquist gets the rate.

## Time domain

If we apply the sampling theorem to a sinusoid of frequency f_SIGNAL, we have to sample the waveform at f_SAMPLE ≥ 2f_SIGNAL if we want to allow perfect reconstruction.

Another way of saying this is that we need at least two samples per sinusoidal cycle. Let's first try to understand this requirement by thinking in the time domain.

In the following diagram, the sine wave is sampled at a frequency that is much higher than the signal frequency.

Each circle represents a sampling instant, that is, a precise moment at which the analog voltage is measured and converted into a number. To better visualize what this sampling procedure has given us, we can plot the sample values and then connect them with straight lines. The straight-line approximation shown in the following diagram looks exactly like the original signal: the sampling frequency is very high compared to the signal frequency and, consequently, the line segments are not significantly different from the corresponding curved segments of the sinusoid.

When we reduce the sampling rate, the appearance of the straight-line approximation begins to differ from that of the original signal.

20 samples per cycle (f_SAMPLE = 20f_SIGNAL)

10 samples per cycle (f_SAMPLE = 10f_SIGNAL)

5 samples per cycle (f_SAMPLE = 5f_SIGNAL)

At f_SAMPLE = 5f_SIGNAL, the discrete-time waveform is no longer a pleasing representation of the continuous-time waveform. However, note that we can still clearly identify the frequency of the discrete-time waveform. The cyclical nature of the signal has not been lost.
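The degradation of the straight-line approximation can also be checked numerically. The following sketch (the 1 Hz test frequency and the dense "continuous" grid are arbitrary illustrative choices, not values from the article) samples a sine wave at 20, 10, and 5 samples per cycle, connects the samples with straight lines, and measures how far the result deviates from the true sinusoid:

```python
import numpy as np

def sample_sine(f_signal, samples_per_cycle, n_cycles=3):
    """Uniformly sample a unit-amplitude sine wave, endpoint included."""
    n_samples = samples_per_cycle * n_cycles
    t = np.linspace(0.0, n_cycles / f_signal, n_samples + 1)
    return t, np.sin(2 * np.pi * f_signal * t)

f_signal = 1.0                        # arbitrary 1 Hz test tone
t_fine = np.linspace(0.0, 3.0, 3000)  # dense grid standing in for continuous time
x_true = np.sin(2 * np.pi * f_signal * t_fine)

errors = {}
for spc in (20, 10, 5):
    t_s, x_s = sample_sine(f_signal, spc)
    x_approx = np.interp(t_fine, t_s, x_s)  # connect samples with straight lines
    errors[spc] = np.max(np.abs(x_approx - x_true))
    print(f"{spc:2d} samples/cycle -> max deviation {errors[spc]:.3f}")
```

The maximum deviation grows as the number of samples per cycle drops, matching the visual degradation in the plots above.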

## The threshold: two samples per cycle

The data points produced by sampling will continue to retain the cyclical nature of the analog signal as we decrease the number of samples per cycle below five. Eventually, however, we reach a point where the frequency information is corrupted. Consider the following figure:

2 samples per cycle (f_SAMPLE = 2f_SIGNAL)

With f_SAMPLE = 2f_SIGNAL, the sinusoidal shape has completely disappeared. However, the triangle wave traced by the sampled data points has not altered the fundamental cyclical nature of the sinusoid: the frequency of the triangle wave is identical to the frequency of the original signal.

However, as soon as we reduce the sampling rate to the point where there are fewer than two samples per cycle, this claim can no longer be made. Two samples per cycle of the highest frequency in the original waveform is therefore a crucial threshold in mixed-signal systems, and the corresponding sampling frequency is called the Nyquist rate:

If we sample an analog signal at a frequency lower than the Nyquist rate, we will not be able to perfectly reconstruct the original signal.

The following graphs show the loss of cyclic equivalence that occurs when the sampling rate drops below the Nyquist rate.

2 samples per cycle (f_SAMPLE = 2f_SIGNAL)

1.9 samples per cycle (f_SAMPLE = 1.9f_SIGNAL)

At f_SAMPLE = 1.9f_SIGNAL, the discrete-time waveform has acquired a substantially new cyclical behavior: a complete repetition of the sampled pattern requires more than one sinusoidal cycle.

However, the effect of an insufficient sampling frequency is somewhat difficult to interpret when we have 1.9 samples per cycle. The next plot makes the situation clearer.

1.1 samples per cycle (f_SAMPLE = 1.1f_SIGNAL)

If you knew nothing about the original sinusoid and performed an analysis using the discrete-time waveform that results from sampling at 1.1f_SIGNAL, you would form seriously wrong ideas about the frequency of the original signal. Furthermore, if all you have is the discrete data, it is impossible to know that the frequency characteristics have been corrupted. Sampling has created a new frequency that was not present in the original signal, and nothing in the sampled data tells you that this frequency wasn't there all along.

The bottom line is this: when we sample at a frequency below the Nyquist rate, information is permanently lost and the original signal cannot be perfectly reconstructed.
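A quick numerical check of this claim (a sketch with arbitrarily chosen frequencies, not values from the article): a 1000 Hz cosine sampled at only 1100 Hz, which is 1.1 samples per cycle, produces exactly the same sample values as a 100 Hz cosine, so after sampling the two signals are indistinguishable:

```python
import numpy as np

f_signal = 1000.0              # Hz
f_sample = 1100.0              # Hz, below the 2000 Hz Nyquist rate
f_alias = f_sample - f_signal  # 100 Hz apparent frequency

n = np.arange(100)             # sample indices
t = n / f_sample               # sampling instants

x_true = np.cos(2 * np.pi * f_signal * t)   # what we actually sampled
x_alias = np.cos(2 * np.pi * f_alias * t)   # what the data "looks like"

print(np.allclose(x_true, x_alias))         # prints True
```

The discrete data contains no evidence that the original frequency was 1000 Hz rather than 100 Hz; that information is gone.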

## Frequency domain

We have seen that the frequency characteristics of a sinusoid are irreparably lost when the waveform is sampled at a rate that does not provide at least two samples per cycle. In other words, we cannot perfectly reconstruct the sinusoid if we sample below the Nyquist rate.

Most signals, however, are not single-frequency sinusoids. For example, a modulated RF signal has frequency content associated with both the carrier and the baseband waveform, and an audio signal representing human speech covers a range of frequencies.

We use the Fourier transform to display the frequency content of a signal. Time-domain plots are a good way to convey the effect of an insufficient sampling rate in the context of a single-frequency signal, but for other types of signals, I'd rather use the frequency domain.

## Frequency-domain effect of sampling

Let's say we want to digitize an audio signal that includes a mixture of many different frequencies within a specified range. The upper limit of the range is defined as f_MAX, and suppose that the range extends down to DC, even though we can't hear such low frequencies. The Fourier transform of such a signal might look like this:

## Mathematical sampling in the time domain

In the mathematical realm, ideal sampling is equivalent to multiplying the original time-domain waveform by a train of delta functions separated by an interval equal to 1/f_SAMPLE, which we will call T_SAMPLE. (For the rest of the article, we will use f_S for f_SAMPLE and T_S for T_SAMPLE.) This multiplication forces the sampled signal to zero between the delta functions and maintains the value of the original signal at every point in time that coincides with a delta function.

Time-domain sampling expressed mathematically: we multiply the analog signal by a sequence of delta functions that occur at the sampling rate.
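This multiplication model can be sketched numerically by approximating the impulse train on a dense time grid (the 8 Hz sampling rate and 1 Hz test signal below are arbitrary illustrative choices):

```python
import numpy as np

f_s = 8.0                 # illustrative sampling rate, Hz
oversample = 100          # fine-grid points per sampling period T_s
t = np.arange(800) / (f_s * oversample)  # one second of "continuous" time

x = np.sin(2 * np.pi * 1.0 * t)          # a 1 Hz test signal

# Impulse train: 1 at every sampling instant, 0 everywhere else
train = np.zeros_like(t)
train[::oversample] = 1.0

# Ideal sampling as multiplication: zero between impulses,
# the original signal value wherever an impulse occurs
x_sampled = x * train
```

The product is zero everywhere except at the sampling instants, where it carries the original signal values, exactly as the delta-train description says.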

## Mathematical sampling in the frequency domain

How does this time-domain sampling procedure affect the frequency representation of a signal? Let's take a look.

The first thing to remember is that multiplication in the time domain becomes convolution in the frequency domain. Therefore, we can find the Fourier transform of the sampled signal by convolving the Fourier transform of the original signal with the Fourier transform of the delta-function train.

It turns out that the Fourier transform of a delta-function train is another delta-function train. The difference is that the delta functions are now separated by a horizontal distance corresponding to the sampling frequency rather than the sampling period.

The spectrum of a sequence of delta functions separated by the sampling period is a sequence of delta functions separated by the sampling frequency.

When we convolve the spectrum of the delta functions with the spectrum of the original signal, we create copies of the original spectrum that are shifted according to the positions of the delta functions. Therefore, the spectrum of a sampled signal consists of multiple identical “subspectra” centered on ±f_S, ±2f_S, ±3f_S and so on.

An adequate sampling frequency results in subspectra that are shifted far enough apart to maintain complete separation.
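These shifted copies can be observed numerically: represent a "continuous" cosine on a dense time grid, multiply it by an impulse train, and take the DFT of the product. In this sketch (all frequencies are arbitrary illustrative choices), a 5 Hz cosine sampled at f_S = 40 Hz shows equal-magnitude spectral peaks at 5 Hz and at 40k ± 5 Hz, i.e. copies of the original spectrum centered on every multiple of f_S:

```python
import numpy as np

t = np.arange(800) / 800.0       # 1 s of "continuous" time at 800 Hz
x = np.cos(2 * np.pi * 5.0 * t)  # 5 Hz signal
f_s = 40.0                       # sampling rate

train = np.zeros_like(t)
train[::20] = 1.0                # impulse every 1/40 s (800/40 = 20 points)
x_sampled = x * train

# With a 1 s record, DFT bin k corresponds to k Hz
spectrum = np.abs(np.fft.rfft(x_sampled))
peak_bins = np.where(spectrum > 0.5 * spectrum.max())[0]
print(peak_bins[:5])             # expect peaks at 5, 35, 45, 75, 85 Hz
```

Every copy has the same magnitude as the original component at 5 Hz, which is exactly the subspectra picture described above.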

We now have the information we need to confirm the Nyquist-Shannon theorem through frequency domain analysis. This theorem, as I expressed it before, is the following:

If a system uniformly samples an analog signal at a frequency that exceeds the highest frequency of the signal by at least a factor of two, the original analog signal can be perfectly recovered from the discrete values ​​produced by sampling.

Because of the negative-frequency portion of the Fourier transform, the full mathematical bandwidth of the original signal is 2f_MAX. Therefore, to ensure that the subspectra do not overlap, we have to shift them by at least 2f_MAX. In other words, the sampling rate must exceed the maximum signal frequency by at least a factor of two.

If this condition is met, the original signal can be perfectly reconstructed. Why? Because the original spectrum has not been changed, and we can eliminate the other subspectra with a low-pass filter. If the condition is not met, the subspectra overlap, the original spectrum is changed, and no low-pass filter can restore the original signal.
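The perfect-reconstruction claim can be checked numerically. The ideal low-pass filtering mentioned above corresponds, in the time domain, to Whittaker-Shannon (sinc) interpolation of the samples. The sketch below uses an arbitrary 3 Hz sinusoid sampled at 10 Hz, comfortably above its 6 Hz Nyquist rate; with a finite number of samples the reconstruction is only near-perfect, so we evaluate it in the interior of the record where truncation effects are small:

```python
import numpy as np

def sinc_reconstruct(x_samples, f_s, t):
    """Whittaker-Shannon interpolation: each sample contributes a sinc
    pulse centered on its sampling instant (ideal low-pass filtering)."""
    n = np.arange(len(x_samples))
    return np.sum(x_samples[:, None] * np.sinc(f_s * t[None, :] - n[:, None]),
                  axis=0)

f_signal, f_s = 3.0, 10.0    # f_s exceeds the 6 Hz Nyquist rate
n = np.arange(200)           # 20 s worth of samples
x_samples = np.sin(2 * np.pi * f_signal * n / f_s)

t_eval = np.linspace(9.0, 11.0, 101)  # interior points between samples
x_rec = sinc_reconstruct(x_samples, f_s, t_eval)
max_err = np.max(np.abs(x_rec - np.sin(2 * np.pi * f_signal * t_eval)))
print(f"max reconstruction error: {max_err:.2e}")
```

The reconstruction error at points between the samples is tiny, illustrating that the discrete values really do retain the continuous waveform when the sampling condition is satisfied.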

## Aliasing

The overlap of the subspectra is the reason why information is corrupted when we use a sampling frequency lower than the Nyquist rate. The overlapping sections of the subspectra combine by addition; if we try to separate the original spectrum using a low-pass filter, the frequency content in the overlapping bands will be different and, consequently, the time-domain signal will be different.

The official name for this phenomenon is aliasing.

The shaded triangular areas represent the aliasing that caused a spectral alteration.

One of the definitions of the word “alias” is “a false or assumed identity”. We use the term “aliasing” because this sampling phenomenon can cause a frequency component to move to a new position in the spectrum and thereby “masquerade” as a different frequency.

We saw this earlier in the article, where sampling at 1.1f_SIGNAL produced a discrete-time waveform that appeared to have a frequency much lower than that of the original analog waveform.

Greetings, Amilcare
