
This was triggered by the comments in this question.

I'm using this definition of the Shannon-Nyquist theorem (from Wikipedia):

If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.

I was under the impression that the Nyquist theorem is theoretically true in the following sense: if you decompose a signal into sinusoids, and then sample at 2x the highest frequency sinusoid, you can perfectly reproduce the original signal. This is because there's only one sinusoidal curve that fits all the samples at or below 1/2 the sampling rate, and if we're considering the highest frequency component in the original signal, it must be a sinusoid (or else it wouldn't be the highest frequency in the signal).

But others commenting on the question linked above said this only applies to continuous signals. One person elaborated on this as follows:

think about continuous signal reconstruction from discrete-time values: in theory, every value between two sample instants requires the sinc sidelobes of all countably infinitely many samples before and after. That's a bit problematic, especially because we don't know the future. Assuming periodic repetition is one of the common tricks to get around that, so that with only a limited amount of past and no future, we can reasonably sinc-interpolate

I don't really understand this - does it mean that what I said above about the Nyquist theorem is not completely true?
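
To make concrete what I mean by "reconstruct", here is a toy sketch (my own made-up example, not from the linked question): a band-limited signal is sampled above twice its highest frequency, and a value between sample instants is recovered by sinc interpolation over a finite window of samples.

```python
# Toy sketch (made-up example values): sample a band-limited signal at fs > 2B
# and recover a value between sample instants by (truncated) sinc interpolation.
import numpy as np

fs = 100.0                      # sampling rate, Hz
B = 40.0                        # highest frequency present, Hz (< fs/2)
n = np.arange(-500, 501)        # a finite window of sample indices
t_n = n / fs                    # sample instants

def x(t):
    # arbitrary band-limited test signal: two sinusoids at or below B
    return np.sin(2 * np.pi * 13.0 * t) + 0.5 * np.cos(2 * np.pi * B * t + 0.3)

samples = x(t_n)

def reconstruct(t):
    # Whittaker-Shannon (sinc) interpolation, truncated to the available samples
    return np.sum(samples * np.sinc((t - t_n) * fs))

t_mid = 0.123                   # a point between two sample instants
print(x(t_mid), reconstruct(t_mid))   # agree closely near the middle of the window
```

With infinitely many samples the interpolation would be exact; the quoted comment seems to be about the practical problem that the window is finite and we never have the future samples.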

John B
  • Converting to sinusoids via the Fourier transform is simply one convenient transformation from phase space to frequency space; you are putting too much stock, in my view, into a physical meaning for this in terms of sinusoids. The example you begin with presupposes a known signal with a known maximum frequency component. If this constraint is not there, aliasing would mean there are in fact an infinite number of reconstructions for any given sampling. – crasic Mar 04 '21 at 13:40
  • The Shannon/Nyquist limit (sampling at twice the signal frequency) applies if the signal is periodic *and* sinusoidal. Otherwise you could, for example, reconstruct a triangle wave from the sample points. The question about knowing the future is something abstract, like negative frequencies – Lorenzo Marcantonio Mar 04 '21 at 13:42
  • Consider a phase-space-compact signal: a square wave is not an infinite number of sine waves, it is a function defined by the phase-space equation x(0<t<1)=1. The frequency-space representation of this is infinitely asymptotic, but that does not mean that a square wave is physically many sine waves, as many tend to infer following their first intro to Fourier; it is a bona fide phase-space signal. Frequency-space analysis is useful for understanding filtering and sampling artifacts, but it is not a useful physical model for the signal in all cases. – crasic Mar 04 '21 at 13:45
  • @crasic yes the whole point is that it has a known maximum frequency component. I added the definition that I'm using to the question. – John B Mar 04 '21 at 13:46
  • @LorenzoMarcantonio no you couldn't reconstruct a triangle for the reason I explained in my question: if the signal you're reconstructing is not a sine wave, then it's not the highest frequency component in the original signal, and therefore you haven't used a high enough sampling rate. – John B Mar 04 '21 at 13:47
  • @crasic re your second comment - a square wave is not band limited though, right? I'm not saying that a signal is physically its Fourier transform - but the Fourier transform of a band-limited signal is a perfect representation of the signal, right? That's the assumption I was making anyway. – John B Mar 04 '21 at 13:54
  • My comment was a proof by contradiction: if *it were* possible, you could reconstruct a triangle. The corollary is that the sine is the only signal which has no harmonics – Lorenzo Marcantonio Mar 04 '21 at 14:02
  • @crasic my goal with this question was to find out if my understanding of the theorem is correct, or if the other one was correct (i.e. that it's only true for repeating signals). Of course I have to assume a set of conditions that make the theorem applicable, otherwise my question would be moot. – John B Mar 04 '21 at 14:07
  • John, @Neil_UK did a much better job of capturing what I was getting at in his answer: there is an assumption of band-limiting, and this is fine. But that assumption is a requirement of the theorem :) – crasic Mar 04 '21 at 14:11
  • @crasic ok, thanks! I must be misunderstanding you or not explaining myself properly, because his answer is exactly how I understood things. – John B Mar 04 '21 at 14:13
  • likely a little of both. There is an assumption of band-limiting, and this is fine. But that assumption is also a requirement of the theorem for certain signals, and that was my point, stated through contrived examples. I did not have the clarity of thought on the subject that was needed (hence it's a comment, not an answer!) – crasic Mar 04 '21 at 14:13
  • As a PS, one can easily detect an assumed square wave using an interrupt :) and one can easily resolve phase differences in nanoseconds on a basic MCU, so Nyquist sampling is not the be-all and end-all. My message is that signals, like friends, exist outside of their representations, and all assumptions should be checked. – crasic Mar 04 '21 at 14:25

2 Answers


if you decompose a signal into sinusoids, and then sample at 2x the highest frequency sinusoid, you can perfectly reproduce the original signal.

If the signal can be perfectly decomposed into sinusoids, then by definition, the signal is bandlimited to the frequency of the highest one, and Nyquist applies.

This applies whether the signal is continuous or sampled, as long as we keep our assumptions about the sampling consistent.

If we just have a bunch of uniformly taken samples, and no further information, then we don't have enough information to reconstruct a unique signal from which those samples were taken. We need to make an assumption about which alias was being used. Usually, we make the simplest assumption that the first (or should it be zeroth?) alias was sampled, and the signal has frequencies in the range DC to sample_rate/2.

Often, especially with high-IF software defined radios, a higher alias is sampled, and the signal has frequencies from fs/2 to fs, or even fs to 3fs/2.
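
To illustrate the ambiguity numerically, here is a toy sketch (made-up numbers, not part of the argument above): cosines at f, fs − f and fs + f produce exactly the same sample values, so which one you "reconstruct" depends entirely on which Nyquist zone you assume the signal occupied.

```python
# Toy sketch (made-up values): 3 Hz, 7 Hz (fs - 3) and 13 Hz (fs + 3) tones
# give identical samples at fs = 10 Hz; the samples alone cannot say which
# band the original signal came from.
import numpy as np

fs = 10.0
n = np.arange(50)
t = n / fs

x_zone1 = np.cos(2 * np.pi * 3.0 * t)          # first Nyquist zone: DC .. fs/2
x_zone2 = np.cos(2 * np.pi * (fs - 3.0) * t)   # second zone: fs/2 .. fs
x_zone3 = np.cos(2 * np.pi * (fs + 3.0) * t)   # third zone: fs .. 3fs/2

print(np.allclose(x_zone1, x_zone2), np.allclose(x_zone1, x_zone3))   # True True
```

An undersampling receiver exploits this deliberately: prior filtering guarantees which zone the signal lives in, so the folded samples are no longer ambiguous.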

If you have a perfect triangle wave, or a square wave, then you have frequencies going to infinity, and your 'if I can decompose ...' condition is false. Once bandlimited, a square or triangle wave looks rounded, and maybe gains overshoots, depending on the group delay of the filter used to remove the high frequency components.
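
As a rough numerical illustration of that last point (toy parameters, not from the answer itself): brick-wall band-limiting an ideal square wave by keeping only its first few odd harmonics rounds the edges and adds overshoot near the transitions.

```python
# Toy sketch (assumed parameters): keep only the odd harmonics of a unit
# square wave up to the 15th; the result is rounded and overshoots the
# flat tops (Gibbs-style ringing).
import numpy as np

f0 = 1.0                                   # square-wave fundamental, Hz
t = np.linspace(0, 1, 2000, endpoint=False)

def bandlimited_square(t, max_harmonic):
    # truncated Fourier series of a unit square wave (odd harmonics only)
    k = np.arange(1, max_harmonic + 1, 2)
    return (4 / np.pi) * np.sum(np.sin(2 * np.pi * f0 * np.outer(k, t)) / k[:, None], axis=0)

x = bandlimited_square(t, 15)
print(x.max())                             # about 1.18: overshoots the ideal flat top of 1
```

A hard brick-wall cutoff like this gives the classic Gibbs overshoot; gentler filters trade overshoot against extra rounding.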

Neil_UK

Shannon-Nyquist - only for repeating signals?

No. The Shannon-Nyquist theorem holds for signals which have no components higher than half the sampling frequency. This does not entail that the signal is "periodic" or "repeating".

For example, this modified "sinc" function:

$$\operatorname{msinc}(x) = \frac{\sin(\omega x)}{x}$$

is not periodic, but has no frequency components above \$\omega\$ (if I recall correctly).
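
One way to see this (a short derivation, not in the original answer): the function can be written as a superposition of complex exponentials whose angular frequencies all lie in \$[-\omega, \omega]\$,

$$\frac{\sin(\omega x)}{x} = \frac{1}{2}\int_{-\omega}^{\omega} e^{j\nu x}\,d\nu,$$

so its spectrum is a rectangle confined to \$\pm\omega\$, even though the function itself never repeats.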

Math Keeps Me Busy