
In the quest for a not-so-expensive PC scope/logic analyzer, I have found a nice little device. It looks very well made, and I know it will do the job.

However looking at the specifications, I encountered this:

Bandwidth vs Sample Rate

In order to accurately record a signal, the sample rate must be sufficiently higher in order to preserve the information in the signal, as detailed in the Nyquist–Shannon sampling theorem. Digital signals must be sampled at least four times faster than the highest frequency component in the signal. Analog signals need to be sampled ten times faster than the fastest frequency component in the signal.

Consequently, it has a sample rate of 500 MSPS but a bandwidth (filter) of 100 MHz, so a 1:5 ratio, for digital signals, and a sample rate of 50 MSPS with a bandwidth (filter) of 5 MHz, so a 1:10 ratio, for analog signals.

As far as I understand, Nyquist–Shannon only talks about sampling at twice the maximum frequency (in theory). It is of course good not to push the limits, and there are no perfect filters, but even a simple UART samples a digital signal at the same speed as the baud rate!

So is this a usual rule of thumb for sampling, or is this something someone from sales may have written? It leaves me somewhat clueless; I have never heard of this.

Dmitry Grigoryev
LuisF
    Cheap scopes cut all kinds of corners in terms of their ability to interpolate the signal samples properly for display, which is why they need such high oversampling ratios in order to get decent visual fidelity. – Dave Tweed Aug 22 '16 at 22:37
  • Price ranges from 100 to 600$USD so not completely in the cheap spectrum but every saved penny is an earned penny true – LuisF Aug 22 '16 at 22:46
  • Anything under $5000 is cheap enough you're going to have to cut corners when designing a 'scope. – The Photon Aug 22 '16 at 22:55
  • If you sample a repetitive waveform at 2f, you know nothing about its shape. Was it a square, a sine, a sawtooth? Who knows ... your samples can't tell you. – brhans Aug 22 '16 at 23:53
  • There's a related rule of thumb regarding active filters in that the bandwidth of the op amps involved must be at least 10X the highest frequency in the passband. – Mike DeSimone Aug 23 '16 at 04:47
  • There's another weird effect with older or cheaper digital scopes that use linear interpolation to take a series of samples and draw a continuous waveform line. In this case, perfectly good sine waves start degrading into trapezoids or triangles as the signal's frequency approaches the Nyquist frequency. Newer/fancier scopes interpolate with the sinc (sin(πx)/πx) function these days, but with those you have to be sure not to fool yourself into thinking the signal is cleaner than it is, so remember to use the "show samples" feature. – Mike DeSimone Aug 23 '16 at 04:52
  • @brhans note that your point is absolutely moot. A square wave of frequency \$f\$ has by no means a bandwidth of \$f\$, but spectral components all over the place. – Marcus Müller Aug 23 '16 at 08:28
  • @MarcusMüller - yes of course it does - but if you only sampled it at 2f you won't know anything about those other spectral components. Only if you know the 'shape' of your sampled waveform in advance can you possibly hope to reconstruct anything interesting from a 2f set of samples, and even then you can't have any more than a crude guess about the phase of the original signal. – brhans Aug 23 '16 at 11:10
  • All of the hardware UARTs I'm aware of sample the input at a multiple of the baud rate - anything from 4x to 16x or more - and then use filters and/or majority detect systems to decide on the value. Only software 'bit-banged' UARTs are likely to be sampling at the baud rate. – brhans Aug 23 '16 at 11:14
  • You're wrong about the UART. The classic 16550 UART operating at the highest baud rate takes 16 samples per bit. You cannot get reliable sync with anything less than 3 samples per bit (clock drift will accumulate such that you'll periodically lose one bit). The Nyquist sampling theorem merely says that you cannot reconstruct a signal with less than 2x sampling frequency; it does not say that you can get a good signal at 2x frequency. – slebetman Aug 23 '16 at 12:55
  • Software UARTs (seemingly) get away with a single sample per bit if the starting edge is detected with more granularity, like at the CPU clock rate. – ilkkachu Aug 23 '16 at 13:30
  • @slebetman: While 3x is the smallest whole-number sampling rate that can work reliably with a UART, a rate that's definitely above 17/9 and below 2x even in the presence of timing jitter or other uncertainties will suffice; jitter tolerance goes down as sampling rates approach whole even numbers, so a rate of precisely 2x won't work even though a rate which is definitely faster or definitely slower will. – supercat Aug 24 '16 at 06:04

3 Answers


Nyquist-Shannon sampling theorem... often mis-used...

If you have a signal that is perfectly band-limited to a bandwidth of \$f_0\$, then you can collect all the information there is in that signal by sampling it at discrete times, as long as your sample rate is greater than \$2f_0\$.

It is very concise, and it contains within it two very key caveats:

  1. PERFECTLY BANDLIMITED
  2. Greater than \$2f_0\$

Point #1 is the major issue here: in practice you cannot get a signal that is perfectly band-limited. Because of that, we must deal with the characteristics of a real band-limited signal. Spectral content close to the Nyquist frequency picks up additional phase shift; get closer still and you get distortion and an inability to reconstruct the signal of interest.

Rule of thumb? I would sample at 10x the maximum frequency that I am interested in.

A very good paper on the misuse of Nyquist–Shannon: http://www.wescottdesign.com/articles/Sampling/sampling.pdf

Why "At 2x" is wrong

Take this as an example: we want to sample a sine wave of frequency \$f\$. If we blindly sample at exactly \$2f\$, we could end up capturing a straight line.

(figure: a sine wave of frequency f sampled at exactly 2f, with every sample landing on a zero crossing, so the capture looks like a straight line)
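To make the figure concrete, here is a minimal sketch (plain Python, with illustrative numbers of my choosing) of sampling a 100 MHz sine at exactly \$2f\$ starting on a zero crossing; every sample lands on a zero, so the capture is a flat line, while 10x oversampling shows the amplitude:

```python
import math

f = 100e6          # signal frequency: 100 MHz sine
fs = 2 * f         # sample at exactly the Nyquist rate
n_samples = 8

# Sampling starts right at a zero crossing, so every sample hits
# sin(pi * n) = 0: the sine "disappears" into a straight line.
at_2f = [math.sin(2 * math.pi * f * n / fs) for n in range(n_samples)]

# Oversampling 10x makes the amplitude visible again.
fs10 = 10 * f
at_10f = [math.sin(2 * math.pi * f * n / fs10) for n in range(4 * n_samples)]

print(max(abs(s) for s in at_2f))   # ~0: looks like a flat line
print(max(abs(s) for s in at_10f))  # ~0.95: most of the amplitude captured
```

Shifting the sampling phase slightly turns the flat line into a low-amplitude alias instead, which is just as misleading.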

  • Excellent answer. The 2f Nyquist limit prevents *aliasing* but still permits *amplitude error* of 100% as shown in your figure. With more points per cycle, the amplitude error, phase error, offset error, and frequency error eventually drop to acceptable values. – MarkU Aug 22 '16 at 22:59
  • This was an excellent answer until the example, which only shows that it's very important that the sample rate is _over_ twice the bandwidth. @MarkU talks about effects that exist when you do _not_ follow the "law". – pipe Aug 22 '16 at 23:07
  • exactly pipe :) if you read what the OP wrote "sampling at twice the maximum frequency (in theory)" For starters that isn't what the theorem stated (as I wrote) and it is the most common misconception w.r.t. the sampling theorem. Is the image crude? Yes, BUT it is to the point of why "at twice" is so very wrong and completely not what N-S stated. –  Aug 23 '16 at 00:24
  • According to the theorem, the example you give is wrong. Indeed, it is _the_ example shown why the sampling frequency should be greater than 2f. In a perfectly bandlimited wave with any frequency greater than 2f would perfectly allow reconstruction of the wave. – bunyaCloven Aug 23 '16 at 06:49
  • And that is my point. The OP was stating *at* 2x. I was citing the theorem exactly (it never says at 2x; it says greater than, WITH a perfectly band-limited signal) and also showing why you shouldn't sample at 2x. The example isn't meant to show what should be done BUT why the colloquial interpretation of N-S is so very wrong. –  Aug 23 '16 at 07:14
  • I think a picture that shows frequency f sampled at a rate of about 2.1x is perhaps more informative, since that would contain enough information to reconstruct the wave if it's known to contain only frequency content below Nyquist, but reconstructing the wave without *also* having it contain an alias at 1.2x the actual frequency will be awkward. – supercat Aug 24 '16 at 06:10

There's a difference between analyzing a signal for information, and displaying it on a scope screen. A scope display is basically connect-the-dots, so if you had a 100 MHz sine wave sampled at 200 MHz (every 5 nsec) AND you had the imaginary component being sampled as well, you could reconstruct the signal.

Since you only have the real part available, 4 points is pretty much the minimum required, and even then there are pathological situations, such as sampling at 45, 135, 225 and 315 degrees, which would look like a smaller-amplitude square wave. Your scope, however, would only show 4 points connected by straight lines. After all, the scope has no way of knowing what the actual shape is - to do that it would need higher harmonics.

In order to make a reasonably nice approximation to the 100 MHz sine it would need about 10 samples per period - the more the better, but 10 is a rough rule of thumb. Certainly 100 samples would be overkill for a scope display, and engineering rules of thumb tend to work in powers of 10.
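The pathological case above is easy to reproduce; here is a minimal sketch (plain Python, phases taken from the answer) of a full-scale sine sampled 4 times per cycle at the worst-case phases, where connect-the-dots draws a smaller-amplitude square wave:

```python
import math

# A sine sampled 4 times per cycle, but at the pathological phases
# 45°, 135°, 225°, 315°: every sample has magnitude sin(45°) ≈ 0.707,
# so a connect-the-dots display draws a square-ish wave at ~71% amplitude.
phases_deg = [45, 135, 225, 315] * 3          # three cycles' worth of samples
samples = [math.sin(math.radians(p)) for p in phases_deg]

print([round(s, 3) for s in samples[:4]])     # [0.707, 0.707, -0.707, -0.707]
```

No amount of straight-line interpolation can tell these samples apart from an actual 0.707-amplitude square wave; only more samples per period (or sinc interpolation plus a bandwidth assumption) can.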

WhatRoughBeast
  • But the imaginary component is (likely) zero... – Oliver Charlesworth Aug 22 '16 at 23:08
  • @OliverCharlesworth - Not with respect to the sampling clock. The imaginary component is 90 degrees for a sine cycle triggered at zero amplitude; if it were zero, both samples would be zero, and there would be no way to tell that the sine is even there. – WhatRoughBeast Aug 22 '16 at 23:49
  • Honestly, that just sounds like 2x oversampling. I'm having a hard time modelling how one generates an imaginary component (short of a frequency shift operation or a Hilbert transform). Not claiming this framework is incorrect here, just that I've never seen it used this way. Any Google search terms I should investigate? – Oliver Charlesworth Aug 23 '16 at 07:18
  • Also, not convinced by the "need higher harmonics" - the OP quote is in reference to "the **fastest** frequency component" - given that constraint, (sufficient) sinc interpolation should reconstruct the original waveform for anything > 2f. – Oliver Charlesworth Aug 23 '16 at 09:32
  • @OliverCharlesworth - "a hard time modelling how one generates an imaginary component" - Exactly. Not feasible, which is why you need to oversample. In the RF world you generate I and Q, but that's not useful here. And as for sinc interpolation, scope manufacturers find it uneconomical, not to mention non-intuitive on the part of users. At maximum scan rate on a digital scope the trace becomes obvious as points connected by straight lines, and the limits of the sample rate become obvious (and, hopefully, a source of caution). – WhatRoughBeast Aug 23 '16 at 12:22
  • Yup - I'm not disagreeing with the fact that oversampling helps mitigate the problem; I'm merely disagreeing with terminology in your answer - i.e. this isn't analogous to deriving a complex signal (unlike IQ processing, which really is analogous to the real and imaginary components of an analytic signal). – Oliver Charlesworth Aug 23 '16 at 12:26
  • @OliverCharlesworth in real analog life, unlike in digital signal processing theory, there is no such thing as a perfectly bandwidth-limited signal which can be recovered exactly from a finite number of terms in a Fourier series. For example, if the real-world signal is only non-zero for a finite amount of time, its frequency spectrum is continuous and infinite. If you use an analog anti-aliasing filter before you sample it digitally (which you should do, to minimize aliasing), a real filter will not have a brick-wall frequency cutoff. Therefore in real life you always need some frequency headroom. – alephzero Aug 23 '16 at 14:53
  • @alephzero - Absolutely. And that would be a great answer to the question. – Oliver Charlesworth Aug 23 '16 at 15:04

"Even a simple UART samples a digital signal at the same speed..." — the UART doesn't need to reconstruct the analog square-wave signal that carries the digital information, so it doesn't need to take the theorem into account.

The Shannon-Nyquist theorem actually talks about the perfect representation of an analog signal. Perfect representation here means that knowing only the samples of the signal you could reconstruct perfectly the time-domain analog signal that was sampled.

Of course this is only possible in theory. In fact the reconstruction formula involves a series of "sinc" functions (\$ \mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}\$), which aren't time limited (they extend from \$-\infty\$ to \$+\infty\$), so they are not really implementable perfectly in hardware. High-end scopes use a truncated form of that sinc function to achieve higher bandwidth capability with lower sample rates, i.e. more MHz with fewer samples; because they don't simply "join the dots", they don't need much oversampling.

But they still need some oversampling, because the sampling rate must be greater than \$2B\$, where \$B\$ is the bandwidth, and the fact that they use a truncated sinc function in the reconstruction doesn't allow them to get too close to that \$2B\$ figure.
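As a rough illustration of that trade-off, here is a sketch (plain Python, with an arbitrary signal frequency and truncation width of my choosing) of Whittaker–Shannon reconstruction using a truncated sinc kernel; widening the truncation window lets you reconstruct signals closer to \$2B\$:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t, half_width):
    # Truncated Whittaker-Shannon interpolation: instead of summing
    # every sample's sinc kernel, keep only the `half_width` nearest
    # samples on each side of the reconstruction instant t.
    n0 = int(round(t))
    total = 0.0
    for n in range(n0 - half_width, n0 + half_width + 1):
        if 0 <= n < len(samples):
            total += samples[n] * sinc(t - n)
    return total

# A sine at 0.3 cycles per sample: the sample rate is ~3.3x the signal
# frequency, comfortably above the 2x minimum.
f_rel = 0.3
samples = [math.sin(2 * math.pi * f_rel * n) for n in range(200)]

# Reconstruct at an off-grid instant in the middle of the record.
t = 100.25
exact = math.sin(2 * math.pi * f_rel * t)
approx = reconstruct(samples, t, half_width=20)
print(abs(approx - exact))  # small, and it shrinks as half_width grows
```

With the signal frequency pushed closer to half the sample rate, the same truncation width gives a visibly worse reconstruction, which is exactly why practical scopes keep some margin below \$2B\$.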

  • Actually, every UART I've seen samples the data at 8 or 16 times the baud rate. – pipe Aug 22 '16 at 23:49
  • @pipe I agree, the few that I've seen behave that way too. I was just pointing out a false premise in OP's reasoning. – LorenzoDonati4Ukraine-OnStrike Aug 22 '16 at 23:53
  • @pipe. BTW, I think that they sample so fast only because it allows simpler detection algorithms. I'm not sure, but I think that they could do with much less samples if they used more complicated algorithms (which is impractical and expensive, probably, so the question is moot). – LorenzoDonati4Ukraine-OnStrike Aug 22 '16 at 23:55
  • I think they also sample fast because they can. :) – pipe Aug 22 '16 at 23:56
  • @pipe yep! With modern half a dollar MCUs with 20MHz clocks there is no point in dividing that frequency too much to achieve a smallish ~100ksamples/sec (at best) sampling frequency :-) – LorenzoDonati4Ukraine-OnStrike Aug 22 '16 at 23:58
  • the reason modern UARTs sample at 8x or 16x (or more or less or somewhere in-between) is so that they can position the sampling of the bits in the middle of the bit period. that is 1.5 bit periods past the **edge** of the start bit. if a UART had real estate in the silicon and clockspeed to spare, it seems to me perfectly reasonable to do some reconstruction using a cheap approximation to a \$ \operatorname{sinc}(x) \$ function (like maybe 3rd-order Hermite interpolation). then they should have decent looking edges for the start bit and decently stable levels for each data bit. – robert bristow-johnson Aug 23 '16 at 02:47
  • @robertbristow-johnson You confirm my guess: detecting the middle of the bit pulse like that is algorithmically cheap: you don't need an FP-ALU to implement that, so it can be done with simple digital hardware. On the other hand, even with gross approximations, producing a sinc pulse and then detect the middle of the pulse is much more expensive (in silicon area and complexity) for no great advantage. – LorenzoDonati4Ukraine-OnStrike Aug 23 '16 at 04:41
  • Some MCU UARTs, like the old MC6811, sampled three times in the middle of a bit (clocks 5, 7, and 9 since it used 16X oversampling), used a majority function to get the data bit value, and set a "noise flag" status bit if the samples didn't all match. They also used multiple samples to confirm the start bit edge. This not only helped detect and mitigate some noise, it could also give you a little more clock frequency tolerance. – Mike DeSimone Aug 23 '16 at 04:56
  • @MikeDeSimone: If a transmitting UART were configured to 1.5 stop bits, and always padded inter-byte intervals to a whole number of bit times beyond that, the noise flag would detect framing errors even in cases where stop bits seemed to happen in the right spots. – supercat Mar 22 '20 at 21:01