11

The effective number of bits (ENOB) describes the "real" resolution of an ADC.

I paid no attention to it when I first took the class, but now that I think about it, I can't make sense of the result being something like 6.8 bits. I can't really picture what 6.8 bits physically represent.

I would appreciate it if someone could explain why the ENOB isn't rounded down.

For instance, why isn't ENOB = 6.8 taken as 6 bits? How would you interpret a fractional ENOB in a real scenario?

reirab
Emre Mutlu

6 Answers

17

It is the result of calculating the number of bits after imperfections such as noise and distortion are taken into account.

An ENOB of 6.8 tells you that, for example, an 8-bit ADC has real-world performance better than that of an ideal 6-bit ADC but worse than that of an ideal 7-bit ADC.

You can also think of it as not having 2^6 = 64 or 2^7 = 128 discrete steps in the signal like an ideal 6- or 7-bit ADC, but 2^6.8 ≈ 111 discrete steps.
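As a quick sanity check on that arithmetic, here is a minimal Python sketch (the function name is just illustrative):

```python
def effective_steps(enob: float) -> float:
    """Equivalent number of ideal quantization steps for a given ENOB."""
    return 2.0 ** enob

print(effective_steps(6.0))  # 64.0
print(effective_steps(6.8))  # ~111.4, between an ideal 6-bit and 7-bit ADC
print(effective_steps(7.0))  # 128.0
```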

Justme
11

What is ENOB?

ENOB is one of the most commonly misunderstood specifications around; contrary to popular belief, it has nothing to do with the number of bits you can "trust". It's only a measure of the noise of an ADC.

How is ENOB Calculated?

Every ADC has quantization noise; this is the noise generated by the fact that the ADC turns a continuous voltage into discrete steps.

[Image: quantization of an analog signal into discrete steps, illustrating quantization error]

(Image source: Wikipedia - Quantization (signal processing))

The quantization noise of an ideal ADC is (approximately) \$\frac{1}{\sqrt{12}} \text{ LSBs RMS}\$. The effective number of bits is the bit-depth of an ideal ADC with the same quantization noise as your ADC.

As an example, if your ADC measures from 0 to 5 V and has an RMS noise of 1 mV, we can calculate the ENOB as:

$$ \text{ENOB} = \log_2 \left( \frac{1/ \sqrt{12}} {\text{Noise}_\text{RMS} } \right) $$

$$ \text{ENOB} = \log_2 \left( \frac{1/ \sqrt{12}} {\frac{1\text{mV}_\text{RMS}}{5\text{V}} } \right) = 10.5\text{ Bits} $$
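Here is a minimal Python sketch of that calculation; note that the noise has to be expressed as a fraction of the full-scale range, and the function name is just illustrative:

```python
import math

def enob_from_noise(noise_rms_volts: float, full_scale_volts: float) -> float:
    """ENOB of the ideal ADC whose quantization noise (1/sqrt(12) LSB RMS)
    equals this ADC's measured RMS noise, referred to full scale."""
    noise_fraction = noise_rms_volts / full_scale_volts
    return math.log2((1.0 / math.sqrt(12.0)) / noise_fraction)

print(enob_from_noise(1e-3, 5.0))  # ~10.5 bits
```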

ENOB is typically measured with a full-scale sine wave at a defined frequency; it's frequency-dependent. (1 kHz is the most common, followed by 500 Hz. Some manufacturers publish ENOB vs. frequency plots.)

Note: If it's measured with a full-scale sine wave (as it almost always is), then it's also a measure of Signal-to-Noise and Distortion (SINAD), which can be calculated as:

$$ \text{SINAD} = \text{ENOB}\cdot 6.02 \text{dB} + 1.76 \text{dB} $$
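As a hedged sketch, the same relation in code, converting in both directions (function names are illustrative):

```python
def sinad_from_enob(enob: float) -> float:
    """SINAD in dB from ENOB, assuming a full-scale sine-wave test."""
    return 6.02 * enob + 1.76

def enob_from_sinad(sinad_db: float) -> float:
    """ENOB from a measured SINAD in dB (inverse of the relation above)."""
    return (sinad_db - 1.76) / 6.02

print(sinad_from_enob(10.5))   # ~64.97 dB
print(enob_from_sinad(64.97))  # ~10.5 bits
```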

What Does ENOB Measure?

ENOB calculations include effects from INL/DNL, distortion, jitter, etc.; these all contribute to the noise of the ADC.

ENOB does not include the effects of gain or offset error, clock error, intermodulation distortion, etc.


SamGibson
Mark Omo
8

A bit is just a digit, in base 2.

So, an 8-bit number has an ENODD (Effective Number of Decimal Digits) of \$\log_{10}(256) \approx 2.4\$ decimal digits.

A 2-digit decimal display, which shows values between 0 and 99, has an effective number of decimal digits of 2, or an effective number of bits of \$\log_2(100)\$, or about 6.6.

For an ADC, the ENOB is the log2 of the ratio of signal to (noise + distortion + whatever other errors). And that latter term is not necessarily a power of two, so the ENOB won't be an integer.

Suppose an ADC has 8 bits, so the value is 0..255, or 256 distinct values.

Then suppose it has a total of 6 LSB of noise + distortion + whatever other errors. So when you get a reading, you only know the real value to within an interval 6 LSB wide.

ENOB = log2(256 / 6) = log2(256) - log2(6) ≈ 5.4 bits.

If the ENOB were 5 bits, then the last 3 bits would be garbage. But since it is 5.4 bits, the last 3 bits contain a bit more information.
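A minimal sketch of this calculation (names are illustrative):

```python
import math

def enob_from_error(num_bits: int, total_error_lsb: float) -> float:
    """ENOB given the total error (noise + distortion + ...) in LSBs."""
    levels = 2 ** num_bits  # 256 distinct codes for an 8-bit ADC
    return math.log2(levels / total_error_lsb)

print(enob_from_error(8, 6))  # ~5.4 bits
```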

bobflux
  • Out of curiosity, why can you say that the last three bits contain more information, instead of just that the sixth bit contains more information and the last two are garbage? Is there some significance to the last two, even if the ENOB is less than 6? – Hearth Mar 05 '21 at 14:22
  • @Hearth Well, imagine I take a measurement using a device that's accurate to within 200 mm. If the device reads 500 mm, then the real value is between 300 and 700 mm (and probably near the middle of that range), whereas if it reads 599 mm, the real value is between 399 and 799 mm (and probably near the middle of that range). So the last two digits don't tell you a lot, but they do give you *some* potentially useful information. – Cassie Swett Mar 05 '21 at 20:06
  • @Hearth If noise has a Gaussian probability distribution, which is usually the case, throwing away the last bits can turn the noise into a binomial distribution, which is more difficult to average out. It will also change the frequency spectrum of the noise and make it more difficult to filter out. You can also think of the histogram of ADC values. If you throw away the last bits, you get fewer bins in your histogram, which makes it more difficult to see where the center is. – bobflux Mar 05 '21 at 20:19
6

I think that a simpler answer can be more useful here, especially to answer the question: "I can't really make sense of what 6.8 bits physically represent."

Here is a perfectly realizable and very common 1.5-bit ADC (or 1.58…-bit, depending on how you count): a simple window comparator telling you whether an input is below, within, or above a certain range. This is very useful (and common) if you want to control the temperature with a heater and a fan, detect whether a voltage is within specifications, etc.:

[Schematic: window comparator using two comparators with thresholds Ref- and Ref+, producing outputs D1 and D0]

Input                   D1   D0
Below Ref-               0    0
Between Ref- and Ref+    0    1
Above Ref+               1    1

Seeing how it has 3 possible digital states, you cannot call it a "2-bit" ADC, which would have 4 states. Nor can you call it a "1-bit" ADC. It is somewhere in between, and if you have to compare it to those other ADCs you can say that it has log2(3) = 1.58496… bits.
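A small sketch of those three states and their bit equivalent (the threshold values are just for illustration):

```python
import math

def window_comparator(v_in: float, ref_lo: float, ref_hi: float) -> tuple:
    """Return (D1, D0): 00 below the window, 01 within it, 11 above it."""
    if v_in < ref_lo:
        return (0, 0)
    if v_in <= ref_hi:
        return (0, 1)
    return (1, 1)

print(window_comparator(0.5, 1.0, 2.0))  # (0, 0) - below the window
print(window_comparator(1.5, 1.0, 2.0))  # (0, 1) - within the window
print(math.log2(3))                      # ~1.585 bits for 3 states
```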

pipe
1

An ideal A/D converter generates some quantization noise because the LSB is always rounded. If our ADC has N bits and the input signal has the maximum amplitude without clipping, the conversion result has an SNR of approximately 6N decibels, assuming there are no errors other than the quantization noise and the input signal is uniformly distributed. If the statistical distribution of the signal is non-uniform, the 6 dB/bit formula is, of course, not valid.

Unfortunately, the circuits of the ADC insert some extra noise into the analog signal, so there's more random error in the conversion result than the plain LSB rounding.

Let's take a commercial 16-bit ADC named X. The manufacturer of X guarantees that X has less than 1/2 LSB linearity and offset error, but adds that the ENOB of X is only 14.4. This should be interpreted as follows:

If one measures the SNR of the digital output signal of X with maximum input signal amplitude and calculates in reverse how many bits an ideal ADC (limited only by quantization noise) would need for the same SNR, the reversed formula gives 14.4 bits.

My simplified formula for ENOB is the SNR in dB divided by 6. Thus the digital output signal of X has SNR = 6 × 14.4 dB = 86.4 dB. An ideal 16-bit ADC would have 96 dB.

X is noisier than an ideal 15-bit ADC but less noisy than an ideal 14-bit ADC. X also performs slightly better than an otherwise equal 16-bit ADC which has ENOB = 14.3 bits.
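For comparison, here is a small sketch of this simplified 6 dB/bit rule next to the exact full-scale sine-wave relation (SINAD = 6.02·N + 1.76 dB) quoted in the other answers:

```python
def snr_simplified(enob: float) -> float:
    """The answer's rule of thumb: 6 dB per bit."""
    return 6.0 * enob

def snr_sine_exact(enob: float) -> float:
    """Full-scale sine-wave relation: SNR = 6.02*N + 1.76 dB."""
    return 6.02 * enob + 1.76

print(snr_simplified(14.4))  # 86.4 dB, as computed above
print(snr_sine_exact(14.4))  # ~88.4 dB
print(snr_sine_exact(16.0))  # ~98.1 dB for an ideal 16-bit ADC
```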

Unfortunately, I do not know what maximum input signal amplitude is commonly assumed when determining the ENOB. In addition, my SNR = 6 dB/bit rule is not based on proper statistical analysis; it assumes that the average quantization error is voltage-equivalent to 1/4 of the LSB and that the signal has a uniform voltage distribution.

But this article obviously has a more realistic formula for the SNR of an ideal ADC, and a formula for ENOB derived from it: https://www.analog.com/en/analog-dialogue/raqs/raq-issue-90.html#

ENOB is not truncated to an integer; why in the hell should the maker of an ADC claim their product is worse than it actually is? To get the practical accuracy of X closer to the claimed 16 bits, one should use averaging or more advanced estimation methods to filter the output sample sequence. ENOB is a statistical measure of the accuracy of a single conversion.

  • "Unfortunately the circuits of the ADC insert some extra noise to the analog signal and there's more random error in the conversion result than the plain LSB rounding." Fortunately this creates a dithering effect which actually results in more information than a perfectly-rounding noise-free ADC. – Ben Voigt Mar 05 '21 at 22:51
  • 2
    The dithering effect can be utilized only in filtering which combines several samples. For a single conversion result it's useless. –  Mar 05 '21 at 23:18
  • Correct, but that means the information-theoretic number of bits in each sample is higher. – Ben Voigt Mar 07 '21 at 01:55
  • That's not at all clear. Noise in an ADC really can increase the self-information in the numeric sample stream output by an ADC. But I do not buy an ADC to let it produce its own symbol streams with high self-information content. I consider an ADC to be a transmission channel which should relay as great a part as possible of the self-information of the discrete symbol stream coded into the input signal. A proper mathematician could calculate under what conditions the dithering caused by noise really increases the number of bits per second that can be transmitted through the ADC. Can you show that math? –  Mar 07 '21 at 12:52
0

It represents the fact that you have imperfect knowledge of the noise.

The ENOB is related to the noise floor of the ADC. Note that noise, almost by definition, cannot be a constant DC value. The noise will be unpredictable, but have some statistical distribution, perhaps Gaussian, perhaps 1/f, or perhaps something else.

That means the noise can vary in amplitude: perhaps big enough to cover up the bottom 2 bits of an 8-bit ADC, giving you 6 bits of useful data above the noise, or perhaps smaller, covering up only the bottom 1 bit out of 8, giving you 7 bits of useful data. It may even occasionally be near zero (much less than 1 LSB), so that 8th bit is occasionally useful in lowering the average "error". But you don't know when, so you have to use the statistical averages and the distribution.

Perhaps the measured internal noise of some ADC varies with some statistical distribution that averages somewhere between 1 and 2 LSBs of the ADC, producing an average useful range above this varying noise distribution equivalent to 6.8 bits (out of 8 data bits).

The key is that ENOB doesn't tell you exactly about any one specific single measurement sample, but about a large number of ADC samples, where the statistics average out.
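A hedged Monte Carlo sketch of this statistical picture: simulate an ideal 8-bit ADC digitizing a full-scale sine wave plus Gaussian noise, then back the ENOB out of the measured SINAD. The noise level (0.6 LSB RMS) and all names are assumptions chosen so the result lands near 6.8 bits:

```python
import math
import random

def simulate_enob(bits: int = 8, noise_lsb: float = 0.6, n: int = 100_000) -> float:
    """Estimate ENOB of an ideal ADC whose input carries Gaussian noise (in LSBs)."""
    levels = 2 ** bits
    err_sq = 0.0
    for i in range(n):
        # Full-scale sine wave, expressed in LSBs; 0.0101 cycles/sample so
        # successive samples land on many different codes.
        signal = (levels - 1) / 2 * (1 + math.sin(2 * math.pi * 0.0101 * i))
        noisy = signal + random.gauss(0.0, noise_lsb)
        code = min(max(round(noisy), 0), levels - 1)  # quantize and clip
        err_sq += (code - signal) ** 2
    noise_rms = math.sqrt(err_sq / n)             # total noise + distortion, LSB RMS
    sine_rms = (levels - 1) / (2 * math.sqrt(2))  # RMS of the full-scale sine, LSBs
    sinad_db = 20 * math.log10(sine_rms / noise_rms)
    return (sinad_db - 1.76) / 6.02

print(simulate_enob())  # ~6.8 bits, varying slightly from run to run
```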

hotpaw2