
This question is inspired by this comment by KyranF on the question "How does a bathroom scale automatically power on when I step on it?":

"built-in 24-bit ADCs" I wouldn't trust that as far as I could throw it.

As the title plainly states: is there any foundation for this comment, and are built-in ADCs any less accurate than a dedicated chip? If so, why? What parameters on an ADC should be looked at for accuracy and precision?

R4D4
  • Of course a µC internal ADC is as reliable and accurate as a standalone part with the same specs. It would generally backfire if common µCs never met their specs. – PlasmaHH Nov 25 '16 at 09:55
  • That comment is very short-sighted, in my opinion. The specification and how you use the ADC (its environment and supply noise, for example) also play a large part. I can use a top-of-the-line standalone ADC in a crappy design and it will perform worse than a low-specced ADC built into a µC but used properly. – Bimpelrekkie Nov 25 '16 at 09:58
  • And KyranF is insulting a TI chip there, TI generally knows what they're doing. I would not pay too much attention to KyranF's comments. – Bimpelrekkie Nov 25 '16 at 10:04
  • *What parameters on an ADC should be looked at for accuracy and precision?* That could fill pages; I suggest you read a book about A/D conversion, it's all explained there. Also, in a sufficiently noise-free environment you can take multiple samples with a low-precision ADC, average the data, and get more precision. Even absolute precision, if you calibrate. – Bimpelrekkie Nov 25 '16 at 10:08
  • As far as I know, a built-in ADC could potentially perform better than a same-spec standalone part, because the close coupling makes it less susceptible to noise. But this is not backed up by any data, just a hunch. – zakkos Nov 25 '16 at 10:09
    @FakeMoustache not only in a noise-free environment – Oversampling is awesome because for uncorrelated noise, it enhances the SNR! http://dsp.stackexchange.com/questions/34021/does-oversampling-improve-processing-gain/34027#34027 – Marcus Müller Nov 25 '16 at 10:45
  • @MarcusMüller Of course, you're right, I didn't think of that. – Bimpelrekkie Nov 25 '16 at 10:53
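The averaging effect mentioned in the comments is easy to demonstrate with a small simulation (illustrative only; the reference voltage, bit count, and noise level are assumed, not from the thread). With roughly one LSB of uncorrelated Gaussian noise dithering the input, averaging many samples of a coarse 10-bit ADC resolves well below a single LSB:

```python
import random
import statistics

# Illustrative simulation (values assumed): a 10-bit ADC with ~1 LSB of
# uncorrelated Gaussian noise dithering the input. Averaging N samples
# shrinks the noise on the mean by ~sqrt(N), so the averaged reading
# resolves well below a single LSB.

random.seed(0)
FULL_SCALE = 3.3                    # volts, assumed reference
BITS = 10
LSB = FULL_SCALE / (1 << BITS)      # ~3.2 mV per code

def adc_read(v_true, noise_lsb=1.0):
    """One quantized sample of v_true with additive Gaussian noise."""
    v = v_true + random.gauss(0.0, noise_lsb * LSB)
    code = max(0, min((1 << BITS) - 1, round(v / LSB)))
    return code * LSB

v_true = 1.2345
averaged = statistics.mean(adc_read(v_true) for _ in range(4096))
print(f"error of 4096-sample average: {abs(averaged - v_true) * 1e6:.1f} uV")
```

Note that this only works because the noise dithers the quantizer; a noiseless ideal ADC would return the same code every time, and averaging would gain nothing.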

3 Answers


Are built-in ADC converters in MCUs as reliable and accurate as their standalone counterparts?

Why shouldn't they be? You know, every device you buy has its specs, and the manufacturer will strive to meet them, because if they don't, they risk their company's good name or worse.

… comment by ${some Person on the Internet}

I ${ranting about something} as far as I ${totally unverifiable problem}

Well. Yes. There's someone on the internet voicing a negative opinion without backing it with facts. That's not really new[citation needed] – but anyway, I think it's safe to simply ignore this.

Look at whatever you want to digitize. Make sure you properly filter it. How high is your noise after that? You don't really need an ADC that can do better than the dynamic range between that average noise amplitude (\$\sigma\$ for things that follow central Gaussian distributions) and the maximum signal amplitude. Figure out that dynamic range, and then figure out the Effective Number of Bits (ENOB) you need.

Often, your requirements are even lower – the bathroom scale example is very good: to be honest, a bathroom scale more accurate than 200 g is senseless – you simply can't tell the difference whether a person drank a glass of water or gained weight. Let the total range be 1 kg – 200 kg; then you need but a resolution of \$\frac 1{1000}\$ of the maximum value – in other words, a 10-bit converter will happily do. People still tend to buy the devices that say "\$\pm 10\text{ g}\$", so you'd need another factor of 20 to make marketing happy – that's about 4.5 bits more. So, with an ADC that has 14 bits, you'd be better than any customer needs, and marketing wouldn't even have to lie¹.
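The bit counts in that back-of-envelope can be re-derived in a couple of lines (same numbers as the scale example, nothing new assumed):

```python
import math

# Re-derive the bathroom-scale numbers: 200 kg full range,
# 200 g honest resolution, 10 g "marketing" resolution.
full_range_g = 200_000
honest_res_g = 200
marketing_res_g = 10

bits_honest = math.log2(full_range_g / honest_res_g)        # distinguish 1000 steps
bits_marketing = math.log2(full_range_g / marketing_res_g)  # 20x finer steps

print(round(bits_honest, 1))     # 10.0 -> a 10-bit converter will do
print(round(bits_marketing, 1))  # 14.3 -> about 4.3 more bits for marketing
```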

Note your signal's bandwidth, and derive a minimum sampling rate from that (typically \$f_\text{sample} > 2\cdot f_{bandwidth}\$).
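As a sketch of that rule (the bandwidth figure here is a hypothetical number for a scale's load cell, not from the thread): slow signals need only slow converters, with some margin left for a realizable anti-alias filter.

```python
# Hypothetical figures for a bathroom-scale load cell (assumed):
# the useful signal bandwidth is about 10 Hz.
f_bandwidth = 10.0                 # Hz
f_nyquist_min = 2 * f_bandwidth    # absolute minimum per the sampling theorem
f_sample = 5 * f_bandwidth         # practical margin for a real anti-alias filter

print(f_nyquist_min)               # 20.0 Hz minimum
print(f_sample)                    # 50.0 Hz, a comfortable choice
```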

Your ADC's datasheet, be it a built-in one or a separate one, will clearly state what it can do. It will state an ENOB or a noise power, and it will clearly state the raw number of bits you get (which is greater than the ENOB, due to the noise in any ADC).
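When the datasheet gives SINAD rather than ENOB, the two are related by the standard formula \$\text{ENOB} = \frac{\text{SINAD} - 1.76\,\text{dB}}{6.02}\$; the 70 dB value below is an illustrative datasheet figure, not one from the thread:

```python
def enob(sinad_db):
    """Effective number of bits from a measured SINAD, in dB (standard formula)."""
    return (sinad_db - 1.76) / 6.02

# An ideal N-bit quantizer has SINAD = 6.02*N + 1.76 dB, so enob() inverts it:
print(enob(6.02 * 12 + 1.76))   # 12.0 -> an ideal 12-bit ADC
print(round(enob(70.0), 1))     # a 70 dB SINAD gives about 11.3 effective bits
```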

Don't let trolls on the internet troll you. The market gives you a clear indication: so many engineers wanted ADCs integrated into their microcontrollers that basically every microcontroller has at least a variant with a built-in ADC. That might mean having those is actually useful.

Sure, they're not designed to give you 14 effective bits at 200 MS/s, or 20 bits with a bias current of 10 pA at 1 MS/s – but that's not your use case, is it?


¹ they usually don't give a damn about that, though, for hard-to-verify, senseless features in consumer products. Buy a consumer stereo and read the claimed sound output power. Use it at full volume until the batteries are empty. Patent that technology of glorious energy creation as a perpetuum mobile.
Marketing is basically the opposite of that random person you quoted: they go out and make positive remarks not based on anything factual, but on a random aspect of the system they don't really understand.
Marcus Müller

For the same spec, yes, you will get the same functionality. If you can get the spec you need from an ADC integrated into an MCU, the overall cost should be good. If you want a particularly high-performance ADC, there is less chance that you'll find an MCU dedicated to the niche which matches your application (this situation improves over time as manufacturers explore new markets). Some applications might effectively call for a high-spec ADC with an embedded MCU – if the market is big enough and mature enough, the part will exist.

Pay attention to the specs of the MCU. It may be that, to achieve the best noise performance, the MCU needs to be in sleep mode during the measurement. At least in this case the manufacturer has done the qualification for you, meaning there are no surprises when you perform integration or make a last-minute PCB tweak.

I think it's fair to say that the commodity ADC on a generic low-cost MCU will have a published spec that reflects its cost, and you'd be foolish to assume its suitability based solely on the number of bits.

Sample rate, stability, linearity, noise performance - most of these are of little importance when it comes to typical sensor applications (but you still might want to design knowing what to expect).

Sean Houlihane

ADCs sharing silicon with a logic-box (MCU, FPGA) have to endure the constant ringing of VDD, GND, and the silicon substrate. This ringing decays by about 6 dB per 10 nanoseconds (remembered from scope photographs). If the baseline ringing amplitude is 500 millivolts peak-to-peak and you want a 16-bit SAR to be useful (with its roughly 76 microvolt LSB, for 5 V full scale), you may need to WAIT until the substrate ringing has decayed from 500 mV to below one LSB: a ratio of about 6,600:1, or roughly 76 dB, which at 6 dB per 10 nanoseconds takes approximately 76/6 ≈ 13 intervals × 10 nanoseconds ≈ 130 nanoseconds.

Thus you need to arrange the on-chip clocks to provide 100+ nanoseconds of QUIET TIME, for onchip ADCs at the 16-bit level.
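That ring-down arithmetic generalizes to any target resolution. A small helper (the 500 mV amplitude and 6 dB per 10 ns decay rate are this answer's estimates, not measurements; a 16-bit LSB is taken as 5 V/2¹⁶ ≈ 76 µV, and a rounder 0.3 mV target would land near 100 ns instead):

```python
import math

# Quiet-time estimate: how long until ringing of amplitude v_ring decays
# below v_lsb, given an exponential decay rate expressed in dB per 10 ns.
def quiet_time_ns(v_ring, v_lsb, db_per_10ns=6.0):
    attenuation_db = 20.0 * math.log10(v_ring / v_lsb)
    return 10.0 * attenuation_db / db_per_10ns

lsb_16bit = 5.0 / (1 << 16)          # ~76 uV LSB at 5 V full scale
t = quiet_time_ns(0.5, lsb_16bit)    # 500 mV initial ringing
print(f"{t:.0f} ns of quiet time")   # on the order of 130 ns
```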

TI put a 24-bit ADC on a common substrate with an MCU; the MCU ran as fast as 33 MHz, but the 24-bit ADC only met spec with an 8 MHz clock, i.e. 125-nanosecond quiet times.

Thus Sean Houlihane's caution above: putting the MCU to sleep may be necessary.

Can you delay the ADC action (even on a per-binary-search-decision basis) until the ringing has decayed to near zero?

analogsystemsrf