
I'm writing the firmware for a data-acquisition board using the dsPIC33FJ64GP804 MCU and I noticed something strange reading the electrical characteristics for 12-bit A/D conversion:

12-bit specifications

The ADC clock period (emphasis mine) is listed as 117.6 ns, which is an oddly specific number, especially considering there's no direct hardware obstacle to running the ADC much faster, e.g. with TAD = TCY, which could be as low as 25 ns at the highest officially allowed clock speed. So the limit doesn't come from there.

The characteristics for 10-bit conversion seem more like they've been derived from actual characterization testing:

10-bit specifications

So where does this weird value come from? Something related to the settling time of the sample&hold capacitor (esp. considering TSAMP = 3 TAD for 12-bit and TSAMP = 2 TAD for 10-bit)?

Edit

To clarify, I understand TAD = 25ns would be asking for trouble. My main questions are:

  1. Why is TAD different for the 10-bit and the 12-bit case at all?
  2. Where does the 12-bit number come from, could IC characterization (which I guess involves statistical methods and uncertainty) really produce a number that precise?
JRE
anrieff

4 Answers


There are some clues in the datasheet.

For instance:

The AD12B bit (AD1CON1<10>) allows each of the ADC modules to be configured by the user as either a 10-bit, 4-sample/hold ADC (default configuration) or a 12-bit, 1-sample/hold ADC.

Then this:

• In the 12-bit configuration, conversion speeds of up to 500 ksps are supported

• There is only one sample/hold amplifier in the 12-bit configuration, so simultaneous sampling of multiple channels is not supported

I agree the unusually precise value is odd, but converting it to a sample rate yields about 607 ksamples per second, a bit above the maximum stated rate (14 ADC periods are required for a 12-bit conversion).

In the ADC reference manual there is a schematic of the effective ADC input in the two modes:

DSPIC33F ADC effective input

Note the difference in the input capacitance; this is necessary as the sample capacitor has to hold the charge for a longer period of time for the conversion to be accurate and will therefore require a longer time to actually charge up during the sample period.

Looking at the values, it appears that the effective capacitance in 12-bit mode is formed from all four sample-and-hold capacitors (4 × 4.4 pF = 17.6 pF, 18 pF when rounded), which makes sense of the statement that there is only one sample and hold in 12-bit mode. This is probably achieved by switches that isolate the other channels' sample-and-hold amplifiers from their capacitors and switch the capacitors in to form a single effective device.

Hence a longer ADC period (longer charge time and a longer hold time).

The value in the datasheet may be from calculations or experimentation (I do not know which).

Peter Smith
  • Yeah, I figured out it's from calculations (10⁹ns / 500ksps / 17, see [my answer](https://electronics.stackexchange.com/a/497320/18035)). Many thanks for pointing me in the right direction, you deserve that tick :) – anrieff May 03 '20 at 09:00

Although the minimum ADC period would be 25 ns at 40 MHz, hardware limitations (the sample and hold) mean they must specify a minimum time. That does not mean that you cannot sample faster; it merely means that if you sample faster, they cannot guarantee correct operation.

Navaro
  • I understand that there has to be a minimum, and 25ns, even if possible, would be asking for trouble. I've edited my question with clarifications. – anrieff Apr 21 '20 at 10:12

For old parts in this family, Tad wasn't different: the circuit for 10 bits was the same as for 12 bits; you just had to wait two more bit periods for the conversion. It does make me wonder :)

Assuming it's not an error, I think the Tad increases just so that they can run the conversion slower. A SAR conversion error doesn't just wipe out the last bit: you have to get every bit at 12 bit accuracy to get any bit at 12 bit accuracy.

I don't know how the Tad number was calculated: all we know is that they felt comfortable with 117.6 ns, and not 117.5 ns.

And I think the sample time increases with Tad just to reduce ADC clock noise, or have I got that wrong? I don't think the sample-and-hold circuit is connected to the ADC clock at all. With the sample-time increase, they've raised both the number of ADC clock cycles (reducing ADC clock noise) and the minimum sample time (from 152 ns to 353 ns); I don't know which was the more important. Sample time depends on the amplifier, the capacitor, and leakage, so they need to wait until the input has settled to within 12 bits, but they could have done that by increasing the multiplier more, not Tad.

david

I figured out where the odd value comes from, but the credit and the answer tick go to Peter Smith's answer for nudging me in the right direction:

if you convert that to a sample rate it yields about 607K samples per second (a bit above the maximum stated rate) (14 ADC periods are required for a 12 bit conversion).

Upon closer examination of the datasheet, 14 ADC periods are required for conversion, but 3 more are required for sampling. With only one S&H unit you cannot parallelise sampling and conversion, so it's 17 periods per conversion at minimum. If you do ADC conversions back-to-back at the maximum possible rate of 500 ksps (2 µs per conversion), TAD comes out to 2 µs / 17 ≈ 117.6 ns. Which is, TADAAA, the value in the datasheet.

So it seems that Microchip had the 500 ksps target in mind, characterized the part to perform well at that target, and derived the unusually precise value we see from the not-so-precise planned sample-rate specification. It's entirely possible that the ADC works well at 550 ksps too, so lower values of TAD could also work, perhaps even down to the 76 ns bound for 10-bit conversion.

anrieff