5

I have a clock. It generates ticks. I want to know the error in PPM relative to another clock, so I count the ticks.

Let's say the oscillator is 1 MHz (for simplicity). I should count 1,000,000 ticks per second. I plot the accumulated (expected_tick_count - actual_tick_count) against elapsed seconds of the other clock. I get a nice line which represents the accumulating error, in microseconds. Linear regression gives R^2 = 0.9999 (i.e. a practically perfect line) and a slope of 27. To me, this implies a 27 PPM error in the oscillator.
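
In rough code terms, the analysis looks like this (a Python sketch with made-up data; estimate_ppm and the sample numbers are only an illustration of the procedure, not my actual code):

    import numpy as np

    NOMINAL_HZ = 1_000_000  # nominal oscillator frequency (1 MHz)

    def estimate_ppm(actual_counts):
        # Accumulated (expected - actual) ticks; at 1 MHz, 1 tick = 1 microsecond.
        seconds = np.arange(1, len(actual_counts) + 1)
        accumulated_error = np.cumsum(NOMINAL_HZ - np.asarray(actual_counts))
        # Slope of the fitted line is the error in ticks per second, i.e. PPM at 1 MHz.
        slope = np.polyfit(seconds, accumulated_error, 1)[0]
        r = np.corrcoef(seconds, accumulated_error)[0, 1]
        return slope, r ** 2

    # For example, a clock that is uniformly 27 ticks/s slow, measured for ten minutes:
    counts = [NOMINAL_HZ - 27] * 600
    ppm, r2 = estimate_ppm(counts)
    print(f"slope = {ppm:.2f} ppm, R^2 = {r2:.4f}")   # slope is about 27, R^2 about 1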

I assume this means I actually get 999,973 ticks per second instead of 1,000,000. So I tune the oscillator frequency so that it includes an extra 27 ticks per second.
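
In code terms, the compensation amounts to something like the following sketch (made-up names; as clarified in the comments further down, the oscillator itself is never physically retuned, the tick count is just rescaled by a floating-point multiply):

    CORRECTION_PPM = 27.0  # slope from the regression above

    def compensated_ticks(raw_ticks):
        # Scale the raw count up by 27 ppm, i.e. add about 27 ticks for every
        # 1,000,000 ticks counted. (Sketch only; in practice this is a
        # floating-point multiply applied to the tick register value.)
        return raw_ticks * (1.0 + CORRECTION_PPM * 1e-6)

    print(compensated_ticks(999_973))   # ~999,999.999: the 27 ppm deficit is gone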

I re-run my test. I expect to see negligible error. Instead, I find another line, with a slope of about 3. So it appears as if my initial regression analysis was off by 3 PPM, despite an R^2 that is the envy of statisticians everywhere.

Am I missing something?

ajs410

4 Answers

4

What accuracy does the datasheet for your oscillator claim? Typically, a crystal oscillator is only guaranteed to be within 25 ppm.

Perhaps the act of measuring the frequency is also having an effect.

Toby Jaffey
  • It's not the accuracy of the oscillator that I'm concerned with, because I am attempting to compensate for the error. My problem is that after analyzing the accuracy and compensating for it, it is still measurably inaccurate. The error remains the same run after run, so my measurements are precise. – ajs410 Apr 27 '11 at 19:51
  • If the error is repetitive, then it implies that your 27 tick-per-second adjustment is the issue. What happens if you try adjusting by 30 ticks per second? Also, is there a possibility that your reference clock is changing by 3 PPM? – Adam Lawrence Apr 27 '11 at 20:06
  • Yes, my regression is not calculating the correct error. If I adjust by 30 PPM, I get the expected negligible 0.1 PPM error. If the reference clock was changing by 3 PPM, I would be getting results all over the map, instead of consistently getting 27 PPM error "uncompensated" and 3 PPM error "compensated" and 0.1 PPM "double-compensated". – ajs410 Apr 27 '11 at 20:15
2

How are you tuning the oscillator to get the 27 extra ticks/sec, in a way that is different from the way you are measuring the difference? Are you using two different references that could be slightly off from each other by, say, 3 ppm?

Edit: Then, based on our comments below, and even though the error was determined against the more accurate device, I'd suggest that the secondary error is due, at least in part, to having tweaked your OUT against itself, a clock known to have been off by 27 ppm.

I've seen this happen when I've calibrated a new bicycle odometer by riding an independently measured distance and compensating the device for the percentage error in its reading. A second trial will come really close, but it's not perfect. After the second tweak, the device will be good to within the 0.01 mile (53') limit of the display.

JRobert
  • To an extent, I can control the frequency of the oscillator-under-test (OUT). After tuning OUT for 27 extra tick/s, I re-run the accuracy analysis (same reference), and I am still 3 tick/s short. Tuning OUT for an additional 3 tick/s (on top of the original 27), re-run accuracy analysis, and now I am 0.1 tick/s short, which is as good an accuracy as I can get. – ajs410 Apr 27 '11 at 20:31
  • What I'm wondering is how you can know how much you've moved the frequency of OUT, other than by using the accuracy analysis? In which case, the analysis should re-confirm the adjustment you just made. If you're using a different means - reference - to monitor your adjustment, how does its reference compare to that used for the accuracy analysis? Maybe I'm missing something, but if you're using A to tweak the OUT, and B to see how well you did, don't you need to also compare A to B before you can critique the results? – JRobert Apr 28 '11 at 20:28
  • I only have two clocks, OUT and REF. My goal is not to sync to "real time", but sync to REF, regardless of its accuracy. I analyze OUT against REF with regression, determine how many tick/sec OUT is ahead or behind REF (R^2 = 0.999), and tune OUT to compensate for the missing ticks. I then run the analysis a second time, expecting to get fractional tick/sec. Instead I get another R^2=0.999 line. Adding the second adjustment to the first generates the desired fractional tick/sec. REF remains stable throughout the analysis, done over minutes to hours. – ajs410 Apr 29 '11 at 15:52
  • I appreciate the anecdote on the odometer - it makes me feel like I'm not actually losing faith in mathematics. I think I see what you're talking about now. It's not that REF is 27 ppm above OUT, but that OUT is 27 ppm below REF. – ajs410 May 03 '11 at 15:59
2

I don't have any potential solutions, just a couple of further questions.

I've seen oscillators drift for a couple of minutes after tuning, so there may be a post-adjustment issue you're dealing with.

If you take data over a longer time, does it change the results?

What are you using as a reference clock? How accurate is that?

The other thing that comes to mind (and I'm not going to explain it correctly, because it's still something I'm struggling with) is this: since you are plotting the accumulated error, you're operating in the phase domain, as you're measuring accumulated error in time. However, you're trying to measure frequency, so there may be a phase/frequency thing going on. I'll try to dig up some more references.
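
Roughly what I mean, as a quick sketch with made-up numbers: the per-second error is a frequency error (in ppm), while the accumulated error being fitted is a phase error (in microseconds), and the slope of the phase curve just recovers the average frequency error.

    import numpy as np

    # Made-up per-second frequency-error samples, in ppm (1 ppm = 1 tick/s at 1 MHz)
    freq_error_ppm = np.array([27.0, 27.0, 26.9, 27.1, 27.0])

    # The accumulated error being plotted is the running sum of these,
    # i.e. the phase error in microseconds
    phase_error_us = np.cumsum(freq_error_ppm)

    # Fitting a line to the phase error recovers the average frequency error
    slope = np.polyfit(np.arange(1, len(phase_error_us) + 1), phase_error_us, 1)[0]
    print(slope)   # about 27 ppm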

rfdave
  • The "oscillator's current tick" is read from a register. I "tune" the oscillator by using a floating-point multiply; the oscillator never actually changes speed. After "tuning", the tick register is cleared, to prevent the old tick count from interfering with regression analysis. Data has been taken over the course of minutes to hours. After two analyses, the error is always below 1 PPM (the resolution of the tick), and the error remains stable for over an hour. I want to sync to the reference clock, regardless of its accuracy. – ajs410 Apr 28 '11 at 15:34
  • @ajs410 What's the nature of this 'floating-point multiply' step? If you're using IEEE floats, they are not precise by nature. Depending on how you're doing this, it could be your issue. – Mark May 05 '11 at 23:39
  • It's a double precision float. I don't think the accuracy of the multiply is the issue, because after the second regression analysis it is dead on. If it was a precision issue, the second regression analysis would not create a better result. – ajs410 May 11 '11 at 20:03
0

I was unable to determine what the problem is. I even went to Math Stack Exchange to see if the math types had any input, but all I got was "your residuals aren't randomly distributed, so linear regression is inappropriate".

Ultimately, iterative application of linear regression manages to home in on the true value given enough time. Why the iteration is required remains a mystery.
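
For anyone trying the same thing, the procedure that works boils down to the loop below (a sketch only; measure_slope_ppm and apply_correction_ppm are placeholders standing in for the regression against REF and the floating-point rescaling of the tick count):

    def calibrate(measure_slope_ppm, apply_correction_ppm,
                  threshold_ppm=1.0, max_passes=5):
        # Repeat the regression until the residual slope drops below the
        # resolution of the tick (about 1 ppm in this setup).
        total_ppm = 0.0
        for _ in range(max_passes):
            residual = measure_slope_ppm()   # e.g. 27 on the first pass, 3 on the second
            if abs(residual) < threshold_ppm:
                break
            total_ppm += residual            # corrections add: 27, then 30, ...
            apply_correction_ppm(total_ppm)
        return total_ppm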

ajs410