
I am building a device on the AVR platform. The device will need some timing information, so I was thinking of reimplementing Arduino millis-like functionality (though not exactly like this). However, after doing some back-of-the-envelope calculations (partly based on this post), it started to appear to me that millis eats up at least 5% of the CPU time on a 20 MHz processor, and proportionally more than that on a 16 MHz one:

  • Each millisecond, the timer (Timer0) overflows and triggers an interrupt, which increments millis.
  • An ISR should take about 26 clocks for the pre/post-ISR housekeeping (5 PUSH/POP pairs, plus CLR, IN, and RETI).
  • The ISR body itself should take about 21 clocks (load a 32-bit value, increment it, and store it back to SRAM).
  • This yields almost 50 clocks, or 50 microseconds, each millisecond.
  • In fact, the Arduino ISR is slower than that (as revealed by this post) because it is very careful about keeping millis as accurate as possible, which takes still more CPU cycles.

I don't need millisecond precision, so I am considering implementing centis or even decis instead, to save processor cycles for other things. Is this unreasonable? Are my calculations wrong? It seems like an odd design choice, or am I missing something?
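
For concreteness, here is roughly what I have in mind: a minimal sketch only, assuming an ATmega328-class part at 20 MHz, with Timer1 in CTC mode so Timer0 stays free. The register setup and the centis() name are just illustrative, not the Arduino implementation.

    // Rough sketch of a 10 ms ("centis") tick on an ATmega328-class AVR at 20 MHz.
    // Timer1 runs in CTC mode so Timer0 stays free; all names and values are illustrative.
    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <util/atomic.h>
    #include <stdint.h>

    static volatile uint32_t centi_ticks;    // increments once every 10 ms

    void centis_init(void)
    {
        TCCR1A = 0;                          // CTC mode (WGM13:0 = 0100)
        TCCR1B = (1 << WGM12) | (1 << CS11); // prescaler /8 -> 2.5 MHz timer clock
        OCR1A  = 24999;                      // (24999 + 1) / 2.5 MHz = 10 ms
        TIMSK1 = (1 << OCIE1A);              // enable compare-match A interrupt
        sei();
    }

    ISR(TIMER1_COMPA_vect)
    {
        centi_ticks++;                       // one 32-bit increment per 10 ms
    }

    uint32_t centis(void)
    {
        uint32_t t;
        ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {  // a 32-bit read must not be torn by the ISR
            t = centi_ticks;
        }
        return t;
    }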

angelatlarge
  • Unless you're creating it for end-user programming, forgo having *any* sort of function for this and just do the timing the old-fashioned way. – Ignacio Vazquez-Abrams Oct 14 '14 at 04:58
  • Aren't you running at 20 MHz? Then 50 clock cycles are around 3 µs, not 50. Also, have you checked the generated machine code to see if your numbers are correct? They seem a bit high to me. – Tom L. Oct 14 '14 at 04:59
  • @TomL. Yup, you are right; I forgot about the 20 in front and was treating the CPU speed as 1 MHz. I have not decompiled the Arduino code. What seems high? – angelatlarge Oct 14 '14 at 05:10
  • @IgnacioVazquez-Abrams What is "timing the old-fashioned way"? – angelatlarge Oct 14 '14 at 05:11
  • Figuring out how much time you need, setting the timer, and letting it go. – Ignacio Vazquez-Abrams Oct 14 '14 at 05:15
  • Well, I need to time multiple events which have different timeout intervals, hence millis or an equivalent. – angelatlarge Oct 14 '14 at 05:16
  • Regarding your calculations: are you on an 8-bit or 32-bit device? Also, the optimizer will make some operations quite efficient (maybe some load/store operations can be done in a single cycle). It very much depends on your platform, so you really should check the generated code before making these estimates. – Tom L. Oct 14 '14 at 05:26
  • I find the "millis" function quite a nice idea for a simple scheduler when you have multiple tasks at hand; I've been using a nearly identical implementation for the past 10 years. – Tom L. Oct 14 '14 at 05:27

1 Answer


50 clocks is not 50 µs when running at 16 MHz; it is about 3 µs. So the overhead is roughly 0.3%, not 5%. You can certainly use a coarser tick, but I have never had any issues with running a small ISR at 1 kHz like that.

It might still be a good idea to re-implement it, just so you can add more things to run on each timer tick, or schedule things to run on future ticks. A method I use quite often is to schedule something to run at, say, 1-second intervals by storing the current time plus the interval in a variable; when the current time exceeds that future timestamp, I increment the timestamp by the interval and then run the code. If a tick gets skipped, the current call just gets delayed by one tick. It would be very easy to use this method to run things at 10 ms or 100 ms intervals.
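
As a rough illustration of that scheduling method (just a sketch; ticks_ms(), do_periodic_work(), and the 1-second interval are placeholders rather than any particular library's API):

    // Sketch of the interval-rescheduling idea above. ticks_ms() and
    // do_periodic_work() are hypothetical placeholders for a 1 ms tick source
    // and the task to be run.
    #include <stdint.h>

    #define INTERVAL_MS 1000u                 // run the task once per second

    extern uint32_t ticks_ms(void);           // assumed millis-like tick source
    static uint32_t next_run;                 // timestamp of the next scheduled run

    static void do_periodic_work(void)
    {
        // ... the periodic task goes here ...
    }

    void scheduler_init(void)
    {
        next_run = ticks_ms() + INTERVAL_MS;  // schedule the first run
    }

    void scheduler_poll(void)                 // call this from the main loop
    {
        // Signed difference keeps the comparison correct across 32-bit wrap-around,
        // as long as the interval is much shorter than the counter range.
        if ((int32_t)(ticks_ms() - next_run) >= 0) {
            next_run += INTERVAL_MS;          // a missed tick just delays this run by one tick
            do_periodic_work();
        }
    }

The same pattern scales to several tasks by keeping one next_run timestamp per task.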

alex.forencich