6

I have just started programming on ARM; I had some experience on AVR, but not that much. The question is probably trivial, but there is very little material about ARM on the net... sorry anyway.

I know that for implementing delays on ARM we can use timers or busy-loops, and I have tried both methods. But one of my friends says there is a "delay.h" header file for ARM that contains all the necessary delay functions, just like AVR's delay_ms and so on.

First of all, as stupid as this question may sound: is there really a built-in delay function in ARM programming? Is it in delay.h?

If so, why is the use of timers for implementing delays so popular? Why don't they just use the built-in function?

I'm studying LPC17xx series.

Polynomial
  • 10,562
  • 5
  • 47
  • 88
kasra5004
  • 143
  • 1
  • 3
  • 8
  • 2
    Think of an ARM CPU as a full-blown CPU that you might use for an operating system such as Linux. Now imagine what would happen if you blocked execution on the entire chip whenever you wanted time delay functionality. The whole thing would lock up during the delay. That's why timers are preferred. – Polynomial Sep 01 '13 at 22:26
  • @Polynomial please convert to answer. +1 – Krista K Sep 01 '13 at 22:30
  • @ChrisK Shall do :) – Polynomial Sep 01 '13 at 22:30
  • Which platform are you working in? "NXP Leaflet" or the such? Some of the board families will have forums and a community to help support people with questions. In chipKIT code, I have seen a `for()` loop count to 100 and the comment said it was 10usec delay. – Krista K Sep 01 '13 at 22:33
  • 1
    You seem to be mistaking features of the *processor* for features of a given toolchain or library, which is where you might find a header file. In terms of the processor itself, most current ARM variants have a system timer which can be enabled and used to run software event timers, but its use is optional. – Chris Stratton Sep 02 '13 at 01:06

4 Answers

5

Many compilers and libraries support the ARM family. I wouldn't waste time searching for a single solution defined in a single file name; that is most certainly not "the" solution for "ARM".

As with AVR and most other processors, certainly microcontrollers, you have the choice of blocking in a foreground loop, either using a calibrated loop or polling a timer. You can also use a timer-based interrupt if need be. It depends on what you need the delay for and what else might be going on during it. Depending on the specific chip and the amount of time you want to delay, you might even be able to put the chip to sleep for that period.

If you are using a library for other things like SPI, I2C, etc., for the target processor and toolchain, then you likely have timer routines as well and can just use those. If you are rolling your own (most portable, but also the most work), then look at the timers; in the long run it will be easier to maintain code that is based on a timer than code based on a calibrated (count-to-N) loop.
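A minimal sketch of the timer-polling idea described above; `timer_now()` is a hypothetical stand-in for reading your chip's free-running timer count (for example, a timer configured for a 1 µs tick). The unsigned subtraction works in modular arithmetic, so the comparison stays correct even when the counter rolls over:

```c
#include <stdint.h>

/* Hypothetical: replace with a read of your timer's count register,
 * e.g. the timer counter on a part configured for a 1 us tick. */
extern uint32_t timer_now(void);

/* Busy-wait for 'us' microseconds by polling the timer.
 * (timer_now() - start) is computed in unsigned (modular) arithmetic,
 * so the elapsed-time test remains correct across counter rollover. */
void delay_us(uint32_t us)
{
    uint32_t start = timer_now();
    while ((uint32_t)(timer_now() - start) < us)
        ;   /* spin */
}
```

Because rollover is handled by the arithmetic itself, there is no need for separate wrapped/unwrapped cases in the comparison.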

There are a number of toolchains and libraries tailored to the NXP LPC family in particular if that is the path you wish to take.

In no way, shape, or form is there a single solution you should limit yourself to. Not for ARM, not for AVR, not for any of them.

old_timer
  • 8,203
  • 24
  • 33
3

Sometimes code will be run in a context where nothing else is going on. If the processor sets an I/O bit, then runs a bunch of instructions that take a total of 1ms to execute, and then clears the I/O bit, then the wire controlled by that bit will pulse high for 1ms. Simple delay methods can be useful in such contexts. They are far less useful, however, in contexts where other things may be going on. If while the aforementioned delay was running, an interrupt occurred that required 100us to service, the pin would be pulsed high for 1.1ms rather than 1ms. Worse, if one tries to use such a delay within an interrupt service routine, the main-line code and all lower-priority interrupts will be suspended until such delay is complete.

Generally, the aforementioned approach to implementing delays is appropriate when either (1) one is writing something like a boot-loader that should be able to operate without interrupts, and nothing else will be going on while it's running; or (2) one needs a delay which is sufficiently short that there's not much point trying to find something else to do. For example, if one is using a 32MHz processor to communicate with a device that requires 1.6us of idle time between data bytes, inserting code that spends 52 cycles doing nothing would mean the ARM could be well-positioned to send the next byte about 1.6us later. By contrast, if the ARM tried to find something else to do, it would likely still be busy with that at the end of the 1.6us, and wouldn't send the next byte until some time later.
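The cycle arithmetic above can be sketched as a small helper. This is only the math, not a calibrated delay for any particular core; real loop timing also depends on the pipeline, flash wait states, and the compiler:

```c
#include <stdint.h>

/* Number of CPU cycles needed to cover 'ns' nanoseconds at clock 'hz',
 * rounded up so a delay built on it is never shorter than requested.
 * Example from the answer: a 32 MHz core and a 1.6 us (1600 ns) gap
 * work out to 52 cycles of do-nothing work between bytes. */
static uint32_t cycles_for_ns(uint32_t hz, uint32_t ns)
{
    return (uint32_t)(((uint64_t)hz * ns + 999999999u) / 1000000000u);
}
```

The 64-bit intermediate avoids overflow for any realistic clock frequency, and the rounding-up keeps the delay on the safe side of the device's minimum idle time.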

supercat
  • 45,939
  • 2
  • 84
  • 143
2

Think of an ARM CPU as a full-blown CPU that you might use for an operating system such as Linux. Now imagine what would happen if you blocked execution on the entire chip whenever you wanted time delay functionality. The whole thing would lock up during the delay, leading to a completely unusable experience for any kind of user-interactive system. That's why timers are preferred.

Timers are useful because they avoid the blocking situation. The CPU continues executing code, and jumps back to a handler routine once the timer elapses. This offers a form of asynchronous operation that allows for much more flexible code.

In architectures that don't directly support internal timers, but do support external interrupts, an external timing device can be used. The timer is programmed with a time offset (usually a scalar and multiplier) and an interrupt number. When the external timer triggers the interrupt, the CPU reads an interrupt table to find the interrupt vector, which is a linear address to which execution is transferred in order to handle the timer event.
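The interrupt-driven pattern described above boils down to a tick counter that the timer ISR increments while the main loop keeps running. Here is a hardware-independent sketch; the ISR hookup itself (for example SysTick on a Cortex-M part) is chip-specific and assumed:

```c
#include <stdint.h>

static volatile uint32_t g_ticks;   /* incremented once per timer period */

/* Install this as the timer interrupt handler; the hookup
 * (vector table entry, timer configuration) is chip-specific. */
void timer_tick_isr(void)
{
    g_ticks++;
}

/* Non-blocking check: have 'period' ticks elapsed since '*last'?
 * The main loop can call this and do other work in between, which is
 * exactly the asynchronous behaviour the answer describes. Unsigned
 * subtraction keeps the test correct across counter rollover. */
int period_elapsed(uint32_t *last, uint32_t period)
{
    if ((uint32_t)(g_ticks - *last) >= period) {
        *last += period;
        return 1;
    }
    return 0;
}
```

A main loop would typically poll `period_elapsed()` once per iteration and run its periodic work only when it returns 1, instead of blocking in a delay.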

Polynomial
  • 10,562
  • 5
  • 47
  • 88
  • Thank you very much for the nice explanation. Still, can we use functions such as delay_ms in ARM too? As inefficient as they are, do they exist in the official syntax? – kasra5004 Sep 01 '13 at 22:44
  • Depends on what you mean by "official syntax". ARM is a CPU with an instruction set. I don't know what libraries you're running on it, and the only experience I have with ARM is with the raw assembly. The way I'd do it is via the System Control Coprocessor (CP15), via the [Cycle Count Register](http://infocenter.arm.com/help/topic/com.arm.doc.ddi0333h/Bihcgfcf.html). Essentially, you configure the Performance Monitor Control Register (PMCR) to make the CCR tick on a 64-cycle scale, then loop until a certain number of ticks pass, based on the clock frequency of the processor. – Polynomial Sep 01 '13 at 23:02
  • (continued from last comment) So, if your clock frequency is 40MHz, and the CCR ticks every 64 cycles, that's 625,000 ticks per second. That gives you a resolution of 1.6 microseconds, and a maximum delay of 3,435.97 seconds (since CCR is a 32-bit signed register). – Polynomial Sep 01 '13 at 23:06
  • 4
    ARM CPUs cover a wide range of applications from tiny devices which directly compete with 8-bit micros up through high end devices appropriate for lightweight laptops and power-efficient server clusters. And besides that, both busy waiting and timer-based delays have their roles, even in large systems - for example, when an operating system kernel needs to wait a tiny amount of time for hardware, it may busy wait rather than using a timer which does not have fine enough granularity. – Chris Stratton Sep 02 '13 at 01:03
  • 2
    What Chris said: "ARM" is too wide a concept as to give us a useful clue of what you're doing. Are you using a Cortex-M0 running at 24 MHz? Or an A15 running with four cores at 1.8 GHz each? What else is going on on this system? Are you using an OS or just banging the metal? The answer to those questions will greatly change the recommended answer to your question. – Jon Watte Sep 02 '13 at 04:55
  • -1 because what Chris and Jon said - lots of things are "full-blown CPU"s. I could argue that even a tiny 6-pin PIC with ~200 bytes of program memory is that (please define "CPU" in this context). Busy-waiting **very much** has its place in every CPU architecture, albeit almost exclusively in bare-metal programming. However, the great majority of ARM devices on the market are running bare-metal code, so if any assumptions should be made, it's that the OP is asking about *that*. – Connor Wolf Jun 03 '14 at 07:41
1

It is of course possible to use an ARM processor to do a simple delay; see my code below. Depending on what else you need your processor to do, however, it may not be the best solution. This code runs on an LPC2000-series processor from NXP.

/**********************************************************************/
/* Timer.c                                                            */
/* Implements a delay:                                                */
/*   TMR0 is a microsecond timer capable of timing events of up to    */
/*   4294.967295 seconds (1 hour 11 min 36.967295 sec)                */
/*   includes both delay and stop watch functions                     */
/*       void Delay(unsigned int delay); delay in microseconds        */
/*   Does not use interrupts.  Note the timer may roll over during    */
/*   the delay, so the code checks for this                           */
/*   Note: delay is a minimum delay, as interrupts can slow it down   */
/**********************************************************************/
/*
 * Note: Assumes TMR0 is already set up with a 1 microsecond tick
 */

#include <LPC2103.h>

#include "Timer.h"

void Delay(unsigned int delay){
    unsigned int start;
    unsigned int stop;
    unsigned int now;

    start = T0TC;
    stop = start + delay + 1;           /* +1 because the call could arrive
                                       just before T0TC changes */
    if (stop > start){                /* usually is */
        do{
            now = T0TC;
        }while (now < stop && now >= start);  /* timer roll-over also ends the wait */
    }else{                                    /* Need timer to roll over */
        do{
            now = T0TC;
        }while (now < stop || now >= start);  
    }
}

Even if you are happy to just sit in a loop waiting for a delay, a timer may be better, as it's difficult to estimate delay times with a simple loop such as

for(i=1000; i >0; i--){
    ;
}

A good optimising compiler may simply optimise this loop away entirely.

Even if it does not get optimised away, it's difficult to know exactly how long this delay will take, because the NXP LPC-series processors have memory accelerators and the processor is pipelined.
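One common workaround, sketched here as an illustration rather than a calibrated routine: qualifying the loop counter as `volatile` forces the compiler to perform every iteration, so the loop survives optimisation, though its wall-clock duration remains unpredictable for the reasons above.

```c
/* The volatile qualifier forces the compiler to actually perform each
 * read and write of i, so the loop cannot be optimised away. The real
 * duration still depends on clock speed, wait states, and pipelining. */
void crude_delay(void)
{
    for (volatile int i = 1000; i > 0; i--)
        ;
}
```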

Warren Hill
  • 4,780
  • 20
  • 32
  • Just for the record: For the loop to work with optimization turned on it must be `for(volatile int i=1000; i >0; i--) ;` – Morty Apr 02 '18 at 15:42