12

On the Atmel SAM-D21 series microcontrollers, many peripherals use a clock which is asynchronous to the main CPU clock, and accesses to those peripherals must go through synchronization logic; on peripherals whose clock is slow relative to the CPU clock, this can add very large delays. For example, if the RTC is configured to use a 1024 Hz clock (as appears to be the design intention) and the CPU is running at 48 MHz, reading the "current time" register will cause the bus logic to insert over 200,000 wait states (a minimum of five cycles of the 1024 Hz clock). Although it's possible to have the CPU issue a read request, execute some other unrelated code, and return 200,000+ cycles later to fetch the time, there doesn't seem to be any way to actually read the time any faster. What is the synchronization logic doing that would take so long (the delay is specified as being at least 5·P_GCLK + 2·P_APB and at most 6·P_GCLK + 3·P_APB, where P_GCLK and P_APB are the periods of the peripheral's generic clock and the APB bus clock)?
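For concreteness, here is roughly what the two access patterns above look like in code, the blocking read and the request-then-fetch-later approach. The register and macro names are taken from the Atmel CMSIS device headers as I remember them (including the 0x10 offset for COUNT), so treat this as a sketch rather than verified code:

```c
/* Minimal sketch, assuming the Atmel CMSIS headers (sam.h) for the SAM-D21,
 * with the RTC in 32-bit counter mode (MODE0) clocked from a 1024 Hz GCLK. */
#include <stdbool.h>
#include "sam.h"

/* Blocking read: COUNT is read-synchronized, so the APB bridge stalls the
 * bus for roughly 5-6 RTC-clock periods -- over 200,000 CPU cycles at
 * 48 MHz with a 1024 Hz RTC clock. */
uint32_t rtc_count_blocking(void)
{
    return RTC->MODE0.COUNT.reg;
}

/* Non-blocking alternative: request the read, go do unrelated work, and
 * come back for the value once the synchronizer has finished. */
void rtc_count_request(void)
{
    RTC->MODE0.READREQ.reg = RTC_READREQ_RREQ | RTC_READREQ_ADDR(0x10); /* 0x10 = COUNT offset (assumed) */
}

bool rtc_count_ready(void)
{
    return !RTC->MODE0.STATUS.bit.SYNCBUSY;
}

uint32_t rtc_count_fetch(void)
{
    return RTC->MODE0.COUNT.reg; /* no stall once SYNCBUSY has cleared */
}
```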

By my understanding of synchronization, a single-bit synchronizing circuit will delay a signal by 2-3 cycles of the destination clock; synchronizing a multi-bit quantity is a little harder, but there are a variety of approaches which can guarantee reliable behavior within five cycles of the destination clock if it's faster than the source clock, and only a few cycles more if it's not. What would the Atmel SAM-D21 be doing that would require six cycles in the source clock domain for synchronization, and what factors would favor a design whose synchronization delays are long enough to necessitate a "synchronization done" interrupt, versus one which keeps the delays short enough to render such an interrupt unnecessary?

supercat
  • 45,939
  • 2
  • 84
  • 143
  • 2
    Thank you for this question. It made me finally understand the issue at hand. I came here because I couldn't understand why clearing the Watchdog Timer (WDT) would take nearly 5 milliseconds (a tremendous amount of time) on the SAMD20/21. Now I know it is by hardware design, not an error of mine. (The WDT is clocked at 1024 Hz, which is the only sensible option.) Now I can at least deal with it accordingly. – T-Bull Dec 18 '16 at 20:04
  • 2
    @T-Bull: The really fun thing about the watchdog on those parts is that it is disabled between the time software issues the reset command and the time the command passes through the synchronizer. If the device goes to sleep during that interval, the watchdog won't run unless or until something *else* wakes up the part. – supercat Dec 18 '16 at 23:04
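To put the watchdog comment above in concrete terms: with the WDT clocked at 1024 Hz, five to six synchronization cycles of that clock work out to roughly 5-6 ms regardless of CPU speed. A minimal sketch of a watchdog clear with the usual synchronization wait, again assuming the Atmel CMSIS header names rather than quoting verified code:

```c
/* Sketch only: clearing the SAM-D21 watchdog and waiting for the write to
 * cross into the 1024 Hz WDT clock domain (CMSIS names assumed). */
#include "sam.h"

void wdt_clear(void)
{
    /* Write the key value that resets the watchdog counter. */
    WDT->CLEAR.reg = WDT_CLEAR_CLEAR_KEY;

    /* The write must be synchronized into the WDT's clock domain; at
     * 1024 Hz that takes on the order of 5 ms, matching the observation
     * in the comment above. */
    while (WDT->STATUS.bit.SYNCBUSY) {
        /* spin, or go do something else and check back later */
    }
}
```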

1 Answer

2

That's a different way of doing things from what I'm used to; the architectures I work with keep registers either on the CPU clock or on at least half of it, so you write a register and it's ready right away. Perhaps they're doing it this way for power savings? If the peripheral registers live in their own separate, very slow clock domain, maybe the part doesn't have to wake up and run the main oscillator or CPU clock, but can keep updating values in the peripheral.

If that's the case, then you could write a register in your super-slow peripheral block, then disable the power island for the whole CPU (or clock-gate it), let the slow synchronizer take the value in at its own pace, and have it interrupt the CPU to bring it back out of sleep when it's done.
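If I'm reading the datasheet right, the RTC on these parts does expose a synchronization-ready (SYNCRDY) interrupt, so a pattern along those lines might look like the sketch below. The macro names and the interrupt plumbing are assumptions based on my recollection of the Atmel CMSIS headers, not a tested implementation:

```c
/* Rough sketch of "write, then sleep until the synchronizer is done",
 * using the RTC's SYNCRDY interrupt. Register and macro names are
 * assumed from the Atmel CMSIS device headers (sam.h). */
#include "sam.h"

void RTC_Handler(void)
{
    /* Acknowledge the synchronization-ready interrupt. */
    RTC->MODE0.INTFLAG.reg = RTC_MODE0_INTFLAG_SYNCRDY;
}

void rtc_write_count_then_sleep(uint32_t value)
{
    RTC->MODE0.INTENSET.reg = RTC_MODE0_INTENSET_SYNCRDY;
    NVIC_EnableIRQ(RTC_IRQn);

    RTC->MODE0.COUNT.reg = value;          /* kicks off write-synchronization */

    /* Sleep for the ~5 ms the synchronizer needs instead of spinning.
     * Interrupts are masked around the check so a SYNCRDY that fires
     * between the test and the WFI still wakes us (WFI wakes on a
     * pending interrupt even when PRIMASK is set). */
    for (;;) {
        __disable_irq();
        if (!RTC->MODE0.STATUS.bit.SYNCBUSY) {
            __enable_irq();
            break;
        }
        __WFI();
        __enable_irq();                    /* let the pending handler run */
    }

    RTC->MODE0.INTENCLR.reg = RTC_MODE0_INTENCLR_SYNCRDY;
}
```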

Alternatively, it could let you cram the maximum number of instructions into your awake time, instead of spinning for six slow-clock cycles after every write.

As to why they use so many synchronization cycles: it could be paranoia, or they could be meeting some high-reliability standard for one of their customers. I can't say for sure, but I've seen customers with demands like "every RAM shall have ECC and be preloaded to a set value", etc.

I guess that's not a definitive answer but those are my thoughts after looking through the datasheet a bit.

Some Hardware Guy
  • 15,815
  • 1
  • 31
  • 44
  • 2
    The "six cycles" is six cycles of the peripheral clock; if one sets e.g. the real-time clock module to be fed at 1024Hz (which seems to be Atmel's recommendation) and the CPU clock is at 48MHz, six cycles of the peripheral clock will be 281,250 cycles of the CPU clock which is an awfully long time to be spinning, especially if there are any interrupts that need servicing. Spinning is only moderately horrible if the slow clock is 8Mhz (meaning a 36-CPU-cycle spin) but a hard fault would be better than a spin on a 1024Hz clock. – supercat Nov 23 '15 at 17:07