
Substantial edit--note that David Kessner's answer was written in response to the original posting; view the edit history to see what he was responding to

From what I've read of digital design, there is a very strong tendency toward strictly synchronous circuits in which the only 'sequential' subsystems are flip-flops sharing a common clock. Signals that cross between clock domains almost always require double synchronizers.

I've seen a number of articles suggesting that fully asynchronous designs are very hard and prone to unforeseen pitfalls. I can certainly appreciate that if the inputs to any kind of latching element have no specified timing relationship, it's mathematically impossible to absolutely guarantee anything about the output, and that even making odd behaviors unlikely enough that, for practical purposes, they don't happen is often difficult without a double synchronizer.

A number of blogs also talk about the evils of gated clocks, and suggest that it is much better to feed an ungated clock to a latch along with a "latch enable" signal than to gate the clock. Gated clocks not only require great care in their implementation to avoid 'runt' clock pulses, but unless extreme care is taken to balance out delays, circuits operated from separately gated clocks must be viewed as being in their own clock domains.

What I haven't seen discussed much is the notion of circuits whose sequential subsystems aren't all triggered by the same clock, but are guaranteed to be stable within a certain time of a clock edge. If one is trying to implement something like an N-bit event counter, having many flip-flops all driven by a common clock requires, at minimum, charging and discharging the gates of 2N transistors on every clock transition. If one were instead to use a 'ripple' arrangement for the first few stages, one could substantially reduce the frequency of the signals reaching the upper stages, and thus the current consumption.
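To make the power argument concrete, here is a rough back-of-the-envelope model (my own sketch, not part of the original question) that counts how many flip-flop clock events each arrangement sees for the same number of input edges; exact transistor counts depend on the cell library, so treat the figures as relative.

```python
# Illustrative model only: compare flip-flop clock events in a fully
# synchronous N-bit counter against a ripple counter, for the same number
# of input edges.

def synchronous_clock_events(n_bits, n_edges):
    # Every flip-flop sees every clock edge, whether or not its state changes.
    return n_bits * n_edges

def ripple_clock_events(n_bits, n_edges):
    # Stage k is clocked by the output of stage k-1, so it sees roughly one
    # edge per 2**k input edges.
    return sum(n_edges // (2 ** k) for k in range(n_bits))

n_bits, n_edges = 16, 1_000_000
print(synchronous_clock_events(n_bits, n_edges))  # 16000000
print(ripple_clock_events(n_bits, n_edges))       # ~2000000
```

The ripple arrangement's total activity converges to roughly twice the number of input edges regardless of counter width, which is where the power saving comes from.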

I've seen a few processors that feature an asynchronous prescaler stage on the input of a counter, but none of the prescalers I've seen allow the processor to read them. Further, nearly all of the chips I've seen with such prescalers make it impossible to write the timer value without clearing the prescaler. My suspicion is that on many such devices the prescaler does not actually clock the main counter, but instead determines, on any given cycle of the system clock, whether or not the counter should be advanced. While some such devices provide a mode in which one of the counters may be set to "fully asynchronous" operation, allowing it to run during sleep, it tends to be difficult to avoid gaining or losing counts if one needs to use the timers for anything other than a full-period overflow and have them count consistently when switching between waking and sleeping.
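As a purely hypothetical sketch of that suspected arrangement (mine, not taken from any datasheet), the prescaler would run from the system clock and merely produce a one-cycle enable, so the main counter never leaves the system clock domain:

```python
# Hypothetical model: the prescaler divides the system clock down to an
# "advance" enable; the main counter is clocked by the system clock too and
# only increments when the enable is asserted.

class PrescaledTimer:
    def __init__(self, prescale):
        self.prescale = prescale
        self.prescaler = 0
        self.counter = 0

    def system_clock_tick(self):
        self.prescaler += 1
        enable = self.prescaler >= self.prescale
        if enable:
            self.prescaler = 0
            self.counter += 1   # counter advanced by an enable, not re-clocked
        return enable

timer = PrescaledTimer(prescale=8)
enables = sum(timer.system_clock_tick() for _ in range(64))
print(timer.counter, enables)   # 8 8
```

In such a scheme the main counter could be read or written like any other register in the system clock domain.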

It would seem that some of these problems could be eased by the use of a Gray-code counter, and that the implementation of such a counter could be eased by the "semi-synchronous" design style described above. It's possible to design a relatively compact and fast asynchronous bidirectional quadrature-input Gray-code counter which will tolerate metastability on either input as long as the other is stable (while one input is metastable, one output will be undefined; provided the metastable input stabilizes before the other input has a transition, the output will resolve itself to the proper state). The outputs would not be synchronous to any particular clock, but if the inputs change on a particular clock edge, their relationship to the outputs would be predictable. Has anyone ever heard of such a circuit being used?
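As a behavioural sketch of the counting rule only (my own hypothetical model; it says nothing about the metastability tolerance, which depends on the actual latch design), the idea might be modelled like this:

```python
# Hypothetical behavioural model of a bidirectional quadrature-input counter
# with a Gray-coded output.  Exactly one input is expected to change per step.

class QuadratureGrayCounter:
    def __init__(self, width=8):
        self.width = width
        self.a = 0
        self.b = 0
        self.count = 0

    def step(self, a, b):
        if a != self.a and b != self.b:
            raise ValueError("both inputs changed at once; behaviour undefined")
        if (a, b) != (self.a, self.b):
            # Standard quadrature rule: old A xor new B gives the direction.
            self.count += 1 if (self.a ^ b) else -1
        self.a, self.b = a, b
        return self.gray_output()

    def gray_output(self):
        n = self.count & ((1 << self.width) - 1)
        return n ^ (n >> 1)   # binary-to-Gray conversion of the masked count

# The forward sequence 00 -> 01 -> 11 -> 10 advances the count once per edge.
ctr = QuadratureGrayCounter()
for a, b in [(0, 1), (1, 1), (1, 0), (0, 0)]:
    print(ctr.step(a, b))   # Gray codes of 1, 2, 3, 4 -> 1, 3, 2, 6
```

Note that this model keeps the count in binary internally and converts on output; the actual circuit described above would presumably hold its state in Gray code directly.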

  • These are good questions, but the stackexchange format doesn't handle this well. I think I count about 7 questions, and honestly they're not closely related. – W5VO Sep 04 '11 at 02:39
  • @supercat, I have to agree; do you think you could constrain this down to a more precise question, ask any other questions you already have for sure, and ask further questions as answers bring them forward/give them root? – Kortuk Sep 04 '11 at 02:44
  • @Kortuk: I just rewrote the thing; I'm not sure if I should adjust the title, since the focus is more on sync-vs-async circuits in general, rather than just counters. – supercat Sep 04 '11 at 22:08
  • @W5VO: I've tried to refocus it a bit, though I guess it could probably still use further trimming. – supercat Sep 04 '11 at 22:56
  • @supercat, thank you for taking the time and advice. – Kortuk Sep 04 '11 at 23:23
  • @supercat - reads better now, thanks. Definitely some good discussion point there, and the kind of stuff I am *very* interested in myself (FPGA wise, but I would love to be involved in designing an ASIC someday..) FWIW I think a great place to discuss this would be over at EDAboard in the ASIC or PLD forum, I have picked up quite a bit there. The EE times PLD newsletter (by "Max the Magnificent") is excellent too. Example [here](http://www.eetimes.com/design/eda-design/4218444/Latches-and-timing-closure--a-mixed-bag?cid=NL_ProgrammableLogic&Ecosystem=programmable-logic) – Oli Glaser Sep 05 '11 at 16:02
  • @Oli Glaser: Thanks. A related notion I've wondered about is the pros and cons of having a latching element which e.g. samples on a rising clock edge and outputs on a falling clock edge. Such an approach is common with things like SPI, but I don't know about the use of such things within chips. – supercat Sep 05 '11 at 23:59

1 Answer


Wow, your question isn't terribly focused, and it's not obvious what you are really asking for. But let me give this one a try. Sorry if I didn't get it quite right.

Ripple counter vs. normal synchronous counter: Who says that people don't use ripple counters? People use whatever they have available that works best. In FPGAs, nobody uses a ripple counter because the logic blocks do a sync counter so much better than a ripple counter. But if you're designing a custom chip, then a ripple counter can be advantageous when it comes to power consumption and logic size. It would not surprise me at all if some people use ripple counters in their ASICs. Sync counters would still be better for speed and simplicity of timing.

Gray counter vs. binary counter: People do use Gray counters in ASICs and custom chips. In FPGAs, where binary counters are faster, people still use Gray counters when the count value has to cross clock domains, such as in FIFOs.
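For reference, a quick sketch of the usual binary/Gray conversions (my own example, not from the answer) shows why Gray counts are safe to sample across a clock domain boundary: consecutive values differ in exactly one bit, so an asynchronous sampler sees either the old value or the new one, never a mix.

```python
# Illustrative binary <-> Gray conversions plus a check of the one-bit-change
# property that makes Gray counters useful in cross-clock-domain FIFOs.

def binary_to_gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

for i in range(16):
    assert gray_to_binary(binary_to_gray(i)) == i
    # Consecutive Gray codes (including the wrap from 15 back to 0 for a
    # 4-bit counter) differ in exactly one bit position.
    diff = binary_to_gray(i) ^ binary_to_gray((i + 1) % 16)
    assert bin(diff).count("1") == 1
print("ok")
```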

Multi-phase clocks: These are certainly used in chip design. There are reasons why the PLLs in FPGAs can often output 0, 90, 180, and 270 degree phase-shifted versions of the original clock. But as clock frequencies go up, using multiple clocks gets harder due to clock skew and clock distribution issues. It's not impossible at high frequencies, but it just isn't done as much.

Sync vs. Async: Sync circuits are not just easier to simulate but easier to design and easier to guarantee that they work correctly. Verification and timing analysis tools are difficult-to-impossible to use with async circuits.

MCU counter circuit: Do you KNOW that there are no MCUs that do it that way? If one did, how could you tell? Maybe the prescalers on the timer are ripple counters. Maybe the timer itself is a Gray-coded counter and reading/writing the registers automatically converts it to/from binary. My point is this: the guys who design super-low-power MCUs (like the MSP430) use every trick in the book to reduce power consumption. Many of those tricks, like using ripple counters and Gray code where appropriate, are completely invisible to people like you and me. They can be, and probably are, using those tricks plus a couple of hundred other tricks that you haven't thought of yet.

One thing that you haven't mentioned is the use of completely async circuits. This is where all of your talk about clocks eventually goes when taken to its logical conclusion. There have been companies that have tried to build large-scale CPUs that are completely async, including one group that tried to bring an async ARM to market. The benefits are amazing: super-low power, faster processing, and less EMI among them. But the disadvantages are more amazing yet. The main one is that the complexity of designing such a chip is huge and not economically viable today. A secondary problem is that the transistor count roughly doubles compared to an equivalent sync chip.

Even so, there are CPUs on the market today that use async logic in some of their blocks, like the FPU, but nobody uses it on a large scale.

  • +1 for a very good effort at answering a very vague and difficult (multi)question. – Oli Glaser Sep 04 '11 at 17:45
  • Last point first: I understand that fully asynchronous designs are to be avoided when practical, because any time one has a latching circuit that combines signals with unknown timing relationships, it's very difficult to make *any* guarantees about behavior. I guess my main interest is in circuits in which all signals change in response to something derived from a clock edge, rather than being changed by the edge directly; one example of such a circuit would be an asynchronous quadrature-input Gray-code counter which is simpler than a fully-synchronous Gray-code counter. – supercat Sep 04 '11 at 22:21
  • @supercat Yea, people do that in ASICs and custom chips. The things written in books and articles, and taught in school, are sometimes a long way away from what is done in reality. –  Sep 04 '11 at 22:27
  • I know that multi-phase clocks were very common in older designs (a couple I've studied in detail are the MOS 6502, the Atari TIA, etc.) and continue to be used in things like the PIC. The articles I'd read, though, seem to strongly advocate using edge-triggered flip flops for everything, all run from a common clock. – supercat Sep 04 '11 at 22:30
  • MPU counter circuits: Many CPUs offer asynchronous prescalers for some of their counters, but they're limited (see edit above). As an embedded systems engineer who knows something about VLSI internals and has design intuition, but probably not enough to get a job in VLSI design, I find it frustrating that a lot of chips seem to spend hardware on features that could be emulated nicely in software, at least if the hardware included some other features that should be cheaper to implement. One of my big desires would be a "sleep-agnostic" timing system, which could... – supercat Sep 04 '11 at 22:36
  • @supercat It is my belief that 95% of the hardware out there was not designed with software in mind. The old 16550 UART is a great example of that. With some simple tweaks it could have made SW a lot more efficient. –  Sep 04 '11 at 22:45
  • ...easily time intervals from 1/65536 of a second to hours or days without regard for waking or sleeping. Better still would be a battery-backed counter which could keep time for years. Very few CPUs seem able to allow precise wakeup; I've yet to see a battery-backed clock system that allows such precision. I'm not quite clear why few real-time clock subsystems provide a nice way of reading finer than one-second resolution, and some don't even promise that they can be read without dropping a count. – supercat Sep 04 '11 at 22:50
  • Many hardware designs indeed seem to have been done without consulting programmers. I'm pretty good at working around limitations in hardware designs, but that doesn't mean I like doing it. I'd love to chat about that subject, but it would be getting more than a little off-topic here. – supercat Sep 04 '11 at 22:55