
In connection with the question CMOS gate logic switching time based on input vectors, one of the answers mentioned hazards. Although that question was about switching time, in the end I did not find an answer to the following.

Suppose we have several one-bit adders chained together, so that each carry-out provides the carry-in of the next stage. When adding an all-ones number and 1 (with no special handling of the carry), the first carry is set after some delay, which forces the next bit to be recomputed after some further delay, and so on. In a 64-bit adder, the delay at the end of the chain can be considerable. Depending on the ratio of the transfer-to-the-next-bit delay to the switching time, the new carry-in inputs may arrive at different times relative to the arrival of the two summand bits. I expect this affects the operation: it may cause not only glitches but also unwanted switching. That is, it may lengthen the switching time if the changed carry-in arrives in an early phase of switching, and may cause additional switching if it arrives in a later phase.

What are the timing relations here (I mean the wiring delay to the next adder(s) versus the switching time)? If we regard this operation as a "computation", the arrival of any input operand starts the computation, and the arrival of the next operand starts the computation again, in parallel with the previous one. In biology there exists a "refractory" period, which limits such parallel operation; in electronics, as far as I know, no similar limitation exists. How is this unwanted parallelism handled?
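To make the scenario concrete, here is a minimal bit-level sketch (my own illustration, not part of the original question) of a ripple-carry addition that counts how many stages the carry actually has to ripple through, assuming one unit of stage delay per full adder:

```python
# Toy ripple-carry adder model: compute the sum bit by bit and track how far
# the carry chain extends, i.e. how many stage delays the result must wait for.

def ripple_add(a, b, width=64):
    """Add two integers bit by bit; return (sum mod 2**width, carry-chain depth)."""
    carry = 0
    depth = 0      # deepest stage that had to wait for, or produce, a carry
    result = 0
    for i in range(width):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        s = ai ^ bi ^ carry
        new_carry = (ai & bi) | (carry & (ai ^ bi))
        if new_carry or carry:
            depth = i + 1
        carry = new_carry
        result |= s << i
    return result, depth

# Worst case from the question: all-ones + 1 ripples through every stage.
print(ripple_add((1 << 64) - 1, 1))   # (0, 64): sum wraps, 64-stage ripple
```

Under this idealized model, the last sum bit is only valid after 64 stage delays, which is the delay the question asks about.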

Addendum: in connection with this question I found, at https://www.sciencedirect.com/topics/engineering/dynamic-power-dissipation,

some definition-like terms and a kind of answer to my question: "useful data switching activity (UDSA) and redundant spurious switching activity (RSSA). RSSA or glitching can be an important source of signal activity. Glitching refers to spurious and unwanted transitions that occur before a node settles down to its final steady-state value, due to partially resolved functions. Glitching can cause a node to make several power-consuming transitions."
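The UDSA/RSSA split quoted above can be illustrated with a small waveform counter (my sketch, with a made-up node trace): only the net change from a node's initial to final value is "useful"; every extra toggle is redundant glitching.

```python
# Count the switching activity of one node over one clock cycle, splitting it
# into useful (UDSA) and redundant (RSSA) transitions as defined above.

def count_activity(waveform):
    """waveform: sequence of logic values the node takes during one cycle."""
    transitions = sum(1 for a, b in zip(waveform, waveform[1:]) if a != b)
    useful = 1 if waveform[0] != waveform[-1] else 0   # UDSA: net change only
    redundant = transitions - useful                   # RSSA: glitching
    return transitions, useful, redundant

# Node glitches 0 -> 1 -> 0 -> 1 before settling: 3 transitions, only 1 useful.
print(count_activity([0, 1, 0, 1]))   # (3, 1, 2)
```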

katang
  • You describe a flawed circuit and you will get flawed performance. Use a synchronous system. – Andy aka Jul 31 '21 at 14:02
  • I am interested theoretically in the origins of the performance loss of a synchronous system, introduced by the synchronization. – katang Jul 31 '21 at 14:25
  • @Andyaka Even in a synchronous system this phenomenon occurs in the combinational logic between registers, because the delay paths are not all equal. – Elliot Alderson Jul 31 '21 at 14:34
  • I don't know what you mean by "performance loss". These logic hazards increase power consumption but they don't cause the logic to run slower. – Elliot Alderson Jul 31 '21 at 14:37
  • The synchronization clock period must be longer because of the longer switching time, so the operating frequency will be lower, i.e. we lose performance. Yes, this phenomenon occurs everywhere. The longer the word length, the more significant the effect. – katang Jul 31 '21 at 17:52
  • Note that the source you linked is from 2005, and cites sources that are a decade older than that. The paper itself is behind a paywall, so it's not clear if it provides evidence that RSSA "increases power consumption considerably". – Elliot Alderson Aug 05 '21 at 17:35
  • In the last few years the power consumption (relative to payload computing) has not changed considerably. Essentially the same effect (clock skew) is analyzed in the book Advanced Electronics Materials and Novel Devices, ISBN 978-3-527-40927-3, from 2012, where it is claimed that in modern processors about 30% of the power consumption goes to clock distribution alone. My guess is similar for RSSA. I would be glad, however, if you could provide a newer source, especially one that deals with power consumption as a function of the ratio of switching time to gate delay. – katang Aug 06 '21 at 18:55

2 Answers


In any non-trivial combinational logic block there will be multiple signal paths from a given input bit to a given output bit, and these paths will have different propagation delay times. Therefore, it is inevitable in typical designs that the output bits may change more than once before they reach their final value. This is true even if all of the input bits change at the same time. Usually, this is just an accepted characteristic of digital logic. It increases power consumption a bit, but it doesn't cause the system to be slower.
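A toy time-stepped simulation (my illustration, not from the answer; the two-path circuit y = a AND (NOT a) and its one-unit inverter delay are assumed) shows how unequal path delays make an output change more than once. Logically y is always 0, yet the skewed paths let it pulse high for one delay unit after a rises — a classic static-0 hazard:

```python
# Simulate y = a AND (NOT a) where the inverting path lags by `inv_delay`
# time steps. The direct path and the delayed path momentarily disagree.

def simulate(a_trace, inv_delay=1):
    y_trace = []
    for t, a in enumerate(a_trace):
        a_delayed = a_trace[max(t - inv_delay, 0)]  # inverter path sees old a
        not_a = 1 - a_delayed
        y_trace.append(a & not_a)
    return y_trace

# a rises at t=2; y glitches high at t=2 even though AND(a, NOT a) == 0.
print(simulate([0, 0, 1, 1, 1]))   # [0, 0, 1, 0, 0]
```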

We can prevent the early invalid outputs from propagating any further by adding registers in between combinational logic blocks, which is often called pipelining. We configure the clock signals to the registers so that we are certain that the output of every combinational logic block will be final and valid before we load it into a register.
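As a rough sketch of this registering idea (an idealized edge-triggered register; the trace values are hypothetical), the clock period is chosen longer than the worst-case settling time, so mid-cycle glitches are never captured:

```python
# An idealized edge-triggered register: sample the combinational output once
# per clock period. Glitches between edges never propagate to the output.

def register(comb_trace, period):
    """Sample comb_trace at every `period`-th time step and hold the value."""
    q = []
    held = comb_trace[0]
    for t, v in enumerate(comb_trace):
        if t % period == 0:   # rising clock edge: capture the settled value
            held = v
        q.append(held)
    return q

# The combinational node glitches at t=1 and t=3 but settles to 1 well before
# the next edge at t=5, so the register output changes exactly once.
comb = [0, 1, 0, 1, 1, 1, 1, 1, 1, 1]
print(register(comb, 5))   # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```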

There are techniques for reducing the glitches, such as using one-hot codes, but these techniques come with their own costs and are only used in special cases.
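A quick illustration of why one-hot codes help (my example, not from the answer): a binary counter can flip several bits at once, each on a path of different delay, while a one-hot encoding flips exactly two bits per state change regardless of width.

```python
# Compare bit flips per state transition for binary vs. one-hot encodings.

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

binary_flips = [hamming(i, i + 1) for i in range(7)]
onehot_flips = [hamming(1 << i, 1 << (i + 1)) for i in range(7)]

print(binary_flips)   # [1, 2, 1, 3, 1, 2, 1]: three bits flip at 3 -> 4
print(onehot_flips)   # [2, 2, 2, 2, 2, 2, 2]: always exactly two
```

The price is width: a one-hot state machine needs one flip-flop per state, which is why the technique is reserved for special cases.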

Elliot Alderson
  • The only claim I dispute: "It increases power consumption a bit". That is certainly true if you consider one single bit. If you have chained bits, the effect can grow up to ("a little bit")^(number of bits). My guess is that it increases power consumption considerably, for example in the adder I mentioned. The motivation behind my original question was to find out how much. – katang Jul 31 '21 at 18:02
  • Power consumption in CMOS is composed of several elements. Leakage current has become a major source of power consumption in advanced circuits. Power consumption due to switching is proportional to the switched capacitance and the frequency of switching. These glitches in combinational logic usually appear on small, local nets with low capacitance. The real power hogs are the clock networks and the long internal buses. You can guess if you want, but you should run some actual analysis. – Elliot Alderson Jul 31 '21 at 18:44

Hazards in logic are also called metastable or race conditions: two or more inputs changing to opposite states at almost the same time in an "OR/NOR" gate. The output might produce a "glitch" during the time when both are low.

Similarly, for an "AND/NAND" gate, a glitch occurs on the transition when both inputs appear to be high.

The solution is to choose the same or an inverted clock, at a time when the inputs are stable, to synchronize and latch a valid output in a FF or register.

Diode OR logic is similar. In single-diode high-speed switching circuits with DC-DC converters, the diode conducts after a single transistor turns off; the reverse-recovery time is similar to a "refractory period", during which the diode cannot change state fast enough, due to stored charge, to go from reverse to forward bias.

Let me be perfectly clear.

@ElliotAlderson A glitch is the outcome of a 0-hazard or 1-hazard, arising from input race or metastable conditions. "Meta" means in-between states; a race is a timing factor between transitions that produces an unexpected output glitch.

This may occur due to chip, temperature, and supply tolerances if the metastable condition exists.

Tony Stewart EE75
  • Metastability is something entirely different, not to be confused with other types of hazards, glitches and race conditions –  Jul 31 '21 at 14:21
  • Your answer, that maybe I need to use an inverted clock, implies that the switching time may be prolonged. Could you please add some (maybe rough) estimation of the timing relations? (I am interested in the fundamental nature of the timing, rather than in a solution needed in practical design.) And what about the additional energy consumption of the glitches? (I mean that mainly the switchings are responsible for power consumption.) A URL would also be OK. – katang Jul 31 '21 at 14:22
  • Metastability is not the same as a hazard or glitch. – Elliot Alderson Jul 31 '21 at 14:32
  • Is it clear now? – Tony Stewart EE75 Jul 31 '21 at 15:34
  • I will go along with the idea that metastability can cause glitches, but I strongly disagree with "Hazards in logic are also called metastable". Metastability occurs in storage elements while hazards occur in combinational logic. – Elliot Alderson Jul 31 '21 at 18:47
  • According to Kleene's logic it is just a third logic state, like X ("don't care"), but called M in a combinational Karnaugh map. It can be stored and propagated by registers. FYI @ElliotAlderson, p. 58 of https://www.mpi-inf.mpg.de/fileadmin/inf/d1/teaching/summer18/TKDS/lec6.pdf supports my assertion – Tony Stewart EE75 Aug 01 '21 at 09:40
  • Although he demonstrates the worst-case model for metastability propagation in the same logic, i.e., hazard-free circuits treated the same as metastability-containing circuits. So you have a point; there is more to this than I understand. – Tony Stewart EE75 Aug 01 '21 at 09:44
  • Also, glitches can cause metastable conditions, such as pattern-dependent SDRAM crosstalk dynamic soft errors. – Tony Stewart EE75 Aug 01 '21 at 09:49