28

As far as I know, a clock controls all of the logic operations, but it also limits the speed of a computer because the gates have to wait for the clock to change from low to high or high to low, depending on the component. If no clock were incorporated, the gates would change as fast as they could whenever given the command to, so why wouldn't that increase the computer's speed, and why are clocks used?

skyler
  • 10,136
  • 27
  • 80
  • 130
  • Don't have time for a proper answer, but at the most basic level, so all the digital things are marching to the beat of the same drummer. Look up synchronous vs. asynchronous. – Matt Young Dec 16 '13 at 17:58
  • possible duplicate of [clock signals in computers and machines](http://electronics.stackexchange.com/questions/33945/clock-signals-in-computers-and-machines), (and that question was closed as not a real question...) – amadeus Dec 16 '13 at 18:54
  • For a narrow range of tasks, [analog computers](http://en.wikipedia.org/wiki/Analog_computer) can be faster than digital computers. – Nick Alexeev Dec 16 '13 at 19:34
  • 1
    So they know what time it is! (sorry, couldn't resist) – Scott Seidman Dec 16 '13 at 22:52
  • Relevant: http://stackoverflow.com/questions/530180/what-happened-to-clockless-computer-chips – Connor Wolf Dec 16 '13 at 23:47

6 Answers

36

Clocks are used in computers for the simple reason that most if not all of the circuitry is synchronous sequential logic.

In a synchronous circuit, an electronic oscillator called a clock generates a sequence of repetitive pulses called the clock signal which is distributed to all the memory elements in the circuit.

Now, that may not seem satisfying, and granted, you could reasonably ask "why are synchronous circuits used in computers?", but that's an easy question to answer too:

The main advantage of synchronous logic is its simplicity. The logic gates which perform the operations on the data require a finite amount of time to respond to changes to their inputs. This is called propagation delay. The interval between clock pulses must be long enough so that all the logic gates have time to respond to the changes and their outputs "settle" to stable logic values, before the next clock pulse occurs. As long as this condition is met (ignoring certain other details) the circuit is guaranteed to be stable and reliable. This determines the maximum operating speed of a synchronous circuit.
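To put a rough number on that (a minimal sketch with made-up delays; real timing analysis also accounts for clock skew and other effects), the clock period has to be at least as long as the slowest register-to-register path, which in turn caps the clock frequency:

```python
# Toy numbers (made up) for one register-to-register path, in seconds:
t_clk_to_q = 0.5e-9   # delay from clock edge to the launching flip-flop's output changing
t_logic    = 6.0e-9   # worst-case propagation delay through the logic gates in between
t_setup    = 0.5e-9   # time the capturing flip-flop needs its input stable before the next edge

t_min_period = t_clk_to_q + t_logic + t_setup
f_max = 1.0 / t_min_period
print(f"minimum clock period: {t_min_period * 1e9:.1f} ns")   # 7.0 ns
print(f"maximum clock frequency: {f_max / 1e6:.0f} MHz")      # ~143 MHz
```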

An active area of research is asynchronous computing where most if not all of the circuitry is asynchronous sequential logic.

Alfred Centauri
  • 26,502
  • 1
  • 25
  • 63
  • The Wikipedia piece on asynchronous sequential logic is rather brief; it might have been helpful to distinguish between logic which has no single clock but can guarantee either that circuits' inputs won't cause race conditions or, at worst, that the outputs of any circuit whose input may have had a race condition will not be used. – supercat Dec 16 '13 at 23:57
  • 1
    I think it is worth to note that there was a fully asynchronous computer built by [Jacek Karpiński](http://en.wikipedia.org/wiki/Jacek_Karpi%C5%84ski) which was named KAR-65. Unfortunately I can't find anything about it in English. – elmo Dec 17 '13 at 13:31
8

A circuit like an arithmetic logic unit will take a couple of numbers as inputs and produce a number as output. It can guarantee that within some period of time, all bits of the output will have reached their correct final states, but the actual amount of time for the output bits to become valid could vary considerably based upon a variety of factors.

It would be possible to construct an ALU with a "valid" input and a "valid" output, and specify that provided the "valid" input is low for a sufficient amount of time before a computation is performed, and the data inputs contain the desired values before the "valid" input goes high, the "valid" output won't go high until the output bits are in fact correct. Such a design would probably require about twice as much circuitry as a conventional ALU [basically it would have to keep track of whether each bit was "known" to be zero or "known" to be one; its "valid" output would become true once the state of every output bit was known].
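As a rough sketch of that per-bit bookkeeping (a toy Python model with made-up names, not how the extra circuitry would actually be built), each output bit is tracked as unknown, known-0, or known-1, and the "valid" output only asserts once every bit is known:

```python
# Toy model of the "known 0 / known 1" tracking described above.
class DualRailAdder:
    def __init__(self, width=8):
        self.width = width
        self.bits = [None] * width   # None = this output bit has not settled yet

    def clear(self):                 # "valid" input held low: forget everything
        self.bits = [None] * self.width

    def settle_bit(self, i, value):  # a bit's logic cone finishes and reports its value
        self.bits[i] = value & 1

    @property
    def valid(self):                 # completion detection: all output bits are known
        return all(b is not None for b in self.bits)

adder = DualRailAdder()
adder.settle_bit(0, 1)
print(adder.valid)       # False: the other bits are still unknown
for i in range(1, 8):
    adder.settle_bit(i, 0)
print(adder.valid)       # True: every output bit is now known
```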

To make things worse, allowing those parts of a CPU which would be capable of running faster to do so will only be helpful if they aren't waiting all the time for slower parts to play catch-up. To make that happen, there must be logic to decide which part of the machine is "ahead" at a given moment in time, and select a course of action based upon that. Unfortunately, that kind of decision is one of the hardest ones for electronics to make reliably. Reliably deciding which of two events happened first is generally only easy if one can guarantee that there will never be any "close calls".

Suppose a memory sequencer is handling a request from processing unit #1 and unit #1 has another request pending after that. If unit #2 submits a request before the first request from #1 is complete, the memory unit should handle that; otherwise it should handle the next request from unit #1. That would seem like a reasonable design, but it ends up being surprisingly problematic. The problem is that if there's some moment in time such that a request received before that moment will be processed immediately, while a request received after it will have to wait, then the amount of time required to determine whether a request beat the deadline will be roughly inversely proportional to the difference between the time the request was received and the deadline. The time required for the memory unit to determine that a request from #2 beat the deadline by one femtosecond might substantially exceed the amount of time that would have been required to service a second request from unit #1, but the unit can't service either request until it decides which one to service first.
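To illustrate just that inverse relationship (a toy model with a made-up constant, not real arbiter physics), the closer the race, the longer the decision takes:

```python
# Toy illustration of "decision time grows as the arrival gap shrinks".
def arbiter_decision_time(gap_seconds, k=1e-21):
    """Made-up model: decision time is inversely proportional to the gap."""
    return k / abs(gap_seconds)

for gap in (1e-9, 1e-12, 1e-15):     # requests arrive 1 ns, 1 ps, 1 fs apart
    print(f"gap {gap:.0e} s -> decide in ~{arbiter_decision_time(gap):.0e} s")
# A femtosecond-close call takes ~1 microsecond to resolve in this toy model.
```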

Having everything run off a common clock not only eliminates the need for circuitry to determine when the output of a computation is valid, it also allows timing "close calls" to be eliminated. If everything in the system runs off a 100 MHz clock, no signal changes in response to a clock until 1 ns after the clock edge, and everything that's going to happen in response to a clock edge happens within 7 ns, then everything that's going to happen before a particular clock edge will "win" by at least 3 ns, and everything that's not going to happen until after a clock edge will "lose" by at least 1 ns. Determining whether a signal changes before or after the clock, when it's guaranteed not to be "close", is much easier than determining which of two arbitrarily-timed signals happens first.
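Plugging in those numbers (purely illustrative, taken from the paragraph above):

```python
clock_period_ns = 10.0   # 100 MHz clock
earliest_change = 1.0    # nothing changes until 1 ns after an edge
latest_change   = 7.0    # everything has settled 7 ns after an edge

win_margin  = clock_period_ns - latest_change    # settles at least 3 ns before the next edge
lose_margin = earliest_change                    # holds off at least 1 ns after the edge
print(f"win by >= {win_margin} ns, lose by >= {lose_margin} ns")
```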

supercat
  • 45,939
  • 2
  • 84
  • 143
7

Imagine you have an 8-bit integer being sent from memory to an ALU for a calculation, and (at least for this example) that the memory circuit provides the signals on the 8 data lines before the ALU requires them, but at slightly different times.

The use of a clock here would ensure that the 8 data lines held the correct value for the integer being represented for one clock cycle and that the ALU will "collect" that data within the same clock cycle.

I realise that was probably not the best description. Essentially, without a clock, ensuring data consistency would be much more difficult than any possible increase in speed would make worthwhile, and you would run into a lot of race-condition issues.
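To make the idea a bit more concrete, here is a toy sketch (all timing numbers made up) of why sampling the bus only on the clock edge gives the ALU a consistent byte:

```python
# Each data line settles at a slightly different time, but the receiving
# register only samples on the clock edge, after everything has settled.
bus_settle_times_ns = [1.2, 0.9, 1.4, 1.1, 1.3, 1.0, 1.5, 0.8]  # per data line
clock_period_ns = 10.0

# As long as the slowest line settles before the next edge, the captured
# byte is consistent (no line is caught mid-transition).
assert max(bus_settle_times_ns) < clock_period_ns

captured_bits = [1, 0, 1, 1, 0, 0, 1, 0]   # whatever the settled lines carry, LSB first
value = sum(bit << i for i, bit in enumerate(captured_bits))
print(f"ALU sees a consistent byte: {value}")
```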

4

Digital systems can be either synchronous or asynchronous. In an asynchronous system, the outputs can change at any given moment, unlike in a synchronous system, where the outputs can only change with the clock.

Most digital systems are synchronous (even though they can have some asynchronous parts) because the design and the troubleshooting can be done more easily, since the outputs can only change with the clock.

I've pretty much copied this from Digital Systems: Principles and Applications, 10th edition by R. J. Tocci et al.
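Not from the book, but as a minimal toy sketch of that distinction (the class and method names are just made up for illustration):

```python
class DFlipFlop:                 # synchronous: output changes only on a clock edge
    def __init__(self):
        self.q = 0
    def clock_edge(self, d):
        self.q = d

class AsyncWire:                 # asynchronous: output follows its input immediately
    def __init__(self):
        self.q = 0
    def drive(self, d):
        self.q = d

ff, wire = DFlipFlop(), AsyncWire()
wire.drive(1)
print(wire.q)                    # 1: changed the moment the input changed
# ff.q stays 0 no matter what its input does, until the next clock edge
ff.clock_edge(1)
print(ff.q)                      # 1: updated only with the clock
```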

gabrieljcs
  • 614
  • 1
  • 5
  • 7
3

Well, if you're designing a synchronous system, you have a target clock rate, and you design the logic to complete all calculations during a cycle within one clock period. This also means that you need to incorporate a safety margin to allow for various conditions, such as low power supply voltage, high temperature, and a "slow" chip. Synchronous chips are designed so that the longest logic path (slowest calculation) will finish in time under all these adverse conditions. As a result, when conditions aren't terrible, you will have a lot more time/margin between when the logic completes its operation and the next clock latches the result. Because you (usually) can't change your clock frequency, you lose this speed.
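As a rough sketch of that worst-case budgeting (all delays and derating factors made up for illustration), the clock is sized for the slowest combination of conditions, so at typical conditions the logic finishes early and the extra margin goes unused:

```python
typical_path_delay_ns = 6.0
derate_slow_process   = 1.25   # "slow" silicon corner
derate_high_temp      = 1.10   # high temperature
derate_low_voltage    = 1.15   # low supply voltage

worst_case_delay_ns = (typical_path_delay_ns * derate_slow_process
                       * derate_high_temp * derate_low_voltage)
clock_period_ns = worst_case_delay_ns            # period chosen for the worst case
wasted_margin_ns = clock_period_ns - typical_path_delay_ns

print(f"clock period: {clock_period_ns:.2f} ns, "
      f"unused margin at typical conditions: {wasted_margin_ns:.2f} ns")
```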

There are completely asynchronous logic paradigms that exist, for example one that I'm familiar with is NULL convention logic. Using broad strokes to describe what is happening, the logic circuit is able to identify when a calculation has completed, and is able to effectively create its own "clock" signals. This lets the circuit run as fast as it can, and has some modest power and EMI benefits. Unfortunately, you pay a penalty for the asynchronous nature in terms of design density as well as top performance. Also, while the software tools for synchronous design and validation are mature, a lot of the design and validation for asynchronous designs is still manual, resulting in greater effort being required to design and build an asynchronous system.

This also completely neglects the case that sometimes you need a clock for a specific application to be a time reference. For example, your sound card can't operate asynchronously because it needs to update the ADC or DAC at a specific, precise sample rate or the audio will be distorted.

W5VO
  • 18,303
  • 7
  • 63
  • 94
1

> If no clock were incorporated, the gates would change as fast as they could whenever given the command to, so why wouldn't that increase the computer's speed, and why are clocks used?

To put it simply: because humans aren't super-intelligent beings, and have to take shortcuts to make designing billion-element circuits possible.

When our machine overlords ascend, they may very well get rid of the clock, overcome niggling minutiae like making a clock-less circuit manufacturable despite process variation, and take advantage of some speed gains.

To expand a bit: discrete, predictable things are easier to rationally analyze and design. As a huge added benefit, they self-correct (in this case, the timing self-corrects). This is the reason we use digital logic in the first place. (Similarly, in programming, we often use integers instead of floating-point, familiar control structures instead of goto-spaghetti, and languages with a few, clear rules instead of very "flexible" languages where you're never quite sure what will happen until the code runs.)

  • Even beyond the fact that synchronous logic is easier to design, a computer which runs off a 10 MHz clock will generally be designed so that anyplace where it has to determine whether event X happens before Y, one or both of the events will be delayed as necessary to have a particular relationship to the master clock so the events will never happen simultaneously. Further, in the few cases where the possibility of simultaneous action would be possible, it will be acceptable to add a two- or three-cycle fixed delay to coerce one or both of the signals to a fixed clock relationship. – supercat Dec 17 '13 at 16:56
  • If the design were asynchronous, one might find that a random 99% of instructions take 5 ns, 0.9% randomly take 10 ns, 0.09% take 30 ns, 0.009% 100 ns, 0.0009% 300 ns, 0.00009% 1 µs, 0.000009% 3 µs, etc. with no firm guarantee as to how long the system might take to resolve a timing ambiguity. In most cases, having performance which is sub-optimal but predictable is better than performance which is on average faster but has unpredictable variations that are sometimes severe. – supercat Dec 17 '13 at 17:02
  • @supercat 1) Can't a circuit be designed where simultaneous events never occur? (at least if the inputs are sufficiently regular) 2) Can't a circuit be designed where it doesn't matter if simultaneous events occur? – Aleksandr Dubinsky Dec 17 '13 at 17:03
  • If the relative timing of two events is known, one may prevent them from happening simultaneously by delaying one or the other. The more accurately their relative timing is known, the less delay will be required. If the relative timing is not known, it's possible to resolve two events that could happen simultaneously into an indication of which happened first, but to minimize the worst-case behavior, one must accept some pretty severe compromises in the best-case behavior. The compromises that would be necessary for a clock-less computer to work would be worse than using a clock. – supercat Dec 17 '13 at 17:17