40

Would it be theoretically possible to speed up modern processors if one would use analog signal arithmetic (at the cost of accuracy and precision) instead of digital FPUs (CPU -> DAC -> analog FPU -> ADC -> CPU)?

Is analog signal division possible (as FPU multiplication often takes one CPU cycle anyway)?

alex.forencich
  • 40,694
  • 1
  • 68
  • 109
zduny
  • 497
  • 4
  • 8
  • It doesn't answer your question, but here is an interesting article on the use of analog electromechanical computers in warships http://arstechnica.com/information-technology/2014/03/gears-of-war-when-mechanical-analog-computers-ruled-the-waves/ – Doombot Dec 02 '14 at 18:36
  • There have been proposals from time to time to use multi-state digital logic -- eg, "flip-flops" with four states instead of two. This has actually been done in some production memory chips, since it reduces the wiring bottleneck. (I don't know if any currently produced chips use multi-state logic, though.) – Hot Licks Dec 02 '14 at 22:35

6 Answers

49

Fundamentally, all circuits are analog. The problem with performing calculations with analog voltages or currents is a combination of noise and distortion. Analog circuits are subject to noise and it is very hard to make analog circuits linear over huge orders of magnitude. Each stage of an analog circuit will add noise and/or distortion to the signal. This can be controlled, but it cannot be eliminated.

Digital circuits (namely CMOS) basically side-step this whole issue by using only two levels to represent information, with each stage regenerating the signal. Who cares if the output is off by 10%? It only has to be above or below a threshold. Who cares if the output is distorted by 10%? Again, it only has to be above or below a threshold. At each threshold compare, the signal is basically regenerated and the noise, nonlinearity issues, etc. are stripped out. This is done by amplifying and clipping the input signal - a CMOS inverter is just a very simple amplifier made with two transistors, operated open-loop as a comparator. If noise pushes the level across the threshold, you get a bit error. Processors are generally designed to have bit error rates on the order of 10^-20, IIRC. Because of this, digital circuits are incredibly robust - they are able to operate over a very wide range of conditions because linearity and noise are basically non-issues.

It's almost trivial to work with 64-bit numbers digitally. 64 bits represents 385 dB of dynamic range. That's 19 orders of magnitude. There is no way in hell you are going to get anywhere near that with analog circuits. If your resolution is 1 picovolt (10^-12 V) - and this will basically be swamped instantly by thermal noise - then you have to support a maximum value of 10^7 V. Which is 10 megavolts. There is absolutely no way to operate over that kind of dynamic range in analog - it's simply impossible.

Another important trade-off in analog circuitry is between bandwidth/speed/response time and noise/dynamic range. Narrow-bandwidth circuits will average out noise and perform well over a wide dynamic range; the tradeoff is that they are slow. Wide-bandwidth circuits are fast, but noise is a larger problem, so the dynamic range is limited. With digital, you can throw bits at the problem to increase dynamic range, or get an increase in speed by doing things in parallel, or both.
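
The dynamic-range figures here are easy to reproduce; a quick back-of-the-envelope sketch in Python (not part of the original answer):

```python
import math

bits = 64
levels = 2 ** bits

# dynamic range of a 64-bit number, expressed in dB and in decades
db = 20 * math.log10(levels)
decades = math.log10(levels)
print(f"{db:.0f} dB, {decades:.1f} orders of magnitude")  # 385 dB, 19.3

# with a 1 pV resolution, the full-scale voltage needed to cover that range
full_scale = 1e-12 * levels
print(f"full scale: {full_scale:.1e} V")  # ~1.8e+07 V, i.e. tens of megavolts
```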

However, for some operations, analog has advantages - faster, simpler, lower power consumption, etc. Digital has to be quantized in level and in time. Analog is continuous in both. One example where analog wins is in the radio receiver in your wifi card. The input signal comes in at 2.4 GHz. A fully digital receiver would need an ADC running at at least 5 gigasamples per second. This would consume a huge amount of power. And that's not even considering the processing after the ADC. Right now, ADCs of that speed are really only used for very high performance baseband communication systems (e.g. high symbol rate coherent optical modulation) and in test equipment. However, a handful of transistors and passives can be used to downconvert the 2.4 GHz signal to something in the MHz range that can be handled by an ADC in the 100 MSa/sec range - much more reasonable to work with.

The bottom line is that there are advantages and disadvantages to analog and digital computation. If you can tolerate noise, distortion, low dynamic range, and/or low precision, use analog. If you cannot tolerate noise or distortion and/or you need high dynamic range and high precision, then use digital. You can always throw more bits at the problem to get more precision. There is no analog equivalent of this, however.
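
The regeneration argument can be illustrated with a toy simulation (a sketch under an assumed per-stage noise level, not a circuit model): each stage of an "analog" chain adds noise that accumulates, while each "digital" stage picks up the same noise but then re-thresholds it away.

```python
import random

random.seed(0)
STAGES, SIGMA = 100, 0.05   # per-stage noise, in units of the logic swing (assumed)

def analog_chain(x):
    # each analog stage passes the value through and adds its own noise
    for _ in range(STAGES):
        x += random.gauss(0, SIGMA)
    return x

def digital_chain(bit):
    # each digital stage picks up the same noise, then regenerates
    x = float(bit)
    for _ in range(STAGES):
        x += random.gauss(0, SIGMA)
        x = 1.0 if x > 0.5 else 0.0   # comparator strips the noise back out
    return x

print(analog_chain(1.0))   # drifts away from 1.0 as noise accumulates
print(digital_chain(1))    # still exactly 1.0 after 100 stages
```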

alex.forencich
  • 40,694
  • 1
  • 68
  • 109
  • 1
    This deserves much more upvoting! – John U Dec 03 '14 at 01:38
  • I knew it! I just couldn't put it into good words. Nice additional info on the wireless receivers. – Smithers Dec 06 '14 at 17:59
  • Isn't memory also a problem for analog computers? I haven't heard of analog memory devices. –  Dec 01 '15 at 16:51
  • 2
    Sample and hold circuit? Magnetic tape? Phonographic record? Photographic film? Analog memory devices certainly exist, but they have slightly different characteristics from digital ones. – alex.forencich Dec 01 '15 at 18:16
  • 10 megavolts?! I think he forgot that we have something called op-amp and its job is to amplify. Analog can support any range simply by scaling. – Dr. Ehsan Ali Mar 04 '16 at 13:36
  • 1
    Any range, yes. But any range with any arbitrary resolution? Not so much. – alex.forencich Mar 06 '16 at 01:14
  • 1
    @ehsan amplification does not increase your dynamic range, your minimum value (the noise floor) gets amplified right along with the maximum. – mbrig Mar 16 '17 at 01:25
  • @Ehsan you can scale it down to 10 volts instead but then you need an accuracy of 0.000001 picovolts. – user253751 Apr 16 '18 at 04:13
21

I attended an IEEE talk last month titled “Back to the Future: Analog Signal Processing”. The talk was arranged by the IEEE Solid-State Circuits Society.

It was proposed that an analog MAC (multiply-and-accumulate) could consume less power than a digital one. One issue, however, is that an analog MAC is subject to analog noise, so if you present it with the same inputs twice, the results will not be exactly the same.
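
A toy model of that repeatability problem (the noise level is assumed purely for illustration): the same inputs give slightly different outputs on each run.

```python
import random

random.seed(42)
NOISE = 0.01  # assumed per-product noise, for illustration only

def analog_mac(weights, inputs):
    # toy analog multiply-accumulate: each product picks up some noise
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc += w * x + random.gauss(0, NOISE)
    return acc

w, x = [0.5, -1.2, 0.8], [1.0, 2.0, 3.0]  # exact dot product is 0.5
print(analog_mac(w, x))  # two calls with the same inputs...
print(analog_mac(w, x))  # ...give slightly different results
```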

Nick Alexeev
  • 37,739
  • 17
  • 97
  • 230
19

What you're talking about is called an Analog Computer, and was fairly widespread in the early days of computers. By about the end of the '60s they had essentially disappeared. The problem is that not only is precision much worse than for digital, but accuracy is as well. And speed of digital computation is much faster than even modest analog circuits.

Analog dividers are indeed possible, and Analog Devices makes about 10 different models. These are actually multipliers which get inserted into the feedback path of an op amp, producing a divider, but AD used to produce a dedicated divider optimized for large (60 dB, I think) dynamic range of the divisor.

Basically, analog computation is slow and inaccurate compared to digital. Not only that, but the realization of any particular analog computation requires the reconfiguration of hardware. Late in the game, hybrid analog computers were produced which could do this under software control, but these were bulky and never caught on except for special uses.

WhatRoughBeast
  • 59,978
  • 2
  • 37
  • 97
  • 6
    I like your answer, (+1) and the question. But I'll disagree on the speed part. Analog is plenty fast. The problem is precision and perhaps most importantly noise. Analog always has some noise. Digital is noise free, computer-wise. – George Herold Dec 02 '14 at 03:09
  • Thanks for the kind words. But. Analog may be "plenty" fast but in general digital is faster. And noise is easy to simulate. – WhatRoughBeast Dec 02 '14 at 05:37
  • In the 70's I picked up part of an analog computer at Boeing Surplus. One component was an incredibly heavy cube about 2 feet on a side. It had, I think, two or three motors and an array of steel shafts with clamp-like clutches to select x, y and z. The rest was arrays of Beckman ten-turn pots. It was an "initial conditions" unit to set all the constants, or variables set to a value for a particular run, since the computers were mostly used to solve differential equations and generate a family of curves. This piece was controlled by a digital device. – C. Towne Springer Dec 02 '14 at 06:52
  • 4
    Analog is fast, if it's just arithmetic, exp, sqrt etc. But as soon as you add a capacitor or inductor, needed for differentiation and integration, then it's slow. The analog computers of history were often used for solving differential equations - they were "slow". But some just did algebra. So I can see why different people may have different views on analog computation speed. – DarenW Dec 02 '14 at 07:43
  • A mixer is just a fast multiplier. – copper.hat Dec 02 '14 at 08:01
  • @DarenW Is it possible to compute square root and exponential with an analog circuit? – zduny Dec 02 '14 at 10:46
  • 1
    Could you explain why analog is slow? In digital computer some instructions are "slow" because they need few iterations to be completed. But with analog I believe it only takes one pass to get the result. – zduny Dec 02 '14 at 10:52
  • Look at it like this. The analog circuit has a phase delay through it. As such, the frequency of analog signals that can be processed by the circuit is limited by this delay time. In some circuits this limitation could keep the circuit from being used at more than a few kilohertz. A digital computer, at the speed of today's processors, could be executing math operations at rates of hundreds of MHz. – Michael Karas Dec 02 '14 at 10:58
  • @GeorgeHerold Perhaps not just noise, process variation might also make analog computation less reliably consistent (across chips rather than across time). (Analog computation was proposed for a perceptron branch predictor, e.g., Renée St. Amant et al., "Low-Power, High-Performance Analog Neural Branch Prediction", 2008.) –  Dec 02 '14 at 11:34
  • 1
    @mrpyo - Absolutely, you can do both functions. If you take a multiplier and connect both inputs together, it becomes a "squarer". If you use the circuit The Photon used in his answer with both inputs tied to the op amp output it generates square roots. The voltage/current relationship in a diode is exponential, so you can use that to generate exponents. And by putting a diode in a feedback path you get logarithms. In all cases, though, the dynamic range can be limited by amplifier offsets, drifts, etc. And for the diode circuits there are other error sources as well. – WhatRoughBeast Dec 02 '14 at 14:19
  • It should be noted that analog computers were inherently capable of doing integrals and differentials, and quite simply and efficiently, while digital computers had difficulty with these (especially prior to high-quality floating-point implementations). And analog computers were often slow so that the operators could observe them in "real time" -- it was not an inherent requirement. – Hot Licks Dec 02 '14 at 22:32
10

Is analog signal division possible (as FPU multiplication often takes one CPU cycle anyway)?

If you have an analog multiplier, an analog divider is "easy" to make:

schematic

simulate this circuit – Schematic created using CircuitLab

Assuming X1 and X2 are positive, this solves Y = X1 / X2.

Analog multipliers do exist, so this circuit is possible in principle. Unfortunately most analog multipliers have a fairly limited range of allowed input values.
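
To see why the feedback arrangement divides, note that an ideal op amp with open-loop gain A settles where Y = A·(X1 − Y·X2), giving Y = A·X1/(1 + A·X2), which approaches X1/X2 for large A. A quick numeric sketch (the gain values are assumed for illustration):

```python
def divider(x1, x2, gain=1e5):
    # ideal-op-amp model: the multiplier feeds y*x2 back to the
    # inverting input, so the loop settles at y = gain * (x1 - y * x2)
    return gain * x1 / (1 + gain * x2)

print(divider(3.0, 2.0))            # ~1.5, as expected for X1/X2
print(divider(3.0, 2.0, gain=10.0)) # low gain -> visible error (~1.43)
```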

Another approach would be to first use log amplifiers to get the logarithm of X1 and X2, subtract, and then exponentiate.
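
In software terms, the log-amp approach computes exp(ln X1 − ln X2); a minimal sketch (both inputs must be positive, as with real log amps):

```python
import math

def log_divide(x1, x2):
    # log amps take ln of each input, a difference stage subtracts,
    # and an antilog (exponential) stage recovers the quotient
    return math.exp(math.log(x1) - math.log(x2))

print(log_divide(3.0, 2.0))  # ~1.5, up to floating-point rounding
```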

Would it be theoretically possible to speed up modern processors if one would use analog signal arithmetic (at the cost of accuracy and precision) instead of digital FPUs (CPU -> DAC -> analog FPU -> ADC -> CPU)?

At heart it's a question of technology---so much has been invested in R&D to make digital operations faster, that analog technology would have a long way to go to catch up at this point. But there's no way to say it's absolutely impossible.

On the other hand, I wouldn't expect my crude divider circuit above to work above maybe 10 MHz without having to do some very careful work and maybe deep dive research to get it to go faster.

Also, you say we should neglect precision, but a circuit like I drew is probably only accurate to 1% or so without tuning, and probably no better than 0.1% without inventing new technology. And the dynamic range of the inputs that can be usefully calculated on is similarly limited. So not only is it probably 100 to 1000 times slower than available digital circuits, its dynamic range is probably about 10^300 times worse as well (comparing to IEEE 64-bit floating point).

The Photon
  • 126,425
  • 3
  • 159
  • 304
  • 5
    Hey I've got an old AD multiplier that does 10 MHz. I bet I can get something faster now. Just to throw a monkey wrench into this topic, if quantum computing ever pans out it will be analog. – George Herold Dec 02 '14 at 03:18
  • @GeorgeHerold, that's my best argument why quantum computing is snake oil. – The Photon Dec 02 '14 at 06:01
  • Very neat trick. Except I think that computes A*X1 / (1 + A*X2), which should be accurate for a large gain A. – Yale Zhang Dec 03 '14 at 22:52
  • @georgeherold A mixer is really just a fast analog multiplier with slightly odd input requirements, and I think microwave people are getting those up to 60 GHz or more these days – mbrig Mar 16 '17 at 01:40
  • @mbrig, the difficulty is the op-amp and keeping the feedback loop closed. – The Photon Mar 16 '17 at 02:21
7
  1. No, because DAC and ADC conversions take much more time than a digital division or multiplication.

  2. Analog multiplication and division are not that simple; they use more energy and would not be cost-efficient compared to a digital IC.

  3. Fast (GHz-range) analog multiplication and division ICs have a precision of about 1%. That means all you can divide on a fast analog divider is roughly 7-bit numbers (1 part in 100) or so. Digital ICs deal with numbers like that very fast.

  4. Another problem is that floating point numbers cover a huge range, from very small to very large. The 32-bit float range is about \$1.2*10^{-38}\$ to \$3.4*10^{38}\$. Representing that directly would require about 1530 dB of dynamic range (!!!), if I haven't messed anything up.
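
For comparison, the dynamic range spanned by an IEEE 32-bit float can be computed directly (a sketch; the endpoints are the approximate normal range of a binary32 float):

```python
import math

# approximate normal range of an IEEE 754 32-bit float
tiny, huge = 1.18e-38, 3.40e38

ratio_db = 20 * math.log10(huge / tiny)
print(f"{ratio_db:.0f} dB")  # 1529 dB
```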

Here you can look at analog dividers and multipliers offered by Analog Devices (link)


These things are not very useful in general computing. These are much better in analog signal processing.

Kamil
  • 5,926
  • 9
  • 43
  • 58
  • 4. Not exactly. Floating point numbers are represented in scientific notation - basically two numbers, a coefficient and an exponent, which each cover a much more limited range. – zduny Dec 02 '14 at 03:17
  • @mrpyo Are you sure? I think 16-bit float range is much higher than numbers I wrote before edit (something like 0000000000000.1 and 10000000000000). – Kamil Dec 02 '14 at 03:24
  • http://en.wikipedia.org/wiki/IEEE_floating_point For C `float` it's 23 bits for coefficient, 8 bits for exponent and 1 bit for sign. You would have to represent those 3 ranges in analog. – zduny Dec 02 '14 at 03:28
  • Couldn't you reduce the required frequency by having many units in series and using only one at a time? – zduny Dec 02 '14 at 03:37
  • @mrpyo I don't understand what you mean by this: "You would have to represent those 3 ranges in analog." – Kamil Dec 02 '14 at 03:37
  • @mrpyo You are thinking about parallel/multiple-thread processing? That would be possible with some multiplexing, but still - 8-bit ADCs are about 256x slower than a digital IC (when you are using similar-technology transistors). – Kamil Dec 02 '14 at 03:44
  • This is how you multiply numbers in scientific notation: `n*10^a * m*10^b = (n*m)*10^(a+b)`. So you can have separate analog signals for n, a, m, b that each cover a much more limited range and still do multiplication on a very wide range of floating point numbers... – zduny Dec 02 '14 at 03:47
  • Aaah now I got it. That's interesting, but still - the digital way is much more time-, energy- and cost-efficient. – Kamil Dec 02 '14 at 03:52
  • @mrpyo: Analog operations typically don't work like that or aren't designed to work that way which is why floating point numbers have higher dynamic range. Of course, as you've stated you CAN implement analog floating point numbers but that would slow down the calculations again. – slebetman Dec 02 '14 at 03:52
  • 4
    The true analog equivalent of Floating Point would be the logarithmic domain, therefore absurdly high dynamic range (higher than the FP mantissa) is not necessary. Otherwise, good points. –  Dec 02 '14 at 12:14
0

Actually, researchers are now revisiting analog computing techniques in the context of VLSI, because analog computation can provide much higher energy efficiency than digital in specific applications. See this paper:

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7313881&tag=1

Nate
  • 1