4

Whenever I need to add two numbers, I just write `a+b` in VHDL. I never bother about what is synthesized; as long as the timing and area constraints are met, I consider the objective achieved.

I have recently come across these: the Brent–Kung adder, the Kogge–Stone adder and the Ling adder. I had literally never heard of these, although I have been writing VHDL for FPGAs for many years now. Are these algorithms actually ever used in real life?

quantum231
  • 3
    Unless you're designing adder units to go into FPGAs (rather than just using the ones provided by the vendor) I imagine you wouldn't really have a reason to think about the exact algorithm (which likely depends on intricate details like the cost, power and delay of individual gates on the vendor's target node). – user1850479 Sep 05 '22 at 02:54
  • 3
    Synthesis will typically implement `a+b` using the least logic that fulfils your timing constraints. It may even switch between any or all of these algorithms as you ratchet up the clock speed constraint. So yes they are used in real life, but unless you are writing a synthesis tool you may never have to worry about them. –  Sep 05 '22 at 09:52
  • I was reading a book on floating-point unit hardware design. This is where I came across these algorithms. What struck me the hardest was that I had never heard of them in my life before. I only know about the ripple-carry and carry-lookahead adders. – quantum231 Sep 05 '22 at 16:27
  • 2
    Wikipedia has a category on [binary adders](https://en.wikipedia.org/wiki/Category:Adders_(electronics)) which lists the types you mentioned, and others. The individual pages on each type of adder give suggestions for further reading. There is also a category on [binary arithmetic](https://en.wikipedia.org/wiki/Category:Binary_arithmetic) which covers a wider range of topics. – Graham Nye Sep 06 '22 at 14:10

2 Answers

4

They definitely can be if you're doing VLSI design. On FPGAs, the fabric will generally be designed with various standard operations in mind, especially building efficient adders. So you really just want to use "+" and then let the tools implement the optimized adder structure that uses all of the hard carry-chain logic and whatnot. While there is nothing preventing you from building your own adder, it's probably not going to beat what the tools produce for "+", which is optimized for the actual FPGA fabric.

Now, there may be a few exceptions to this if you're trying to do something particularly non-standard: for instance, if you need an adder tree. Some tools might be smart enough to know about adder trees and construct efficient ones in the fabric. Others might not, in which case building your own explicit adder tree might be more efficient.
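To make the adder-tree point concrete, here is a hypothetical sketch (function names are mine, not from the answer) that models the two structures in plain Python, counting one adder stage per level of the critical path: a linear chain grows in depth with the operand count, while a balanced tree grows only logarithmically.

```python
# Illustrative model only: we count "adder stages" on the critical
# path, which is what limits clock speed in hardware.

def linear_sum(values):
    """Chained adders: depth grows linearly with the operand count."""
    total, depth = 0, 0
    for v in values:
        total += v
        depth += 1          # one adder per operand on the critical path
    return total, depth

def tree_sum(values):
    """Balanced adder tree: pairwise sums, logarithmic depth."""
    layer, depth = list(values), 0
    while len(layer) > 1:
        nxt = [layer[i] + layer[i + 1] for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:          # an odd element passes through
            nxt.append(layer[-1])
        layer = nxt
        depth += 1                  # one layer of adders per tree level
    return layer[0], depth

vals = list(range(16))
print(linear_sum(vals))   # (120, 16): 16 stages deep
print(tree_sum(vals))     # (120, 4):  log2(16) = 4 stages deep
```

Both produce the same sum; only the depth (and hence the achievable clock rate) differs, which is exactly what a synthesis tool may or may not restructure for you.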

alex.forencich
3

I literally never heard of these although I have been writing VHDL for FPGAs for many years now.

What struck me the hardest was that I never heard of these in my life before.

I learn something new every day, and I've been at it for decades, so not having heard of something is not unusual. We all get used to it. If you explore any field in enough detail, you should expect such experiences pretty much every day.

There are probably hundreds of floating-point and integer block designs used on chips that neither of us has heard of.

I've not heard about those algorithms either, and I've designed floating point hardware from discrete elements (not on a chip though, and only for hobby projects).

The reason I haven't heard of them is that the problems these algorithms solve don't quite arise in hobby circumstances, unless your hobby is chip design. When making an adder from discrete gates or even discrete transistors, interconnect density and capacitance are rarely a problem, or can be addressed at only minor cost. When designing an IC, on the other hand, the design is essentially driven by the so-called logical effort due to gate and wire capacitances, and transistor sizes have to be scaled according to how far they drive and how many gates they load.

Thus, those algorithms don't necessarily optimize computational complexity but interconnect complexity, or more specifically the die area taken up by a given arithmetic function. Usually die area and interconnect complexity go hand in hand if you keep the number of gates "constant".

On a chip, a gate is basically free, and nearby gates add very little to signal propagation times. Even on an ancient CMOS process that was hot news 20 years ago, gates switch in well under 1 ns. It's the interconnect length and branching that slows everything down.

That's also why some "computationally light" algorithms are not optimal on chips, and doing more computations or even repeating some computations locally can speed things up dramatically as long as it decreases the interconnect.
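As a hedged illustration of that trade-off (a sketch of mine, not from the answer), the two extremes can be modeled bit by bit in Python: a ripple-carry adder, whose carry chain is as long as the word, versus a Kogge–Stone parallel-prefix adder, which spends extra generate/propagate gates and wiring to compute every carry in only log2(width) prefix stages.

```python
# Bit-level models for illustration; a real implementation lives in
# gates, but the stage structure is the same.

def ripple_carry_add(a, b, width=8):
    """Carry ripples through every bit: ~width stages on the critical path."""
    carry, result = 0, 0
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        result |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (carry & (ai ^ bi))
    return result & ((1 << width) - 1)

def kogge_stone_add(a, b, width=8):
    """Parallel-prefix adder: generate/propagate pairs are combined in
    log2(width) stages, so all carries are available in parallel."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]  # generate
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]  # propagate
    dist = 1
    while dist < width:                     # log2(width) prefix stages
        g_new, p_new = g[:], p[:]
        for i in range(dist, width):        # every bit works in parallel
            g_new[i] = g[i] | (p[i] & g[i - dist])
            p_new[i] = p[i] & p[i - dist]
        g, p = g_new, p_new
        dist *= 2
    carries = [0] + g[:-1]                  # carry into bit i (cin = 0)
    result = 0
    for i in range(width):
        result |= ((((a >> i) & 1) ^ ((b >> i) & 1) ^ carries[i])) << i
    return result
```

Both functions compute the same modular sum; the point is that the prefix version repeats overlapping generate/propagate work at every bit position, exactly the "more computations locally" trade mentioned above, in exchange for logarithmic depth.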

I only know about the ripple and carry look ahead adders.

The "building block" algorithms used in modern digital design are the domain of numerical methods and are a specialist subject. You could take an entire graduate course of study, culminating in a doctoral dissertation, just on such algorithms and their hardware implementation.

There's much, much more to digital design than you need to know to write VHDL for FPGAs. The tools, and the designers of the chips themselves, have abstracted a lot of it away. And the rest only applies if you're doing cell (building block) design for chips.

Are these algorithms actually ever used in real life?

Yes. They were invented to solve practical problems people had. Although sometimes an algorithm is invented "in a vacuum", without a practical application in mind, if it's worth publishing then often there will be a use for it.