38

Transistors serve multiple purposes in an electrical circuit: they act as switches, amplify electronic signals, let you control current flow, and so on.

However, I recently read about Moore's law, among other internet articles, and learned that modern electronic devices have a huge number of transistors packed into them - in the range of millions, if not billions.

But why exactly would anyone need so many transistors? If transistors work as switches and so on, why would we need such an absurdly large number of them in our modern electronic devices? Couldn't we make things more efficient so that we use far fewer transistors than we currently do?

tshepang
Kenneth .J
  • 7
    I'd suggest going down to what your chip is made of. Adders, Multipliers, Multiplexers, Memory, More Memory... And think of the numbers of these things that need to be present there... – Dzarda Jul 01 '14 at 09:14
  • Typically you don't find lots of transistors discretely mounted to PCBs but as part of logic ICs such as FPGAs, CPUs, memory etc. Our desire to make products smart means many more products now contain a CPU than was the case a few years ago. – Warren Hill Jul 01 '14 at 09:32
  • 9
    Somewhat related (and self-promoting): [Why does more transistors = more processing power?](http://electronics.stackexchange.com/q/5592/15426) –  Jul 01 '14 at 10:33
  • 1
    Also the continuous use of transistors as replacements for most mechanical devices helped shape modern consumer electronics more than anything else. Imagine your phone clacking each time it turns the backlight on or off (whilst being the size and weight of a car) – Mark Jul 01 '14 at 12:00
  • 7
    You ask why we cannot "make things more efficient" to use fewer transistors; you assume that we seek to minimise the number of transistors. But what if power efficiency is improved by adding more for control? Or more notably time efficiency in doing whatever computation? 'Efficiency' is no one thing. – OJFord Jul 01 '14 at 18:19
  • One relatively minor detail is that sometimes transistors are used to substitute for other devices, particularly resistors. Where in discrete circuitry you might see a resistor, on a chip it's apt to be a transistor, rigged to produce the desired current flow. This is because it's often easier to make transistors than resistors on-chip. – Hot Licks Jul 01 '14 at 19:43
  • 2
    It's not that we need that many transistors to build a CPU, but since we can make all those transistors, we might as well use them in ways that make the CPU faster. – user253751 Jul 02 '14 at 11:32
  • @immibis - Yeah, the ultimate answer is that it keeps engineers employed. – Hot Licks Jul 03 '14 at 11:29
  • @HotLicks you make it sound like the engineers are not creating value. – user253751 Jul 03 '14 at 11:30
  • @immibis - I are one, and I can tell you that some does some don't. (But I was joking.) (Though, ultimately, a sanitation worker may be creating more "value" than an engineer, yet guess which gets paid more.) – Hot Licks Jul 03 '14 at 11:37
  • pretty much the same reason you have so many types of wheels. – user34920 Jul 07 '14 at 11:20
  • First of all we need a clear definition of efficiency. Imagine we had a private jet, a concorde, and a 747... (insert Morgan Kaufmann series book here) – fuzzyhair2 Jul 22 '14 at 12:48

12 Answers

48

Transistors are switches, yes, but switches are more than just for turning lights on and off.

Switches are grouped together into logic gates. Logic gates are grouped together into logic blocks. Logic blocks are grouped together into logic functions. Logic functions are grouped together into chips.

For example, a TTL NAND gate typically uses 2 transistors (NAND gates are considered one of the fundamental building blocks of logic, along with NOR):

[schematic: NAND gate built from two transistors - created using CircuitLab]

As the technology transitioned from TTL to CMOS (which is now the de facto standard), there was basically an instant doubling of transistor count. For instance, the NAND gate went from 2 transistors to 4:

[schematic: CMOS NAND gate built from four transistors - created using CircuitLab]

A latch (such as an SR) can be made using 2 CMOS NAND gates, so 8 transistors. A 32-bit register could therefore be made using 32 flip-flops, so 64 NAND gates, or 256 transistors. An ALU may have multiple registers, plus lots of other gates as well, so the number of transistors grows rapidly.
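Spelled out as a quick back-of-the-envelope tally (a sketch assuming the 4-transistor CMOS NAND above; real register cells use D flip-flops and vary, so treat the numbers as illustrative):

```python
# Rough transistor bookkeeping for the hierarchy described above.
# Assumptions: a CMOS NAND is 4 transistors; an SR latch is 2 NANDs;
# a 32-bit register is 32 one-bit latches.
TRANSISTORS_PER_NAND = 4

transistors_per_latch = 2 * TRANSISTORS_PER_NAND        # 8
transistors_per_register = 32 * transistors_per_latch   # 256

print(transistors_per_latch, transistors_per_register)  # 8 256
```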

The more complex the functions the chip performs, the more gates are needed, and thus the more transistors.

Your average CPU these days is considerably more complex than, say, a Z80 chip from 30 years ago. It not only uses registers 8 times the width, but the actual operations it performs (complex 3D transformations, vector processing, etc.) are far, far more complex than anything the older chips could do. A single instruction in a modern CPU may take many seconds (or even minutes) of computation on an old 8-bitter, and all of that is achieved, ultimately, by having more transistors.

Majenko
  • NAND = 4 not 2 Transistors and FF's are more than just 2 NORs – placeholder Jul 01 '14 at 14:16
  • NAND is 2 transistors, for example: http://cpuville.com/logic_gates.htm. A simple SR takes 2 NANDs. – Majenko Jul 01 '14 at 15:32
  • 2
    Oh my! You really need to rethink that. Show even ONE design that has millions of transistors that is done in bipolar!! ALL of these designs are CMOS. – placeholder Jul 01 '14 at 15:59
  • The first two paragraphs explain very well the use of transistors. Talking about bipolar transistors is okay for the sake of simplicity, but you shouldn't use that as the base unit for "counting" transistors in a chip. @placeholder is right, every digital chip uses CMOS. – strnk Jul 01 '14 at 16:25
  • 2
    Fair point. Added a second schematic to highlight the difference, and the subsequent doubling of transistors just from that. – Majenko Jul 01 '14 at 17:08
  • 4
    weak vs strong pullup is a completely different issue from TTL vs CMOS. BJTs do come in PNP, after all. CMOS does not involve "doubling of transistors". Large-scale integration does, since transistors are far smaller than pull-up resistors in any ASIC process. – Ben Voigt Jul 02 '14 at 03:52
  • +1 Although the answer (as all answers so far) does not address what seems to be a deeper question: couldn't we do some of the same things with something else other than (piling up) transistors? But maybe that would be a question for http://physics.stackexchange.com/ – Rolazaro Azeveires Jul 02 '14 at 08:58
  • @RolazaroAzeveires You mean quantum computing? I think that's a little outside the remit of this site, yes :P – Majenko Jul 02 '14 at 09:32
  • @majenko I wasn't thinking of any specific thing, but yes, that could be a line of thought, and yes, not an "electronics" answer – Rolazaro Azeveires Jul 02 '14 at 09:35
  • 1
    That is not a TTL NAND gate. That is an RTL logic gate. – fuzzyhair2 Jul 22 '14 at 12:49
  • @BenVoigt: CMOS generally involves doubling the number of *active* transistors compared with NMOS, though NMOS requires using a lot of passive-pullups (which, though passive, are still transistors); some circuits have NMOS realizations whose active-transistor count is less than half that of what would be achievable in CMOS. Although all digital logic today is almost always CMOS, that wasn't really true until the 1990s. Even into the late 1980s NMOS was still widely in use. – supercat Mar 09 '15 at 15:47
  • @supercat: Sure, the number of transistors in NMOS (open-drain) and TTL open-collector is about the same, and about half of push-pull TTL or CMOS. Like I said, the choice to use push-pull or weak pullup is a difference axis from BJT vs FET. And the fact that on-die pullup "resistors" are really implemented by transistors agrees with my point that the transistor doubling came from LSI. – Ben Voigt Mar 09 '15 at 16:05
  • @fuzzyhair2: Wouldn't RTL have series resistors? The "transistor-transistor" in TTL is the direct coupling of an output stage to an input stage, and that NAND gate has it. – Ben Voigt Mar 09 '15 at 16:08
  • @BenVoigt: My understanding is that the reason for the two "T"'s in "transistor-transistor logic" is that a transistor pulls each signal high and a transistor pulls each signal low; in RTL, a resistor pulls each signal high and a transistor pulls it low. – supercat Mar 09 '15 at 16:26
  • @supercat: A bit of research will correct your misunderstanding. Wikipedia has articles on the relevant logic approaches. – Ben Voigt Mar 09 '15 at 16:27
  • Although, I do apologize for overlooking the fact that the gate shown does have (useless) series resistors. It still is not the typical RTL NAND gate, however. – Ben Voigt Mar 09 '15 at 16:31
  • @BenVoigt: Okay, thanks; I'd thought TTL outputs were either totem-pole or else open-collector with *no pull-up device*. I wasn't aware that it was "acceptable" for a device to have a purely-passive pull-up and still be called TTL. The base resistors aren't useless, though, unless a gate is the only thing fed by a particular output. – supercat Mar 09 '15 at 16:35
  • @BenVoigt: http://upload.wikimedia.org/wikipedia/commons/thumb/d/d4/TTL_npn_nand.svg/220px-TTL_npn_nand.svg.png – fuzzyhair2 Mar 26 '15 at 01:42
17

I checked on local supplier of various semiconductor devices and the biggest SRAM chip they had was 32Mbits. That's 32 million individual areas where a 1 or a 0 can be stored. Given that "at least" 1 transistor is needed to store 1 bit of information, then that's 32 million transistors at an absolute minimum.

What does 32 Mbits get you? That's 4 Mbytes, or about the size of a low-quality 4-minute MP3 music file.


EDIT - an SRAM memory cell, according to my googling, looks like this:

[image: standard 6-transistor SRAM cell schematic]

So, that's 6 transistors per bit and more like 192 million transistors on that chip I mentioned.
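As a sanity check of that figure (using the round decimal numbers from this answer):

```python
# 32 Mbit SRAM chip at 6 transistors per cell
# (round decimal figures, as used in the answer above).
bits = 32_000_000
transistors_per_cell = 6
total = bits * transistors_per_cell
print(total)  # 192000000 - about 192 million transistors
```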

Andy aka
  • ... and now imagine 8GB memory with 68719476736 bits of information – Kamil Jul 01 '14 at 09:50
  • 1
    ... except they don't use transistors in DRAM. – Majenko Jul 01 '14 at 09:56
  • 1
    @Majenko: At least not as many as for other technologies. 1 transistor + 1 capacitor (on microscopic scope obviously) for 1 bit - if I remember correctly. – Rev Jul 01 '14 at 09:58
  • 29
    Each bit of SRAM is at least 4 and often 6 transistors so 128 million transistors or more. DRAM doesn't use transistors *for storage* - but each bit (stored on a capacitor) has its own transistor switch to charge the cap. –  Jul 01 '14 at 10:23
  • 6
    Now imagine the transistors in a 1T SSD (granted 3 bits/cell, and it's on more than one chip) but that's still 2.7 trillion transistors just for the storage- not counting addressing, controlling and allowance for bad bits and wear). – Spehro Pefhany Jul 01 '14 at 11:59
7

I think the OP may be confused by electronic devices having so many transistors. Moore's Law is primarily of concern for computers (CPUs, SRAM/DRAM/related storage, GPUs, FPGAs, etc.). Something like a transistor radio might be (mostly) on a single chip, but can't make use of all that many transistors. Computing devices, on the other hand, have an insatiable appetite for transistors for additional functions and wider data widths.

Phil Perry
  • 3
    Radios these days *are* computing devices, or at the very least contain them. Digital synthesis of FM frequencies, DSP signal processing of the audio (a biggie), digital supervisory control of station switching and so on. For example, the TAS3208 http://www.ti.com/lit/ds/symlink/tas3208.pdf – Spehro Pefhany Jul 01 '14 at 18:15
  • 1
    You're still not going to see tens or hundreds of millions, much less billions, of transistors used for a radio. Sure, they're becoming small special-purpose computers with all that digital function, but nothing on the scale of a multicore 64 bit CPU. – Phil Perry Jul 02 '14 at 17:55
  • @PhilPerry surely a digital radio has something like an ARM in it? Not billions of transistors, but well into the tens of millions. –  Jul 04 '14 at 07:21
  • Well, if you've crossed "the line" from analog radio to a _computer_ that (among other things) receives radio signals, you'll use lots of transistors. My point still stands that the OP's question about _electronic devices_ sounds like confusion between classic analog radios, etc. and computing devices. Yes, they perform in very different manners even if they're both black boxes pulling music out of the air. – Phil Perry Jul 08 '14 at 13:27
4

As previously stated, SRAM requires 6 transistors per bit. As we enlarge our caches (for efficiency purposes), we require more and more transistors. Looking at a processor wafer, you may see that the cache is bigger than a single core of the processor, and if you look closer at the cores, you will see well-organized parts within them which are also cache (probably the L1 data and instruction caches). With 6 MB of cache, you need 300 million transistors (plus the addressing logic).
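That 300 million figure follows directly from the 6-transistor cell (a rough tally ignoring tag bits and addressing logic, so it is really a lower bound):

```python
# 6 MB of cache at 6 transistors per SRAM bit
# (ignoring tags, ECC and addressing logic).
cache_bytes = 6 * 2**20
cache_bits = cache_bytes * 8
transistors = cache_bits * 6
print(transistors)  # 301989888 - roughly 300 million
```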

But, also as previously stated, caches are not the only reason the transistor count grows. On a modern Core i7, more than 7 instructions are executed per clock period per core (using the well-known Dhrystone benchmark). This means one thing: state-of-the-art processors do a lot of parallel computing. Doing more operations at the same time requires more units to do them, and much cleverer logic to schedule them. Cleverer logic requires much more complex logical equations, and so many more transistors to implement it.

Ricardo
Jacen
  • SRAM has not required 6 transistors in quite a few years. In fact 6T SRAM is pretty wasteful when you can use 1T, 2T or 4T SRAMs as essentially drop-in replacements. – cb88 Mar 24 '16 at 17:44
2

Stepping away from the details a bit:

Computers are complex digital switching devices. They have layer upon layer upon layer of complexity. The simplest level is logic gates such as NAND gates, as discussed. Then you get to adders, shift registers, latches, etc. Then you add clocked logic, instruction decoding, caches, arithmetic units, address decoding... it goes on and on. (Not to mention memory, which requires several transistors per bit of data stored.)

Every one of those levels is using lots of parts from the previous level of complexity, all of which are based on lots and lots of the basic logic gates.
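To make that layering concrete, here is a toy gate-level sketch (not any particular chip's design): a full adder built from nothing but NAND gates, using the classic 9-NAND construction. At 4 transistors per CMOS NAND, one bit of addition already costs 36 transistors, and a 32-bit ripple-carry adder alone needs over a thousand:

```python
def nand(a, b):
    """One NAND gate - 4 transistors in CMOS."""
    return 1 - (a & b)

def full_adder(a, b, cin):
    """Classic 9-NAND full adder: returns (sum, carry_out)."""
    n1 = nand(a, b)
    n2 = nand(a, n1)
    n3 = nand(b, n1)
    n4 = nand(n2, n3)    # n4 = a XOR b
    n5 = nand(n4, cin)
    n6 = nand(n4, n5)
    n7 = nand(cin, n5)
    s = nand(n6, n7)     # sum = a XOR b XOR cin
    cout = nand(n5, n1)  # carry out
    return s, cout

# 9 NANDs x 4 transistors = 36 transistors per bit of addition;
# a 32-bit ripple-carry adder needs 32 * 36 of them.
print(32 * 9 * 4)  # 1152
```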

Then you add concurrency. In order to get faster and faster performance, modern computers are designed to do lots of things at the same time. Within a single core, the address decoder, arithmetic unit, vector processor, cache manager, and various other subsystems all run at the same time, all with their own control systems and timing systems.

Modern computers also have larger and larger numbers of separate cores (multiple CPUs on a chip.)

Every time you go up a layer of abstraction, you have many orders of magnitude more complexity. Even the lowest level of complexity has thousands of transistors. Go up to high level subsystems like a CPU and you are talking at least millions of transistors.

Then there's GPUs (Graphics Processing Units). A GPU might have a THOUSAND separate floating point processors that are optimized to do vector mathematics, and each sub-processor will have several million transistors in it.

Duncan C
1

Without attempting to discuss how many transistors are needed for specific items, CPUs use more transistors for increased capabilities, including:

  • More complex instruction sets
  • More on-chip cache so that fewer fetches from RAM are required
  • More registers
  • More processor cores
1

Majenko has a great answer on how the transistors are used. So let me instead come at it from a different angle and deal with efficiency.

Is it efficient to use as few transistors as you can when designing something?

This basically boils down to what efficiency you're talking about. Perhaps you're a member of a religion that maintains it is necessary to use as few transistors as possible - in that case, the answer is pretty much given. Or perhaps you're a company building a product. Suddenly, a simple question about efficiency becomes a very complicated question about the cost - benefit ratio.

And here comes the kicker - transistors in integrated circuits are extremely cheap, and they're getting ever cheaper with time (SSDs are a great example of how the cost of transistors was pushed down). Labor, on the other hand, is extremely expensive.

In the times when ICs were just getting started, there was a certain push to keep the number of components required as low as possible. This was simply because components had a significant impact on the cost of the final product (in fact, they were often most of the cost of the product), and when you're building a finished, "boxed" product, the labor cost is spread out over all the pieces you make. The early IC-based computers (think video arcades) were driven to as small a per-piece cost as possible. However, the fixed costs (as opposed to per-piece costs) are strongly impacted by the amount you are able to sell. If you were only going to sell a couple, it probably wasn't worth spending too much time on lowering the per-piece costs. If you were trying to build a whole huge market, on the other hand, driving the per-piece costs as low as possible had a pay-off.

Note an important part - it only makes sense to invest a lot of time in improving the "efficiency" when you're designing something for mass-production. This is basically what "industry" is - with artisans, skilled labor is often the main cost of the finished product; in a factory, more of the cost comes from materials and (relatively) unskilled labor.

Let's fast forward to the PC revolution. When IBM-style PCs came around, they were very stupid. Extremely stupid. They were general purpose computers. For pretty much any task you could design a device that could do it better, faster, cheaper. In other words, in the simplistic efficiency view, they were highly inefficient. Calculators were much cheaper, fit in your pocket and ran for a long time on a battery. Video game consoles had special hardware to make them very good at running games. The problem was, they couldn't do anything else. A PC could do everything - it had a much worse price/output ratio, but you weren't railroaded into building a calculator or a 2D sprite game console. Why did Wolfenstein and Doom (and on Apple computers, Marathon) appear on general purpose computers and not on game consoles? Because the consoles were very good at doing 2D sprite-based games (imagine the typical JRPG, or games like Contra), but when you wanted to stray away from the efficient hardware, you found out there wasn't enough processing power to do anything else!

So, the apparently less efficient approach gives you some very interesting options:

  • It gives you more freedom. Contrast old 2D consoles with old IBM PCs, and old 3D graphics accelerators to modern GPUs, which are slowly becoming pretty much general purpose computers on their own.
  • It enables mass-production efficiency increases even though the end product (software) is "artisan" in some ways. So companies like Intel can drive the cost of a unit of work down much more efficiently than all the individual developers all over the world.
  • It gives more space for more abstractions in development, thus allowing better reuse of ready solutions, which in turn allows lower development and testing costs for better output. This is basically the reason why every schoolboy can write a full-fledged GUI-based application with database access, internet connectivity and all the other stuff that would be extremely hard to develop if you always had to start from scratch.
  • In PCs, this used to mean that your applications basically got faster over time without your input. The free-lunch time is mostly over now, since it's getting harder and harder to improve the raw speed of computers, but it shaped most of the PC's lifetime.

All this comes at a "waste" of transistors, but it's not real waste, because the real total costs are lower than they would be if you pushed for the simple "as few transistors as possible".

Ricardo
Luaan
1

Aside from increasing the raw storage capacities of RAM, caches and registers, as well as adding more computing cores and wider bus widths (32- vs 64-bit, etc.), it is because the CPU is increasingly complicated.

CPUs are computing units made up of other computing units. A CPU instruction goes through several stages. In the old days, there was one stage, and the clock period had to be as long as the worst-case time for all the logic gates (made from transistors) to settle. Then we invented pipelining, where the CPU is broken up into stages: instruction fetch, decode, execute and write-back. That simple 4-stage CPU can then run at roughly 4x the original clock speed. Each stage is separate from the others, which means not only can the clock speed increase, but you can now have 4 instructions overlapped (or "pipelined") in the CPU at once, giving up to 4x the throughput. However, "hazards" are now created: an incoming instruction may depend on the previous instruction's result, but because of pipelining it won't have that result as it enters the execute stage while the other instruction is exiting it. Therefore, you need to add circuitry to forward the result to the instruction entering the execute stage. The alternative is to stall the pipeline, which decreases performance.
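A toy timing model (illustrative numbers only, ignoring hazards and stalls) shows where the throughput gain comes from:

```python
# Assumption: 4 stages, each taking 1 time unit, no stalls.
def unpipelined_time(n_instructions):
    # Every instruction occupies the whole CPU for 4 units.
    return 4 * n_instructions

def pipelined_time(n_instructions):
    # 4 units to fill the pipeline, then one instruction retires per unit.
    return 4 + (n_instructions - 1)

print(unpipelined_time(100), pipelined_time(100))  # 400 103
```

For long instruction streams the ratio approaches 4x, which is why the extra forwarding and hazard-detection transistors are considered well spent.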

Each pipeline stage, and particularly the process part, can be sub-divided into more and more steps. As a result, you end up creating a vast amount of circuitry to handle all the inter-dependencies (hazards) in the pipeline.

Other circuits can be enhanced as well. A trivial digital adder called a "ripple-carry" adder is the simplest and smallest, but also the slowest. The fastest is a "carry-lookahead" adder, which takes a tremendous amount of additional circuitry. In my computer engineering course, I ran out of memory in my simulator of a 32-bit carry-lookahead adder, so I cut it in half: two 16-bit CLA adders in a ripple-carry configuration. (Addition and subtraction are relatively cheap for computers; multiplication is harder, and division is very hard.)
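As a rough illustration of the speed-versus-circuitry trade-off (the delay constants here are made up purely to show the scaling, not taken from any real design):

```python
import math

# Toy model: ripple-carry delay grows linearly with word width,
# carry-lookahead roughly logarithmically - at the cost of far more gates.
def ripple_delay(bits):
    return 2 * bits                        # ~2 gate delays per bit of carry

def cla_delay(bits):
    return 4 * math.ceil(math.log2(bits))  # lookahead tree depth

print(ripple_delay(32), cla_delay(32))  # 64 vs 20 gate delays
```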

A side effect of all this is that as we shrink transistors and subdivide the stages, clock frequencies can increase. This allows the processor to do more work, so it runs hotter. Also, as frequencies increase, propagation delays become more significant (the time it takes for a pipeline stage to complete and for the signal to be available at the other side). The effective speed of signal propagation is about 1 ft per nanosecond (i.e. 1 ft per cycle at 1 GHz). As clock speed increases, chip layout becomes increasingly important: a 4 GHz chip can span at most about 3 inches per clock period. So now you must start including additional buses and circuits to manage all the data moving around the chip.
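The 3-inch figure comes straight from that propagation speed:

```python
# Signal travels roughly 1 ft (12 inches) per nanosecond.
inches_per_ns = 12.0
clock_ghz = 4.0
period_ns = 1.0 / clock_ghz            # 0.25 ns per cycle at 4 GHz
reach_inches = inches_per_ns * period_ns
print(reach_inches)  # 3.0 inches per clock period
```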

We also add instructions to chips all the time. SIMD (Single instruction multiple data), power saving, etc. they all require circuitry.

Finally, we add more features to chips. In the old days, your CPU and your ALU (arithmetic logic unit) were separate. We combined them. Then the FPU (floating point unit), which was separate, got combined too. Nowadays we add USB 3.0, video acceleration, MPEG decoding, etc. We move more and more computation from software into hardware.

user9170
1

Another side of the "so many transistors" story is that these transistors are not individually designed-in by a human. A modern CPU core has on the order of 0.1 billion transistors, and no human designs every one of those transistors directly. It wouldn't be possible. A 75 year lifetime is only 2.3 billion seconds.

So, to make such huge designs feasible, the humans are involved in defining the functionality of the device at a much higher level of abstraction than individual transistors. The transformation to the individual transistors is known as circuit synthesis, and is done by very expensive, proprietary tools that collectively cost on the order of a billion dollars to develop over the years, aggregating among the major CPU makers and foundries.

The circuit synthesis tools don't generate designs with the least number of transistors possible. This is done for a multitude of reasons.

First, let's cover the most basic case: any complex circuit can be simulated by a much simpler, perhaps serial, CPU with sufficient memory. You can certainly simulate an i7 chip, with perfect accuracy, if only you hook up enough serial RAM to an Arduino. Such a solution will have far fewer transistors than the real CPU, and will run abysmally slowly, with an effective clock rate of 1 kHz or less. We clearly don't intend the transistor count reduction to go that far.

So we must limit ourselves to a certain class of design-to-transistors transformations: those that maintain the parallel capacity built into the original design.

Even then, optimization for the minimal number of transistors will likely produce designs that are not manufacturable using any existing semiconductor process. Why? Because chips that you can actually make are 2D structures, and require some circuit redundancy simply so that you can interconnect those transistors without requiring a kilogram of metal to do so. The fan-in and fan-out of the transistors, and of the resulting gates, do matter.

Finally, the tools aren't theoretically perfect: it'd usually require way too much CPU time and memory to generate solutions that are globally minimal in terms of transistor numbers, given a constraint of a manufacturable chip.

0

I think what the OP needs to know is that a "simple switch" often needs several transistors. Why? Well, for many reasons. Sometimes extra transistors are needed so that power usage is low in either the "on" or "off" state. Sometimes transistors are needed to deal with uncertainties in voltage inputs or component specifications. There are a lot of reasons. But I appreciate the point: look at the circuit diagram for an op-amp and you'll see a few dozen transistors! They wouldn't be there if they didn't serve some purpose in the circuit.

Jiminion
0

Basically, all a computer understands is 0s and 1s, and those are decided by these switches. Yes, transistors do more than just act as switches. But if a switch decides whether an output is a 0 or a 1 (call that a single-bit operation), then the more bits you need, the more transistors you need. So it's no wonder we have to embed millions of transistors into a single microprocessor. :)

0

In the era of technology, we need smart devices (small, fast and efficient). These devices are made up of integrated circuits (ICs) which contain a large number of transistors. We need more and more transistors to make ICs smarter and faster, because every circuit in an IC - adders, subtractors, multipliers, dividers, logic gates, registers, multiplexers, flip-flops, counters, shifters, memories, microprocessors and so on - is built from transistors (MOSFETs). With transistors we can implement any logic, so we need more and more of them.


Deepak Berwal