24

I have not seen a single reference where a whole computer is built inside one chip instead of being modularized and spread across a board.

I acknowledge that having modular parts enables versatility, but could big silicon companies such as Intel and AMD produce a whole computer, with CPU, chipset, RAM, and memory controllers all in one microchip?

I am familiar with the concept of an SoC, but I haven't seen ALL the components inside a single chip.

user0193
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/118064/discussion-on-question-by-jhonnys-is-there-a-theoretical-possibility-of-having-a). – Voltage Spike Jan 05 '21 at 14:57
  • No, because of yield in general, and then finding the equipment to build a chip that big while staying consistent (it is essentially a photographic process that needs a focus area). For a long time now, multi-chip modules have been used instead to get bigger "chips": essentially a PCB with multiple chips on it. But you end up with a trade-off of how much you put on that module vs. how much on the motherboard; as answered below, there is a lot of stuff you really have to have on a PCB and not on a module. – old_timer Jan 05 '21 at 15:03
  • what does microchip have to do with this? – old_timer Jan 05 '21 at 15:03
  • 1
    The thing is, as it becomes possible to put more on a single chip, the stuff people want to put there becomes greater as well. Multipliers, caches, all that silly stuff! – Hot Licks Jan 05 '21 at 17:51
  • @HotLicks well if it suits the general-purpose need, then why not. Caches and multipliers help accelerate the logic core. – GENIVI-LEARNER Jan 05 '21 at 18:21

15 Answers

51

It is theoretically possible. However, I would describe it as practically impossible.

When manufacturing silicon chips, you have a certain defect density across the wafer. Usually chips near the center are fine, and more defects are present near the edges. This is usually not a big problem: let's say you have 1000 individual chips on one wafer and 20 defects, caused by process variations, particles on the wafer, etc. You lose at most 20 chips, which is 2%.

If you manufactured a single chip spanning this wafer, you would have lost 100%. The bigger the chip gets, the lower your yield gets.

I work in the semiconductor industry and have yet to see a wafer where all chips are fully functional. Nowadays we have very high yield numbers, but still: there are defects on the wafer.
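The effect of die size on yield can be sketched with the classic Poisson yield model; the defect density and areas below are invented, illustrative numbers, not real process data:

```python
import math

# Poisson yield model: the probability that a die of area A (cm^2)
# contains zero defects at defect density D (defects/cm^2) is exp(-D * A).
# All numbers here are illustrative, not real process data.
D = 0.1              # defects per cm^2
small_die = 1.0      # cm^2: one of ~700 dice on a 300 mm wafer
wafer_scale = 700.0  # cm^2: a single chip covering the whole wafer

yield_small = math.exp(-D * small_die)
yield_wafer = math.exp(-D * wafer_scale)

print(f"small die yield:   {yield_small:.1%}")   # roughly 90%
print(f"wafer-scale yield: {yield_wafer:.1e}")   # effectively zero
```

Even a modest defect density that leaves about 90% of small dice working makes a wafer-sized chip essentially certain to contain defects, which is why wafer-scale designs need massive built-in redundancy.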

Another thing is: not all components of your computer can be manufactured on a silicon chip. For example, the coils used in the DC/DC regulators cannot be implemented on-chip. Inductors on chips are quite a pain. They're usually only done for >1 GHz transformers, signal coupling, etc. A power inductor with several nH or even µH is Not Possible (tm).
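To see why those inductors (and hence a switching DC/DC converter) are hard to avoid, consider what a purely on-chip linear regulator would have to dissipate; the rail and core values below are ballpark figures for illustration, not taken from any specific part:

```python
# A linear regulator burns (V_in - V_out) * I as heat.
# Ballpark figures for a desktop CPU core supply (illustrative only):
v_in = 12.0     # V: typical ATX supply rail
v_core = 1.0    # V: typical modern core voltage
i_core = 100.0  # A: plausible core current for a large CPU

p_delivered = v_core * i_core        # 100 W reaches the core
p_burned = (v_in - v_core) * i_core  # 1100 W wasted as heat
efficiency = p_delivered / (p_delivered + p_burned)

print(f"linear regulation efficiency: {efficiency:.1%}")  # about 8%
```

A switching converter avoids this loss, but it needs an inductor to store real energy, which is exactly the component that cannot be integrated on the die.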

Another downside is the need for multiple technologies. CPUs are usually made in a very small CMOS technology for high transistor integration. However, let's say a headphone output has to drive 32-ohm headphones. Manufacturing a headphone amplifier in a 7 nm FinFET technology is not ideal. Instead, you use different semiconductor technologies with lower frequency but higher current capability. There are a lot of different semiconductor technologies used to manufacture all the chips for a single computer.

Regarding memories like DRAM and nonvolatile memories like flash: these also need specific technologies. One downside of manufacturing modern microcontrollers (processors with RAM and ROM on board) is that the semiconductor process is somewhat bottlenecked by the internal flash these controllers need. More powerful processors usually don't have on-board program memory (except for a very small mask ROM which holds the bootloader).

It is still better to combine multiple dedicated chips than to try to put everything on one die. As you've already stated, with modern SoCs a lot of formerly separate components are now on a single IC.

However, putting everything on one chip is

  1. not very flexible
  2. not very cost efficient due to the higher yield losses
  3. not ideal from a technical perspective.
GNA
  • Can you please clarify `It is still better to combine multiple dedicated chips than trying to put everything on one die`? Isn't combining them the same as putting them on the same die? – user0193 Jan 03 '21 at 21:49
  • 6
    No. Because this allows you to manufacture these circuits in different technologies. It also improves your yield. If you take the 7 or 5 nm TSMC technology AMD uses to manufacture its high end CPUs, you wouldn't be able to manufacture a decent Ethernet-Chip or an audio chipset with this technology. – GNA Jan 03 '21 at 21:54
  • I am accepting this as the formal answer, and I think it is the most direct answer to what I was looking for! Putting everything on a single wafer is not possible mainly due to the differences in **manufacturing process**, like _CMOS_ and _TSMC_, for the various components! – user0193 Jan 03 '21 at 21:59
  • 1
    @JhonnyS indeed, for something on the scale of today's full-"computers" that is true (though it is not for smaller systems well exceeding the functionality of the PC's of the past) - but many many people already explained that quite earlier in the lifecycle of this question. – Chris Stratton Jan 03 '21 at 22:02
  • 5
    Let's just clarify: TSMC is a manufacturer. – GNA Jan 03 '21 at 22:02
  • @ChrisStratton yes, I do agree but having read `DC/DC regulators cannot be implemented on-chip`, `Manufacturing a headphone amp in a 7 nm fin-fet technology is not ideal`, `Inductors on Chips are quite a pain. They're usually only done for > 1 GHz transformers for signals coupling` plus other concrete cases such as `CMOS` technology etc. gave very strong intuition to me. – user0193 Jan 03 '21 at 22:14
  • 1
    Plenty of computers don't have a DC/DC converter. They're a fairly recent idea. Historically the power supply has been entirely distinct from the motherboard, to which it provided the precise voltages needed. You started to get on-board regulators when logic voltage standards diverged, when core voltages dropped, and *especially* when you got to core which needed different supply voltages when running at different speeds. Plenty of "computers" exceeding legacy PCs still have none of these, or in fact accomplish it with on-chip *linear* regulation. – Chris Stratton Jan 03 '21 at 22:16
  • 9
    All x86 computers that I know of have DC/DC regulators for the core voltage. These core voltage regulators have been on PC mainboards for ages. Saying they're "fairly recent" is not very accurate. They've been the standard for more than two decades. I have an old Pentium 4 mainboard lying around from the year 2000, and it uses DC/DC converters for the core supply. Even the Pentium 4 CPU required almost 100 Amps on its core voltage. https://www.intel.com/content/dam/support/us/en/documents/processors/pentium4/sb/30056103.pdf Table 9 page 22. ~1V at ~100A without a DC/DC converter would be crazy. – GNA Jan 03 '21 at 22:25
  • 1
    Of course we can start comparing modern semiconductor technology with the capabilities of the '80s, but that's not very useful. Maybe in the future it will be possible to integrate all of a now "modern" PC into a single chip. But current performance cannot be achieved by current technologies using a single chip. Small joke: even old '70s tech can't be implemented on a single chip. Where do you put in your punch cards? – GNA Jan 03 '21 at 22:32
  • Great answer. I also work in semiconductor, and it always makes me laugh how "20 chips at most" loss is reasonable in silicon, whereas in III-V "only 20 chips yielded over the entire wafer" is not unheard of – Shamtam Jan 04 '21 at 03:55
  • You could maybe imagine putting some inductors inside the same plastic package that houses the SoC. If it was going to be big enough to have an AC wall plug, or a 20VDC plug for an external power supply, and USB-C (and maybe HDMI) for keyboard and video anyway, it would have some room inside the package. Or the required external HW would include some external inductors and maybe capacitors, although I think the OP is trying to avoid a mobo entirely? Anyway, just thought it was amusing to picture a modern CPU with its >7 layers of whisker-thin metal wires, and also a power supply inductor :P – Peter Cordes Jan 04 '21 at 06:40
  • @Shamtam: Yes. I also used to develop for a GaAs HEMT technology. The foundry specified a yield of like ~15%. And those were only the chips that didn't instantly break down. Those with working high-frequency parts were even fewer. It was quite a pain... – GNA Jan 04 '21 at 19:03
  • 1
    @PeterCordes There are power modules in a single mold package. These have exactly what you talk about: a power inductor molded into a single black blob with the control ASIC and decoupling caps. Intel's Enpirion power series, for example. Other manufacturers like TI and LT/AD also build these modules. Additionally, I remember a tear-down video on YouTube of a high-frequency oscilloscope. The track-and-hold chip had a coax connector somehow molded into it to allow a very direct connection from the input to the ASIC. However, I can't find the YouTube video right now. – GNA Jan 04 '21 at 19:13
  • https://superuser.com/questions/277655/what-is-a-single-chip-microcomputer – bandybabboon Jan 05 '21 at 07:44
  • It doesn't invalidate your answer in any way, but I thought you might find this interesting: https://www.cerebras.net/. Disclaimer: this is my employer, and the product is, in fact, a single chip per wafer :) Not a computer, mind you, just a chip, which needs a lot of support from surrounding components that are almost as full of interesting engineering challenges as the chip itself. But it's manufactured using current fabrication technology. – mathrick Jan 05 '21 at 08:25
  • So, [to be fair](https://www.youtube.com/watch?v=G19B7lTgwCE), the idea of an entire CPU (or even more than a transistor) on one piece of silicone was once practically impossible as well. Almost everything you mention is engineering problems, and not fundamental ones; the advantage of fewer manufacturing steps may eventually lead towards this. Just wire a chip up a battery and some USB ports and go. – Yakk Jan 05 '21 at 16:05
  • @GNA if you ever find the youtube video on the high frequency oscilloscope you were talking about please do care to share in comments. – user0193 Jan 14 '21 at 23:11
  • 1
    @JhonnyS Here it is. I found it. I've set the timestamp to the moment you can see the T&H chips. https://youtu.be/U3w_EWgGQuk?t=1513 However, my memory served me a little wrong: it is not molded. But an interesting package anyway. – GNA Jan 14 '21 at 23:25
  • @GNA awesome! Thanks a lot! – user0193 Jan 14 '21 at 23:42
30

can big silicon companies such as Intel, AMD produce a whole computer with cpu, chipset, RAM, memory controllers, all in a microchip?

Keep in mind that what you might think of as one silicon chip may actually be multiple chips in one package. A good example is the latest generation of AMD Ryzen 9 CPUs, which are made of multiple "chiplets" bonded together in one package. AMD does it to improve yield and reduce cost, but the same method could be used to provide flash memory, CPU, and DRAM in the same package.

I have not seen a single reference where a whole computer is built inside a chip itself instead of modularizing and spreading it on a board.

What you are describing is a micro-controller or system-on-a-chip.

Many of the micro-controllers and system-on-a-chip devices out there have non-volatile storage, RAM, CPU, and peripherals in one die. In terms of capability they are comparable to a 1980s or early 1990s era PC.

  • CPU clock rates for those devices are in the 10s to 100s of MHz range (comparable to a CPU in an '80s or '90s era PC).
  • Flash memory may be up to a few MB (comparable to a 1980s era hard drive).
  • RAM may be up to around 1 MB (comparable to an early 1980s PC).
  • ARM- or PowerPC-based chips may feature an MMU. Linux can run on some of those chips.

While not technically one chip, Texas Instruments offers a technology called "Package on Package" for their OMAP mobile phone processors. The PoP chips are BGAs that feature solder pads on their top side and balls on the bottom side. Instead of placing several chips next to each other on a PWB, you stack a CPU, flash memory, and DRAM vertically right on top of each other.

Another technology that comes close is the Xilinx ZYNQ FPGA. You can get a system running PetaLinux with as few as 3 chips plus power supplies. Some of the peripherals would also require a physical-layer transceiver if they do not fit one of the available I/O standards supported by the chip.

  • At a minimum you need to add a flash memory so you can boot the chip. You can then pull the OS off of the flash memory or load it over a network.
  • There are several MB of available onboard memory. But if you want more than that you can add a DRAM chip up to 2 GB.
  • It features either one or two ARM CPU cores running in the 600-700 MHz range that can run PetaLinux.
  • The ZYNQ chip features lots of built-in peripherals such as Ethernet, USB, serial, etc. The only thing you need to add is physical-layer transceivers.
  • The chips feature a large piece of FPGA fabric that can be used to create any additional logic you need.
  • Some things like LVDS video can be created directly from the chip using FPGA resources.
user4574
  • My understanding is that the chiplets on the Zen architecture are bonded to some sort of larger "motherboard-like" silicon chip, not just wire-bonded directly to each other. – Hearth Jan 03 '21 at 23:12
  • @Hearth I will remove the word "wire" from "wire-bonded" to reflect that they are connected together without specifying how. – user4574 Jan 04 '21 at 03:42
  • An SoC doesn't have to be a microcontroller. The fact that most microcontrollers are SoCs and most other things aren't is related to the answer to this question, but what the OP is describing doesn't have to be a microcontroller. – Peter Cordes Jan 04 '21 at 06:42
  • 4
    `In terms of capability they are comparable to a 1980s or early 1990s era PC.` That is pretty old information. Modern SoCs are much faster and exceed the capabilities of PCs from the early 2000s in both computing power and peripherals. – Alexey Kamenskiy Jan 04 '21 at 07:26
  • @AlexeyKamenskiy Do those parts have RAM on the same die? – grahamparks Jan 04 '21 at 17:22
  • 2
    @AlexeyKamenskiy Are there SoCs (excluding FPGA variants) that feature more than a few MB of flash and RAM **on-chip**? A quick search on Digikey (one of the largest parts distributors in the world) in the "System On Chip" category shows 2200 active part numbers. The "Microcontrollers" category shows 53000 part numbers. Among all those part numbers the largest RAM size on any chip is only 4.5MB. All but 2 part numbers had 16MB or less of internal flash. – user4574 Jan 04 '21 at 18:05
15

Theoretically, yes. Wafer-scale integration has been discussed in the past.

Practically, no. Manufacturing processes for DRAM and flash are customized and tweaked for those products, so there are extra process steps that are not needed for normal logic. Those process steps drive up the cost of everything on the wafer. Trying to integrate more and more logic on a larger and larger silicon device will lead to a higher number of defects per device, which will increase the need for redundancy and self-repair.

Finding a package to reliably support, connect, and dissipate heat from a very large piece of thin, brittle silicon is another problem.

It just doesn't make sense. If it did, Intel, ARM, and AMD would be doing it.

Elliot Alderson
  • 7
    This of course depends on what one means by a "computer" - DRAM is hardly the only storage technology out there. There exist today both FPGAs with enough block RAM which could fully re-create early personal computers within a single chip, and MCUs which could *emulate* them faster than they originally ran, and with DMA likely even the video. Static RAM in a modern process vastly exceeds the capacity of DRAM in a historic process, since the complexity difference is only around 6:1, so 64K or even several times that of SRAM on the same die as a processor is really no big deal. – Chris Stratton Jan 02 '21 at 19:50
  • @ChrisStratton Yes, your point is well taken. I was trying to read between the lines of the question, based on the reference to a "motherboard". – Elliot Alderson Jan 02 '21 at 19:57
  • I know Xilinx uses MicroBlaze as a CPU emulation, but usually they are slower than the processors they are emulating. Is there any reference? I would like to read. – GENIVI-LEARNER Jan 02 '21 at 19:57
  • 2
    @GENIVI-LEARNER microblaze is a CPU, not any sort of emulation. You can build other types of CPU's in an FPGA, too. – Chris Stratton Jan 02 '21 at 20:00
  • 1
    @GENIVI-LEARNER There would be many thousands of references. I would suggest perusing the IEEE Journal of Solid State Circuits, and that would help you understand the terminology and keywords that you could use for a broader literature search. – Elliot Alderson Jan 02 '21 at 20:00
  • 2
    @ElliotAlderson legacy PCs had motherboards as well; today you can put effectively all of that functionality on one die. No, you cannot do so with the functionality of today's PCs. Though stacked hybrids such as used in many phones, the Raspberry Pi, and the Octavo Beagle-on-a-chip come fairly close from a board design perspective, in that with the CPU + RAM in one package most of what is external is power, "disk", and communication. – Chris Stratton Jan 02 '21 at 20:02
  • @ChrisStratton so basically you are saying it's not just possible to put everything on one die, it's already been done in the Raspberry Pi and Octavo. Well, I thought both had multiple peripherals outside the package (with the exception of GPIO, which by definition has to be outside): LCD driver, MMU, and other bus drivers – GENIVI-LEARNER Jan 03 '21 at 13:36
  • No. That is quite opposite of what I said and near completely wrong. – Chris Stratton Jan 03 '21 at 14:07
  • I suppose if someone came up with a process that could economically detect and repair defects on the wafer, that would solve the yield problem. No idea how that would work though. – Jeremy Friesner Jan 04 '21 at 20:56
  • @JeremyFriesner: For NAND flash, that's effectively done by specifying that certain parts of the flash are guaranteed to be operational, and a certain fraction of the remainder is specified to be operational, and parts that are not operational are guaranteed to be marked. – supercat Jan 04 '21 at 22:34
9

In addition to all the other answers: We are kind of getting there, where it makes sense.

Early computers had dedicated chips for cache, graphics, audio, bus controller (PCI, AGP etc.), memory controller, network, serial/parallel ports etc. etc.

Today most of that is integrated in the CPU.

Look at a modern AMD Ryzen mainboard, for example this ASRock DeskMini A300:

[Image: the ASRock DeskMini A300 mainboard]

It pretty much only provides the connectors, slots for RAM and storage, and lots of decoupling capacitors. The few chips you see are just there for I/O (e.g. Realtek audio codec, Ethernet PHY, ESD protection, level shifters), BIOS, fan control, and power regulation.

Michael
  • I believe this is a micro-ATX board, and I am a bit curious whether the requirement to package so many components into a tiny die necessitates the use of so many decoupling capacitors? – user0193 Jan 03 '21 at 21:30
  • 2
    The ethernet PHY (physical layer) is normally also a separate chip, so it can do essentially analog things. (https://en.wikipedia.org/wiki/PHY). Many of the I/O ports (audio like you mentioned, network, and maybe video), probably aren't wired directly to the chipset or the CPU, although the USB ports might be. So yes, "I/O stuff", and *most* of the work that a NIC does is integrated. Dealing with different voltage levels is AFAIK often a reason to have an interface chip of some kind; also maybe to reduce the chance of over-voltage damage on one port taking out the whole chipset. – Peter Cordes Jan 04 '21 at 06:59
  • 2
    @JhonnyS If that was a [microATX](https://en.wikipedia.org/wiki/MicroATX) board then it would have four PCIe slots. It has none. ASRock refer to it as "[Small APU Form Factor](https://www.asrock.com/nettop/AMD/DeskMini%20A300%20Series/)", and it may be unique to them. – Andrew Morton Jan 04 '21 at 11:23
  • 3
    @JhonnyS: If you zoom in you can see that a lot of it is actually unpopulated. Maybe the small board size also makes them appear more numerous. The back side is relatively empty: https://extreme.pcgameshardware.de/attachments/dsc_1558-jpg.1043292/ so maybe more components on the front side than usual. I just picked this board as example because it doesn’t even have a chipset, but is still a complete, modern AMD Ryzen computer/mainboard. – Michael Jan 04 '21 at 11:37
8

Sure, it’s possible. There are several issues, however:

  • Manufacturing yield
  • Process optimization
  • I/O and power connections
  • Memory

Silicon defects increase with area. The larger the chip, the more likely the die will have defects in it. Some of these can be worked around (such as mapping redundant memory cells) but in general the yield becomes unmanageably bad above a certain die size. Closely related to this is testability, which again becomes very challenging and time-consuming for large die.
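As a rough sketch of the economics behind this, combining a linear gross-die count with the Poisson yield model shows the cost per good die growing much faster than the area; the wafer cost, area, and defect density below are invented for illustration:

```python
import math

# Cost per *good* die: gross dice per wafer falls linearly with area,
# while yield falls exponentially (Poisson model), so cost rises
# super-linearly with die area. All constants are illustrative.
WAFER_AREA = 700.0    # cm^2, roughly a 300 mm wafer
WAFER_COST = 10000.0  # arbitrary cost units per processed wafer
D = 0.1               # defects per cm^2

def cost_per_good_die(area_cm2: float) -> float:
    gross_dice = WAFER_AREA / area_cm2       # ignores edge losses
    good_fraction = math.exp(-D * area_cm2)  # Poisson yield
    return WAFER_COST / (gross_dice * good_fraction)

for area in (1.0, 2.0, 4.0, 8.0):
    print(f"{area:4.0f} cm^2 -> {cost_per_good_die(area):8.1f} per good die")
```

With these numbers, doubling the die area more than doubles the cost per good die at every step, which is the economic pressure behind splitting systems into smaller dies or chiplets.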

Different parts of the system will require different types of process to best serve the function. A CPU will want to use a low voltage and low-threshold cells for speed, while a peripheral connection may need a higher voltage for compatibility or to run mixed-signal blocks. These often work against each other, forcing trade-offs in speed or cost.

I/O connections need to be brought out to be useful. This implies some kind of larger PCB to hold them. Power must also be brought to the die and distributed, which becomes increasingly difficult with larger die.

Finally, large memories (DRAM, flash) consume a lot of area, and need special process (such as trench capacitors for DRAM.) It’s more economical to build them on processes optimized for them than to wedge them onto the same die as the CPU.

This all said, in recent years large CPUs have moved away from being large single dies, and instead are composed of ‘chiplets’ arranged together with other subsystems in a multi-chip package. The AMD EPYC is a great example of this approach, using a scalable group of compute units interconnected by a high-performance datapath fabric. More about that here: https://www.nextplatform.com/2019/08/15/a-deep-dive-into-amds-rome-epyc-architecture/

hacktastical
  • 1
    Should we also add modularity? Putting the Bluetooth controller on a separate chip lets you make boards that have Zigbee instead, without having to make different chips. – user253751 Jan 04 '21 at 11:33
  • 1
    That’s a marketing consideration, not an economic one. Besides, I/O covers that. – hacktastical Jan 04 '21 at 16:38
6

I think the existing answers explain quite well why you still need a motherboard, even for SoCs that integrate as much as possible into the chip. Fundamentally you are constrained by the need to interface power and I/O to your computer.

With that said, it's certainly possible to find applications that minimize the amount of additional hardware required to put the silicon to work. The only place I can think of where a functional computer can be found in its final, working, application in the form of a single piece of silicon is an RFID tag.

[Image: an RFID tag with its antenna loop]

This particular example uses the Impinj Monza R6 chip, whose manual gives a block diagram of its design:

[Image: Impinj Monza R6 block diagram]

The chip contains power management circuitry to collect RF energy from the antenna to power the chip, a microcontroller with ROM and writeable user memory, password protection, etc.

The final product requires no motherboard, and the only power and I/O comes from the single wire loop used as an antenna. The wire antenna bonds directly to the IC on its only two contact pads:

[Image: the wire antenna bonded directly to the chip's two contact pads]

You might consider the plastic sticker backing to be a "motherboard" of sorts, I suppose, but it otherwise requires no additional components to function.

Fundamentally, this is somewhat a question of semantics and where you want to draw the line between the computer, the box you put the computer into, and the peripherals that you attach to the computer.

J...
  • This is only one very narrow example of what is in fact a very broad category of single-chip systems. – Chris Stratton Jan 04 '21 at 20:16
  • 1
    @ChrisStratton Single chip systems that are not built on circuit boards and do not require additional components other than silicon to operate? Please, an example if you would. – J... Jan 04 '21 at 20:17
  • Sure, if you chose not to use a board (which ironically your example includes). But that's a misreading of the question to begin with; it didn't say there couldn't be a board, it said instead of *spreading* the computer out on one. So for example, if you just feed power with discrete wires to two pins of an ATtiny or simple PIC. – Chris Stratton Jan 04 '21 at 20:20
  • 1
    @ChrisStratton My example includes a board? Are we talking about the same thing? – J... Jan 04 '21 at 20:21
  • As you said yourself, the plastic sticker in your example with circuitry printed on it plays the role of a board. But it's quite easy to make a computer which has no such thing, unlike your example. – Chris Stratton Jan 04 '21 at 20:22
  • One was already provided. The point here is you've oddly chosen a very small obscure example from a _huge_ industry of possibilities, and falsely claimed that it is somehow unique. – Chris Stratton Jan 04 '21 at 20:23
  • 1
    @ChrisStratton Ok, but what does your ATtiny do? I defy you to find me a *single* application of an ATtiny where the chip is not connected to anything but power by two wires with no motherboard. OP explicitly said they were aware of SoCs and that that didn't count because ALL the components were not in the chip. A piece of silicon with a single wire loop for I/O is about as basic as I think a computer can get. – J... Jan 04 '21 at 20:24
  • "MCU" and "SOC" are fairly distinct categories, simple examples of the former can run without any support components; the latter almost always require lots of passives, and most also require external memory. As for what one could do, whatever you like - realize that a "motherboard" can't do a whole lot until you connect some signals to either. Though one could measure temperature or supply voltage and transmit that or an access code by generating short range radio signals (of the sort usually thought of as "interference") Your tag after all can't do anything without an external reader. – Chris Stratton Jan 04 '21 at 20:29
  • 1
    @ChrisStratton OP's constraint had nothing to do with an external reader - they were interested in computers that didn't need motherboards or extra components to work. In all cases, microcontrollers are constrained by the need for power and I/O to have motherboards. RFID tags, I think, offer the most minimal implementation for both. – J... Jan 04 '21 at 20:36
  • The RFID reader is quite literally the required power source and I/O you claim not to need, which makes such a claim nonsense. Back in the realm of sensibility, since a mother board doesn't include power or the actual I/O *peripherals* like keyboards, disks, screens, a non-motherboard system wouldn't be required to have those internally either. Ultimately it's also key to realize that the asker has been tossing around a lot of terms while demonstrating that they don't really understand their technical nature. – Chris Stratton Jan 04 '21 at 21:01
  • 1
    @ChrisStratton You're fighting a strawman. I did not claim that an RFID tag does not need power or I/O, only that it can achieve both in its final operating form without needing a motherboard and requiring only a minimum of extra hardware (ie: a wire loop) – J... Jan 04 '21 at 21:03
  • A claim which is true for many, many other chips as well - that's the original point, you've mentioned a tiny corner of a huge space and falsely claimed it unique. – Chris Stratton Jan 04 '21 at 21:06
  • 1
    @ChrisStratton A claim you have yet to defend successfully. You've cited one example of a chip that is only offered in physical packages explicitly designed to be soldered to a circuit board - the one component OP is looking to dispense with. Nobody actually *uses* a PIC or MCU without a circuit board. – J... Jan 04 '21 at 21:08
  • Just because it *can* be soldered to a board doesn't mean it *has* to be. It happens your component is *designed* to be stuck to a circuit board, itself, and actually far far harder to use without one. It just happens to be a less traditional sort of circuit board substrate and trace material. But plenty of examples of such materials used with other MCU types, too. – Chris Stratton Jan 04 '21 at 21:09
  • 1
    @ChrisStratton I'm really interested for some real examples... – J... Jan 04 '21 at 21:14
  • 1
    +1 from me. Agree with J that an RFID tag and a uC or SoC are two completely different classes of single-chip "computers" – Shamtam Jan 04 '21 at 23:56
4

The fabrication process used for a high-end CPU will be designed primarily for cache memory (SRAM), because modern CPUs are mostly cache by area. The logic is only a small part of it.

The fabrication for SDRAM (usually just called "RAM") is totally different from that for SRAM, and very specialized. SDRAM fabs don't produce anything else.

This, alone, is enough reason to have different fabs for SDRAM and CPUs.

There is also a modularity issue. Some people want multiple disk controllers, some want only one, etc. So committing all those decisions to a silicon wafer would probably not be the best idea. There are hundreds of different motherboards out there all doing slightly different things. The resources required to design and test a motherboard are small compared to the resources required to design and test a CPU.

All of that said, there are nonetheless so-called system-on-chip (SoC) processors out there in the microcontroller realm. They have voltage regulators, flash memory, SRAM, and multiple peripherals all on one chip. So it IS possible. It just doesn't seem to fit the market's needs to do the same thing with high-powered processors.

user57037
  • So in your opinion the difference in fabrication for SDRAM and SRAM is the reason we can't integrate the memory perhaps (SDRAM) directly into the same die that hosts cpu? – metron Jan 02 '21 at 20:19
  • 1
    Yes. Also, different people make different choices regarding how much SDRAM they want. So you would have to support more variants with different amounts of SDRAM. – user57037 Jan 02 '21 at 20:25
  • @metron I don't think it is strictly a matter of whether it is **possible** to integrate a modern CPU (such as in Intel i5, say) with DRAM...the real question is whether it makes economic sense. Based on what we see in the market, it does not. – Elliot Alderson Jan 02 '21 at 21:19
  • @metron It is possible to put DRAM on a logic die, just very expensive compared to putting it on a dedicated memory die. Some products do integrate DRAM and logic on the same dies however: https://semiengineering.com/the-power-of-edram/ – user1850479 Jan 03 '21 at 05:31
  • 1
    @user1850479: The graphics chip used in the 1980s Famicom/Nintendo Entertainment System includes about 256 bytes of dynamic memory on chip, though the layout is designed to fit the process used for the rest of the chip, rather than high-density DRAM processes. – supercat Jan 05 '21 at 23:58
4

Any wafer fabrication process calls for a (usually large) number of different steps, and these will be somewhat different for RAM, logic and specialised high-speed functions such as video and USB. Those functions could be manufactured on the same wafer with moderate compromises, as is done for microcontrollers, but power-management functions such as high-power DC-DC converters require significantly different fabrication and would be impractical. By analogy, you can make most parts of a motor car from sheet metal, but you’d struggle to make an engine that way; not that it would be technically impossible, but it certainly wouldn’t be practical. Besides, much of the function of a motherboard is to house the connectors, so any monolithic solution would need something analogous to a motherboard even if it only had one chip on it.

Frog
  • 6,686
  • 1
  • 8
  • 12
  • Even in cases where one could combine different processes on different parts of a chip, the cost per unit area of such a chip would be much higher than the cost per unit area of chips using individual processes. If a system could use three chips which are fabricated by different processes, each of which uses up 1/300 of a wafer, it might be possible to instead put each system onto a single chip which combines all three processes and takes 1/100 of a wafer, but fabricating 3,000 systems' worth of chips would require subjecting 30 wafers to all of the processes... – supercat Jan 04 '21 at 22:30
  • ...instead of subjecting 10 wafers to each of the processes. – supercat Jan 04 '21 at 22:31
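The comment's arithmetic can be checked with a quick back-of-the-envelope calculation (the die sizes, system count, and process count are the comment's own hypothetical numbers):

```python
# Hypothetical numbers from the comment above: three chip types on
# three dedicated processes, each die taking 1/300 of a wafer, vs.
# one combined die (1/100 of a wafer) that must see all three processes.
systems = 3000
n_processes = 3

# Separate dies: 300 chips per wafer, so 10 wafers per chip type,
# and each wafer is subjected to only its own process.
wafers_per_type = systems // 300                 # 10
runs_separate = wafers_per_type * n_processes    # 30 wafer-process runs total

# Combined die: 100 systems per wafer, so 30 wafers, and every
# wafer must be subjected to all three processes.
wafers_combined = systems // 100                 # 30
runs_combined = wafers_combined * n_processes    # 90 wafer-process runs

print(runs_separate, runs_combined)  # 30 90
```

So integrating the three processes on one die triples the number of wafer-process runs for the same number of shipped systems, even before any yield penalty.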
2

@user4574 and @hacktastical have covered the major points, but there are a few others worth mentioning:

  1. Suppose you are able to put all of the electronics on one chip. Congratulations: you are now locked into a particular configuration of RAM, ROM, and peripherals. Any change to these requires a completely different chip, which costs money every time you do it.

    This issue isn't so bad in the microcontroller market, because there will always be plenty of embedded applications that only need 8k of RAM and 32k of ROM. These chips only need to be designed once, and sell tens of billions of units every year, forever.

    Demand is much different in the PC/laptop/phone market. There is an expectation of more RAM and different peripherals every 2-3 years. So you are constantly designing new chips, to fit all of the various niches of the market and which last on the market for only a few years before being considered obsolete. You will be lucky if any given design sells one million units before becoming obsolete.

    Meanwhile, your competitor produces one general-purpose CPU design every 10 years which can be re-used in various memory and peripheral combinations. Your competitor has less frequent development costs, produces a smaller product line, and sells 1000x as many units as you (because of their chip's versatility). Who do you suppose is going to have a cheaper product?

  2. CPUs need an oscillator. Some microcontrollers have an on-chip RC oscillator, which produces at best around 20 MHz. However, that's way too slow for the PC (2000+ MHz), laptop (500+ MHz), and cell phone (200+ MHz) markets. (These numbers are for the lowest-end units; most units have much higher clock rates.) At these speeds you need a real quartz crystal as a frequency reference, and quartz cannot be fabricated on a silicon die, so the crystal is placed separately on the motherboard.

  3. Digital circuits need bypass (decoupling) capacitors. Instead of taking up valuable chip space, it is cheaper to put them externally on the motherboard.

  4. Often the power supply needs to be brought down to a lower voltage (e.g. 5V to 3.3V). The voltage regulators which do that dissipate a lot of heat. Better to have that on another chip on the motherboard, to keep that heat away from your CPU.
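The economics in point 1 can be sketched with a toy amortization model. All numbers below are invented purely for illustration:

```python
# Toy model (all numbers invented): per-unit cost = silicon cost
# plus design (NRE) cost amortized over units sold.
def unit_cost(nre, units, silicon):
    return silicon + nre / units

# Everything-on-one-chip part: a fresh design per market niche,
# obsolete in a few years, ~1M units per design.
integrated = unit_cost(nre=50e6, units=1e6, silicon=20)

# General-purpose CPU: one design reused across many RAM and
# peripheral combinations, ~1B units over its lifetime.
general = unit_cost(nre=500e6, units=1e9, silicon=20)

print(integrated, general)  # 70.0 20.5
```

Even with a 10x larger design budget, the reusable design's development cost nearly vanishes per unit, while the niche-specific chip carries its NRE on every sale.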

DrSheldon
  • 926
  • 1
  • 10
  • 18
  • 5
    On 4, it is more that a reasonably efficient power converter needs an inductor, and on chip magnetics are death on silicon area (Also, power mosfets are a very different process to logic or memory). Decoupling is actually often done on the interposer these days, too much inductance there if you try doing it just on the main board, and the quick stuff even eats the area cost of doing some decoupling on die. – Dan Mills Jan 03 '21 at 14:47
  • @DanMills, I was wondering that as well, because an inductor needs a coil and I don't think we can simply cast one into a die. – user0193 Jan 03 '21 at 21:35
  • These are really good points, especially about the oscillator with a real crystal! Didn't know embedded computers use different oscillators than PCs. Also on point 3, are you saying electrolytic caps are cheaper than those that can be formed into a die, hence it's cheaper to have them externally on the board instead of on the package itself? – user0193 Jan 03 '21 at 21:39
  • 3
    The oscillator argument here is *simply wrong*. In actuality, all high-speed clocks are generated internally via on-chip oscillators in the form of PLLs/DLLs etc. External inputs are merely low-speed references to which the fast clocks are divided down for comparison. Apart from communications standards, which have particular rules, it's entirely possible to go fast off an internal oscillator as reference, at least if you back off very slightly from the edge of capability; it's not typically *done*, but it's quite *possible*. – Chris Stratton Jan 03 '21 at 21:55
  • @JhonnyS You can do inductors on silicon, but it ties up a stupid amount of area and they are not very good (high resistance and low inductance); more useful for things like very small isolated supplies for iso amps and sometimes little bits of RF stuff than for a few tens of amps at 1V for a core power supply. You CAN do almost anything, but for most applications what IS done is what is most cost-effective. – Dan Mills Jan 04 '21 at 19:17
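As the comment thread above notes, the external crystal is only a slow reference; the fast core clock is synthesized on-chip by a PLL. A minimal sketch of the frequency arithmetic for an integer-N PLL (divider values invented for illustration, not from any specific part):

```python
# Integer-N PLL: f_out = f_ref * N / M.
# All values are illustrative, not from any specific part.
f_ref = 25e6      # 25 MHz external crystal reference
N = 120           # feedback divider (multiplies the reference up)
M = 1             # output (post) divider
f_core = f_ref * N / M
print(f_core / 1e6)  # 3000.0 (MHz)
```

This is why a cheap low-frequency crystal on the motherboard is enough to clock a multi-GHz CPU.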
1

Depends what you mean by computer and “all components”. The chip on an Arduino has input, output, storage, memory, and a central processor with ALU all on one chip (other SoCs have far more, including network and graphics units). Even the old Intel (major vendor) 8051 is a fairly complete system, and SoC variants of it are used as single-chip computer solutions in perhaps billions of products.

But even an Arduino doesn’t include AC power supply, LEDs, and physical buttons on chip.

hotpaw2
  • 4,731
  • 4
  • 29
  • 44
1

As far as I know, only one company produces a wafer-scale product for AI neural network applications, which is Cerebras. It contains over a trillion transistors, and its cost, last time I checked, was around $200k.

Dirk Bruere
  • 13,425
  • 9
  • 53
  • 111
0

It's called an "SoC" (system on chip), and they're widely used in mobile devices. Beyond the main chip you just have a bunch of capacitors, resistors, and things like charging chips to deal with the electrical environment.

0

https://en.m.wikipedia.org/wiki/Texas_Instruments_TMS1000

The TMS1000 is the first "computer on a chip" and one of the first CPUs. It dates from 1974.

> It combined a 4-bit central processor unit, read-only memory (ROM), read/write memory (RAM), and input/output (I/O) lines as a complete "computer on a chip". It was intended for embedded systems in automobiles, appliances, games, and measurement instruments

0

Maybe I'm missing something, but I'm looking at this Arduino with an Atmel ATmega328 chip, and the only external (computer-specific) part I see is the crystal. One advantage of putting the whole computer in one chip is that routing the wide buses that connect all the parts becomes a problem on printed wiring boards as bus width increases, and is more easily handled inside a chip. A trade-off is the accessibility of each functional component, which we largely solve with JTAG or something similar.

0

Yes, there's a technique that shows promise: it's called Silicon Interconnect Fabric.

Read all about it here:

https://spectrum.ieee.org/goodbye-motherboard-hello-siliconinterconnect-fabric#toggle-gdpr

Shashank V M
  • 2,279
  • 13
  • 47