1

My teacher told me that we are trying to make transistors smaller so that they require less electricity to operate, which in turn makes them faster (because changing a signal from 1 to 0 or vice versa needs less energy, or so he said). Meanwhile, cryptocurrency miners and gamers over-voltage their GPUs for better performance. Why do these two points seem to contradict each other? What's happening under the hood when we over-voltage GPUs for better performance?

P.S.: I debated whether I should ask this on SuperUser or ElectricalEngineering before making up my mind and asking it here.

  • 1
    Each transistor in this case can be simplified as a capacitive load. Changing the state of one requires current. Your "driver" is another transistor which again can be over-simplified as a resistor when it's on. Now, you are left with a simple RC circuit. To get to a certain threshold value, you can increase your driving voltage to get there faster. Power consumption is proportional to U^2*f but maximum speed is proportional to just U, so the penalty of increasing the voltage is high. – winny Jul 06 '17 at 08:03
  • You are talking about two very different things. When designing a new transistor if you can make it smaller it will run at a lower voltage and the chip will run faster (both lower capacitance and lower power per clock both mean higher speeds before things stop working or hit thermal limits). When using an existing transistor design (e.g. what's in your GPU) then more voltage will allow it to run faster but with a lower safety margin in terms of temperature and part lifespan. – Andrew Jul 06 '17 at 15:10
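The RC model in the first comment can be made concrete with a rough numeric sketch. All component values below (driver resistance, gate capacitance, threshold) are illustrative assumptions, not real device parameters:

```python
import math

# Rough sketch of the comment's model: the driver (approx. a resistor R
# when on) charges the next gate (approx. a capacitor C) toward the
# supply voltage V_dd. The node must cross a fixed threshold V_th
# before the next stage flips.
R = 10e3    # ohms, assumed driver on-resistance
C = 1e-15   # farads, assumed gate capacitance (1 fF)
V_th = 0.5  # volts, assumed fixed switching threshold

def switch_time(v_dd):
    # Solve v(t) = V_dd * (1 - exp(-t / (R*C))) = V_th for t.
    # A higher V_dd reaches the same absolute threshold sooner.
    return R * C * math.log(v_dd / (v_dd - V_th))

def dynamic_power(v_dd, f):
    # Classic CMOS dynamic power per switching node: P = C * V^2 * f,
    # the U^2 * f relation from the comment.
    return C * v_dd**2 * f

# Raising the supply from 1.0 V to 1.2 V:
t_lo, t_hi = switch_time(1.0), switch_time(1.2)
p_lo, p_hi = dynamic_power(1.0, 1e9), dynamic_power(1.2, 1e9)
print(f"switch time: {t_lo:.3e} s -> {t_hi:.3e} s")  # somewhat faster
print(f"power/node:  {p_lo:.3e} W -> {p_hi:.3e} W")  # 44% more power
```

This is the asymmetry the comment describes: the speed gain is roughly linear in voltage, but the power penalty is quadratic.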

2 Answers

7

Very generally speaking -- the clock speed of the device directly drives its performance (operations per second). Chips are designed (and extensively verified) for correct operation at some frequency determined by the manufacturer. Increasing the clock speed (overclocking) will increase performance, but potentially at the risk of instability / incorrect operation, since the chip is running outside of manufacturer parameters. Often this is not an issue for most users: either they never happen to exercise the path in the chip that fails at the excess clock speed, or they got lucky with the specific die they received and it can in fact operate stably at a higher frequency (process variation from die to die is a very real thing).

Increasing the voltage of a device on its own won't cause it to operate "faster" in the sense that you'll see more operations per second. Increasing it in conjunction with an overclock may increase the stability of said overclock by allowing the transistors to switch faster (shorter rise/fall times, if I remember correctly) -- at the expense of increased heat dissipation, and potential damage to the chip if the voltage is raised too high (modern chips often hover close to 1 V these days); even an extra 100-200 mV can kill the chip or cause premature death (electromigration). By increasing transistor performance, you allow the transistors to meet the tighter timing demands of the increased clock rate (shortened setup and hold times).
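A rough sketch of why extra voltage buys frequency headroom is the alpha-power delay model, a common textbook approximation (the threshold voltage and exponent below are made-up illustrative values, not numbers for any real process):

```python
# Alpha-power law sketch: gate delay scales as V_dd / (V_dd - V_th)^alpha,
# so the maximum stable clock f_max scales as (V_dd - V_th)^alpha / V_dd.
V_th = 0.35   # volts, assumed threshold voltage (illustrative)
alpha = 1.3   # assumed velocity-saturation exponent, between 1 and 2

def relative_fmax(v_dd):
    # Relative frequency headroom at supply voltage v_dd (arbitrary units).
    return (v_dd - V_th) ** alpha / v_dd

base = relative_fmax(1.00)      # stock voltage
boosted = relative_fmax(1.10)   # +100 mV over-volt
print(f"~{(boosted / base - 1):.1%} more frequency headroom")
```

With these assumed numbers the gain is on the order of ten percent -- a modest frequency win bought with a quadratic power penalty, which is why cooling becomes the limiting factor.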

Summary -- most people over-voltage their devices to support an overclock.

Extra: On some occasions, increasing the voltage very slightly can help with stability at stock clocks, but you've arguably been sold a marginal device at that point. I have a habit (with honestly no real scientific backing) of bumping my DRAM voltage up by 50 mV when I have four sticks installed -- for my small sample size, it seems to help stability when overclocking the CPU (while keeping the stock DRAM frequency).

Krunal Desai
  • 6,246
  • 1
  • 21
  • 32
-2

Edit: This answer is not accurate so I've learned something from the comments and downvotes. I'll leave the post out of respect to those who took the time to comment as there is some useful info and linked material.


They don't "over-voltage" the CPU (which would destroy it) - they over-clock it (which only might destroy it). Clocking affects the speed at which instructions are carried out.

Every time a transistor is switched there is a small but significant momentary increase in current. The average current depends on the number of switchings per second.

The power (heat) dissipated in the chip is volts × current, so if the chip can work at a lower voltage the power can be reduced. For this reason chips now often run on 3.3 V rather than the older 5 V standard.
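Since the average switching current itself also scales roughly with voltage, dynamic power ends up scaling with the square of the supply (the C·V²·f relation mentioned in the comments above). That makes the 5 V to 3.3 V move a bigger win than it first looks:

```python
# Dynamic switching power scales with the square of the supply voltage,
# so dropping from 5 V to 3.3 V cuts it by more than half.
ratio = (3.3 / 5.0) ** 2
print(f"3.3 V logic uses about {ratio:.0%} of 5 V switching power")  # about 44%
```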

Transistor
  • 168,990
  • 12
  • 186
  • 385
  • 3
    Usually you do increase the voltage as well to get a stable system at higher clocks, I guess some people go beyond the specification of the CPU / GPU. – Arsenal Jul 06 '17 at 07:39
  • Ye, I understand the part "Clocking affects speed at which instructions are carried out". Lemme confirm it, so when people talking about "overvoltaging GPU" they are actually "overclocking it" and the "overvoltaged GPU" is a consequence of that, right? – Nhu Thai Sanh Nguyen Jul 06 '17 at 07:40
  • @Arsenal, ye, that's the part I'm asking. the "to get a stable system at higher clocks". Could you further explain it ? – Nhu Thai Sanh Nguyen Jul 06 '17 at 07:41
  • 1
    @Transistor They do actually over-volt them! – AndrejaKo Jul 06 '17 at 07:54
  • 1
    And they also go to great lengths to cool the chips so that the over voltage and over clocking does not overheat the devices. Elaborate heat sinks are often large copper things with liquid cooling. There are some enthusiasts that run extreme conditions and use heatsinks that have a built in cup into which they pour liquid nitrogen. That can super cool the chip allowing them to run their performance tests until the liquid nitrogen boils away. – Michael Karas Jul 06 '17 at 09:25
  • For example - https://www.goldfries.com/uncategorized/gigabyte-h55-workshop-with-hicookie/ – Michael Karas Jul 06 '17 at 09:30
  • Cup heat sink example - http://photobucket.com/gallery/user/preeeezy/media/bWVkaWFJZDoyMTA4MDY3NA==/?ref= – Michael Karas Jul 06 '17 at 09:33