
Computers calculate with 0s and 1s. A bit can be either, but nothing in between. So if you enter 3/2 into a calculator, it should return either 1 or 2, right? Wrong! It gives you 1.5, the correct answer. Even on more complex problems, the calculator answers with the right number. So my question is: how does all this work? If a computer can only use 1s and 0s, how is it able to correctly handle a number that falls in between, and is there a way to build a schematic for a machine that understands decimal?

Nip Dip
  • Things to look up: fixed point, floating point. – user110971 Jul 13 '20 at 02:04
  • A motivating question: If we can only use 0,1,2,3,4,5,6,7,8, and 9, then how do we interpret the notion of "one thousand"? Or "two thirds"? – nanofarad Jul 13 '20 at 02:06
  • Ok, so I've done some research and it looks like floating-point numbers have 3 parts: a sign, an exponent and a fraction. I know what the sign does, but not the exponent or fraction. – Nip Dip Jul 13 '20 at 02:09
  • expanding on what @nanofarad said ... decimal comes from a world where people have 10 fingers ... binary comes from a world where people have 10 fingers (binary 10) – jsotola Jul 13 '20 at 12:26
  • @NipDip To briefly (and handwavily) fill in that gap on floating-point: Floating-point is just like scientific notation. Consider the (human) scientific notation of -2.56*10^2. The sign is negative, the exponent is +2, and the fraction is 2.56. Floating-point numbers use a similar technique, just with binary numbers (e.g. 1.01101001 * 2^-3 is equal to 0.00101101001 (binary)), which is approximately equal to 0.176 (decimal) – nanofarad Jul 13 '20 at 14:07
  • Ok, so floating-point is scientific notation for binary? – Nip Dip Jul 13 '20 at 18:57
  • See this related question: [How are irrational numbers represented and processed by computers?](https://electronics.stackexchange.com/q/556879/238188) – Shashank V M Mar 28 '21 at 07:07

1 Answer


Calculators generally work in BCD (binary-coded decimal), whereas programming languages usually represent non-integer numbers in a binary floating-point format such as IEEE 754.
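
For a feel of what BCD looks like, here is a minimal sketch (my own, in C, not from any calculator's firmware) that packs a decimal value into BCD, one 4-bit nibble per decimal digit:

```c
#include <stdio.h>
#include <stdint.h>

/* Pack an unsigned decimal value into BCD: one 4-bit nibble per digit.
   E.g. decimal 15 (binary 1111) becomes 0x15 (0001 0101 in BCD). */
uint32_t to_bcd(uint32_t value)
{
    uint32_t bcd = 0;
    int shift = 0;
    while (value > 0) {
        bcd |= (value % 10) << shift;  /* store the next decimal digit */
        value /= 10;
        shift += 4;                    /* each digit occupies one nibble */
    }
    return bcd;
}

int main(void)
{
    printf("15  in BCD: 0x%X\n", to_bcd(15));   /* prints 0x15  */
    printf("150 in BCD: 0x%X\n", to_bcd(150));  /* prints 0x150 */
    return 0;
}
```

The point is that each decimal digit keeps its own 4-bit code, which is why a calculator can work digit by digit in decimal.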

In the case of binary floating point, there is a sign bit and a significand that is normalized so its most-significant bit is '1' (and since we know it is '1', we can avoid storing it and just assume it is there). The exponent is stored as a biased binary number, so the stored exponent field is always non-negative.
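
To make the field layout concrete, here is a small sketch (my own, using C and the 32-bit IEEE 754 single-precision format) that pulls a float apart into its sign, biased exponent, and fraction fields:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float x = 1.5f;                      /* 1.5 = 1.1 (binary) * 2^0 */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);      /* reinterpret the float's raw bits */

    uint32_t sign     = bits >> 31;           /* 1 bit                     */
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127     */
    uint32_t fraction = bits & 0x7FFFFF;      /* 23 bits, hidden leading 1 */

    /* For 1.5: sign=0, exponent field=127 (unbiased 0), fraction=0x400000 */
    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           sign, exponent, (int)exponent - 127, fraction);
    return 0;
}
```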

Doing division in BCD is not all that hard: you can do it with a 4-bit arithmetic logic unit (ALU) and a typical long-division algorithm, which involves a number of subtractions until the result turns negative, then one addition to restore the remainder, then shift and repeat.
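
Here is a rough sketch (my own, in C, with plain integers standing in for the 4-bit BCD hardware) of one digit position of that restoring division: subtract until the result turns negative, then add the divisor back once:

```c
#include <stdio.h>

int main(void)
{
    long dividend = 15;        /* e.g. 15 / 4 */
    long divisor  = 4;

    long remainder = dividend;
    int  digit = -1;           /* the failed subtraction is counted, so start at -1 */
    do {
        remainder -= divisor;  /* keep subtracting...                 */
        digit++;
    } while (remainder >= 0);  /* ...until the result turns negative  */
    remainder += divisor;      /* one addition restores the remainder */

    printf("%ld / %ld = digit %d, remainder %ld\n",
           dividend, divisor, digit, remainder);   /* digit 3, remainder 3 */
    return 0;
}
```

In hardware you would then shift the remainder (or the divisor) one digit position and repeat to get the next quotient digit.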

As for the decimal (or binary) point, you can handle that separately as a kind of exponent.

Instead of 3/2, think of 30000000/20000000 = 15000000, then you figure out where to place the decimal point.
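
A minimal sketch of that idea (mine, in C, with 8-digit decimal mantissas and an implied decimal point after the first digit): the long division produces a fixed number of quotient digits, and a separate exponent says where the point goes:

```c
#include <stdio.h>

#define DIGITS 8   /* 8-digit decimal mantissas, as in the example above */

int main(void)
{
    /* 3 -> mantissa 30000000, exponent 0 (3.0000000 * 10^0)
       2 -> mantissa 20000000, exponent 0 (2.0000000 * 10^0) */
    unsigned long a_mant = 30000000UL; int a_exp = 0;
    unsigned long b_mant = 20000000UL; int b_exp = 0;

    /* Long division of the mantissas, one decimal digit at a time. */
    unsigned long q = 0, rem = a_mant;
    for (int i = 0; i < DIGITS; i++) {
        unsigned digit = 0;
        while (rem >= b_mant) {    /* repeated subtraction */
            rem -= b_mant;
            digit++;
        }
        q = q * 10 + digit;
        rem *= 10;                 /* shift and repeat */
    }
    int q_exp = a_exp - b_exp;     /* exponent of the result */

    /* q = 15000000, q_exp = 0 -> point after the first digit: 1.5000000 */
    printf("mantissa=%lu exponent=%d -> %.7f\n",
           q, q_exp, (double)q / 10000000.0);
    return 0;
}
```

(Normalization, i.e. shifting the quotient left if its leading digit comes out as 0, is glossed over here.)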

To add or subtract, you first have to right-shift the mantissa of the number with the smaller exponent so that the exponents match. So for 3 + 0.01: 30000000 (exponent 0) and 10000000 (exponent −2) become 30000000 + 00100000 = 30100000, and the decimal point is placed to give 3.0100000.
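
And a matching sketch of the alignment step (mine, in C, same 8-digit decimal mantissas, assuming the first operand has the larger exponent): the mantissa of the number with the smaller exponent is shifted right until the exponents match, then the mantissas are added directly:

```c
#include <stdio.h>

int main(void)
{
    /* Implied decimal point after the first mantissa digit:
       3    -> mantissa 30000000, exponent  0  (3.0000000 * 10^0)
       0.01 -> mantissa 10000000, exponent -2  (1.0000000 * 10^-2) */
    unsigned long a_mant = 30000000UL; int a_exp = 0;
    unsigned long b_mant = 10000000UL; int b_exp = -2;

    /* Align: right-shift the mantissa with the smaller exponent
       until the exponents are equal. */
    while (b_exp < a_exp) {
        b_mant /= 10;          /* one right shift per exponent step */
        b_exp++;
    }

    unsigned long sum = a_mant + b_mant;   /* 30000000 + 00100000 = 30100000 */
    printf("mantissa=%08lu exponent=%d -> %.7f\n",
           sum, a_exp, (double)sum / 10000000.0);   /* 3.0100000 */
    return 0;
}
```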

You could hard-wire logic to do this, but it would involve quite a few MSI-level ICs for the registers, the ALU, and the control logic. Usually we'd want to use a microcontroller, an ASIC (as in a calculator), or an FPGA.

Spehro Pefhany