2

Floating-point units are standard on CPUs today, and even ordinary desktop software makes use of them (3D effects, for example). However, I wonder which applications initially drove the development and mass adoption of floating-point units.

Ten years ago, I think most uses of floating-point arithmetic fell into either

  1. Engineering and Science applications
  2. 3D graphics in computer games

I think that for any other application where non-integer numbers appeared at the time, fixed-point arithmetic would have been sufficient (2D graphics) or even preferable (finance), so plain integer hardware would have been enough.
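
As a minimal sketch of the finance point (the amounts and printed values are purely illustrative), repeated decimal additions stay exact when money is kept as integer cents, while binary floating point drifts:

    #include <stdio.h>

    int main(void) {
        /* Binary floating point cannot represent 0.10 exactly, so the
           accumulated total drifts away from the exact decimal result. */
        float total_f = 0.0f;
        for (int i = 0; i < 1000; i++)
            total_f += 0.10f;              /* ideally 100.00 */

        /* Fixed point for money: keep cents in an integer, so every value
           and every sum is exact (until the integer overflows). */
        long total_cents = 0;
        for (int i = 0; i < 1000; i++)
            total_cents += 10;             /* 10 cents, exactly */

        printf("float total:       %.6f\n", total_f);   /* slightly off, e.g. 99.99905 */
        printf("fixed-point total: %ld.%02ld\n",
               total_cents / 100, total_cents % 100);   /* exactly 100.00 */
        return 0;
    }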

I think these two applications were the major motivations for establishing floating-point arithmetic in hardware as a standard. Can you name others, or is there a compelling reason to disagree?

shuhalo
  • 211
  • 1
  • 7

4 Answers

7

I think the real driver was advances in silicon process technology. Once circuits shrank enough that there was room for a floating-point unit on the die, it was incorporated. The same goes for MMUs and memory controllers. Engineers abhor empty die space.

TMN
  • 11,313
  • 1
  • 21
  • 31
  • From what I recall, it got to the point that it took an extra step to disable the FPU. Intel provided the exact same processor in two models, one with and one without the FPU. To make the model without an FPU they took the standard model and severed the link. – Michael Brown Mar 07 '12 at 18:36
3

I can go back even farther than 1990. Oil and gas exploration companies used whatever minicomputers existed in the 1960's, plus the IBM 360 when it came out, to perform geophysical calculations.

The Electronic Numerical Integrator And Computer (ENIAC), developed in 1946, was used mainly to calculate artillery firing tables.

In 1822, Charles Babbage designed a difference engine. A difference engine is an automatic, mechanical calculator designed to tabulate polynomial functions. The London Science Museum constructed a working difference engine from 1989 to 1991.

The need for floating point calculations has been with us since the dawn of computing.

Gilbert Le Blanc
  • 2,819
  • 19
  • 18
  • The need for "real number arithmetic" does not imply that it must be floating point. Why don't we have more hardware today with fixed point units? – Doc Brown Mar 07 '12 at 17:15
  • @Doc Brown: That's a different question than the question the OP asked. Some reasons are given towards the bottom of this: http://en.wikipedia.org/wiki/Fixed-point_arithmetic – Gilbert Le Blanc Mar 07 '12 at 17:19
  • 2
    @Doc: Because fixed-point, by definition, can't give you arbitrary levels of precision the way floating-point can. (Yes, floating-point comes with its own tradeoffs to offer that, but it's generally seen as more useful in most contexts.) – Mason Wheeler Mar 07 '12 at 17:20
  • @Gilbert: I suspect that this was the question the OP really meant, since he was also mentioning fixed point in contrast to floating point. – Doc Brown Mar 07 '12 at 17:30
  • @Doc Brown: I don't know about the difference engine and ENIAC, but I worked for an oil and gas exploration company in the late 1960's - early 1970's. We used floating point, whether it was supported in the hardware (IBM 360) or not (most minicomputers). – Gilbert Le Blanc Mar 07 '12 at 17:34
  • @Mason Wheeler: but why are "arbitrary levels of precision" so much better than "a defined level of precision without the perils of floating-point arithmetic" (see http://www.lahey.com/float.htm)? – Doc Brown Mar 07 '12 at 17:36
  • @Gilbert: but does that explain anything? Because you (and many others) were used to working with floating point in the sixties, we today have lots of hardware with floating point units (and not fixed point units)? – Doc Brown Mar 07 '12 at 17:39
  • @Doc Brown: You can ask the question yourself on Programmers, and see what answers you get. It's been a while, but as I recall, even slide rule calculations were floating point. :-) – Gilbert Le Blanc Mar 07 '12 at 17:44
  • 5
    @Doc: Floating point numbers have approximately constant relative precision throughout their range, while fixed point numbers have constant absolute precision. In physical measurements, we are usually much more interested in relative precision. We measure machined parts to the micron, but measure driving distance in tenths of miles. – kevin cline Mar 07 '12 at 18:17
  • @kevincline: yes, you are absolutely right, and I know those things, but as the OP states, there are lots of both: applications (like business/financial) where absolute precision is preferable, and applications in the "Engineering/Simulation/Games" category, where floating point is more appropriate. Nevertheless, mainstream hardware (and programming languages, too) seem to prefer floating point, not fixed point - any idea why? – Doc Brown Mar 07 '12 at 21:08
  • @DocBrown - We use floating-point because in most circumstances we need an arbitrary range more than we need slightly more precision (for a given register width). Overflow conditions are much *harder* with fixed-point arithmetic (they have a more immediate and damaging effect) so have to be handled carefully. Also, fixed-point can be calculated efficiently using integer arithmetic, so it doesn't need hardware acceleration. That's not to say that floating-point [is always easy](http://programmers.stackexchange.com/a/101197/22493). *8') – Mark Booth Mar 08 '12 at 18:47
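
To make the "fixed-point can be calculated efficiently using integer arithmetic" point concrete, here is a minimal sketch in C of a Q16.16 fixed-point type (the format and helper names are my own, chosen purely for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* Q16.16 fixed point: value = raw / 65536. With 16 fraction bits the
       absolute precision is a constant 1/65536 across the whole range. */
    typedef int32_t q16_16;

    #define Q_ONE (1 << 16)

    static q16_16 q_from_double(double x) { return (q16_16)(x * Q_ONE); }
    static double q_to_double(q16_16 x)   { return (double)x / Q_ONE; }

    /* Addition is plain integer addition; overflow is not detected,
       which is the harsher failure mode mentioned above. */
    static q16_16 q_add(q16_16 a, q16_16 b) { return a + b; }

    /* Multiplication needs a wider intermediate and a shift to rescale. */
    static q16_16 q_mul(q16_16 a, q16_16 b) {
        return (q16_16)(((int64_t)a * b) >> 16);
    }

    int main(void) {
        q16_16 a = q_from_double(3.25);
        q16_16 b = q_from_double(0.5);
        printf("3.25 + 0.5 = %f\n", q_to_double(q_add(a, b)));  /* 3.750000 */
        printf("3.25 * 0.5 = %f\n", q_to_double(q_mul(a, b)));  /* 1.625000 */
        return 0;
    }

Everything here compiles to ordinary integer adds, multiplies and shifts, which is why software fixed point was a viable alternative before FPUs became cheap; the constant 1/65536 step is the constant absolute precision contrasted above with floating point's roughly constant relative precision.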
3

I worked on PCs during a time when a floating-point co-processor was an optional extra. You had to pay a significant extra cost to have an 80x87 chip added to an 80x86 system, and few programs took advantage of it.

One exception was the first real killer-app for the IBM-PC, the ubiquitous spreadsheet program Lotus 1-2-3. This supported floating-point operations in hardware from relatively early on, substantially speeding up certain operations if you had an FPU.

When Intel got to the 80486, they started integrating the floating point unit onto the CPU, but even then they offered the 486SX variant with the FPU present but disabled. This was substantially cheaper than the 486DX chip and many people took that option to keep costs down.

By this point, the incremental cost in silicon terms must have been lower than the additional R&D and tooling costs of creating separate 486SX, 487SX and 486DX chips. In fact, if you bought a 486SX system and later added a 487SX co-processor, you effectively had two whole 486DX CPUs, each with a different half of the chip disabled!

By the time the Pentium came around, floating-point units were expected, and its infamous FDIV bug caused quite a storm, not just in the scientific community but in the business community too.

Mark Booth
  • 14,214
  • 3
  • 40
  • 79
1

Computers are, first and foremost, machines for computing. Scientific applications, starting with table computations, have always been an important use, and they need floating point (or careful manual scaling with fixed point).

AFAIK, the first computer with floating point implemented in hardware was the Zuse Z4, in the mid-40s. The first "common" machine with FP capability was probably the IBM 704, in the mid-50s.

For the Intel x86 family, the 8087 co-processor was announced in 1980, and until the FPU was integrated with the rest of the processor (which happened in the early 90s), there was always a co-processor available, including third-party ones. At that time, serious scientific applications weren't done on PCs, but spreadsheets were among the programs that benefited from having a math coprocessor.

AProgrammer
  • 10,404
  • 1
  • 30
  • 45