In other languages, such as C and Java, the standards for representing whole and fractional numbers are different.
In C and most other programming languages, integers (INT() in SQL) are usually "packed" integer bit strings, and floating point numbers (FLOAT()/DOUBLE() in SQL) are usually stored as IEEE-754-compliant bit strings.
DECIMAL() and NUMERIC() typically use Binary Coded Decimal (BCD).
A packed, signed, 32-bit integer will store about -2 billion to +2 billion (-2,147,483,648 to +2,147,483,647). You can reliably use 9 digits, +/-, but not 10. For 10 or more digits, you need the next step up in integer, which will likely be 64 bits.
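As a quick illustration of that 9-versus-10-digit boundary, here's a small C sketch (plain C, nothing database-specific assumed):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* The full signed 32-bit range: -2,147,483,648 to +2,147,483,647. */
    printf("INT32_MIN = %" PRId32 "\n", INT32_MIN);
    printf("INT32_MAX = %" PRId32 "\n", INT32_MAX);

    /* Any 9-digit value fits, positive or negative... */
    int32_t nine = 999999999;
    /* ...but a 10-digit value like 9,999,999,999 needs 64 bits. */
    int64_t ten = 9999999999LL;

    printf("nine digits: %" PRId32 "\n", nine);
    printf("ten digits : %" PRId64 "\n", ten);
    return 0;
}
```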
IEEE-754's single-precision format stores a floating point number in a 32-bit sequence. It stores:
- one bit for the sign
- 8 bits for a biased exponent
- 23 bits for a mantissa (significand)
In this fashion, it holds several significant digits (roughly 7 decimal digits) and a range of exponents. The order of the fields is such that, for non-negative values, you can treat the bit pattern as a 32-bit signed integer for comparison purposes and get the right answer. This way, you don't have to have different comparison machinery for integers vs floats. The double-precision format takes this to 64 bits, allocating more bits to the exponent and the mantissa, allowing more digits of precision and a greater range of exponents.
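To make the layout concrete, here's a C sketch that pulls the three fields out of a single-precision float and shows the integer-compare trick for non-negative values (the memcpy is just a portable way to reinterpret the bits):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Split the 32-bit IEEE-754 single-precision layout into its fields:
 * 1 sign bit, 8 exponent bits (biased by 127), 23 mantissa bits. */
static void dissect(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);           /* reinterpret without UB */
    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF;  /* biased exponent */
    uint32_t mantissa = bits & 0x7FFFFF;
    printf("%g -> sign=%u exponent=%u mantissa=0x%06X\n",
           (double)f, sign, exponent, mantissa);
}

int main(void) {
    dissect(1.0f);   /* 0x3F800000: exponent 127, mantissa 0 */
    dissect(1.1f);

    /* For non-negative floats, the bit patterns sort the same way the
     * values do, so an integer compare gives the right order. */
    float a = 1.5f, b = 2.5f;
    uint32_t ia, ib;
    memcpy(&ia, &a, sizeof ia);
    memcpy(&ib, &b, sizeof ib);
    printf("integer compare says a < b: %d\n", ia < ib);
    return 0;
}
```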
The problem with these floating point values is that, for most decimal fractions, they are approximations. And they (at least, the older, more commonly supported formats) are all based on exponents of 2, not exponents of 10. Ergo, there is no EXACT representation, in either format, of 1.1 or 1.01; no mantissa times a power of 2 lands exactly on either of those values. If you deal with financial transactions, you don't want an approximation. If you must round, you want to deal with exponents of 10, not 2, because that's how we denominate money.
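The classic demonstration of that approximation, in C (the exact digits printed may vary slightly by platform, but the point stands):

```c
#include <stdio.h>

int main(void) {
    /* Neither 1.1 nor 1.01 has an exact binary representation; printing
     * extra digits exposes the value that actually got stored. */
    printf("1.1  -> %.20f\n", 1.1);
    printf("1.01 -> %.20f\n", 1.01);

    /* Adding "ten cents" a hundred times should give exactly 10.00,
     * but the accumulated approximation error means it does not. */
    double total = 0.0;
    for (int i = 0; i < 100; i++)
        total += 0.10;
    printf("100 x 0.10 = %.20f\n", total);
    printf("equals 10.0? %s\n", total == 10.0 ? "yes" : "no");
    return 0;
}
```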
DECIMAL() and NUMERIC() are different animals, using BCD. If you define a DECIMAL(7,2), that will store a total of 7 decimal digits, keeping track of the fact that 2 of them are behind the decimal point. That's 7 significant digits, no approximation. That will likely fit in 32 bits of space (4 bits for each digit, plus 4 bits for the field as a whole, to store sign, null, etc.). If you multiply two DECIMAL(7,2) fields, you know PRECISELY, down to the penny, what you will get. No approximations.
And the aforementioned DECIMAL(10) will likely need only about 48 bits of storage (10 digits at 4 bits each, plus a sign nibble, rounded up to whole bytes). And you can use all 10 digits, positive and negative.
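A toy C sketch of the idea (this is not any particular database's on-disk format, just an illustration of packing seven digits plus a sign nibble into 32 bits and getting the value back exactly):

```c
#include <inttypes.h>
#include <stdio.h>

/* Toy packed-BCD in the spirit of DECIMAL(7,2): seven digits at 4 bits
 * each plus one sign nibble, all in 32 bits.  The amount is carried as
 * a scaled integer in hundredths, so 12345.67 is the integer 1234567. */
static uint32_t bcd_pack(int32_t hundredths) {
    uint32_t bcd = (hundredths < 0) ? 0xD : 0xC;   /* classic sign nibbles */
    uint32_t mag = (hundredths < 0) ? -hundredths : hundredths;
    for (int i = 1; i <= 7; i++) {                 /* one digit per nibble */
        bcd |= (mag % 10) << (4 * i);
        mag /= 10;
    }
    return bcd;
}

static int32_t bcd_unpack(uint32_t bcd) {
    int32_t mag = 0;
    for (int i = 7; i >= 1; i--)
        mag = mag * 10 + (int32_t)((bcd >> (4 * i)) & 0xF);
    return ((bcd & 0xF) == 0xD) ? -mag : mag;
}

int main(void) {
    int32_t price = 1234567;                        /* 12345.67 */
    uint32_t packed = bcd_pack(price);
    int32_t back = bcd_unpack(packed);
    printf("packed   : 0x%08" PRIX32 "\n", packed); /* one digit per nibble */
    printf("unpacked : %" PRId32 ".%02" PRId32 "\n", back / 100, back % 100);
    return 0;
}
```

Multiplying two such scaled values just gives an exact result at a finer scale (hundredths times hundredths is ten-thousandths), which is the "down to the penny" guarantee described above.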
Perhaps databases do not need to perform numeric computations often. Perhaps the digit width of a column is a natural extension of character width. Perhaps SQL was supposed to be a non-programmer's language (because it sounds like English :) )? Perhaps history took a weird turn, like the one that allowed the inferior VHS format to prosper.
Afraid I have to disagree with you, there. The databases I deal with do a LOT of numeric calculations. Most of that, though, is dollars and cents, which largely REQUIRES the precision of BCD and the associated DECIMAL(x,y) formats. The fact that BCD chews up 4 bits per digit, while ASCII uses 8, is mostly coincidental.
SQL came out of IBM, building on E. F. Codd's relational model. No surprise, therefore, that the formatting for DECIMAL(x,y) is largely stolen from FORTRAN.