The basic arithmetic a computer chip can do only works on numbers (integers or floating-point values) of a fixed size.
There are many algorithms that could be extended to work on numbers of arbitrary size (and some require them). Therefore, a way to store and perform arithmetic with arbitrary-size numbers is good to have.
For arbitrary-size integers, the usual approach is to take a list of bytes (each of which can store a value in the 0-255 range) and have the resulting 'big number' (often shortened to bignum) be base-256, using each of these bytes as a 'digit' (ducentiquinquagintiquinquesexa-its?).
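As a rough sketch of that idea (not how any particular bignum library actually lays out its data), splitting an integer into base-256 digits might look like this:

```python
def to_base256_digits(n: int) -> list[int]:
    """Split a non-negative integer into little-endian base-256 'digits' (bytes)."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 256)   # least significant byte first
        n //= 256
    return digits

def from_base256_digits(digits: list[int]) -> int:
    """Reassemble the integer from its little-endian base-256 digits."""
    n = 0
    for d in reversed(digits):
        n = n * 256 + d
    return n

print(to_base256_digits(70000))           # [112, 17, 1]  -> 112 + 17*256 + 1*256**2
print(from_base256_digits([112, 17, 1]))  # 70000
```

Real bignum implementations usually use machine words rather than single bytes as digits, but the principle is the same.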
For any non-integer arbitrary-size real number, there are two different ways of representation (see the sketch after this list):
- decimals, which consist of an arbitrary-size 'big number' mantissa and exponent. The number is represented by `sign (+1 or -1) * mantissa * pow(base, exponent)` (where `base` is usually `2` or sometimes `10`)
- rationals, which have an arbitrary-size 'big number' numerator and denominator. The number is represented by `numerator / denominator`
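To make the two representations concrete, here is a minimal sketch; the `DecimalNumber` and `RationalNumber` classes are just illustrations of the formulas above, not any library's actual types:

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass
class DecimalNumber:
    """sign * mantissa * base**exponent, with an arbitrary-size mantissa."""
    sign: int        # +1 or -1
    mantissa: int
    exponent: int
    base: int = 10

    def value(self) -> Fraction:
        # Exact value; a negative exponent contributes a factor 1/base**k
        return Fraction(self.sign * self.mantissa) * Fraction(self.base) ** self.exponent

@dataclass
class RationalNumber:
    """numerator / denominator, both arbitrary-size integers."""
    numerator: int
    denominator: int

    def value(self) -> Fraction:
        return Fraction(self.numerator, self.denominator)

# 0.1 is exactly representable either way...
print(DecimalNumber(+1, 1, -1).value())     # 1/10
print(RationalNumber(1, 10).value())        # 1/10
# ...but 1/3 is exact only as a rational; a decimal can merely approximate it.
print(RationalNumber(1, 3).value())         # 1/3
print(DecimalNumber(+1, 3333, -4).value())  # 3333/10000 (approximation)
```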
In practice, I've found many more languages and libraries that use/support decimal data types than rational ones. An example of this is (SQL) data stores, which have a native DECIMAL data type but not a rational one.
Why is this the case? Why are decimal data types preferred over rational ones?