There are numerous ways of storing fractional numbers, and each of them has advantages and disadvantages.
Floating-point is, by far, the most popular format. It works by encoding a sign, a mantissa, and a signed base-2 exponent as integers, and packing them into a bunch of bits. For example, you could have a 32-bit mantissa of 0.5 (encoded as 0x80000000) and a 32-bit signed exponent of +3 (0x00000003), which would decode to 4.0 (0.5 * 2 ^ 3). Floating-point numbers are fast, because they are implemented in hardware, and their precision scales with absolute size - the smaller the number, the better your absolute precision - so the relative rounding error stays roughly constant across magnitudes. Floats are excellent for values sampled from a continuous domain, such as lengths, sound pressure levels, light levels, etc., and because of that, they are commonly used in audio and image processing, as well as statistical analysis and physics simulations. Their biggest downside is that they are not exact: they are prone to rounding errors, and they cannot accurately represent all decimal fractions. All the mainstream programming languages have a floating-point type of some sort.
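To make both points concrete, here is a quick Python sketch (using only the standard math module) that shows the drift you get from repeatedly adding 0.1, and how a float decomposes into a mantissa and a base-2 exponent much like the example above:

```python
import math

# 0.1 has no exact binary representation, so repeated addition drifts
# away from the exact decimal result:
total = sum([0.1] * 10)
print(total)          # 0.9999999999999999
print(total == 1.0)   # False

# math.frexp splits a float into a mantissa in [0.5, 1) and a base-2
# exponent, mirroring the sign/mantissa/exponent layout described above:
mantissa, exponent = math.frexp(4.0)
print(mantissa, exponent)              # 0.5 3
print(math.ldexp(mantissa, exponent))  # 4.0, i.e. 0.5 * 2 ** 3
```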
Fixed-point works by using sufficiently large integers and implicitly reserving a part of their bits for the fractional part. For example, a 24.8 fixed-point number reserves 24 bits for the integer part (including the sign) and 8 bits for the fractional part; right-shifting that number by 8 bits gives us the integer part. Fixed-point numbers used to be popular when hardware floating-point units were uncommon, or at least much slower than their integer counterparts. While fixed-point numbers are somewhat easier to handle in terms of exactness (if only because they are easier to reason about), they are inferior to floats in pretty much every other regard - they have less precision, a smaller range, and because extra operations are needed to correct calculations for the implicit shift, fixed-point math today is often slower than floating-point math.
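As a minimal sketch of that implicit shift, here is a toy 24.8 fixed-point setup in Python (the helper names to_fixed, to_float and fixed_mul are made up for this example):

```python
FRAC_BITS = 8            # 24.8 format: 24 integer bits, 8 fractional bits
ONE = 1 << FRAC_BITS     # 1.0 in fixed-point, i.e. 256

def to_fixed(x: float) -> int:
    return round(x * ONE)

def to_float(f: int) -> float:
    return f / ONE

def fixed_mul(a: int, b: int) -> int:
    # The raw product carries twice the fractional bits (a 48.16 value),
    # so it has to be shifted back to correct for the implicit scale.
    return (a * b) >> FRAC_BITS

a = to_fixed(3.25)                 # 832
b = to_fixed(1.5)                  # 384
print(to_float(fixed_mul(a, b)))   # 4.875
print(a >> FRAC_BITS)              # 3 -- the integer part, as described above
```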
Decimal types work much like floats or fixed-point numbers, but they assume a decimal system, that is, their exponent (implicit or explicit) encodes a power of 10, not a power of 2. A decimal number could, for example, encode a mantissa of 23456 and an exponent of -2, which would expand to 234.56. Decimals, because the arithmetic isn't hard-wired into the CPU, are slower than floats, but they are ideal for anything that involves decimal numbers and needs those numbers to be exact, with rounding occurring in well-defined spots - financial calculations, scoreboards, etc. Some programming languages have decimal types built into them (e.g. C#), others require libraries to implement them. Note that while decimals can accurately represent non-repeating decimal fractions, their precision isn't any better than that of floating-point numbers; choosing decimals merely means you get exact representations of numbers that can be represented exactly in a decimal system (just like floats can exactly represent binary fractions).
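For illustration, Python's standard decimal module implements exactly this kind of power-of-10 representation; the sketch below reuses the mantissa/exponent example from the text and shows where rounding does and does not happen:

```python
from decimal import Decimal, ROUND_HALF_UP

# The example from the text: mantissa 23456, exponent -2.
print(Decimal(23456).scaleb(-2))         # 234.56

# Decimal fractions are represented exactly, unlike binary floats:
print(Decimal("0.1") + Decimal("0.2"))   # 0.3
print(0.1 + 0.2)                         # 0.30000000000000004

# Rounding only happens where you explicitly ask for it, e.g. to cents:
print(Decimal("19.995").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 20.00

# But repeating fractions still have to be rounded, just like with floats:
print(Decimal(1) / Decimal(3))           # 0.3333333333333333333333333333
```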
Rational numbers store a numerator and a denominator, typically using some sort of bignum integer type (a numeric type that can grow as large as the computer's memory allows). This is the only data type out of the bunch that can accurately model numbers like 1/3 or 3/17, as well as operations on them - rationals, unlike the other data types, will produce correct results for things like 3 * 1/3. The math is pretty straightforward; the main bookkeeping is keeping fractions reduced, which is done by dividing numerator and denominator by their greatest common divisor (GCD). Some programming languages have rational types built into them (e.g. Common Lisp). Downsides of rationals include that they are slow (many operations require reducing fractions, i.e. GCD computations on ever-growing bignum numerators and denominators), and that many common operations are hard or impossible to implement exactly; most implementations will degrade the rational to a float when this happens (e.g. when you call sin() on a rational).
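Python's standard fractions module behaves exactly this way, so it makes a handy illustration:

```python
from fractions import Fraction
import math

third = Fraction(1, 3)
print(3 * third)                 # 1 -- the 3 * 1/3 example, with no rounding
print(third + Fraction(3, 17))   # 26/51, automatically reduced via the GCD

# There is no exact rational value for sin(1/3), so the result
# silently degrades to a plain float:
print(math.sin(third))           # roughly 0.3271947
```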
BCD (Binary Coded Decimal) uses "nibbles" (groups of 4 bits) to encode individual digits; since a nibble can hold 16 different values, but decimal numbers require only 10, there are 6 "illegal" values per nibble. Like decimals, BCD numbers are decimal-exact, that is, calculations performed on decimal numbers work out just like they would if you did them using pen and paper. Arithmetic rules for BCD are somewhat clumsy, but the upside is that converting them to strings is easier than with some of the other formats, which is especially interesting for low-resource environments like embedded systems.
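Here is a minimal Python sketch of packed BCD (two digits per byte; the helper names to_bcd and from_bcd are my own), just to show the nibble layout and the six "illegal" values per nibble:

```python
def to_bcd(n: int) -> bytes:
    """Pack a non-negative integer into packed BCD, two digits per byte."""
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits   # pad to an even number of digits
    return bytes((int(a) << 4) | int(b)
                 for a, b in zip(digits[::2], digits[1::2]))

def from_bcd(data: bytes) -> int:
    n = 0
    for byte in data:
        hi, lo = byte >> 4, byte & 0x0F
        if hi > 9 or lo > 9:
            raise ValueError("illegal BCD nibble")  # one of the 6 unused values
        n = n * 100 + hi * 10 + lo
    return n

packed = to_bcd(1234)
print(packed.hex())      # '1234' -- each nibble holds one decimal digit
print(from_bcd(packed))  # 1234
```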
Strings, yes, plain old strings, can also be used to represent fractional numbers. Technically, this is very similar to BCD, only that there's an explicit decimal dot, and you use one full byte per decimal digit. As such, the format is wasteful (only 11 out of 256 possible values are used), but it is easier to parse and generate than BCD. Additionally, because all the used values are "unsuspicious", harmless, and platform-neutral, string-encoded numbers can travel over networks without problems. It is uncommon to find arithmetic being done on strings directly, but it is possible, and when you do it, they are just as decimal-exact as the other decimal formats (decimals and BCD).
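To show that direct arithmetic on strings is possible, here is a toy Python sketch that adds two non-negative decimal strings digit by digit (the function name and its limitations - no signs, no validation - are my own):

```python
def add_decimal_strings(a: str, b: str) -> str:
    """Add two non-negative decimal numbers given as strings, digit by digit."""
    ai, _, af = a.partition(".")
    bi, _, bf = b.partition(".")
    # Align both numbers on the decimal dot by padding with zeros.
    frac_len = max(len(af), len(bf))
    af, bf = af.ljust(frac_len, "0"), bf.ljust(frac_len, "0")
    int_len = max(len(ai), len(bi))
    ai, bi = ai.rjust(int_len, "0"), bi.rjust(int_len, "0")

    digits_a, digits_b = ai + af, bi + bf
    carry, out = 0, []
    # Classic pen-and-paper addition, right to left, with a carry digit.
    for da, db in zip(reversed(digits_a), reversed(digits_b)):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        out.append(str(digit))
    if carry:
        out.append(str(carry))
    result = "".join(reversed(out))
    if frac_len:
        result = result[:-frac_len] + "." + result[-frac_len:]
    return result

print(add_decimal_strings("0.1", "0.2"))     # 0.3 -- exact, no binary rounding
print(add_decimal_strings("19.95", "0.05"))  # 20.00
```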