My question is closely related to this one: How do computers understand decimal numbers?
However, that question deals with rational numbers only. I was wondering if irrational numbers can be represented by computers.
Irrational numbers are numbers whose decimal expansion is non-terminating and non-repeating: the digits after the decimal point go on forever without ever settling into a repeating pattern.
A common place where irrational numbers arise is the area of a circle: if the radius is a nonzero rational number, the area πr² is irrational, because π is.
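To see why a computer can only approximate this, here is a minimal Python sketch (the radius value is just an illustration): every IEEE-754 double is an exact ratio of two integers, so the `math.pi` the computer works with is itself a rational number, and any "area" computed from it is rational too.

```python
import math

# Every IEEE-754 double is an exact ratio of two integers,
# so math.pi is a rational approximation of the irrational pi.
num, den = math.pi.as_integer_ratio()
print(num, "/", den)        # an exact fraction equal to math.pi

r = 2                       # hypothetical rational radius
area = math.pi * r * r      # a rational result approximating the true area
print(area)
```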
Truncating the decimal expansion of an irrational number reduces accuracy. As a simple example, the natural logarithm of e is exactly 1, while the natural logarithm of 2.72, or of any other finite approximation of e, is not 1 but only a number close to 1. If I write a program using floating-point arithmetic and the computer uses such an approximation of e, this will introduce errors.
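A quick Python check illustrates the point (a sketch using the standard `math` module; 2.72 is just the truncated approximation mentioned above):

```python
import math

print(math.log(math.e))     # log of the closest double to e: typically 1.0
print(math.log(2.72))       # log of a cruder approximation: close to 1, not 1

# absolute error introduced by using 2.72 in place of e
err = abs(math.log(2.72) - 1.0)
print(err)
```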
Is it possible to avoid such errors and their propagation? Computations in science and engineering often involve irrational numbers, and if these errors propagate, the final result can end up far from the correct one.
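To make the propagation concrete, here is a small sketch (assuming Python; the exponent 100 is just an illustration): raising the truncated value 2.72 to a power multiplies its tiny per-factor error a hundred times over.

```python
import math

approx = 2.72 ** 100        # uses the truncated value of e
exact = math.exp(100)       # uses the best available double approximation of e

# a per-factor error of about 0.06% compounds to a several-percent error
rel_error = abs(approx - exact) / exact
print(rel_error)
```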