
If I do .1 + .1 + .1 in Python, I get 0.30000000000000004. (I am not asking about Python in particular, and do not want Python-specific answers.)

The only problem I can see with this is 0.30000000000000004 != 0.3, so I need to take care in how I compare floats. Are there any other problems I need to be aware of with float rounding?

I can't imagine that last digit being much of a problem in real life. For example, if I ask for a "0.30000000000000004 meter metal rod", I'm not going to later complain that "this rod is 0.00000000000000004 meters too long!" (I can just cut the extra off, right? :)

Even with a lot of float calculations going on, I can't imagine the rounding getting so far off that it matters. Am I missing something? When do floating point rounding errors really matter?

– Buttons840

  • I spent months chasing through code in a register (point of sale) system tracking down single-penny errors. I can assure you that to some people, errors in money (and the lack of confidence that the calculation is correct) are *very* important. It's not always the least significant digit: rounding 1.499999999964 cents down rather than up, when it should have been 1.5, occurs in many places. – Jan 20 '14 at 22:13
  • Given the "right" calculations, the rounding error can be arbitrarily large (even larger than the result). Subtraction and division can amplify errors rather quickly. – Patrick Jan 20 '14 at 22:43
  • I too spent far too much time, a long time ago, trying to track down a cumulative 15-cent difference in a financial application (in Fortran!). Particularly when interest is charged or earned, care has to be taken. For straight addition, subtraction, and multiplication of dollars and cents it is simple: multiply by 100 so that you have integers, do the arithmetic, then scale back by the same amount at the end (a sketch of this appears after these comments). For scientific applications I would just check for being within a delta instead of equality. – Teresa Carrigan Jan 20 '14 at 23:43
  • Currently you don't appear to know much about FP. Read http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html before using floating point; even if you do not understand it all, at least you will be wise enough to know what you don't know. – mattnz Jan 21 '14 at 03:26
  • @TeresaCarrigan: With things like interest, is there any tracking of fractional values between billing or payment periods? If someone had $1.02 earning 6%, would it grow by a penny each month (effectively a 12% interest rate), or a penny every other month, or not at all? – supercat Jun 12 '14 at 15:44
  • @supercat I doubt that any financial institution tracks fractions of a penny in interest. I am not an expert though. – Teresa Carrigan Jun 12 '14 at 21:33
  • @supercat Pretty interesting question. But for sub-cent amounts specifically, I believe they are considered insignificant and therefore discarded. – frostymarvelous Jun 17 '18 at 16:21
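
A minimal sketch of the scaled-integer approach Teresa Carrigan describes above (the item price and the half-up rounding are illustrative, and amounts are assumed to fit comfortably in a long):

public class Cents {
    public static void main(String[] args) {
        // Work in whole cents (scaled integers): every addition,
        // subtraction, and multiplication is exact. Only display
        // converts back to dollars.
        long itemCents = 705;                          // $7.05
        // 10% discount, rounded half up in integer arithmetic:
        long discounted = (itemCents * 90 + 50) / 100; // 635 cents
        long subtotal = discounted * 3;                // 1905 cents, exact
        System.out.printf("subtotal: $%d.%02d%n",
                subtotal / 100, subtotal % 100);       // subtotal: $19.05
    }
}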

3 Answers


Variances in the "least significant digit" can cause the entire number to be rounded in the wrong direction.

Let's take an item that costs $0.705: the half cent comes from a discount or a tax, or perhaps it's a 10% discount on something that costs $7.05. Whatever the case, we have three of them and we're computing the total price. The total is $2.115, which needs to be rounded appropriately to $2.12 (half up).

public class Round {
    public static void main(String[] args) {
        double item = 0.705;
        double subtotal = item * 3;     // should be 2.115
        System.out.println(subtotal);

        // Round half up to two decimal places.
        double rounded = Math.round(subtotal * 100.0) / 100.0;
        System.out.println(rounded);

        subtotal = 2.115;               // what the subtotal *should* be
        rounded = Math.round(subtotal * 100.0) / 100.0;
        System.out.println(rounded);
    }
}

However, the output of this program is:

2.1149999999999998
2.11
2.12

And there, we're off by a penny because of something that happened in the least significant digit of a floating point calculation.

Having previously worked on point of sale code that, frighteningly, had doubles scattered all through it (I spent months converting it to an arbitrary-precision fixed point system, and I'm confident I fixed the problem, at least in all the code I was looking at), I can assure you that these errors are a real problem, and the least significant digit being off by one is a big deal.

At first the reaction is "oh, it's only a penny," but this software is about money, and people get very touchy about money being off. There are two parts to that. The first is the confidence the customer loses when they see $0.705 * 3 = $2.11 and know it should be $2.12. The other is that the errors add up: over days, weeks, or months they can grow into sizable amounts when the math isn't being done properly. This often happens in tax calculations (an easy way to get awkward fractions), and the agencies that collect those taxes are much less sympathetic than a customer who is off by a cent, and can do nasty things like audits.
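
For comparison, a minimal sketch of the same computation using java.math.BigDecimal (not the fixed point system this answer describes, just an illustration of exact decimal arithmetic):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class ExactRound {
    public static void main(String[] args) {
        // new BigDecimal(0.705) would inherit the double's error;
        // the string constructor represents 0.705 exactly.
        BigDecimal item = new BigDecimal("0.705");
        BigDecimal subtotal = item.multiply(BigDecimal.valueOf(3));
        System.out.println(subtotal);                              // 2.115, exactly
        System.out.println(subtotal.setScale(2, RoundingMode.HALF_UP)); // 2.12
    }
}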

  • +1 - I've also run into issues where the rounding caused havoc when reading/writing to a device that doesn't have the same floating point encoding/precision as the host system. Reading the value rounded it up, and writing the read value back (without modification) wrote a different set of bits. – Telastyn Jan 21 '14 at 02:09
  • In this case you can fight the round-off error by rounding twice, first to three decimal places, then to two. For this computation you need an exact representation but all previous results can be computed and stored as doubles. This saves quite some memory and time. – maaartinus Apr 04 '14 at 13:38
  • @maaartinus when dealing with financial numbers, any inaccuracy can cause a problem no matter where it occurs. These errors can be compounded in multiple places along the way. The memory difference between working with a pair of ints vs a double isn't really there, and the cpu time when doing all integer math isn't much more costly than double precision... *and* you know you're doing it right. –  Apr 04 '14 at 14:01
  • Pair of `int`s? Are you using rational numbers? How do you ensure they don't overflow? This happens pretty fast when adding a couple of fractions. – maaartinus Apr 04 '14 at 14:04
  • @maaartinus long and int then. A long holds 18 digits. The national debt is 14 digits to the left of the decimal. The int shows where the decimal point is (see [Solutions for floating point rounding errors](http://programmers.stackexchange.com/a/202853/40980)). While you *could* still overflow that, note that the double only has about 16 decimal digits of accuracy and would fail there too (and have rounding errors). –  Apr 04 '14 at 14:10
  • I see, this is a sort of home-made [64-bit mantissa extended](http://en.wikipedia.org/wiki/Extended_precision#IEEE_754_extended_precision_formats) format. I actually agree that sometimes `double` doesn't suffice and that often using a more expensive type is the easiest and sometimes the only way. It's just that often `double` with some more thinking does the job. – maaartinus Apr 04 '14 at 14:17
  • @maaartinus consider adding up 100 items on a receipt - the rounding errors will accumulate. This really boils down to "for exact numbers (like financial amounts) use exact values". –  Apr 04 '14 at 14:26
  • @maaartinus: How do you handle division? Do you require that code specify the desired precision, use as much as possible but allow it to be dropped arbitrarily (so that 1/3 equals 0.333333333333333333 but 1/3+10000000000000000-10000000000000000 equals 0.33), or do you do something else? – supercat Jun 02 '14 at 20:51
  • @supercat I do nothing but final rounding when producing the output. In your example there's `1e16`, and the total amount of money in the world is just a few trillion, i.e., such a huge precision loss never occurs. If it did, I'd be out of luck. When both the number of decimal places and the rounding mode are prescribed, then I'd have to switch to `BigDecimal`, but only for that computation. The nice thing is that all the slightly imprecise `double`s I'm having everywhere become exact again when converted to `BigDecimal` by rounding. – maaartinus Jun 03 '14 at 01:48
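
A sketch of the long-significand-plus-scale representation from the comment thread above (positive amounts only; a production version would need overflow checks):

public class Fixed {
    final long units;  // significand, e.g. 2115
    final int scale;   // decimal places, e.g. 3 -> 2.115

    Fixed(long units, int scale) { this.units = units; this.scale = scale; }

    static long pow10(int n) {
        long p = 1;
        for (int i = 0; i < n; i++) p *= 10;
        return p;
    }

    // Multiplication is exact: significands multiply, scales add.
    // (units * other.units can overflow; a real version must check.)
    Fixed multiply(Fixed other) {
        return new Fixed(units * other.units, scale + other.scale);
    }

    // Drop decimal places, rounding half up.
    Fixed roundHalfUp(int newScale) {
        long factor = pow10(scale - newScale);
        return new Fixed((units + factor / 2) / factor, newScale);
    }

    @Override public String toString() {
        if (scale == 0) return Long.toString(units);
        long p = pow10(scale);
        return units / p + "." + String.format("%0" + scale + "d", units % p);
    }

    public static void main(String[] args) {
        Fixed item = new Fixed(705, 3);                   // 0.705
        Fixed subtotal = item.multiply(new Fixed(3, 0));  // exact: 2.115
        System.out.println(subtotal);                     // 2.115
        System.out.println(subtotal.roundHalfUp(2));      // 2.12
    }
}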

When you are dealing with things that are measured (speed, distance, weight, etc.), rounding is OK: the accuracy of the original measurement is probably less than the accuracy of your calculation, and users accept that the final number is still an estimate, even if it's a good estimate.

When you are dealing with things that are counted (money, people, ICBMs), rounding errors are a disaster. There is always an arithmetically correct answer, and often the accuracy of this answer is enshrined in regulations, treaties, and tax law. Providing a slightly wrong number gets you a free ticket into the Kafkaesque world of auditors, tax collectors, and government inspectors; you may never escape with your sanity intact.

– James Anderson

> Even with a lot of float calculations going on, I can't imagine the rounding getting so far off that it matters. Am I missing something? When do floating point rounding errors really matter?

The biggest problems arise with addition and subtraction. Given two numbers that are very close to one another in terms of magnitude, the difference or sum can lose precision (sometimes a lot), depending on whether they have the same or opposite signs. Given two numbers that are very far from one another in terms of magnitude, the smaller term can vanish into the bit noise.
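
Two small illustrations of these effects (my examples, with values chosen so the loss is visible):

public class Precision {
    public static void main(String[] args) {
        // Absorption: a small term vanishes next to a large one.
        // At 1e16 the spacing between adjacent doubles is 2, so
        // 1e16 + 1.0 rounds straight back to 1e16 and the 1.0 is gone.
        System.out.println((1e16 + 1.0) - 1e16);  // 0.0

        // Cancellation: subtracting nearly equal numbers leaves only
        // the accumulated rounding error, which now dominates.
        double sum = 0.1 + 0.2;       // 0.30000000000000004
        double diff = sum - 0.3;      // true answer is 0
        System.out.println(diff);     // 5.551115123125783E-17 -- pure error
    }
}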

This wreaks havoc with numerical techniques for solving initial value problems (aka numerical integration). Use a lousy technique such as Euler's method and those double precision truncation errors mean the best you can do is a relative error of 10^-8 to 10^-6, and that's only if the interval of interest is small. Use a very good technique and you are lucky if you see a relative error of 10^-14 to 10^-12, and once again that's only if the interval of interest is small. Propagate the planets for millions of years and all bets are off, even with the very best of techniques. You need to use quad precision for those very long intervals of interest.
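
As a sketch of that error growth (my example, not from the answer), explicit Euler on a simple harmonic oscillator steadily inflates the energy, which ought to be conserved:

public class EulerDrift {
    public static void main(String[] args) {
        // Simple harmonic oscillator x'' = -x; exact solution x = cos(t),
        // with conserved energy 0.5 * (v^2 + x^2) = 0.5.
        double x = 1.0, v = 0.0;
        double dt = 1e-3;
        long steps = (long) (1000 * 2 * Math.PI / dt);  // ~1000 periods
        for (long i = 0; i < steps; i++) {
            double a = -x;    // acceleration at the old position
            x += v * dt;      // explicit Euler step
            v += a * dt;
        }
        // Euler's truncation error pumps energy in; rounding error adds
        // noise on top. The result ends up far from 0.5.
        System.out.println(0.5 * (v * v + x * x));
    }
}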

– David Hammen