6

I am writing numerical calculation software in .NET C#, which needs to be blazingly fast. There is a lot of fractional math, so using the decimal type is pretty much out of the question, given its poor speed relative to double. But of course double has its problems with testing for equality, due to floating point rounding issues.

My options seem to be subclassing double and overriding ==, < and >; versus creating extension methods for double equivalent to these. My tendency is to go with the latter - less code to change and maybe it will be less confusing to others reading the code later? Is there another option? What are other good reasons to choose one over the other?

Conrad
  • 288
  • 2
  • 8
  • 19
    How would you subclass `double` when it's a value type and, by definition, not eligible to be a base class? – Damien_The_Unbeliever Oct 21 '20 at 07:42
  • 12
    Note that "blazingly fast" software is unless if it gives completely wrong answers. I also question whether C# is an appropriate choice given your stated requirements. For maximal performance of correct maths, Fortran usually wins. – OrangeDog Oct 21 '20 at 08:18
  • 2
@OrangeDog Fortran doesn't buy you anything over C++ or Rust unless perhaps you're using arrays that are parallelized across a supercomputer cluster. And whilst garbage-collected languages like C#, Java and Haskell are indeed generally somewhat slower, it's not a _huge_ difference (whereas dynamic languages or decimal arithmetic are _much_ slower than floating-point in any of those languages). So, it can be a perfectly valid decision to use C# here. – leftaroundabout Oct 21 '20 at 09:02
  • 1
    @leftaroundabout the Fortran implementation of BLAS outperforms the C, for example (due to additional language restrictions allowing more advanced optimisations). The garbage collection has nothing to do with it, it's the JIT not making use of e.g. vectorisation. Also, if you are using floating point, you can disable denorm and NaN checks if you know they won't be an issue. – OrangeDog Oct 21 '20 at 09:26
  • 2
    @OrangeDog BLAS deals with an extremely simple, extremely homogeneous sort of problem, which thus lends itself ideally for Fortran. For linear algebra, the right thing to do is generally to _call that library_, regardless from what language you use yourself. (Also: neither C nor Fortran is best in BLAS – Cuda is.) – leftaroundabout Oct 21 '20 at 09:59
  • 3
    What kind of numerical calculations are you doing that equality tests are important? Generally algorithms in numerical analysis rely more on tests for inequality. E.g., you deem an algorithm converged when the error is less than some tolerance. About the only time you test for (approximate) equality in practice is when validating results against a known standard, and there it's easy enough to just use an approximate comparison subroutine, instead of pretending that you're really testing for "equality". – Nobody Oct 21 '20 at 13:26
  • 1
The performance secret of Fortran is that it restricts what things may alias (overlap). Without knowing that two arrays do not have partial overlap in memory, many theoretically possible optimizations performed by a C optimizer can produce different results depending on whether partial overlap occurs in two arrays, if those two arrays are not used in a strictly read-only fashion. It is not an insurmountable problem; some C optimizers can issue explicit array range overlap check instructions, and branch to optimizing/non-optimizing routines that produce results consistent with the C specification. – rwong Oct 22 '20 at 17:55
  • 1
    @rwong: From what I understand, a related difference between FORTRAN and C is that in FORTRAN, if one has two array references `a()` and `b()`, a compiler need not accommodate the possibility that `a(i)` might alias `b(i+ofs)` for any non-zero value of `ofs`, but would be required to accommodate the possibility that `a()` and `b()` might be the same array, and thus `a(i)` and `b(i)` might identify the same storage. – supercat Oct 22 '20 at 20:18

4 Answers

47

"double has its problems testing for equality".

No, that is not true. "double" does not have such problems. Equality testing for double values is well defined and usually works as it should (which may sometimes not be what some programmers expect, of course).

Truth is: programmers often have problems with testing for equality correctly in numerical software. You cannot simply fix this by using another data type, or by providing some standard equality comparers with some standard precision up-front. Though such approaches may be part of a solution, you first and foremost need to make sure the programmers in your team know how to do floating point comparisons correctly.
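
For illustration, a tiny example: the comparisons below are exact and behave exactly as defined; it is the decimal-to-binary conversion of the operands that produces the surprise.

```csharp
using System;

class Demo
{
    static void Main()
    {
        Console.WriteLine(0.5 + 0.25 == 0.75); // True  - all three values are exactly representable
        Console.WriteLine(0.1 + 0.2 == 0.3);   // False - the sum is 0.30000000000000004, not 0.3
    }
}
```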

Before reading the rest of my answer, please have a look at "What Every Computer Scientist Should Know About Floating-Point Arithmetic". Now. No excuses.

So, since you have read this paper, you now know that there are several alternatives for how comparisons can be done when using floating point numbers, and one has to pick the correct one for the specific case. For example, it may be necessary to take absolute or relative errors into account, to analyse the required precision for each individual comparison/quantity, or to take into account the specific operations and algorithms which will be used in the numerical software you are designing. It might also be necessary to adapt the scaling of some quantities, or to take other measures to keep rounding errors under control.

To find out what one really needs, I would recommend starting to implement some of the algorithms and determining precisely which kinds of floating point comparisons are required there. When comparisons of the same kind occur more than two or three times, then it is time to refactor them into a reusable library (maybe using extension methods, which are a useful way in C# to add reusable methods to an existing type one cannot change). It should also be clear now why overloading an operator like `==` is not useful, since there is only one such operator per type, with no additional parameters like a precision.

Don't try this up-front until you have already written several such numerical programs!
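
For illustration, a minimal sketch of what such a reusable extension method might eventually look like once the required tolerances are known. The method name and the combined absolute/relative check are just one possible choice, and the tolerances are deliberately left without default values so the caller has to pick them.

```csharp
using System;

public static class DoubleComparisons
{
    // Sketch only: the caller must choose both tolerances deliberately,
    // based on an analysis of the specific comparison being made.
    public static bool IsCloseTo(this double a, double b, double absTol, double relTol)
    {
        double diff = Math.Abs(a - b);
        double scale = Math.Max(Math.Abs(a), Math.Abs(b));
        return diff <= Math.Max(absTol, relTol * scale);
    }
}
```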

Pang
  • 313
  • 4
  • 7
Doc Brown
  • 199,015
  • 33
  • 367
  • 565
  • 12
The floating point article is a classic and definitely recommended reading. But I do think it IS fair to say that "double has its problems testing for equality", since the results are different from what they would be mathematically. Sure it's up to programmers to work around it, but it's definitely a limitation of doubles, inevitable though it may be. – Mark Oct 21 '20 at 11:39
  • 6
@Mark: Lots of iterative numerical algorithms have to stop when the value of some function goes below some "epsilon" - and the number of iterations depends on how epsilon is deliberately chosen. Choosing epsilon=zero would end up in an infinite loop, even if something like an infinitely precise fractional data type were available. – Doc Brown Oct 21 '20 at 12:05
  • 2
... where I agree is that sometimes the typical "64 bits" for a double are not enough, and there are problems where 128 bit floating point numbers can be more useful. But to find out when this is the case, one will usually need a firm understanding of the requirements of the specific situation. – Doc Brown Oct 21 '20 at 12:10
  • 2
Double tests for equality just fine, the problem comes when you try to apply exact mathematical equality to the results of approximate calculations. You can sometimes work around this by replacing equals with some form of "approximately equals" but that can cause problems of its own (most notably that "approximately equals" is not transitive) – Peter Green Oct 21 '20 at 23:07
  • @PeterGreen: exactly my point. – Doc Brown Oct 22 '20 at 04:28
  • 1
    @PeterGreen There is no need for approximate calculations. "3.0 == 3.0000000000000001" is as exact as it gets and still returns the wrong result without any error, exception or warning unlike say integer overflow. The fact that most calculation functions hide just how approximate they are is just an additional problem. – SilentAxe Oct 22 '20 at 09:01
  • 1
    @Mark In almost no application do "mathematical" results matter. Even if the calculations were exact, algorithms approximate & input is an approximation. – philipxy Oct 22 '20 at 10:03
@philipxy I don't know about that, maybe for 90% it does not, but numerical stability is definitely a problem with some algorithms. And equality would work better if doubles behaved identically to real numbers. – Mark Oct 22 '20 at 12:58
  • 6
@Mark: the OP did not make the comparison "double" versus "real numbers", they made the comparison "double" vs. "some higher precision type like `decimal`", and my point is that this is a misconception - higher precision types will always lead to the same problems. What you call a "limitation of double" I would only call "a limitation of having to work on real hardware where any fractional number will get only a small, finite number of bits for its representation, whatever type one chooses to work with". – Doc Brown Oct 22 '20 at 13:06
  • 1
    @DocBrown That's not true, there are types that have exact equality, at the cost of performance and representing irrationals. Each type has its own problems, and for doubles precision/equality is a problem. – Mark Oct 22 '20 at 15:31
@Mark: feel free to suggest a data type which is *practically* (not just in theory!) utilizable for the numeric simulation the OP is going to implement, on a real machine, within performance and memory constraints, and where they have to teach their programmers less about the perils of floating point and how to handle equality testing than for the `double` type. – Doc Brown Oct 22 '20 at 17:21
  • 1
    @DocBrown I'm not saying double isn't the most practical data type, I'm saying double has a problem with equality. If you want to keep denying that, then let's agree to disagree. – Mark Oct 22 '20 at 18:05
  • 2
    @SilentAxe: You've still got an approximate calculation in your example - the conversion between the decimal `3.0000000000000001` source code text and the resulting double is an approximation. – user2357112 Oct 22 '20 at 20:39
  • 3
    @SilentAxe That expression does not give a wrong answer. It's only wrong if you make the assumption that "double precision float" and "real numbers" are the same. If you do write your model assuming they are the same, you will be surprised. That all being said, the argument typically made is that "3.0" and "3.0000000000000001" are different spellings of the same floating point number, just as "1.0" and "0.999..." are different spelling of the same real number. – Cort Ammon Oct 22 '20 at 20:43
  • Testing whether two floats are equal within their current precision is [so incredibly complicated](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/) and not generally offered by libraries that I think it is fair to say "double has its problems testing for equality". – AndreKR Oct 23 '20 at 09:50
@AndreKR: the problem I have with this sentence is that it is misleading - it implies that just choosing another data type could solve the typical problems which arise in this context. It is not "double" that has problems with testing for equality; floating point math in general *causes* problems with equality testing (and fixed point math as well in numerical simulations, that is only a solution for some problems in financial calculation software). Solving the problems has to focus on teaching the programmers, not on the data type. – Doc Brown Oct 23 '20 at 10:44
  • @Mark It's not about double/floating point having problems _testing for equality_. It's about them possibly having problems _getting the same result_ for two calculations that are mathematically equivalent for real numbers (but not for floats). – ilkkachu Oct 23 '20 at 14:36
  • @SilentAxe `3.0000000000000001` is ***exactly*** `3.0`, because `3.0000000000000001` is just an unusual way of expressing the number `3`. If that expression was false, then the computer would be wrong, but it's not. It's true because equality testing works correctly and `3 == 3`. – Paul Oct 23 '20 at 17:09
  • 1
    @DocBrown Meanwhile on planet Earth, some people just need to get some work done. You don't need to complete a Math Analysis course before you calculate a few derivatives. You don't need to know how to change the oil on a car before you drive it. You don't need to be able to understand scenarios where ULPs are inadequate for floating point comparisons before starting with some absolute or relative tolerances. It's best to give some practical advice rather saying you need to start with the fundamentals and build up from first principles. – Eric Oct 23 '20 at 22:20
@Eric: I am not a fan of "car vs programming" comparisons, but since you started one: when you need to drive a car, sooner or later you have to refuel it or refill some oil, and then you will be better off knowing the exact sort of fuel or oil, otherwise you risk wrecking your car. But the OP seems to be one who is more likely to accept "wrecking his car" (or write faulty programs) than to learn the basics. See which answer the OP accepted: his own, with the nonsense of a default precision. – Doc Brown Oct 24 '20 at 06:37
26

"financial calculation software"

"double has its problems testing for equality, with floating point rounding issues."

These two things are not compatible. Do not write a `==` that isn't transitive, especially not for financial software. Depending on the calculations you are doing, you might not need floating point numbers at all, but could instead do everything in fixed point (i.e. integer) arithmetic.
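
For illustration, a minimal sketch of the fixed-point idea (the `Money` type and its members are hypothetical, just one way to do it): store amounts as an integer count of the smallest currency unit and compare those integers exactly.

```csharp
// Hypothetical fixed-point money type: new Money(1050) represents 10.50
// in a currency with 100 minor units. Equality is exact and transitive.
public readonly struct Money
{
    public long Cents { get; }
    public Money(long cents) => Cents = cents;

    public static Money operator +(Money a, Money b) => new Money(a.Cents + b.Cents);
    public static bool operator ==(Money a, Money b) => a.Cents == b.Cents;
    public static bool operator !=(Money a, Money b) => a.Cents != b.Cents;

    public override bool Equals(object obj) => obj is Money m && m.Cents == Cents;
    public override int GetHashCode() => Cents.GetHashCode();
}
```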

Caleth
  • 10,519
  • 2
  • 23
  • 35
  • I was unnecessarily specific - this is really only tangentially financial-related; see edits. Your points are well-taken. – Conrad Oct 20 '20 at 22:09
  • 1
Unfortunately, programming languages nowadays are expected to define `==` and `!=` in a broken non-transitive fashion for floating-point types, rather than specifying e.g. that if x and y are both NaN, x==y, x>=y, and x<=y will be true (and other comparisons false), and if x is NaN and y isn't or vice versa, x!=y will be true and all other comparisons false. That would have allowed testing for NaN via !(x>=0 || x<=0), while keeping == as an equivalence relation. – supercat Oct 21 '20 at 16:47
  • 2
    @supercat as far as I can tell, you're just proposing another kind of broken. There are different reasons for a number to be NaN, but the reason isn't saved in the object. So you really cannot assume NaN==NaN, because that could mean 0/0==sqrt(-1). – Eric Duminil Oct 22 '20 at 05:42
  • 1
@supercat also, do you have an example of a non-transitive ==? The only half-surprising fact I could find is that == isn't reflexive with NaN. != cannot be transitive, and isn't supposed to be, is it? – Eric Duminil Oct 22 '20 at 06:59
  • 3
    To combine `NaN` with `==`, you need `NaB` - not a Boolean. The fundamental idea of `NaN` is that it propagates through calculations, and that means it has to propagate through expressions involving booleans as well. – MSalters Oct 22 '20 at 09:09
  • @MSalters: I thought about that too. But `NaB` would basically break the [law of excluded middle](https://en.wikipedia.org/wiki/Law_of_excluded_middle), and it would make every boolean logic so much harder. It would be like checking for `null` before every boolean expression. – Eric Duminil Oct 22 '20 at 11:15
@EricDuminil: The most important aspect of an equivalence relation is reflexive identity: for any x, x==x. An implementation which uses different NaN bit patterns for different purposes could decide to partition them into equivalence sets in any convenient fashion without breaking the `==` equivalence relation, but lumping all NaNs together for purposes of equivalence would be no worse than the present situation which allows no means of distinguishing them. As for transitivity, I guess there might not be any scenarios where the present scheme is non-transitive, but... – supercat Oct 22 '20 at 15:23
  • @EricDuminil: any comparison method that never reports anything in a set equal to anything--not even itself--will never be non-transitive with respect to members of that set. Transitivity is thus rather useless without reflexive identity. – supercat Oct 22 '20 at 15:26
  • 1
    @supercat == over { NaN } is vacuously transitive, because the relation is empty – Caleth Oct 22 '20 at 15:29
  • @MSalters: It may be useful to have a comparison operator that yields a true/false/unknown output, yielding the latter when fed a NaN, but the concept of a mapping collection is only meaningful for sets that have a defined equivalence relation. It may be reasonable to have a set of comparison operators where NaN==NaN would yield false but NaN!=NaN would *also* yield false and `x!=NaN` would yield true if `x` isn't NaN; code could then use either `x==y` or `!(x!=y)` depending upon what kind of comparison it needed (thus allowing the latter to be used as an equivalence test), but... – supercat Oct 22 '20 at 15:31
  • 1
@MSalters: ...the IEEE spec mandates that neither `==` nor `!=` can be used as an equivalence relation. BTW, given that many languages used `<>` as their non-equal operator, having the equality operator test equivalence and the inequality operator test for ordered difference might have been the most logical approach, so if x and y are the results of separate operations yielding NaN, `x==x` (and `y==y`) would be true, `x<>x` and `x<>y` would be false, and `x==y` could be either true or false, but would behave consistently for any particular `x` and `y`. BTW, I wonder if there... – supercat Oct 22 '20 at 15:32
  • @supercat my basic point was to not make `==` any worse. NaN is compromise. How would you make a total function out of `/`? – Caleth Oct 22 '20 at 15:37
  • @supercat comparisons involving IEEE 754 floating point numbers are "broken" in the same sense that arithmetic on the reals is "broken" – Caleth Oct 22 '20 at 15:41
...would have been any difficulty replacing signed zeroes with infinitesimals ("tiny") which would be distinct from an additive-identity zero? So 1/+inf would yield +tiny, and 1/-inf would yield -tiny, 1/+tiny would yield +inf, and 1/-tiny would yield -inf, but 1/0 would yield NaN, and `1/x==1/(0+x)` would be true for all x. If x==NaN were true only when x was the same NaN, and x<>NaN were false for all x, I'd define infinitesimal comparisons so +tiny==0, +tiny<0, +tiny>0, +tiny<>0, and +tiny<=0 would be false, but +tiny>=0 would be true. – supercat Oct 22 '20 at 15:43
  • @Caleth: If one wants infinitesimals to work properly, both positive and negative infinitesimals need to be distinct from zero, rather than having zero behave like positive infinitesimal. There is no fundamental reason why floating-point numbers can't behave as an algebraic construct with an equivalence relation; all of the floating-point formats I've used that aren't based on IEEE-754 do precisely that. The fact that it's possible for x+y==x+z to be true when y!=z isn't really a breakage means that + isn't an algebraic group operator, but not all algebraic constructs need to be groups. – supercat Oct 22 '20 at 15:50
  • @supercat your proposal is interesting, but I don't see how it could solve the underlying problem : you still need to somehow represent uncountably many numbers with a fixed amount of bits, no matter if 32, 64 or 128 bits. Also, `sqrt(-1)` and `sqrt(-2)` are clearly distinct, and shouldn't belong to the same NaN class, so a flag wouldn't do. The current system isn't perfect, but I suppose that no system could be perfect. And at least NaN works as a big "There be dragons. Forget equivalence classes" sign. – Eric Duminil Oct 22 '20 at 16:04
  • @supercat what do you propose for the infinity of operations that are only defined for a subset of `double`s? – Caleth Oct 22 '20 at 16:24
  • @Caleth: One specifies that if either operand to `+` is some kind of NaN, the result will be likewise; the result will also be NaN if the first operand is +Inf and the other is -Inf or vice versa. Otherwise, the operator `+` yields the member of the equivalence class which is arithmetically closest to the arithmetic sum of the nominal value of the operands. The notion of an equivalence class is fundamental to many programming tasks which would care nothing about the objects being compared beyond equivalence. For example, if one has a function which accepts a blob and yields... – supercat Oct 22 '20 at 16:39
  • ...some other kind of blob based purely on the contents of the former, and has a means of building an equivalence-mapped collection, one can build a cache of all of the distinct blobs that have been fed to the function along with their associated outputs, without having to know or care about what any of the data in the input or output blobs means. Such a collection will blow up when using floating-points if the inputs could be NaN, because every time the function is invoked with NaN would generate another entry in the collection. Note that IEEE floating-point is broken not only with... – supercat Oct 22 '20 at 16:42
  • ...respect to the fact that NaN!=NaN, but also with respect to the fact that x==y does not imply that 1/x==1/y. Two values should only be regarded as *equivalent* if they will behave identically in all circumstances; the values 1/+Inf and 1/-Inf will behave differently if each is divided into 1. – supercat Oct 22 '20 at 16:45
@supercat: AFAICT, your `+-tiny` idea could never work in practice. Did you ever try to implement it? I'd be happy to be proved wrong. – Eric Duminil Oct 22 '20 at 20:16
  • @EricDuminil: The fact that existing hardware uses IEEE-754 would make it impractical without hardware designed for it, but I don't see that there would have been any particular difficulty supporting it had IEEE specified things that way. Use exponent bit pattern of 0001 rather than zero as the "denormal" indicator, all-bits-zero to indicate zero, and a combination of an all-bits-zero exponent and a certain bit set in the mantissa as NaN. What aspects would not have been practical to implement? – supercat Oct 22 '20 at 20:26
  • @supercat: What would `tiny+tiny` be? What would `tiny*inf` be? How many `tiny` would you need in order to get to `1.0`? – Eric Duminil Oct 22 '20 at 20:37
  • @EricDuminil: +Tiny+tiny yields +tiny. +tiny-tiny yields additive-identity zero. and tiny * inf or zero * inf yields NaN. The only "iffy" cases I can see would be +inf-x and -inf+x [for numerical x>0], which the present Standard handles by yielding an infinity of the same sign as the original--behavior which may sometimes yield results that are less meaningful than they would appear, and cases like 1/(+tiny+tiny-tiny-tiny), which would yield -inf, but could have yielded NaN or +Inf if the operations were performed in a different sequence. Those problems are small, however, compared... – supercat Oct 22 '20 at 20:51
  • ...with the asymmetric behavior of the IEEE-754 signed zeroes, which don't even guarantee things like equivalence between x-y and -1*(y-x). – supercat Oct 22 '20 at 20:54
  • @supercat tiny + tiny cannot be tiny, because if you keep adding tiny, you should be able to reach any number. See https://en.m.wikipedia.org/wiki/Archimedean_property – Eric Duminil Oct 22 '20 at 21:01
  • @EricDuminil: The operator "+" doesn't yield the arithmetic sum, but rather the representable value which is "nearest", according to rounding rules, to the arithmetic sum if the latter is defined and within range of the type. If the numerical value of +tiny is infinitesimally greater than zero, then +tiny+tiny would be closer to that than to the next larger value, and would thus be rounded back to +tiny. – supercat Oct 22 '20 at 21:11
  • 1
    @EricDuminil Note that even "regular" numbers in IEEE-754 don't satisfy the archimedean property; if you keep adding 1's you will eventually run out of precision (at which point you get n+1 = n), before hitting the largest representable finite value. – Mario Carneiro Oct 23 '20 at 07:52
  • @EricDuminil both `sqrt(-1)` and `sqrt(-2)` are conceptually (in reals) _error cases_ – arguably they should just raise an exception, which is kind of what you get with signalling NaNs. Quiet NaNs are used as a compromise, to avoid any branching overhead but still make sure errors never go unnoticed. I quite like the idea of extending that to booleans too. The law of excluded middle is overrated – not only are proof assistants like Coq doing away with it. also both safe floating-point usage and exception propagation can be seen in a similar light as non-classical logic. – leftaroundabout Oct 23 '20 at 19:29
To handle NaN, I’d recommend using ==, < etc. according to IEEE; sorting an array by a floating-point value should put NaNs all at the beginning or all at the end of the array. For dictionaries with a floating point key, consider all NaNs equal to each other so _any_ object can be added to a dictionary, make sure that all NaNs have the same hash code, and that +0 and -0 have the same hash code. Things get interesting when you have “optional” as a type, where nil and NaN are different. – gnasher729 Jan 17 '21 at 16:55
And for sorting, there is no reason to do anything more complicated than sorting by the exact floating point value. – gnasher729 Jan 17 '21 at 17:00
13

Don’t do that. Think about what you really want and name functions accordingly. Given a number x, there is a range of numbers that are likely equal to x (those close to x). A number is definitely greater than x if it is greater than x and not in the LikelyEqual range. A number is likely greater than or equal to x if it is greater than or equal to x, or in the LikelyEqual range, and so on. You want these functions:

LikelyEqual
LikelyNotEqual
DefinitelyGreater
LikelyGreaterOrEqual
DefinitelyLess
LikelyLessOrEqual
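
A rough C# sketch of what such a family of functions could look like. The single absolute tolerance parameter is just one possible way to define the "LikelyEqual range"; substitute whatever notion of closeness fits the problem.

```csharp
using System;

public static class FuzzyCompare
{
    public static bool LikelyEqual(double a, double b, double tol)
        => Math.Abs(a - b) <= tol;

    public static bool LikelyNotEqual(double a, double b, double tol)
        => !LikelyEqual(a, b, tol);

    public static bool DefinitelyGreater(double a, double b, double tol)
        => a > b && !LikelyEqual(a, b, tol);

    public static bool LikelyGreaterOrEqual(double a, double b, double tol)
        => a >= b || LikelyEqual(a, b, tol);

    public static bool DefinitelyLess(double a, double b, double tol)
        => a < b && !LikelyEqual(a, b, tol);

    public static bool LikelyLessOrEqual(double a, double b, double tol)
        => a <= b || LikelyEqual(a, b, tol);
}
```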

This solution makes it absolutely clear what your code does. Your solution is clever. Clever is rarely a good idea.

PS. Feel free to use better names. Just do NOT call a function “equal” that doesn’t return whether the arguments are equal.

gnasher729
  • 42,090
  • 4
  • 59
  • 119
  • 7
    I'd call it "nearly equal" or "mostly equal" not "likely". – Bergi Oct 21 '20 at 07:34
Agree with Bergi. Epsilon is definitely greater than zero. Also, this does not communicate whether the values have a small absolute or relative difference. – MSalters Oct 22 '20 at 09:11
  • Bergi, the name came from a use case where it was reasonably possible that two results were mathematically equal, but slightly different due to rounding errors, but where it was highly unlikely to have results that were mathematically very close but different. “NearlyEqual” would be better for example for an iteration, where results will never be mathematically equal, but floating point arithmetic is expected to give close or equal results. – gnasher729 Jan 17 '21 at 16:37
I think when I implemented it, I asked: what are the integer fractions with the smallest denominator such that a/b != c/d mathematically, but `LikelyEqual(a/b, c/d)` returns true? – gnasher729 Jan 17 '21 at 16:41
8

Extension methods seem to be the best way to go here, so we don't confuse "exactly equal" with "almost equal." An optional precision argument should be used (give a default that is best for the general situation; it can always be overridden). So for testing "equality", have something like

```csharp
using System;
public static class DoubleExtensions // extension methods must live in a static class
{
    public static bool ApproxEqual(this double a, double b, double precision = 0.0000000001)
    {
        return Math.Abs(a - b) <= precision;
    }
}
```
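
A quick usage sketch (assuming the static class above and `using System;`):

```csharp
double x = Math.Sqrt(2) * Math.Sqrt(2);  // 2.0000000000000004 after rounding
Console.WriteLine(x == 2.0);             // False
Console.WriteLine(x.ApproxEqual(2.0));   // True with the default precision
```
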
Conrad
  • 288
  • 2
  • 8
Instead of 0.0000001, which is subject to rounding issues as well, why not use [epsilon](https://www.johndcook.com/blog/2010/06/08/c-math-gotchas/) instead? – Christophe Oct 20 '20 at 22:54
  • 7
    @Christophe because `Double.Epsilon` is ~5e-324, which is **way** beyond the floating point error ranges. Using that results in essentially overloading ==/>. – Conrad Oct 20 '20 at 23:12
  • @Christophe also using 0.0000001 is going to be the same as using any other constant. Comparing two doubles in C#/.NET is not "subject to rounding issues." – Conrad Oct 21 '20 at 00:29
  • 6
    In numerical simulation software, one should always analyse what precision is required when `ApproxEqual` is used, in the specific usage context. So I would heavily recommend against providing any default value for `precision` . Force the user of such a function always to **think** about the required precision, and make them choose a value for it **deliberately**. – Doc Brown Oct 21 '20 at 05:59
I develop a plugin for CAD software and the API exposes an `Equals` function that also takes 2 points / arrays of doubles with a double `tolerance` as its last parameter, so I'd think this solution is not unreasonable. I don't understand the downvote(s). – Thomas Oct 21 '20 at 06:55
  • 1
This function is not good for the general case. Better would be an absolute and a relative tolerance (with either no default arguments (as DocBrown suggests) or default arguments set to zero (with a better name for the function)). With only an absolute precision you get arbitrarily large relative errors for small numbers (e.g. comparison with 0 or anything that is smaller than `precision`). For large numbers this even reduces to the exact comparison. You are making an implicit assumption here about the magnitude of the numbers you want to compare. For the "numerical calculation" use case, this assumption cannot be made. – Andreas H. Oct 21 '20 at 07:17
  • 8
    [To apply the best, most favorable interpretation of the answerer's intention](https://en.wikipedia.org/wiki/Principle_of_charity), I think this answer points out that the tolerance (whether absolute or relative or both) is something that needs to be [**injected**](https://stackoverflow.com/q/48516359/) into the relation comparison function (i.e. allowing customization at each use site), rather than baked into the user-defined type. That is it. It does not go into considering what would be considered an actual, "industrially robust" implementation. – rwong Oct 21 '20 at 09:06