75

Whenever I need division, for example in a condition check, I would like to refactor the expression to use multiplication instead, for example:

Original version:

if(newValue / oldValue >= SOME_CONSTANT)

New version:

if(newValue >= oldValue * SOME_CONSTANT)

Because I think it can avoid:

  1. Division by zero

  2. Overflow when oldValue is very small

Is that right? Is there any problem with this habit?

Peter Mortensen
ocomfd
  • 1
    Interesting question. I never thought about this before, but unless anyone points out issues with this, this will be my preferred way – vikarjramun Jan 03 '18 at 02:05
  • 41
    Be careful that with negative numbers, the two versions check totally different things. Are you certain that `oldValue >= 0` ? – user2313067 Jan 03 '18 at 05:57
  • 37
    Depending on the language (but most notably with C), whatever optimization you can think of, the compiler can usually do it better, _-OR-_, has enough sense to not do it at all. – Mark Benningfield Jan 03 '18 at 06:29
  • 65
    It is never "a good practice" to **always** replace code X by code Y when X and Y are not semantically equivalent. But it is **always** a good idea to look at X and Y, switch on the brain, **think about what the requirements are**, and then make a decision which of the two alternatives is more correct. And after that, you should also think about which tests are required to verify you got the semantic differences right. – Doc Brown Jan 03 '18 at 06:41
  • 12
    @MarkBenningfield: Not whatever, the compiler cannot optimise away divide by zero. The "optimisation" you're thinking of is "speed optimisation". The OP is thinking about another kind of optimisation -- bug avoidance. – slebetman Jan 03 '18 at 07:54
  • 2
    @slebetman: I see your distinction, but it doesn't apply to OP's situation. If you encounter div/0, you have a mathematically invalid state that should be checked. If you don't check before the div/0, thinking the multiplication will save you the hassle, then you have masked the problem by not checking the resulting product. If you're going to check the product, you might as well check before the div/0. – Mark Benningfield Jan 03 '18 at 08:16
  • 25
    Point 2 is bogus. The original version can overflow for small values, but the new version can overflow for big values, so neither is safer in the general case. – JacquesB Jan 03 '18 at 09:08
  • 1
    @MarkBenningfield I don't think we know enough about OP's situation to decide that `oldValue == 0` is an invalid state. – Lord Farquaad Jan 03 '18 at 14:15
  • 2
    Note that multiplication is usually also faster; if you are executing code thousands of times per second, e.g. in JavaScript that is not transpiled, it might even improve performance. But overall it's not worth it ;-). – Mathijs Segers Jan 03 '18 at 14:28
  • @LordFarquaad: I'm not saying that `oldValue == 0` is an invalid state in the problem domain. But, if `oldValue` is being used as a divisor, then it is mathematically invalid if it is 0. – Mark Benningfield Jan 03 '18 at 15:56
  • 2
    @MarkBenningfield I don't really see your point then; I don't think anyone's arguing that division by 0 is ok. OP's point 1 is that one benefit to switching to multiplication is that you don't risk division by 0. If `oldValue == 0`, the old conditional throws an error regardless of anything, while the new one just checks `newValue`'s sign. Whether or not that's ok depends on the expected behavior, but it most certainly does eliminate a potential div/0 exception. It's more than an optimization change; it changes the program domain, which is something the compiler won't try to optimize. – Lord Farquaad Jan 03 '18 at 16:12
  • 1
    If OP had chosen a different expression to begin with, such as `(newValue / SOME_CONSTANT >= oldValue)`, and if `SOME_CONSTANT` is indeed compile-time constant and positive, then the compiler will indeed convert that division into a multiplication-by-reciprocal. Otherwise, when neither is constant, this optimization doesn't apply. Knowing and relying on compiler optimization is good, but it doesn't save one from having to know that these compiler optimizations exist and the preconditions for the compiler to apply them. – rwong Jan 03 '18 at 16:20
  • 1
    Is old value always non negative? – copper.hat Jan 03 '18 at 16:20

10 Answers

76

Two common cases to consider:

Integer arithmetic

Obviously if you are using integer arithmetic (which truncates) you will get a different result. Here's a small example in C#:

public static void TestIntegerArithmetic()
{
    int newValue = 101;
    int oldValue = 10;
    int SOME_CONSTANT = 10;

    if(newValue / oldValue > SOME_CONSTANT)
    {
        Console.WriteLine("First comparison says it's bigger.");
    }
    else
    {
        Console.WriteLine("First comparison says it's not bigger.");
    }

    if(newValue > oldValue * SOME_CONSTANT)
    {
        Console.WriteLine("Second comparison says it's bigger.");
    }
    else
    {
        Console.WriteLine("Second comparison says it's not bigger.");
    }
}

Output:

First comparison says it's not bigger.
Second comparison says it's bigger.

Floating point arithmetic

Aside from the fact that division behaves differently when the divisor is zero (integer division throws an exception, while IEEE floating-point division yields infinity or NaN; multiplication does neither), it can also result in slightly different rounding errors and a different outcome. Simple example in C#:

public static void TestFloatingPoint()
{
    double newValue = 1;
    double oldValue = 3;
    double SOME_CONSTANT = 0.33333333333333335;

    if(newValue / oldValue >= SOME_CONSTANT)
    {
        Console.WriteLine("First comparison says it's bigger.");
    }
    else
    {
        Console.WriteLine("First comparison says it's not bigger.");
    }

    if(newValue >= oldValue * SOME_CONSTANT)
    {
        Console.WriteLine("Second comparison says it's bigger.");
    }
    else
    {
        Console.WriteLine("Second comparison says it's not bigger.");
    }
}

Output:

First comparison says it's not bigger.
Second comparison says it's bigger.

In case you don't believe me, here is a Fiddle which you can execute and see for yourself.

Other languages may be different; bear in mind, however, that C#, like many languages, implements the IEEE 754 floating-point standard, so you should get the same results in other standardized runtimes.

Conclusion

If you are working greenfield, you are probably OK.

If you are working on legacy code, and the application is a financial or other sensitive application that performs arithmetic and is required to provide consistent results, be very cautious when changing around operations. If you must, be sure that you have unit tests that will detect any subtle changes in the arithmetic.

If you are just doing things like counting elements in an array or other general computational functions, you will probably be OK. I am not sure the multiplication method makes your code any clearer, though.

If you are implementing an algorithm to a specification, I would not change anything at all, not just because of the problem of rounding errors, but so that developers can review the code and map each expression back to the specification to ensure there are no implementation flaws.

John Wu
  • 42
    Second the financial bit. This sort of switch is asking for accountants to be chasing you with pitchforks. I remember 5,000 lines where I had to put more effort into keeping the pitchforks at bay than in finding the "right" answer--which actually was generally slightly wrong. Being off by .01% didn't matter, absolutely consistent answers were mandatory. Thus I was forced to do the calculation in a way that caused a systematic roundoff error. – Loren Pechtel Jan 03 '18 at 05:42
  • 9
    Think of buying 5 cent candy (not that any exists anymore.) Buy 20 pieces, the "correct" answer was no tax because there was no tax on 20 purchases of one piece. – Loren Pechtel Jan 03 '18 at 05:43
  • 24
    @LorenPechtel, that's because most tax systems include a rule (for obvious reasons) that tax is levied per transaction, and tax is due in increments no smaller than the smallest coin of the realm, and fractional amounts are rounded down in the taxpayer's favour. Those rules are "correct" because they are lawful and consistent. The accountants with pitchforks probably know what the rules actually are in a way that computer programmers will not (unless they are also experienced accountants). A 0.01% error will likely cause a balancing error, and it is unlawful to have a balancing error. – Steve Jan 03 '18 at 08:55
  • 9
    Because I've never heard the term _greenfield_ before, I looked it up. Wikipedia says it's "a project which lacks any constraints imposed by prior work". – Henrik Ripa Jan 03 '18 at 10:23
  • 4
    Perhaps interesting related to @Steve's comment: about 10 years ago a major grocery store in NL attempted to exploit EU tax rules. NL tax rules did specify tax had to be levied per transaction, but EU tax rules didn't. The store wanted to do it per item instead, so less tax would have to be paid. It went to court and they lost. The argument was basically that because the EU tax rules didn't specify this, the NL tax rules and EU tax rules didn't conflict, so the NL tax rules were valid and had to be followed. You can't make up your own rounding rules. – hvd Jan 03 '18 at 12:24
  • 3
    @HenrikRipa, I'm not quite sure why, but seeing that Wikipedia quote has totally cracked me up! There's something about a project "lacking any constraints" that conjures an image of legal rules and professional conventions being flouted, accepted solutions being ignored, and budgets being blown. – Steve Jan 03 '18 at 12:47
  • For integral math you can add `(SOME_CONSTANT - 1)` to one side as needed to achieve the same result. Just be careful you do it correctly. It is only necessary when moving the division from anything other than the < side and should be put on the same side as the multiplication. – Erroneous Jan 03 '18 at 13:30
  • 10
    @Steve: My boss recently contrasted "greenfield" to "brownfield". I remarked that certain projects are more like "blackfield"... :-D – DevSolar Jan 03 '18 at 14:21
  • 3
    If it's something financial, you shouldn't be using integer math or floating point math - you should be using a specialized money class. – corsiKa Jan 03 '18 at 18:37
  • @Steve Note that I said "think of"--this wasn't tax, but rather what an installer got paid for the work. The problem was there was an installation budget of which the installer got a percentage. – Loren Pechtel Jan 04 '18 at 01:11
  • @LorenPechtel, ah that's fair. Your accountants with pitchforks may have been right or wrong, but either way it's never acceptable as a principle to be inconsistent or to tolerate error in an unsystematic way, so any argument of that nature would go down like a lead balloon with an accountant. And anything that involves an existing financial relationship with an *external* party may well be the product of very contentious negotiation and/or subject to very close scrutiny, and *any unilateral* change may provoke a flurry of time-consuming queries from multiple parties if not outright arguments. – Steve Jan 04 '18 at 07:25
  • Saying that floating point multiplication does not generate an error might lead one to believe that something like `0.3*3.3 == 0.99` would be true, [which it's not (in C#)](https://ideone.com/EnsadS). – Bernhard Barker Jan 04 '18 at 13:41
  • @JaredSmith Having consulted for HFT's in the past, that's bologna. They need correctness even more because their margins are so thin that a rounding error might cost them millions. – corsiKa Jan 04 '18 at 15:58
  • @corsiKa I stand corrected. – Jared Smith Jan 04 '18 at 16:03
  • I remember hearing about a payroll system that was migrated to a new OS (or compiler, or something like that) and some results changed by a penny, resulting in outrage from the employee unions. – Barmar Jan 04 '18 at 16:54
  • @Steve As originally written it totalled up the numbers and applied the percentages--they only got a page with the job and the pay, they didn't see the inner workings. There were higher detail modes meant only for diagnostic use but they were occasionally used when there was a dispute about why this "identical" job paid less than that one. Oops, the roundoff was different and some installers saw it. (The point of the detail printouts was to understand where the numbers came from, roundoff differences were understood by the intended audience.) – Loren Pechtel Jan 04 '18 at 19:43
  • @LorenPechtel, I gather then that what befell you is exactly one of the risks I predicted in principle. Sometimes what is at stake is not the amount of money but the principle of who decides the calculation - you would not tolerate me taking from you and your colleagues a half-penny of your wages each week, not because of the amount involved but because you recognise the principle that I do not have the right to decide such a thing, and if you acquiesce to my mentality and my assertion of the right (even if it is an ignorant rather than an aggressive act against you), then I may go further. – Steve Jan 05 '18 at 14:34
  • @Steve Except the low-detail ones were the accepted pay calculation and the right values. The high-detail ones had the rounding errors due to calculating percentages on an item-by-item basis--note that the change actually favored the company, not the workers. – Loren Pechtel Jan 06 '18 at 02:49
25

I like your question as it potentially covers many ideas. On the whole, I suspect the answer is it depends, probably on the types involved and the possible range of values in your specific case.

My initial instinct is to reflect on the style, i.e. your new version is less clear to the reader of your code. I imagine I would have to think for a second or two (or perhaps longer) to determine the intention of your new version, whereas your old version is immediately clear. Readability is an important attribute of code, so there is a cost in your new version.

You are right that the new version avoids a division by zero. Certainly you do not need to add a guard (along the lines of if (oldValue != 0)). But does this make sense? Your old version reflects a ratio between two numbers. If the divisor is zero, then your ratio is undefined. This may be more meaningful in your situation, i.e. you should not produce a result in this case.
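
For instance, here is a minimal sketch of making that case explicit instead of letting the rewritten comparison silently decide it (variable names are taken from the question; HandleUndefinedRatio is a hypothetical placeholder for whatever an undefined ratio should mean in your domain):

    if (oldValue == 0)
    {
        // Hypothetical handling: log, skip, or throw, depending on the domain.
        HandleUndefinedRatio();
    }
    else if (newValue / oldValue >= SOME_CONSTANT)
    {
        // ... the normal case
    }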

Protection against overflow is debatable. If you know that newValue is always larger than oldValue, then perhaps you could make that argument. However there may be cases where (oldValue * SOME_CONSTANT) will also overflow. So I don't see much gain here.
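
As a rough illustration of that flip side, here is a small sketch with hypothetical values (32-bit int arithmetic assumed; the unchecked keyword just makes C#'s default wrap-around behaviour explicit):

    int newValue = 2_000_000_000;
    int oldValue = 1_000_000_000;
    int SOME_CONSTANT = 3;

    // Old form: 2_000_000_000 / 1_000_000_000 == 2, so the check is correctly false.
    Console.WriteLine(newValue / oldValue >= SOME_CONSTANT);             // False

    // New form: 1_000_000_000 * 3 overflows int and wraps to -1,294,967,296,
    // so the comparison silently becomes true.
    Console.WriteLine(newValue >= unchecked(oldValue * SOME_CONSTANT));  // True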

There might be an argument that you get better performance because multiplication can be faster than division (on some processors). However, there would have to be many calculations such as these for this to be a significant gain, i.e. beware of premature optimisation.

Reflecting on all of the above, in general I don't think there's much to be gained with your new version as compared to the old version, particularly given the reduction in clarity. However, there may be specific cases where there is some benefit.

dave
  • 16
    Ehm, arbitrary multiplication being more efficient than arbitrary division is not really processor-dependent, for real-world machines. – Deduplicator Jan 03 '18 at 03:15
  • 1
    There is also the issue of integer vs floating point arithmetic. If the ratio is fractional, the division needs to be performed in floating point, requiring a cast. Missing the cast will cause an unintended mistake. If the fraction happens to be a ratio between two small integers, then rearranging them allows the comparison to be conducted in integer arithmetic. (At which point your arguments will apply.) – rwong Jan 03 '18 at 16:13
  • @rwong Not always. Several languages out there do integer division by dropping the decimal part, so no cast is necessary. – T. Sar Jan 04 '18 at 17:31
  • @T.Sar The technique you describe and the semantics described in the answer are different. Semantics is whether the programmer intends the answer to be a floating-point or fractional value; the technique you describe is the division by reciprocal multiplication, which is sometimes a perfect approximation (substitution) for an integer division. The latter technique is typically applied when the divisor is known in advance, because the derivation of the integer reciprocal (shifted by 2**32) can be done at compile-time. Doing that at runtime wouldn't be beneficial because it's more CPU-expensive. – rwong Jan 04 '18 at 22:24
24

No.

I'd probably call that premature optimization, in a broad sense, regardless of whether you're optimizing for performance, as the phrase generally refers to, or anything else that can be optimized, such as edge-count, lines of code, or even more broadly, things like "design."

Implementing that sort of optimization as a standard operating procedure puts the semantics of your code at risk and potentially hides the edges. The edge cases you see fit to silently eliminate may need to be explicitly addressed anyway. And it is infinitely easier to debug problems around noisy edges (those that throw exceptions) than around those that fail silently.

And, in some cases, it's even advantageous to "de-optimize" for the sake of readability, clarity, or explicitness. In most cases, your users won't notice that you've saved a few lines of code or CPU cycles to avoid edge-case handling or exception handling. Awkward or silently failing code, on the other hand, will affect people -- your coworkers at the very least. (And also, therefore, the cost to build and maintain the software.)

Default to whatever is more "natural" and readable with respect to the application's domain and the specific problem. Keep it simple, explicit, and idiomatic. Optimize as is necessary for significant gains or to achieve a legitimate usability threshold.

Also note: Compilers often optimize division for you anyway -- when it's safe to do so.

svidgen
  • 11
    -1 This answer doesn't really fit the question, which is about the potential pitfalls of division - nothing to do with optimisation – Ben Cottrell Jan 03 '18 at 09:39
  • 13
    @BenCottrell It fits perfectly well. The pitfall is in placing value in pointless performance optimisations at the cost of maintainability. From the question **"is there any problem for this habit?"** - yes. It will quickly lead to writing absolute gibberish. – Michael Jan 03 '18 at 11:44
  • 2
    @Michael Perhaps you could point out to me where in the question there is any mention whatsoever about performance or optimisation? – Ben Cottrell Jan 03 '18 at 12:32
  • 1
    @BenCottrell OP asked **"Is it good practice..."**. And good practice in software development incorporates many things - maintainability, scalability, extensibility, the list goes on. – Michael Jan 03 '18 at 12:48
  • 9
    @Michael the question is not asking about any of those things either - it is specifically asking about the **correctness** of two different expressions which each have different semantics and behaviour, but are both intended to fit the same requirement. – Ben Cottrell Jan 03 '18 at 12:52
  • 5
    @BenCottrell Perhaps you could point out to me where in the question there is any mention whatsoever about correctness? – Michael Jan 03 '18 at 12:53
  • 2
    @Michael Please see the two points in the question about overflow and divide-by-zero errors – Ben Cottrell Jan 03 '18 at 12:53
  • 5
    @BenCottrell You should have just said 'I can't' :) – Michael Jan 03 '18 at 13:06
  • 2
    @Michael Perhaps you should check the definition of 'correctness'? – Ben Cottrell Jan 03 '18 at 13:09
  • 2
    @Michael in case it helps, the first definition from googling "correctness" shows up with *"the quality or state of being free from error; accuracy"* - and I rather think that the specific issues of overflow and divide-by-zero are both about correctness – Ben Cottrell Jan 03 '18 at 13:16
  • @BenCottrell I'm curious as to whether you see any *other* reason the OP would ask whether division should be replaced with multiplication "when possible" ... Is multiplication inherently better than division for any other reason?? ... While I could certainly be wrong, *it does seem obvious* that this question was written with respect to performance optimization. – svidgen Jan 03 '18 at 18:02
  • 1
    @svidgen As I see it, the OP is looking to handle edge-cases where multiplication may yield a more correct result than division. i.e. when the old_value is zero division throws an error. Although multiplication still suffers from "overflow" in a different way, it may be that the OP isn't dealing with big enough numbers, but may be dealing with very small ones. Lastly, the interesting point raised by another answer about integer vs floating point division. On the other hand, I can't think of many scenarios where the difference in performance would ever be noticeable or measurable – Ben Cottrell Jan 04 '18 at 09:04
  • Another -1, the question makes no mention of optimization and this doesn't answer what it does discuss. This is a FGITW answer twitch-reacting to a presumed question that doesn't exist. – Alex Celeste Jan 04 '18 at 12:28
  • @Leushenko Well, you're 1/2 right. The OP didn't explicitly mention *performance* optimizations. But, this wasn't a FGITW answer, by any means. I interpreted the two bullets as ancillary causes, not primary optimizations -- it seemed a bit too ridiculous to me, honestly. ... Either way you slice it though, they're optimizations. Whether they're *performance* optimizations or exception or edge-case minimization optimizations, they're a form of optimization to be avoided addressing prematurely -- until the full nature of the problem is known. ... I'll update my answer to include both. – svidgen Jan 04 '18 at 15:08
  • Are you seriously suggesting that a) avoiding **logic errors** like overflow (what the OP actually mentions) is an "optimization", or b) that it should be avoided *because* it is potentially also an optimization regardless of the fact that OP is interested in correctness? Either of these is nonsense. The question is about using a pattern to reduce the chance of **errors**. Nothing to do with optimization on any level. – Alex Celeste Jan 04 '18 at 15:50
  • 2
    @Leushenko a) no. b) also no. ... I certainly read the question initially as being primarily motivated by the potential performance optimization -- as is *historically* why folks wanted to replace division with multiplication. Certainly, on an second read, I see that performance may not be the motivator behind the OP's desired refactor. But, logical correctness is *never* the reason for a refactor. Refactoring is LOC/design optimization. Logical correctness is cause to *hesitate* over a refactor. Refactoring is not a debugging strategy; to the contrary, it assumes already-working code. – svidgen Jan 04 '18 at 16:21
  • @svidgen, I agree. Changing code when debugging is not "refactoring" (i.e. reconfiguring the structure or placement of factors whilst maintaining the equivalence of their overall relationship to each other), it is *re-writing*. One may first refactor in order to identify and isolate problem code, or to merely prepare for the introduction of new factors (or the removal of old) in a clean and localised way, but any reconfiguration of code is then followed by actual changes to it. – Steve Jan 05 '18 at 14:59
13

Use whichever one is less buggy and makes more logical sense.

Division by a variable is usually a bad idea anyway, since the divisor can often be zero.
Division by a constant is usually just dependent on what the logical meaning is.

Here are some examples to show that it depends on the situation:

Division good:

if ((ptr2 - ptr1) >= n / 3)  // good: check if length of subarray is at least n/3
    ...

Multiplication bad:

if ((ptr2 - ptr1) * 3 >= n)  // bad: confusing!! what is the intention of this code?
    ...

Multiplication good:

if (j - i >= 2 * min_length)  // good: obviously checking for a minimum length
    ...

Division bad:

if ((j - i) / 2 >= min_length)  // bad: confusing!! what is the intention of this code?
    ...

Multiplication good:

if (new_length >= old_length * 1.5)  // good: is the new size at least 50% bigger?
    ...

Division bad:

if (new_length / old_length >= 1.5)  // bad: BUGGY!! will fail if old_length = 0!
    ...
user541686
  • 2
    I agree that it depends on context, but your first two pairs of examples are extremely poor. I wouldn't prefer one over the other in either case. – Michael Jan 03 '18 at 11:53
  • 6
    @Michael: Uhm... you find `(ptr2 - ptr1) * 3 >= n` to be just as easy to understand as the expression `ptr2 - ptr1 >= n / 3`? It doesn't make your brain trip over and get back up trying to decipher the meaning of tripling the difference between two pointers? If it's really obvious to you and your team, then more power to you I guess; I must just be in the slow minority. – user541686 Jan 03 '18 at 12:56
  • 2
    A variable called `n` and an arbitrary number 3 are confusing in both cases but, replaced with reasonable names, no I don't find either one more confusing than the other. – Michael Jan 03 '18 at 12:59
  • 1
    These examples aren't really poor.. definitely not 'extremely poor' - even if you sub in 'reasonable names' they still make less sense when you swap them for the bad cases. If I were new to a project I would much rather see the 'good' cases listed in this answer when I went to fix some production code. – John-M Jan 04 '18 at 07:31
3

Doing anything “whenever possible” is very rarely a good idea.

Your number one priority should be correctness, followed by readability and maintainability. Blindly replacing division with multiplication whenever possible will often fail in the correctness department, sometimes only in rare and therefore hard to find cases.

Do what’s correct and most readable. If you have solid evidence that writing code in the most readable way causes a performance problem, then you can consider changing it. Care, maths and code reviews are your friends.

gnasher729
1

Regarding the readability of the code, I think multiplication is actually more readable in some cases. For example, if you must check whether newValue has increased by 5 percent or more over oldValue, then 1.05 * oldValue is the threshold against which to test newValue, and it is natural to write

    if (newValue >= 1.05 * oldValue)

But beware of negative numbers when you refactor things this way (either replacing the division with multiplication, or replacing multiplication with division). The two conditions you considered are equivalent if oldValue is guaranteed not to be negative; but suppose newValue is actually -13.5 and oldValue is -10.1. Then

newValue/oldValue >= 1.05

evaluates to true, but

newValue >= 1.05 * oldValue

evaluates to false.
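
A quick sketch of that counterexample in C# (matching the syntax of the earlier answers, with the values above):

    double newValue = -13.5;
    double oldValue = -10.1;

    Console.WriteLine(newValue / oldValue >= 1.05);  // True:  1.336... >= 1.05
    Console.WriteLine(newValue >= 1.05 * oldValue);  // False: -13.5 >= -10.605 does not hold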

David K
1

Note the famous paper Division by Invariant Integers using Multiplication.

If the divisor is an invariant integer, the compiler actually emits a multiplication, not a division. This happens even for non-power-of-2 values. Divisions by powers of 2 obviously use bit shifts and are therefore even faster.
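
To illustrate the idea (a hand-rolled sketch, not the code any particular compiler emits), here is division of a non-negative 32-bit integer by the invariant 3 expressed as a multiplication and a shift; 0x55555556 is ceil(2^32 / 3):

    // Valid for x >= 0: the 64-bit product shifted right by 32 gives floor(x / 3).
    static int DivideBy3(int x)
    {
        return (int)((x * 0x55555556L) >> 32);
    }

    // e.g. DivideBy3(100) == 33, DivideBy3(2147483647) == 715827882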

However, for non-invariant integers, it is your responsibility to optimize the code. Before optimizing, be sure that you're really optimizing a genuine bottleneck and that correctness is not sacrificed. Beware of integer overflow.

I care about micro-optimization, so I would probably take a look at the optimization possibilities.

Think also about the architectures your code runs on. ARM in particular has traditionally had very slow division; many ARM cores have no hardware division instruction at all, so you need to call a library function to divide.

Also, on 32-bit architectures, 64-bit division is not optimized, as I found out.

juhist
1

Picking up on your point 2, it will indeed prevent overflow for a very small oldValue. However if SOME_CONSTANT is also very small then your alternative method will end up with underflow, where the value cannot be accurately represented.

And conversely, what happens if oldValue is very large? You have the same problems, just the opposite way round.

If you want to avoid (or minimise) the risk of overflow/underflow, the best way is to check whether newValue is closer in magnitude to oldValue or to SOME_CONSTANT. You can then choose the appropriate divide operation, either

    if(newValue / oldValue >= SOME_CONSTANT)

or

    if(newValue / SOME_CONSTANT >= oldValue)

and the result will be most accurate.
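
One way to sketch that selection (a naive reading of "closer in magnitude" as the smaller absolute difference; the helper name and double arithmetic are assumptions for illustration):

    static bool RatioAtLeast(double newValue, double oldValue, double someConstant)
    {
        // Divide by whichever comparand is closer in magnitude to newValue,
        // so the quotient stays nearer to 1 and overflow/underflow risk is reduced.
        bool oldValueIsCloser =
            Math.Abs(Math.Abs(newValue) - Math.Abs(oldValue)) <=
            Math.Abs(Math.Abs(newValue) - Math.Abs(someConstant));

        return oldValueIsCloser
            ? newValue / oldValue >= someConstant
            : newValue / someConstant >= oldValue;
    }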

For divide-by-zero, in my experience this is almost never appropriate to be "solved" in the maths. If you have a divide-by-zero in your continuous checks, then almost certainly you have a situation which requires some analysis and any calculations based on this data are meaningless. An explicit divide-by-zero check is almost always the appropriate move. (Note that I do say "almost" in here, because I don't claim to be infallible. I'll just note that I don't remember seeing a good reason for this in 20 years of writing embedded software, and move on.)

However if you have a real risk of overflow/underflow in your application then this probably isn't the right solution. More likely, you should generally check the numerical stability of your algorithm, or perhaps simply move to a higher precision representation.

And if you don't have a proven risk of overflow/underflow, then you're worrying about nothing. That does mean you literally need to prove you need it, with numbers, in comments next to the code which explain to a maintainer why it's necessary. As a principal engineer reviewing other people's code, if I ran into someone taking extra effort over this, I personally would not accept anything less. This is kind of the opposite of premature optimisation, but it would generally have the same root cause - obsession with detail which makes no functional difference.

Graham
0

Encapsulate the conditional arithmetic in meaningful methods and properties. Not only will good naming tell you what "A/B" means; parameter checking and error handling can neatly hide in there too.

Importantly, as these methods are composed into more complex logic, the extrinsic complexity stays very manageable.

I'd say multiplication substitution seems a reasonable solution because the problem is ill-defined.

radarbob
0

I think it is not a good idea to replace multiplications with divisions, because the CPU's ALU (Arithmetic Logic Unit) executes algorithms even though they are implemented in hardware. More sophisticated techniques are available in newer processors. Generally, processors strive to parallelize bit-pair operations in order to minimize the clock cycles required. Multiplication algorithms can be parallelized quite effectively (though more transistors are required). Division algorithms cannot be parallelized as efficiently. The most efficient division algorithms are quite complex, and they generally require more clock cycles per bit.

Ishan Shah