It depends a lot (on the semantics of `-` and of `>`).
First, you did not mention the types of `a` and `b`. Let's assume for simplicity that they both have the same integral type (e.g. C `int` or `long`).
Then `a - b` might be erroneous (think of weird cases like overflow) or undefined behavior (think of pointer arithmetic: in some cases, computing the difference of unrelated pointers is UB). And in some programming languages, with some types, `a - b` compared with 0 could be well defined while `a > b` is not.
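To make the overflow case concrete, here is a minimal sketch (with values chosen deliberately to trigger it): for a signed C or C++ `int`, `a > b` can be perfectly well defined while `a - b` overflows, which is undefined behavior.

```cpp
#include <climits>

int main() {
    int a = INT_MAX;
    int b = -1;

    bool ok = (a > b);          // well defined: evaluates to true
    // bool bad = (a - b > 0);  // undefined behavior: INT_MAX - (-1)
                                // overflows int
    return ok ? 0 : 1;
}
```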
Sometimes, in C++, `a - b`, `a > b`, and `x > 0` would be three different user-defined operators of some user-defined class (imagine some bignum library) with different behavior and performance.
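For illustration, here is a hypothetical toy class (not any real bignum library) where the three operators are genuinely distinct, and the comparison is much cheaper than the subtraction:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical toy bignum showing that a - b, a > b and x > 0 can be
// three distinct user-defined operators with different cost.
struct BigNum {
    bool negative = false;
    std::vector<std::uint32_t> limbs;  // little-endian magnitude
};

// a > b: often decidable from sign and limb count alone, cheap.
bool operator>(const BigNum& a, const BigNum& b) {
    if (a.negative != b.negative) return b.negative;
    if (a.limbs.size() != b.limbs.size())
        return (a.limbs.size() > b.limbs.size()) != a.negative;
    for (std::size_t i = a.limbs.size(); i-- > 0;)
        if (a.limbs[i] != b.limbs[i])
            return (a.limbs[i] > b.limbs[i]) != a.negative;
    return false;
}

// a - b: must allocate and compute a whole new number (body omitted).
BigNum operator-(const BigNum& a, const BigNum& b);

// x > 0: a third, separate overload against plain int.
bool operator>(const BigNum& x, int rhs);
```

In such a class, rewriting `a - b > 0` as `a > b` changes which code runs, not merely how fast it runs.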
Also, `a` and `b` could have some weird types (perhaps matrices...), and comparing them could make sense in situations where computing their difference is unreliable or time consuming.
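As a hypothetical example (assuming lexicographic ordering on element sequences), a comparison can stop at the first differing element, while the difference has to allocate and compute everything:

```cpp
#include <cstddef>
#include <vector>

// Comparison can short-circuit at the first differing element...
bool greater(const std::vector<double>& a, const std::vector<double>& b) {
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i)
        if (a[i] != b[i]) return a[i] > b[i];
    return a.size() > b.size();
}

// ...while the difference touches (and allocates) every element.
// Assumes equal sizes, as a real matrix/vector type would enforce.
std::vector<double> difference(const std::vector<double>& a,
                               const std::vector<double>& b) {
    std::vector<double> d(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        d[i] = a[i] - b[i];
    return d;
}
```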
Read about the as-if rule, and the precise specification of your particular programming language (and its semantics).
Finally, in many cases `a - b > 0` is more readable than `a > b` (e.g. when `a` and `b` are time instants, their difference is a duration). So it really depends.
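For instance, with C++ `std::chrono` (just one way to make the point), the difference of two time points is a duration, and spelling that out can make the intent explicit:

```cpp
#include <chrono>

bool deadline_passed(std::chrono::steady_clock::time_point now,
                     std::chrono::steady_clock::time_point deadline) {
    using namespace std::chrono_literals;
    // Equivalent to now > deadline, but this phrasing says what we
    // mean: the elapsed duration since the deadline is positive.
    auto elapsed = now - deadline;
    return elapsed > 0s;
}
```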
But replacing `a - b > 0` by `a > b` (or vice versa, when useful), for integers, is a micro-optimization that most optimizing compilers would do better than you. So don't bother!
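You can check this yourself on the Compiler Explorer at https://godbolt.org/ (or with `gcc -O2 -S`): for a pair of functions like the sketch below, mainstream compilers typically emit the same comparison instruction for both, precisely because signed overflow is undefined and the as-if rule lets them treat the two forms as equivalent.

```cpp
// Expect identical machine code at -O2 for both (verify on your
// compiler; this is a typical outcome, not a guarantee).
bool via_subtraction(int a, int b) { return a - b > 0; }
bool via_comparison(int a, int b)  { return a > b; }
```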
> Is it a good practice as a coding habit?

Not in 2018, unless it improves readability. You should care a lot about making your code readable.
BTW, IMHO `0` is usually not a magic number; it is quite special. (In some rare cases it could be a magic number, but choosing 0 as the magic value is poor taste and error-prone...)