17

I have come across this programming idiom recently:

const float Zero = 0.0;

which is then used in comparisons:

if (x > Zero) {..}

Can anyone explain if this is really any more efficient or readable or maintainable than:

if (x > 0.0) {..}

NOTE: I can think of other reasons to define this constant, I'm just wondering about its use in this context.

Shirish11
NWS
  • The developers are planning on porting the code to a universe where the laws of mathematics are different? – vaughandroid Jul 03 '12 at 10:47
  • Seriously though, I can't think of a single good reason for this. The only explanations I can come up with are over-zealous coding standards, or some devs who have heard "magic numbers are bad" but don't understand why (or what would constitute a magic number)... – vaughandroid Jul 03 '12 at 10:49
  • @Baqueta - An alternate universe? I think they already lived there! As for magic numbers, I agree, however I use the rule of thumb that Everything _Except_ 0 & 1 should be made constant. – NWS Jul 03 '12 at 10:56
  • If `x` has type `float`, then `x > 0.0` forces promotion to `double`, which might be less efficient. That's not a good reason for using a named constant though, just for making sure your constants have the correct type (e.g. `0.0f`, `float(0)` or `decltype(x)(0)`). – Mike Seymour Jul 03 '12 at 14:32
  • That's as hilarious as my tutor in a C++ project claiming I should not `#define MAGIC_NUM 13.37` but rather `static const float MAGIC_NUM = 13.37` because it would be "type-safe". (I guess that's also part of the Google Style Guide.) And therefore we do semantically wrong stuff? O tempora, o mores! – Jo So Jul 03 '12 at 14:34
  • @JoSo: to be fair, the type of `13.37` isn't `float`, it's `double`. So *if* you wanted a `float` then it's conceivable your tutor was correct. In some contexts (e.g. assignment to a float) `13.37` will be implicitly converted to the `float` that you wanted, and in other contexts (e.g. template type deduction) it won't be, whereas the `static const float` always starts as the type you intended. Hence, more type-safe. Mind you, so would be `13.37f`! There are other reasons for avoiding the macro than "type-safety", though, so it's just as likely the tutor was giving you a poor argument. – Steve Jessop Jan 10 '18 at 02:12
  • @Steve Jessop, there is a semantic difference in that the `static const` version reads from memory, which is way slower than an immediate load. I'm not sure under which circumstances compilers are allowed to optimize out the load. But I think it's cleaner to not do it this way from the start. Also, the `static const` version has semantically a memory location (= identity) attached to it. Even though it's technically `const` and might even be protected by the MMU, it feels just wrong to me. For example, you can take a pointer from it, which doesn't make sense at all for a VALUE definition. – Jo So Jan 10 '18 at 20:33
  • Concerning double vs float, I always thought it was misguided that we should make a distinction on the number literal level. (If I remember correctly, MSVC takes the difference between `13.37` and `13.37f` very seriously by default, but I don't like that). Anyway why not define the macro like `((float) 13.37)`? – Jo So Jan 10 '18 at 20:34
  • @JoSo "I'm not sure under which circumstances compilers are allowed to optimize out the load" As the variable is static (and thus never used outside the translation unit it was declared in), most circumstances. It might even be able to optimize calculating the memory location of the variable, if for some reason you need a pointer to the constant. – JAB Jan 10 '18 at 21:09
  • @JAB, yep, but that's still a bit vague, and already complicated. I'd prefer to just not think about it. The semantics of macros are much more suitable to what we want to achieve (give a descriptive name to a particular numeric value), IMHO. Concerning pointers to constants, that makes just no sense to me. It sounds like a contradiction. (It might make sense to take the address of a very large constant value, like an array, in some circumstances where the architecture isn't thought through and one takes a quick and dirty approach. But meaningful constants tend to be primitive values). – Jo So Jan 10 '18 at 22:27
  • `Zero` is easier to type than `0.0`, at least for me. – GregT Jan 16 '18 at 09:43
  • @Jo So: "I'm not sure under which circumstances" -- all circumstances, since compilers are allowed to assume that objects defined as `const` but not `volatile` don't change value. It's not even *necessarily* the case that loading from memory is slower than an immediate load, which is why on some architectures you'd see "constant pools", where the compiler has taken code like `long long foo = 12345678900;` and implemented it with a memory load! I remember that on ARM, albeit a long time ago. – Steve Jessop Mar 02 '18 at 12:29
  • @SteveJessop, where is that specified? I was under the impression that it's mainly a syntactic discipline that prevents writes *through that pointer*. For example, many functions receive const pointer arguments to data that is mutable but should not be written by those functions. Even more, it's totally valid to pass a non-const pointer as a const-pointer argument (it's implicitly constified). What if such a function calls another function that mutates that same data? – Jo So Mar 02 '18 at 16:05
  • @JoSo: a const-qualified pointer is not the same as the name of an object defined as const. You're quite right that the compiler cannot assume that the referand of a const-qualified pointer never changes. I don't remember my way around the C++ standard as well as I used to, but the relevant text says that it's undefined behavior to write to any of the bytes that constitute a const-qualified object. Since it's UB, the compiler can proceed on the assumption that it doesn't happen. – Steve Jessop Mar 02 '18 at 17:06
  • @JoSo: Consider also that volatile reads/writes are observable behaviour, whereas others are not and hence can be omitted under the as-if rule once you know what the result will be. Which is why I had to say "`const` but not `volatile`". – Steve Jessop Mar 02 '18 at 17:24
  • @SteveJessop: yes, I agree with that. Sorry, I completely misread your previous comment. Also, const global objects end up in .rodata. (I only have experience with linux amd64) – Jo So Mar 03 '18 at 10:22

7 Answers

30

Possible reasons are caching, naming or forcing type

Caching (not applicable)

You want to avoid the cost of creating an object during the act of comparison. In Java an example would be

BigDecimal zero = new BigDecimal("0.0");

this involves a fairly heavy creation process and is better served using the provided static constant:

BigDecimal zero = BigDecimal.ZERO;

This allows comparisons without incurring the repeated cost of creation, since BigDecimal.ZERO is created just once, when the class is initialised.

In the case you describe, a primitive is doing the same job, so a named constant is largely redundant in terms of caching and performance.

Naming (unlikely)

The original developer is attempting to provide a uniform naming convention for common values throughout the system. This has some merit, especially with uncommon values, but for something as basic as zero it is only worth it in the caching case described above.

Forcing type (most likely)

The original developer is attempting to force a particular primitive type, to ensure that comparisons are performed in the correct type and possibly at a particular scale (number of decimal places). This is OK, but the simple name "Zero" probably carries too little detail for this use case; something like ZERO_1DP would express the intent more clearly.
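As a minimal C++ sketch of the forcing-type idea (the function name here is illustrative): the bare literal 0.0 has type double, so `x > 0.0` promotes a float x to double before comparing, whereas a constant declared as float keeps the comparison entirely in float.

const float Zero = 0.0f;

bool is_positive(float x) {
    return x > Zero;    // float compared with float; no promotion to double
}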

Gary
  • +1 for forcing type. I'll add that in languages like C++ that allow operator overloading, defining a constant together with a `typedef` keeps the type of the value in exactly one place and enables changing it without having to alter the rest of the code. – Blrfl Jul 03 '12 at 11:32
  • Forcing type is most likely _not_ what they were trying for, however this is the best explanation of _why_ it could be done! – NWS Jul 03 '12 at 12:46
  • For forcing type, I'd probably rather just use `0.0f`. – Svish Jul 03 '12 at 13:17
  • Forcing type can sometimes be useful in vb.net, where performing bitwise operators on bytes yields a byte result. Saying `byteVar1 = byteVar2 Or CB128` seems a little nicer than `byteVar1 = byteVar2 Or CByte(128)`. Of course, having a proper numeric suffix for bytes would be better yet. Since C# promotes the operands of bitwise operators to `int` even when the result would be guaranteed to fit in a `byte`, the issue isn't so relevant there. – supercat Jul 11 '12 at 16:56
  • I am not sure about naming the constant Zero for '0', but sometimes a named constant helps readability; for example, a constant like ROOT_TYPE_ID = 0 lets you write a statement like if (id != ROOT_TYPE_ID) {..} – Venkatesh Laguduva Dec 04 '18 at 16:54
7

This might make sense since it explicitly defines Zero to be of type float.

At least in C and C++ the value 0.0 is of type double, while the equivalent float is 0.0f. So, assuming the x you compare against is also always a float, saying

x > 0.0

will actually promote x to double to match the type of 0.0, which might lead to issues (with equality tests especially). The comparison without conversion would of course be

x > 0.0f

which does the same as

float Zero = 0.0; // double 0.0 converted to float  
x > Zero

Nevertheless, I think it would be much more useful to enable conversion warnings in the compiler instead of having users write awkward code.
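For illustration, a small sketch of the three comparisons side by side (the function names are made up, and GCC/Clang are assumed); a flag such as -Wdouble-promotion asks the compiler to point out the implicit float-to-double promotion instead of hiding it behind a named constant:

bool gt_double(float x) { return x > 0.0;  }   // x is promoted to double to match 0.0
bool gt_float (float x) { return x > 0.0f; }   // float vs float, no conversion
const float Zero = 0.0;                        // the double literal is converted to float once, here
bool gt_named (float x) { return x > Zero; }   // behaves like gt_float at the comparison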

Benjamin Bannier
7

It's Because of "Tooling Nagging"

A possible reason I don't see listed here is that a lot of quality tools flag the use of magic numbers. It's often a bad practice to have magic numbers thrown into an algorithm without making them clearly visible for change later, especially if they are duplicated in multiple places in the code.

So, while these tools are right to flag such issues, they often generate false positives for situations where the values are harmless and unlikely ever to change, or are just initialization values.

And when that happens, sometimes you face the choice of:

  • marking them as false positives, if the tool allows it (usually with a specially formatted comment, which is annoying for people NOT using the tool)
  • or extracting these values to constants, whether it matters or not.

About Performance

It depends on the language I guess, but this is fairly common in Java and has no performance impact, as values are inlined at compile time if they are real constants (`static final`). It wouldn't have an impact in C or C++ either, whether they are declared as constants or as pre-processor macros.
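As a rough C++ illustration of that last point (a sketch, not taken from any particular code base), both of the usual spellings satisfy a "no magic numbers" check and compile to the same comparison:

#define ZERO_MACRO 0.0f                 // pre-processor macro form
static const float Zero = 0.0f;         // typed constant form (constexpr in C++11 and later)

bool is_positive_macro(float x) { return x > ZERO_MACRO; }   // no runtime penalty versus x > 0.0f
bool is_positive_const(float x) { return x > Zero; }         // likewise folded by any optimising compiler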

haylem
1

First of all, here Zero is defined as float, not int. Of course, this doesn't affect anything in the comparison, but in other cases where this constant is used, it might make a difference.
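One place where the float type of the constant can matter outside a comparison (an illustrative sketch; these overloads are hypothetical, not from the original code) is overload resolution:

#include <cstdio>

void report(int n)   { std::printf("int overload: %d\n", n); }
void report(float x) { std::printf("float overload: %f\n", x); }

const float Zero = 0.0f;

int main() {
    report(0);      // resolves to report(int)
    report(Zero);   // resolves to report(float)
}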

I see no other reason why Zero is declared a constant here. It's just a coding style, and it's better to follow that style if it is used everywhere else in the program.

superM
1

It's almost certainly exactly as efficient during execution (unless your compiler is very primitive) and very slightly less efficient during compilation.

As to whether that's more readable than x > 0... remember that there are people who honestly, genuinely, think that COBOL was a great idea and a pleasure to work with - and then there are people who think exactly the same about C. (Rumor has it that there even exist some programmers with the same opinion about C++!) In other words, you are not going to get general agreement on this point, and it's probably not worth fighting over.

Kilian Foth
0

[Is] this really any more efficient or readable or maintainable than:

if (x > 0.0) {..}

If you were writing generic (i.e. non-type-specific) code, then very possibly. A zero() function could apply to any algebraic type, or any type that is a group w.r.t. addition. It could be an integer, it could be a floating-point value, it could even be a function if your variable is, say, itself a function within some linear space (e.g. x is a linear function of the form z -> a_x * z + b_x); zero() would then provide the function whose a and b are both the zero() of the underlying type.

So you would expect such code in, say, C++ possibly (although a zero() is not very common AFAIK), or in Julia, and maybe other languages.
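A minimal sketch of what such generic code could look like in C++ (the zero() function and the Linear type here are illustrative, not a standard API):

template <typename T>
T zero() { return T(0); }               // works for any built-in arithmetic type

// A linear function z -> a*z + b, as in the example above.
struct Linear {
    double a, b;
    double operator()(double z) const { return a * z + b; }
};

// Specialisation: the zero of this function space has both coefficients equal to zero.
template <>
Linear zero<Linear>() { return Linear{ zero<double>(), zero<double>() }; }

Generic code can then obtain a neutral element as zero<T>() without caring whether T is int, float, or the Linear type above.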

einpoklum
-1

Possible reason: Documentation

Using a named variable shows clear intent and explicitly states that this is indeed what was meant by the one implementing it. E.g.

bool IsFalling => Speed.y < 0f;
// vs
bool IsFalling => Speed.y < Zero;

The latter should prevent anyone reading the code (possibly even the one who wrote it) from even having to start to wonder whether the number is correct or if maybe it was mistyped and should instead be 10f or something.

If this value is used in multiple different classes it might be worth extracting it to a new static class which would then also help avoid ambiguity between different types, e.g. Float.Zero, Double.Zero, Int.Zero, etc.

RC_dev