There is a colleague of mine who constantly writes:
if (someBool == true)
It drives me up the wall! Should I make a big deal of it or just drop it?
It's only redundant code, not life or death. However....
If it's happening a lot, it could be a problem with how someBool is being named. A good name can go a long way towards eliminating the need for the == true:
if(IsSomeCondition)
or
if(hasCondition)
or
if(somethingExists)
for example.
When I see someBool == true, I can't help but feel like the programmer hasn't internalized the idea of evaluation, which is a pretty fundamental deficiency.
However, my perspective is skewed because I spent several summers in college teaching programming to kids, who frequently wrote expressions like this because they genuinely hadn't mastered the mental exercise of evaluating an expression. Once they grokked the concept, the redundancy became obvious.
For an otherwise competent professional programmer, this is probably not the case. It's probably just a bad habit they developed in their early days of programming and never quite shook. But it would still scare me a bit if it was the first thing I saw someone do in an interview.
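To make the naming point concrete, here's a small hypothetical sketch (the queue and the names are invented for illustration, not from the question):

```python
# The same condition with a vague name versus a predicate-style name.
# Only the name changes; the logic is identical.
queue = []

some_bool = len(queue) == 0
if some_bool == True:          # the == True creeps in as a readability crutch
    status = "nothing to do"

queue_is_empty = len(queue) == 0
if queue_is_empty:             # reads as a sentence; no crutch needed
    status = "nothing to do"
```

With a predicate-style name, the bare `if` already reads like English, and the urge to add `== true` tends to disappear.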
That drives me crazy too, but I'd say mention the redundancy to them in a constructive way and then drop it even if they don't concur.
Alternatively you could try this approach:
You: Can you start using the following code convention for evaluating Booleans?
if ((someBool == true) && (true == true)) {}
Them: Why would we do that? The second half of that statement is redundant, it would always evaluate to true.
You: By George, you are right. Silly me. Let's just go with a version without all the redundancy then. How about?
if (someBool) {}
I think that, if something so trivial is your biggest problem with your co-workers, you should consider yourself pretty lucky.
You should definitely stop this bad habit. Gently...
It's easy to forget to write the double equal signs, turning the code into:
if (someBool = true)
In C#, for example, this will only produce a warning, not an error. So unless you treat warnings as errors, the code will run, assign true to the variable, and always enter the condition.
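For contrast, some languages rule this typo out entirely. In Python, assignment is a statement rather than an expression, so the slip can't even parse — a quick sketch:

```python
# Python rejects assignment inside a condition at parse time, so the
# = / == slip described above cannot silently run there.
try:
    compile("if some_bool = True:\n    pass", "<example>", "exec")
    caught = False
except SyntaxError:
    caught = True

print("typo rejected:", caught)
```

That's one reason the `== true` habit is less dangerous in some languages than others — the worst-case failure mode simply doesn't exist.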
I agree with you, but I'm going to play devil's advocate here:
Depending on the language and the variable's name, adding == true can be appropriate.
Consider the following situation in a language with static typing and type coercion for integral types:
if (actionsAllowed){
//do actions, since they are allowed
//...
}
Someone reading this section of the code might not realize immediately that actionsAllowed is a boolean variable - it could also be an integer, representing the number of allowed actions. By adding == true, it becomes clear that actionsAllowed is a boolean, and not an integer being coerced to a boolean:
if (actionsAllowed == true){
//do actions, since they are allowed
//...
}
What about nullable bools?
bool? MyFlag = null;
if (MyFlag == true)
;
if (MyFlag) // error, doesn't compile!
;
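The same tristate issue shows up outside C#. A rough Python analogue using Optional[bool] (the function name here is made up for illustration):

```python
from typing import Optional

def is_accepted(flag: Optional[bool]) -> bool:
    # With three possible states (True, False, None), an explicit
    # comparison is the concise way to treat None like False.
    return flag is True

print(is_accepted(True))   # True
print(is_accepted(False))  # False
print(is_accepted(None))   # False
```

Unlike C#, Python would happily accept a bare `if flag:` here (None is falsy), but the explicit comparison makes the three-state intent visible to the reader.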
Typically, you don't want to make a big deal out of coding conventions unless said convention is somehow impeding the project in a significant way. I've seen many a heated argument escalate over things as small as code regions and leading underscores.
That being said, I see no issue with adding == true to a conditional statement. As a matter of fact, I am in the habit of using == false as opposed to a leading exclamation point when testing for negative conditions. I think it's more readable.
If there is an established convention, I say follow it unless there is reason to change. However, it's not really worth raising a big stink about.
Ack. I'm that guy. The shame, the shame. It's how I learned, and it's how I "auto-format" in my head. The only time I use Joel's preferred syntax is when the bool variable has a verb prefix like "is." I need the verb, be it "is," "can," "did," or else I need the == to provide the verb "equals." I might never break that habit, so I'll understand if you don't say 'hello' to me on the street.
Reminds me of "boolean madness" code, like this:
if(someBool == true)
otherBool = false;
else
otherBool = true;
Instead of:
otherBool = !someBool;
Personally, I strongly dislike the way you say "not" in C-based languages. That little exclamation mark is too easy to overlook.
Hence I write it out in full:
if (someCondition == false) {
After reading that for a while, I want symmetry too with
if (someCondition == true) {
So consider it an artifact of C using ! instead of not.
It depends on the language, but it's usually a bad idea...
In C, never do this. It's too easy to end up with a value that is truthy (non-zero) but not equal to the single value defined as "true".
In Ruby, do this only if you are absolutely certain that you want to fail on everything except Boolean true.
In C++ and other static languages with a bool type, it's redundant, and it can lead to programming errors when you mistype = instead of ==, or to promotion errors as mentioned in the comments.
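The C hazard translates to any language with truthiness; here's a small Python sketch of the same trap:

```python
# A value can be truthy without being equal to True, so adding == True
# silently changes which branch runs.
x = 2                      # truthy, but not equal to True (True == 1)

entered_plain = bool(x)    # what a bare `if x:` tests
entered_equal = (x == True)

print(entered_plain)   # True
print(entered_equal)   # False
```

The two conditions disagree for every truthy value that isn't exactly 1 — which is precisely the "not false, but not equal to true" situation described above.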
I prefer
if (bVal)
or
if (!bVal)
too, but I'm afraid that bringing it up would piss people off, so my advice is to forget it. Sorry!
You should tell him that he is doing it wrong. It's:
if (true == someBool) {
}
That way, if he ever forgets one =, his writing style turns the mistake into a compile error instead of a silent assignment.
You think that's bad? How about:
if(someCondition) {
return true;
} else {
return false;
}
How about
if (x == "true")
WHY IS THAT A STRING?!
I write code like that!
Here is why:
"if bla == true" reads like a sentence, where as "if bla" does not in many cases. It just sounds wrong, when READING actual code.
Also the compiler warns about assignments in if blocks, so there is really no danger in using == true. (confusing it with =)
Also do guys that don't write "== true", use that "!()" for "== false"? I find it really ugly. And if you use "== false", it is only very consistent to also use "== true", instead of having two distinct ways of verifying truth.
Generally I will omit the '== true', but it's hardly worth spending even a minute discussing it unless you have it included in your team's coding standards.
While I agree as a mainly C# developer, I can't say this is always the case. For instance, in JavaScript, the === operator compares without type coercion. So assuming var x = 3, then:
if(x) --> true
while
if (x === true) --> false
I guess that's different than == since even in JS I wouldn't use if(x == true) but just something to think about.
This sort of touches on another point though which has come up in my office:
bool b = false;
In C#, a field declared bool b; would already be initialized to false. However, it is more explicit to write the above line, and the redundant assignment should be stripped out by the compiler during optimization anyway.
So I guess my point is it's not always so obvious what is and isn't good practice and a lot of it boils down to preference as well as language features/quirks.
Ah yes, but what if the variable is nullable (bool?)?
Some languages (C#) will then require a cast or a comparison with true:
bool? isAccepted = true;
if((bool)isAccepted)
{...}
if(isAccepted == true)
{...}
Remember you are working as part of a team, so you need to work these things out together. "Plays nice with others" is still an important personality trait even after elementary school :)
The young know the rules, but the old know the exceptions ;)
In the latest C#, if you are dealing with a nullable bool, then you have to:
bool? x = null;
bool? y = true;
bool? z = false;
if (x == true || y == true || z == true) {
// That was the only way that is reasonably readable that I know of
// to accomplish this expression.
}
If tristate is not a problem, then there usually should not be a reason to compare something to true/True. However, in Python and several other languages, such as C/C++, you can perform an if on a non-bool expression. These languages have special rules for interpreting integers, pointers, lists, etc. as either true or false. Sometimes you do not want that. For example, in this Python snippet:
x = True
y = 'abcdef'
z1 = x and y
z2 = (x == True) and (y == True)
Here z1 is truthy (x and y evaluates to 'abcdef'), but z2 is False. Now, the Clojure language approaches this in yet another way: its and does not necessarily evaluate to a bool, but if can handle that.
Regardless of the language, any time you find yourself comparing something to True or False, it is probably worth commenting.
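As a quick reference, Python's interpretation rules mentioned above can be sketched like this (the falsy set is defined by the language's data model):

```python
# Empty containers, zero, and None are falsy; everything else is truthy.
# Note that being truthy is not the same as being equal to True.
samples = [0, 1, 2, "", "abc", [], [1], None]
for value in samples:
    print(repr(value), bool(value), value == True)
```

Only the value 1 satisfies both columns, which is exactly why `x == True` and `if x:` diverge for everything else.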
Such coding would have rubbed me the wrong way before too. Although your example identifier is named "someBool", using that coding style inadvertently on a variable which wasn't guaranteed to be a boolean could have unintended behavior. If the value of "someBool" isn't exactly "true" or "false", the result will be false.
I encountered a very subtle bug this past year caused by such a coding style, which was difficult to identify because one's eyes gloss over such constructs. You'd think, "How could it be wrong?" The same holds true for such well-understood expressions as "(var)" or "(!var)", which you read or write without verifying their behavior.
So I've introduced a couple of coding standards to reduce the existence of such bugs in the codebase and the likelihood that such subtle bugs will accidentally creep in sometime in the future.
By cleaning up code not conforming to the new style, I've identified and corrected a few more instances of such subtle bugs.
Had to use this all the time in ActionScript 2 (admittedly now a dead language), because:
var something:Boolean = org.black.box.unknown.ThisMightNotExistYet();
// this is slightly ambiguous
if(something)
{
}
// because if you allow undefined you might actually mean
if(false != something)
{
}
// which can mean something different than
if(true == something)
{
}
// and comparing against what you actually MEAN takes more script:
if(undefined != value && value)
{
}
So it was almost always best to be specific.
I agree. It's a redundant construction, especially in strongly typed languages.
To add another misuse of booleans, I have found this kind of construction a bunch of times in JavaScript (especially in spaghetti-like monster functions of 100+ lines):
//create the variable, not initializing it
var flag;
//...
//assign a value to the variable, for example
flag=$("mycheckbox").selected;
//...
//and at the moment of use it:
if(flag!=true) {
//code to execute when the checkbox is unchecked
}
It seems that, due to the lack of strict typing in this language, some programmers prefer not to have to mess around with the false|undefined values.
I have a colleague who will have some code like this:
if(something == true)
And then, for some sort of test/debugging, he will wish to not call this block so he'll change it to:
if(something == true && false)
And then occasionally he'll change it to:
if(false)
The worst thing is, this type of debugging has rubbed off on me on occasion and is really bad for other developers to read!
I'd say consistency is king in a codebase, so you should use the style that is most used in your organization. Try to make your preferred style part of the official company coding guidelines. If it's already in the guidelines, just follow them.
(That being said, it would also really annoy me - but probably not enough to make a big deal out of it).
I prefer not writing the extra == true, but sometimes I include it by accident and don't notice. The person may simply not have noticed either. When I reread my code and spot a redundant == true, I remove it; when I miss one, I'd happily welcome someone pointing it out.
It's a code smell for a novice developer: they have yet to master/internalize the boolean condition, and feel they must explicitly compare it to true or false.
In JavaScript, I prefer if(bool) when I have sufficient documentation explaining what the method expects... but when a "true" evaluation can mean the variable exists, equals the number 1, or equals the boolean true, it's tougher.
Stricter typing in JS would also mean less flexibility. Tough call. In strongly typed languages, lay down the hammer. Otherwise, ask why he does it.
Actually, there's your answer: ask him why. Then correct him.
Use it as an opportunity to teach him. He will respect you more if you do. Wouldn't you want someone to do the same for you?
Research shows that code with many redundancies correlates strongly with errors. Here's an interesting paper on the matter (PDF alert).
Yes, this is a kind of programmer mindset: they assume an "if" condition has no existence without an expression using symbols like ==, <, >. I've seen people really struggle with things like returning a boolean from a function:
if(RowsAffected > 0) { return true; } else { return false; }
Programmers need to watch their code from the reader's angle. :)
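The pattern above simplifies to returning the boolean expression directly; a sketch in Python (the function names here are invented to mirror the snippet):

```python
# The roundabout version from the snippet above, and its direct equivalent.
def rows_were_affected_verbose(rows_affected: int) -> bool:
    if rows_affected > 0:
        return True
    else:
        return False

def rows_were_affected(rows_affected: int) -> bool:
    # rows_affected > 0 is already a boolean expression; return it as-is.
    return rows_affected > 0

print(rows_were_affected_verbose(3) == rows_were_affected(3))  # True
print(rows_were_affected_verbose(0) == rows_were_affected(0))  # True
```

The two functions agree on every input; the second just skips the detour through an if/else.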
Yes, it's a big deal, because the name of the boolean should already tell you that it is a boolean. Say you have this piece of code:
bool arrayIsEmpty = array.length == 0;
if (arrayIsEmpty) { ... }
It is already easy to read, so you won't need == true, even for a programmer with less sense of evaluation.
Programming conventions are not necessarily "right" or "wrong". They exist to give the program a familiar structure to those reading it, so that possible mistakes will stand out more clearly -- it is difficult to "see" a problem if everything is slightly wrong-looking to you.
The important thing is that conventions be clearly defined and agreed upon by all participants (even if the agreement is like "I agree to use this convention while I work here" :), otherwise it does more harm than good.
Of course, the particular case of comparing a boolean to true is plain wrong and to be avoided at all costs. ;-)
I write if(var == false) because after writing var I don't feel like backtracking to put a ! in front of it. I also like how == false makes it look clearer.
However, == true is just weird. I don't know if there is a point in making a big deal of it; I wouldn't, unless he is pushing others to use it or to make it a coding standard.
You can try approaching your colleague with the following example:
if ((x == 3) == true) {
...
}
He should say that the == true is redundant and that it should be rewritten as
if (x == 3) {
...
}
since x == 3 is already a boolean expression.
Now present him the someBool == true example, but slightly rewritten:
if ((someBool) == true) {
...
}
Since someBool is already a boolean, by comparison with the previous example it should now be clear that the == true is redundant in this example, too. Thus, it should be rewritten as:
if (someBool) {
...
}
This may get some of your knickers in a twist.
We have extension methods for .IsTrue and .IsFalse for the bool type.
So we write
if(someBool.IsTrue())
-or-
if(someBool.IsFalse())
Yep! And I love them. When reading code it just becomes more obvious to me.
Comparing booleans can actually cause major problems. I recently ran into some intern code that read:
if(foo == false && bar == 2) {...}
Now, in C-family languages == binds tighter than &&, so this parses as (foo == false) && (bar == 2) and is indeed equivalent to the simplified:
if(!foo && bar == 2) {...}
The trap is how close the comparison form sits to a one-character typo. Drop a single = and you get:
if(foo = false && bar == 2)
Because = has the lowest precedence, that parses as foo = (false && (bar == 2)): it overwrites foo with false, the bar == 2 check is short-circuited away entirely, and the branch becomes dead code - also known as:
if(false)
That extra thing we were supposed to be checking for, bar == 2, is totally gone. That is why I will never, ever approve a code review that contains someBool == [true|false].
(You can see how the reverse, == true mistyped with ||, would also cause problems: the condition would become always true.)
Hopefully, you are using a sensible language, like VB.NET with Option Strict On, which doesn't allow coercion of other types to Boolean inside an If statement. Then there's no need to ever add the "== true" (or "= True" in VB), since the coercion case can't happen. If you want to check for non-zero, then write that explicitly (i.e., "If x <> 0").
Funny because I always replace code like:
if(foo) {
With:
if(true == foo) {
if(false == foo) {
But I got this habit because most of the time the code I worked with didn't have clear variable names.
One issue in C is that there's a tradition of using values other than 0 and 1 for booleans.
typedef enum { WHATEVER = 1 << 4 } MyFlags;
if (flags & WHATEVER) {
}
or even relying on -1 being true.
The problem is say you're writing an API:
struct foo {
unsigned int bar : 1;
};
void myapi_foo_set_bar(struct foo *foo, int bar) {
foo->bar = bar;
}
Here, if bar isn't canonicalized to 0 or 1, only its lowest bit survives the assignment to the one-bit field, and the call breaks:
myapi_foo_set_bar(foo, flags & WHATEVER); /* broken */
You can find other ways non-canonical bools break as well, such as comparing them to TRUE or FALSE as in the question!
Anyway what I'm coming around to is that it often makes sense to canonicalize a bool:
foo->bar = bar != FALSE;
This is a C-specific weirdness, of course.
One can develop this habit when doing a lot of programming in WPF. It uses lots of bool? (nullable bool) variables, so you have to write either if (someBool == true) or if (someBool ?? false).
Consider yourself lucky, my half-monkey colleague writes
if (true)
I found it many times in his code. And that's not the worst thing I found. I found GOTOs. And we use C#. Now calm down and hold back your homicidal instincts.