
I understand that 0 is false because math established that a long time ago and C established it in the programming world, as talked about here. However, other than following established conventions, is there any reason a new programming language shouldn't make 0 equal to true? It seems like it would solve a lot of problems, such as in situations where 0 is a valid return value of a function (like when returning an index).

For example, assume I'm working with a grid where a user can select rows:

function getSelectedRowIndex()
{
    // return the index of the selected row;
    // assume the first row is selected, so this returns 0
    return 0;
}

var selectedRowIndex = getSelectedRowIndex();  //returned 0

//the following will be evaluated as true, despite a valid row being selected
if (!selectedRowIndex)
{
    alert("Please select a row before continuing.");
}

I see this bug over and over again in multiple people's code. It's just much more convenient to evaluate a variable by itself. Making 0 equal to true would eliminate this potential bug. This is just one example, and I'm sure most people could think of others.

So we've established there is some value in making 0 equal to true, but what is the value in keeping it equal to false past sticking to what's always been done?

dallin
  • from my answer "... I suspect that ALU design has not transcended this fundamental design optimization, even in the newest processors, and as such, people writing compilers in assembly or close-to-assembly languages are probably still faced with the 'dirtiness' of equating zero with false. And you should know about all this too. Just because. ..." – Andyz Smith Oct 05 '13 at 04:19
  • If you want `function getSelectedRowIndex` to return a boolean value, that's fine. You shouldn't care what the underlying representation happens to be. If you expect to get back, say, an integer where some magic value has _truthiness_, then you are in for a lot of disappointment. I'd like to take this opportunity to nominate 0x2a as The Answer. (IIRC, VB was one of the languages that just checked the LSB. You had to _normalize_ any value that might have come from the outside world to ensure that it didn't muck up. Sigh.) – HABO Oct 05 '13 at 13:19
  • This is NOT a duplicate. I was actually referencing that question in the beginning of my question! (I've added a link to it to clarify that.) That question is essentially about why 0 was made to be false. My question, as the title states, is whether there is still value in keeping it false, or whether there is now more value in today's programming environment in switching it to true. If necessary I will edit it further, but I think the distinction is pretty clear. Regardless, please remove the duplicate status (and the negative vote, if that was the reason for it). – dallin Oct 06 '13 at 03:39
  • @dallin Yeah, I agree.. Not sure why it was closed. One suggestion is to edit the title to something like, "Why do new languages still treat 0 as false?" instead of using "modern programming language". – Izkata Oct 06 '13 at 03:58
  • One question: If `0` became truthy, what would false be? Are you thinking about languages like Python, that actually have True and False, or like C which just use 1 and 0 in the background? – Izkata Oct 06 '13 at 04:04
  • @Izkata I'm thinking about languages that have True and False - more particularly, I've been thinking about Javascript. I read something saying Javascript is the most popular language on the planet, so I thought if someone was to make a Javascript version 2, perhaps this could be a beneficial change, seeing as most of the reasons 0 is false seem not as relevant in a language like Javascript today and it would solve some problems with the language. – dallin Oct 06 '13 at 04:10
  • Even that small code fragment has bigger fundamental logic flaws than if false converts to the integer `0` or not. I'm not seeing a case in either code quality or efficiency to break boolean algebra or reverse the logic between code and hardware just for coders who are using poor practice anyway. Just look at that code sample again - it's a classic example of why you should not use numbers for 'signalling'. `0` should not a be a valid return of the `getSelectedRowIndex` if there are no rows and code that relies on row 0 being there will fail. – James Snell Oct 06 '13 at 10:29
  • @JamesSnell I think you misread something. First off, 0 is not a valid return of getSelectedRowIndex if there are no rows - false is what it returns - but false and zero are the same, which is the problem. And you shouldn't have code that relies on any rows being in the table. Second, this comes directly from SlickGrid, probably the best written JavaScript grid in existence. Third, how is this poor practice? It's INCREDIBLY standard practice to return an index of an array from a function. Last, nowadays we separate logic from hardware all the time to create better programming languages. – dallin Oct 06 '13 at 22:40
  • @JamesSnell I'm not saying there's not some validity to your statements, I'm just trying to understand. But why downvote my question anyway? I asked a valid question. I wasn't saying one way was better than the other - I was just looking for reasons so I didn't make a mistake if I designed a language. It's a valid and good question. – dallin Oct 06 '13 at 22:56
  • With a value to return you're relying on an undefined or unreliable behaviour resulting from casting a number to a boolean and then comparing it. The refactoring to address that ambiguity renders the whole question moot because the situation can only arise in bad code, hence the downvote. – James Snell Oct 07 '13 at 21:50
  • @JamesSnell So basically you're saying you're going to downvote any question that uses an example somewhere in it that includes implicit conversion, even though the question itself has nothing to do with implicit conversion? Even though a large percentage of the most popular programming languages in the world use implicit conversion? Do you think perhaps that implicit conversion being bad is just one opinion? And implicit conversion does not have unreliable behavior. It has its rules you can rely on, just like any other programming paradigm. It just changes when you have to check for values. – dallin Oct 07 '13 at 23:48
  • The question relies on a situation that could be easily avoided by writing clear code. It's as straightforward as that. – James Snell Oct 08 '13 at 10:08
  • @dallin: Statically typed languages like Java or C# do not implicitly convert other values to booleans, so the answer to your question is no. But *if* a language supports implicit conversion of integer to boolean, the conversion would be useless if *all* numbers converted to true. So the only thing that really makes sense in such a language is to have 0 convert to false. – JacquesB Aug 04 '15 at 16:29

1 Answer


My best answer, distilled from the mass of points discussed.

This convention comes from Assembly.

When you do a comparison, e.g. `if a > b` or `if a = b`,

what actually happens (at least on the Motorola 6502) is that the processor will SUBTRACT the two values. The result is then easy to branch on, because it is either zero or something other than zero.

The branch instruction is built to treat zero as the optimized case, because typically you want to branch when an array index counts down to zero.

Thus branching is optimized for zero, and branching based on comparison has evolved to use zero to indicate the truth or falsity of a boolean operation like `=` or `>`.

PS

Also, a bit-by-bit comparison of two numbers is basically the same thing as a bit-by-bit subtraction using the [adder–subtractor](http://en.wikipedia.org/wiki/Adder–subtractor) in the ALU, so you don't need extra instructions lying around. But if you use subtraction for boolean comparison, you have to accept that an integer subtraction result of zero will mean that something is true or false. Thus false is zero.

So:

I guess the reason for keeping it this way is that originally there were good reasons for doing this besides "we felt like it". I suspect those good reasons are largely irrelevant now, but on the other hand, deep down, I also suspect the same optimizations still occur with booleans in modern languages.

Wikipedia notes that:

Zero Flag

Determining whether two values are equal requires the ALU to determine whether the result is zero. This can be accomplished by feeding each bit of the result into a NOR gate. The beauty of this is that a single multi-port NOR gate requires less hardware than an entire array of equivalent 2-port gates.

http://en.wikibooks.org/wiki/Microprocessor_Design/ALU_Flags

So, zero has a special meaning for CPUs and ALUs. That is why zero is false.

I suspect that ALU design has not transcended this fundamental design optimization, even in the newest processors, and as such, people writing compilers in assembly or close-to-assembly languages are probably still faced with the 'dirtiness' of equating zero with false. And you should know about all this too. Just because.

http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html

"A lot of programmers that you might interview these days are apt to consider recursion, pointers, and even data structures to be a silly implementation detail which has been abstracted away by today’s many happy programming languages. “When was the last time you had to write a sorting algorithm?” they snicker.

Still, I don’t really care. I want my ER doctor to understand anatomy, even if all she has to do is put the computerized defibrillator nodes on my chest and push the big red button, and I want programmers to know programming down to the CPU level, even if Ruby on Rails does read your mind and build a complete Web 2.0 social collaborative networking site for you with three clicks of the mouse."

http://www.joelonsoftware.com/articles/GuerrillaInterviewing3.html

Andyz Smith
  • It goes deeper than assembly - digital logic (i.e. hardware) uses 0 and 1 for false and true - usually 0 is a low voltage (close to 0V) and 1 is a high voltage (close to Vcc, or whatever the supply voltage is). Logic design uses truth tables, Karnaugh maps and various other tools and even low level languages such as VHDL, all of which use the basic convention that 0 = false and 1 = true. – Paul R Oct 05 '13 at 06:33
  • @PaulR Although true, this does not in any way address the question of why this convention exists or whether it is still appropriate today. Especially in the circumstances you describe, it is quite easy to invert the logic and have it make just as much sense. I suggest that the ALU is the first basic unit where the conceptual conversion between a boolean value and an integer value is introduced, with a specific reason why a ZERO is false: the CPU/ALU subtraction/comparison fungibility. – Andyz Smith Oct 05 '13 at 07:10
  • I disagree - in the early days of computing there was a much closer link between the programming model and the underlying hardware (there was virtually no distinction between the two originally). With such a long heritage, and its continued use today in logic design, and furthermore the fact that there is no obvious reason to change a convention that has always worked perfectly well, it is easy to see why we still use this convention today, at all conceptual levels. – Paul R Oct 05 '13 at 08:43
  • @PaulR But I'm saying there's an actual, tangible, design-optimization-based reason for the convention. – Andyz Smith Oct 05 '13 at 14:05
  • Well I'd say that there are a lot of reasons, some more significant than others, but it all boils down to common sense and efficiency. That was true when Boolean logic was first invented, when the first digital logic was implemented with relays and valves (tubes), and it's still true today, even though we've abstracted ourselves a long way from the underlying hardware. – Paul R Oct 05 '13 at 15:26