29

I have been reading about the (in)convenience of having null instead of (for example) Maybe. After reading this article, I am convinced that it would be much better to use Maybe (or something similar). However, I am surprised to see that all the "well-known" imperative or object-oriented programming languages still use null (which allows unchecked access to types that can represent a 'nothing' value), and that Maybe is mostly used in functional programming languages.

As an example, look at the following C# code:

void doSomething(string username)
{
    // Check that username is not null
    if (username == null)
        throw new ArgumentNullException("username");

    // Do something
}

Something smells bad here... Why should we be checking whether the argument is null? Shouldn't we be able to assume that every variable contains a reference to an object? As you can see, the problem is that, by definition, almost every variable can contain a null reference. What if we could decide which variables are "nullable" and which are not? That would save us a lot of effort while debugging and hunting down the source of a "NullReferenceException".

Imagine that, by default, no type could contain a null reference. Instead, you would explicitly state that a variable may contain a null reference, and only when you really need it. That is the idea behind Maybe. If you have a function that fails in some cases (for example, division by zero), you could return a Maybe<int>, stating explicitly that the result may be an int, but may also be nothing! This is one reason to prefer Maybe over null. If you are interested in more examples, I suggest reading this article.
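To make the division example concrete, here is a minimal sketch using C#'s built-in Nullable<int> (written int?) as a rough stand-in for Maybe<int>. C# only offers this for value types, so it is just an approximation of the idea, and the names (SafeDivide, SafeDivisionExample) are made up for illustration; but the signature alone already tells the caller that there might be no result:

using System;

class SafeDivisionExample
{
    // The return type says "this may or may not produce an int".
    static int? SafeDivide(int numerator, int denominator)
    {
        if (denominator == 0)
            return null;                    // "Nothing": no valid result exists
        return numerator / denominator;     // "Just" a value
    }

    static void Main()
    {
        int? result = SafeDivide(10, 0);

        // The type nudges the caller to handle the missing case
        // before getting at the underlying int.
        if (result.HasValue)
            Console.WriteLine(result.Value);
        else
            Console.WriteLine("Division failed");
    }
}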

The fact is that, despite the disadvantages of making most types nullable by default, most OO programming languages actually do it. That is why I wonder:

  • What kind of arguments would lead you to implement null in your programming language instead of Maybe? Are there reasons at all, or is it just "historical baggage"?

Please ensure you understand the difference between null and Maybe before answering this question.

aochagavia
  • 524
  • 3
  • 11
  • 3
    I suggest reading up about [Tony Hoare](http://en.wikipedia.org/wiki/Tony_Hoare), in particular his billion dollar mistake. – Oded Dec 15 '13 at 14:38
  • I have, but I suppose that there are also more "rational" arguments, rather than just the fact that "it was so easy to implement". – aochagavia Dec 15 '13 at 14:50
  • 2
    Well, that was **his** reason. And the unfortunate result is that it was a mistake copied to most languages that followed, up to today. There **are** languages where `null` or its concept do not exist (IIRC Haskell is one such example). – Oded Dec 15 '13 at 14:53
  • You are right in that Haskell uses Maybe instead of null, but that is a functional language. I wonder if the creators of Java or C# used the null reference because it was easy... it sounds so mediocre... That is why I think that some people have good arguments for it. – aochagavia Dec 15 '13 at 15:00
  • 9
    Historical baggage is not something to underestimate. And remember that the OSes that these build on were written in languages that have had `null`s in them for a long time. It isn't easy to just drop that. – Oded Dec 15 '13 at 15:01
  • So do you think that they just didn't see it as an issue? That they thought it was simply "normal"? – aochagavia Dec 15 '13 at 15:02
  • Frankly, I don't know **when** Tony Hoare made that statement and many languages probably predate the press that it got, and therefore didn't consider it. – Oded Dec 15 '13 at 15:04
  • As `null` is a value and `Maybe` a type, you are comparing apples with oranges. Perhaps you should ask the question why there are languages that allow unchecked access to types that can represent a 'nothing' value (called `null` or `Nothing` or whatever). – Bart van Ingen Schenau Dec 15 '13 at 15:18
  • That is actually the question... I am going to edit the title – aochagavia Dec 15 '13 at 15:19
  • 3
    I think you should shed some light on the concept of Maybe instead of posting a link. – JeffO Dec 15 '13 at 15:21
  • I doubt that I can explain it better than the author of the linked article, but I will try – aochagavia Dec 15 '13 at 15:27
  • 1) please don't significantly change your answer after you have already received meaningful answers. Your edits partially invalidate the answers that have been provided. 2) I believe you should be checking for empty string not a null string in your example. `string` and `int` are primitives, not reference values. You would need to explicitly declare those as nullable in your example. –  Dec 15 '13 at 15:50
  • 1
    @GlenH7 Strings in C# are reference values (they can be null). I know that int is a primitive value, but it was useful to show the use of maybe. – aochagavia Dec 15 '13 at 15:55
  • @GlenH7 Java `String`s are also reference values, as are `Integer` (although not `int`) – Izkata Dec 15 '13 at 15:56
  • Null reference may not be a mistake: https://yinwang0.wordpress.com/2013/06/03/null/ – Daniel Little Feb 18 '14 at 06:28
  • There is also the side point that to work effectively with "null" you need identity checks and branching. To work effectively with Maybe (That is, idiomatically, not just the bare minimum) you need parameterised types, higher order functions, pattern matching, some kind of concept of a monadic functor, so on. Maybe is a much better solution, but it's also a vastly more complex (conceptually) one. – Phoshi Feb 18 '14 at 11:38
  • Why do you assert that very common languages allow unchecked reading of a null? Most don't. Java and C# in particular _guarantee_ that you'll get an exception if you try; the access _is_ formally checked. – Donal Fellows May 03 '14 at 18:01
  • When I say "unchecked" I mean mainly that the type system will not force you to check them explicitly before using them. – aochagavia May 03 '14 at 20:07
  • 1
    Eric Lippert says that the choice of having a null value in C# was in part motivated for interop convenience, and - even with 20/20 hindsight - not a mistake, but a convenience feature. I disagree with this point of view, but I'm nowhere near Lippert in terms of experience, insight and expertise. – Martijn Nov 10 '14 at 11:35

9 Answers

17

I believe it is primarily historical baggage.

The most prominent and oldest languages with null are C and C++. But there, null does make sense. Pointers are still quite a numerical and low-level concept, and as someone else said, in the mindset of C and C++ programmers, having to explicitly state that a pointer can be null doesn't make sense.

Second in line comes Java. Considering that Java's designers were trying to stay close to C++, so that the transition from C++ to Java would be simpler, they probably didn't want to mess with such a core concept of the language. Also, implementing explicit null would have required much more effort, because you have to check whether a non-null reference is actually set properly after initialization.

All other languages are the same as Java in this respect. They usually copy the way C++ or Java does it, and considering how central the concept of implicitly nullable reference types is, it becomes really hard to design a language that uses explicit null.

Euphoric
  • 36,735
  • 6
  • 78
  • 110
  • “they probably didn't want to mess with such core concept of the language” They already completely removed pointers; also removing `null` wouldn't be that big of a change, I think. – svick Dec 15 '13 at 16:04
  • 2
    @svick For Java, references are a replacement for pointers. And in many cases, pointers in C++ were used in the same way as Java references. Some people even claim that Java does have pointers (http://programmers.stackexchange.com/questions/207196/do-pointers-really-exist-in-java) – Euphoric Dec 15 '13 at 16:05
  • I have marked this as the correct answer. I would like to upvote it but I have not enough reputation. – aochagavia Dec 15 '13 at 16:25
  • 1
    Agree. Note that in C++ (unlike Java and C), being null-able is the exception. `std::string` cannot be `null`. `int&` can't be `null`. An `int*` can, and C++ allows unchecked access to it for two reasons: 1. because C did and 2. because you're supposed to understand what you're doing when using raw pointers in C++. – MSalters Dec 16 '13 at 10:21
  • @MSalters: If a type does not have a bit-copyable default value, then creating an array of that type will require calling a constructor for every element thereof before allowing access to the array itself. This may require useless work (if some or all elements will be overwritten before they are read), may introduce complications if the constructor for a later element fails after an earlier element has been built, and may end up not really accomplishing much anyway (if a proper value for some array elements could not be determined without reading others). – supercat Jan 28 '14 at 18:22
  • @supercat This could be easily solved by having arrays default to nullable type. – Euphoric Jan 28 '14 at 21:19
  • Actually not quite correct. ALGOL 68 and PL/I had pointers before C existed. PASCAL had pointers in 1970 or so. Also, linked data structures, with pointers and NIL values, were being used in assembly language well before C was invented. LISP, of course, had pointers long before any of these guys (LISP 1, 1960; LISP 1.5, 1962). – John R. Strohm Jan 30 '14 at 17:17
16

Actually, null is a great idea. Given a pointer, we want to designate that this pointer does not reference a valid value. So we take one memory location, declare it invalid, and stick to that convention (a convention sometimes enforced with segfaults). Now whenever I have a pointer I can check if it contains Nothing (ptr == null) or Some(value) (ptr != null, value = *ptr). I want you to understand that this is equivalent to a Maybe type.

The problems with this are:

  1. In many languages, the type system does not help guarantee a non-null reference.

    This is historical baggage, as many mainstream imperative or OOP languages have only had incremental advances in their type systems when compared to predecessors. Small changes have the advantage that new languages are easier to learn. C# is a mainstream language that has introduced language-level tools to better handle nulls.

  2. API designers might return null to signal failure, but on success they return the thing itself directly rather than a reference to it. This flattening of one pointer level makes it impossible to use null as a value.

    This is just laziness on the designer's side and can't be helped without enforcing proper nesting with a proper type system. Some people might also try to justify this with performance considerations, or with the existence of optional checks (a collection might return null or the item itself, but also provide a contains method).

  3. In Haskell there is a neat view onto the Maybe type as a monad. This makes it easier to compose transformations on the contained value.

    On the other hand, low-level languages like C barely treat arrays as a separate type, so I'm not sure what we're expecting. In OOP languages with parameterized polymorphism, a runtime-checked Maybe type is rather trivial to implement.
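As an illustration of that last point, here is a minimal sketch of such a runtime-checked Maybe type in C#. The type and member names (Maybe, Some, None, HasValue, Value) are made up for this example; this is not a standard library type:

using System;

public abstract class Maybe<T>
{
    public abstract bool HasValue { get; }
    public abstract T Value { get; }   // runtime-checked: throws when there is no value

    // A fuller version would also reject null arguments here,
    // since the whole point is to keep null out of the picture.
    public static Maybe<T> Some(T value) { return new SomeImpl(value); }
    public static Maybe<T> None() { return new NoneImpl(); }

    private sealed class SomeImpl : Maybe<T>
    {
        private readonly T value;
        public SomeImpl(T value) { this.value = value; }
        public override bool HasValue { get { return true; } }
        public override T Value { get { return value; } }
    }

    private sealed class NoneImpl : Maybe<T>
    {
        public override bool HasValue { get { return false; } }
        public override T Value
        {
            get { throw new InvalidOperationException("No value present"); }
        }
    }
}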

amon
  • 132,749
  • 27
  • 279
  • 375
  • “C# is taking steps away from null references.” What steps are those? I haven't seen anything like that in the changes to the language. Or do you mean just that the common libraries are using `null` less than they did in the past? – svick Dec 15 '13 at 15:40
  • @svick Bad phrasing on my part. “*C# is a mainstream language that has introduced language-level tools to better handle `null`s*” – I am talking about [`Nullable`](http://msdn.microsoft.com/en-gb/library/1t3y8s4s(v=vs.90).aspx) types and the `??` default operator. It doesn't solve the problem right now in the presence of legacy code, but it *is* a step towards a better future. – amon Dec 15 '13 at 15:46
  • I agree with you. I would vote up your answer but I don't have enough reputation :( However, Nullable works only for primitive types. So it is just a little step. – aochagavia Dec 15 '13 at 15:54
  • 1
    @svick Nullable has nothing to do with this. We are talking about all reference types implicitly allowing null value, instead of having programmer define it explicitly. And Nullable can only be used on value types. – Euphoric Dec 15 '13 at 15:57
  • @Euphoric I think your comment was meant as a reply to amon, I didn't mention `Nullable`. – svick Dec 15 '13 at 16:00
  • @svick Yeah, right sorry. Trying to think about too many things at one time. Getting little confused. – Euphoric Dec 15 '13 at 16:03
  • Do you have any idea how non-nullable types could work in languages or frameworks which use "real" generics, given that the only way to have a non-nullable array of a reference type would be to pass an element factory to its constructor? – supercat Feb 03 '14 at 23:28
  • @supercat (1) A factory or a lambda is the most flexible solution. (2) Each type could have a default value which is used for initialization. This solution is currently being used in most languages, e.g. integers are initialized as zero, strings as the empty string, …. Compare the Null Object pattern. (3) The data structure could be immutable, thus bypassing the whole initialization issue. This works surprisingly well in many functional languages. – amon Feb 03 '14 at 23:43
  • @amon: How would one go about creating a `List` where `T` might be a non-nullable type *with no sensible default value*, but could also be an ordinary reference for which null would be a valid value. I suppose maybe a `NonNullableType[]` could throw on an attempt to read null, and all array types could provide an `IsValidItemAt(int index)` which would return false if index was out of bounds or if the item in question was null *and* the type was a non-nullable type. – supercat Feb 03 '14 at 23:54
  • @supercat see my suggestion #3: immutability, which solves all your problems (but also brings a few new ones). Throwing exceptions on a read is effectively another NullPointerException, which we'd generally want to avoid (also, this can't be statically checked). The same holds for your idea of a method that checks which items have been assigned. Oh, I forgot a fourth solution, which is the most important and best one: (4) Use an Option type (aka. Maybe), which is either `Nothing()`, or `Some(value)`. This is basically equivalent to a `null`, but has the advantage of being statically checkable. – amon Feb 04 '14 at 00:04
  • Mutable lists are useful, and are by definition not immutable, and performance demands that backing array space be pre-allocated before there's any meaningful value to go into it. A properly designed `List` will never read an unwritten array slot in the absence of illegitimate multi-thread usage, and in that scenario having the read of an array of non-nullable include an implicit assertion that the item will have been previously written and throw a `ReadOfUnassignedElementException` if it hasn't been would be better than requiring code to test for something that should never occur. – supercat Feb 04 '14 at 00:23
9

My understanding is that null was a necessary construct in order to abstract programming languages out of assembly.1 Programmers needed the ability to indicate that a pointer or register value was not a valid value and null became the common term for that meaning.

Reinforcing the point that null is just a convention to represent a concept, the actual value used for null has varied, and can still vary, depending on the programming language and platform.

If you were designing a new language and wanted to avoid null but use maybe instead, then I would encourage a more descriptive term such as not a valid value or navv to indicate the intent. But the name of that non-value is a separate concept from whether you should allow non-values to even exist in your language.

Before you can decide on either of those two points, you need to define what maybe would mean for your system. You may find it's just a rename of null's meaning of not a valid value, or you may find it has different semantics for your language.

Likewise, whether to check accesses against null references is another design decision for your language.

To provide a bit of history, C had an implicit assumption that programmers understood what they were attempting to do when manipulating memory. As it was a superior abstraction to assembly and the imperative languages that preceded it, I would venture that the thought of safeguarding the programmer from an incorrect reference hadn't crossed their minds.

I believe that some compilers or their additional tooling can provide a measure of checking against invalid pointer access. So others have noted this potential issue and taken measures to protect against it.

Whether or not you should allow it depends upon what you want your language to accomplish and what degree of responsibility you want to push to users of your language. It also depends upon your ability to craft a compiler to restrict that type of behavior.

So to answer your questions:

  1. "What kind of arguments…" - Well, it depends upon what you want the language to do. If you want to simulate bare-metal access then you may want to allow it.

  2. "is it just historical baggage?" Perhaps, perhaps not. null certainly had / has meaning for a number of languages and helps drive the expression of those languages. Historical precedent may have affected more recent languages and their allowing of null but it's a bit much to wave your hands and declare the concept useless historical baggage.


1 See this Wikipedia article, although credit is given to Hoare for null values and object-oriented languages. I believe the imperative languages progressed along a different family tree than Algol did.

  • The point is that most variables in, say, C# or Java, can be assigned a null reference. It looks like it would be much better to assign null references only to objects that explicitly indicate that "Maybe" there is no reference. So, my question is about the "concept" null, and not the word. – aochagavia Dec 15 '13 at 15:10
  • 2
    “Null pointer references can show up as errors during the compile” Which compilers can do that? – svick Dec 15 '13 at 15:35
  • To be completely fair, you never assign a null reference to an object... the reference to the object just doesn't exist (the reference points to nothing (0x000000000), which is by definition `null`). – mgw854 Dec 15 '13 at 15:36
  • The quote of C99 spec talks about the *null character*, not the *null pointer*, those are two very different concepts. – svick Dec 15 '13 at 15:38
  • @svick - The C# compiler will complain if you try to access a reference value that has been assigned to null. I will also grant there is a degree of semantic difference between a reference value and a C style pointer. –  Dec 15 '13 at 15:44
  • 2
    @GlenH7 It doesn't do that for me. The code `object o = null; o.ToString();` compiles just fine for me, without errors or warnings in VS2012. ReSharper does complain about that, but that's not the compiler. – svick Dec 15 '13 at 15:51
7

If you look at the examples in the article you cited, most of the time using Maybe doesn't shorten the code. It doesn't obviate the need to check for Nothing. The only difference is it reminds you to do so via the type system.

Note, I say "remind," not force. Programmers are lazy. If a programmer is convinced a value can't possibly be Nothing, they're going to dereference the Maybe without checking it, just like they dereference a null pointer now. The end result is that you convert a null pointer exception into a "dereferenced empty Maybe" exception.

The same principle of human nature applies in other areas where programming languages try to force programmers to do something. For example, the Java designers tried to force people to handle most exceptions, which resulted in a lot of boilerplate that either silently ignores or blindly propagates exceptions.

What makes Maybe nice is when a lot of decisions are made via pattern matching and polymorphism instead of explicit checks. For example, you could create separate functions processData(Some<T>) and processData(Nothing<T>), which you can't do with null. You automatically move your error handling to a separate function, which is very desirable in functional programming, where functions are passed around and evaluated lazily rather than always being called in a top-down manner. In OOP, the preferred way to decouple your error handling code is with exceptions.
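As a rough sketch of how that kind of dispatch could look in an OO language, here is one way to express it in C#. Maybe, Some, None, Match and ProcessData are all made-up names for illustration, not standard types; which branch runs is decided by dynamic dispatch rather than by an explicit null check:

using System;

public abstract class Maybe<T>
{
    public abstract void Match(Action<T> ifSome, Action ifNone);
}

public sealed class Some<T> : Maybe<T>
{
    private readonly T value;
    public Some(T value) { this.value = value; }
    public override void Match(Action<T> ifSome, Action ifNone) { ifSome(value); }
}

public sealed class None<T> : Maybe<T>
{
    public override void Match(Action<T> ifSome, Action ifNone) { ifNone(); }
}

public static class Processing
{
    // The success path and the "nothing" path are separate pieces of code,
    // selected by the runtime type of the argument.
    public static void ProcessData(Maybe<string> data)
    {
        data.Match(
            value => Console.WriteLine("Processing " + value),
            () => Console.WriteLine("Nothing to process"));
    }
}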

Karl Bielefeldt
  • 146,727
  • 38
  • 279
  • 479
  • Do you think it would be a desireable feature in a new OO language? – aochagavia Dec 15 '13 at 22:17
  • You can implement it yourself if you want to get the polymorphism benefits. The only thing you need language support for is non-nullability. I would love to see a way to specify that for yourself, similar to `const`, but make it optional. Some kinds of lower-level code, like linked lists for example, would be very annoying to implement with non-nullable objects. – Karl Bielefeldt Dec 15 '13 at 22:28
  • 2
    You don't have to check with the Maybe type though. The Maybe type is a monad so it should have the functions `map :: Maybe a -> (a -> b) -> Maybe b` and `bind :: Maybe a -> (a -> Maybe b) -> Maybe b` defined on it, so you can continue and thread further computations for the most part without resorting to an if else statement. And `getValueOrDefault :: Maybe a -> (() -> a) -> a` allows you to handle the nullable case. It's far more elegant than pattern matching on the `Maybe a` explicitly. – DetriusXii Feb 19 '14 at 16:57
1

Maybe is a very functional way of thinking about a problem: there is a thing, and it may or may not have a defined value. In an object-oriented sense, however, we replace that idea of a thing (whether it has a value or not) with an object. Clearly, an object has a value. If it doesn't, we say the object is null, but what we really mean is that there isn't any object at all; the reference we have points to nothing.

Translating Maybe into an OO concept does nothing novel; in fact, it just makes for more cluttered code. You still have to have a null reference for the value of the Maybe<T>. You still have to do null checks (in fact, you have to do a lot more null checks, cluttering your code), even if they are now called "maybe checks". Sure, you'll write more robust code as the author claims, but I'd argue that it is only the case because you've made the language far more abstract and obtuse, requiring a level of work that is unnecessary in most cases. I'd rather take a NullReferenceException once in a while than deal with spaghetti code doing a Maybe check every time I access a new variable.

mgw854
  • 1,818
  • 10
  • 11
  • 2
    I think that it would lead to less null checks, because you only have to check if you see a Maybe and don't have to worry about the rest of the types. – aochagavia Dec 15 '13 at 15:47
  • “You still have to have a null reference for the value of the `Maybe`” Not if `Maybe` didn't allow `null` as a value. – svick Dec 15 '13 at 15:48
  • @aochagavia You introduce a lot more headaches that way... you wouldn't be able to declare a variable and then instantiate it later unless you made it a `Maybe`, and that would require doing a check against it. – mgw854 Dec 15 '13 at 15:50
  • 1
    @svick `Maybe` has to allow `null` as a value, because the value may not exist. If I have `Maybe`, and it doesn't have a value, the value field must contain a null reference. There's nothing else for it to be that is verifiably safe. – mgw854 Dec 15 '13 at 15:52
  • The idea is precisely that the programmer knows that the variable may contain a null reference. If you instantiate it later there is a chance that you try to access the object while it is not instantiated. – aochagavia Dec 15 '13 at 15:58
  • 1
    @mgw854 Sure there is. In OO language, `Maybe` could be an abstract class, with two classes that inherit from it: `Some` (which does have a field for the value) and `None` (which doesn't have that field). That way, the value is never `null`. – svick Dec 15 '13 at 15:58
  • @aochagavia This is an inherent assumption that you have any time you are using any variable that isn't a value type. Nothing changes by using `Maybe` except the verbosity. – mgw854 Dec 15 '13 at 16:01
  • @svick If you go that route, `Maybe` becomes a worthless wrapper class--you'd need to cast to `Some` before you could see the value, which adds a lot more overhead and code to every call. – mgw854 Dec 15 '13 at 16:02
  • 6
    With "not-nullability" by default, and Maybe you could be certain that some variables always contain objects. Namely, all variables that are not Maybe – aochagavia Dec 15 '13 at 16:03
  • @mgw854 Or you could add a virtual method `T GetValue()`, which throws an exception for `None`. If you also added `bool HasValue()`, then it would be reasonably performant, I think. – svick Dec 15 '13 at 16:09
  • @svick Hmmm... sounds oddly to me like a NullReferenceException under a different name? :-) What's being suggested here is a fundamental restructuring of the way a language works to be different than the way the hardware works, only to end up having the exact same issues with different names. – mgw854 Dec 15 '13 at 16:13
  • 3
    @mgw854 The point of this change would be increased expressive power for the developer. Right now, the developer always has to assume that a reference or pointer can be null and needs to check that there is a usable value. By implementing this change, you give the developer the power to say that he really needs a valid value and have the compiler check that a valid value was passed, while still giving the developer the option to opt in and have not-a-value passed around. – Euphoric Dec 15 '13 at 16:21
  • @mgw854 I think you are focusing too much in `Maybe`. Look at the benefits: you could use variables with the security that they cannot be null! That means never checking for those variables. – aochagavia Dec 15 '13 at 16:22
  • I'm sorry, @Euphoric, but I really don't see any increased expressive power. It's just a different construct to do a null check (or Assert). You can't expect the compiler to do all the heavy lifting, though, as proving you can never encounter a null value is equivalent to the halting problem. It can only get close. As for the benefits, I don't see them. As a programmer, I should know that a variable can never be null because I set it to some value--which is true 95% of the time. I check what needs checking. If I mess up, I get an exception--no different than what you are proposing. – mgw854 Dec 15 '13 at 17:40
  • In a language with a `Maybe` type you disallow nulls completely, so you never need to check for it. The benefit of `Maybe` is when you don't have it, since you don't need to check for the validity of values of a non-maybe type. – Lee Jan 28 '14 at 23:02
1

The concept of null can easily be traced back to C but that's not where the problem lies.

My everyday language of choice is C# and I would keep null, with one difference. C# has two kinds of types, value types and reference types. Value types can never be null, but there are times I'd like to be able to express that having no value is perfectly fine. To do this, C# uses Nullable types, so int would be the value and int? the nullable value. This is how I think reference types should work as well.
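A quick illustration of the asymmetry in C# as it stood when this answer was written (before nullable reference type annotations existed):

int  a = 5;         // a plain value: can never be null
// int b = null;    // compile error: a value type cannot hold null
int? c = null;      // fine: nullability is opted into explicitly with "?"

string s = null;    // also compiles without complaint: reference types are
                    // implicitly nullable, which is exactly the behaviour
                    // this answer argues should be opt-in as well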

Also see: Null reference may not be a mistake:

Null references are helpful and sometimes indispensable (consider how much trouble if you may or may not return a string in C++). The mistake really is not in the existence of the null pointers, but in how the type system handles them. Unfortunately most languages (C++, Java, C#) don’t handle them correctly.

Daniel Little
  • 1,402
  • 12
  • 16
0

I think this is because functional programming is much more concerned with types, especially types that combine other types (tuples, functions as first-class types, monads, etc.), than object-oriented programming is (or at least initially was).

Modern versions of the programming languages I think you're talking about (C++, C#, Java) are all based on languages that didn't have any form of generic programming (C, C# 1.0, Java 1). Without that, you can still bake some kind of difference between nullable and non-nullable objects into the language (like C++ references, which can't be null, but are also limited), but it's much less natural.

svick
  • 9,999
  • 1
  • 37
  • 51
  • I think in the case of functional programming, it is a case of FP not having reference or pointer types. In FP everything is a value. And if you have a pointer type, it becomes easy to say "pointer to nothing". – Euphoric Dec 15 '13 at 16:23
0

I think the fundamental reason is that relatively few null checks are required to make a program "safe" against data corruption. If a program tries to use the contents of an array element or other storage location which is supposed to have been written with a valid reference but wasn't, the best-case outcome is for an exception to be thrown. Ideally, the exception will indicate exactly where the problem occurred, but what matters is that some kind of exception gets thrown before the null reference gets stored somewhere that could cause data corruption. Unless a method stores an object without trying to use it in some fashion first, an attempt to use an object will--in and of itself--constitute a "null check" of sorts.

If one wants to ensure that a null reference which appears where it shouldn't will cause a particular exception other than NullReferenceException, it will often be necessary to include null checks all over the place. On the other hand, merely ensuring that some exception will occur before a null reference can cause "damage" beyond any that has already been done will often require relatively few tests: testing would generally only be required in cases where an object would store a reference without trying to use it, and either the null reference would overwrite a valid one, or it would cause other code to misinterpret other aspects of program state. Such situations exist, but aren't all that common; most accidental null references will get caught very quickly whether one checks for them or not.

supercat
  • 8,335
  • 22
  • 28
0

"Maybe," as written, is a higher level construct than null. Using more words to define it, Maybe is, "a pointer to either a thing, or a pointer to nothing, but the compiler has not yet been given enough information to determine which one." This forces you to explicitly check each value constantly, unless you build a compiler specification that is smart enough to keep up with the code you write.

You can easily make an implementation of Maybe in a language that has null. C++ has one in the form of boost::optional<T>. Making the equivalent of null with Maybe is very difficult. In particular, if I have a Maybe<Just<T>>, I cannot assign it to null (because such a concept does not exist), while a T** in a language with null is very easy to assign to null. This forces one to use Maybe<Maybe<T>>, which is totally valid, but will force you to do many more checks to use that object.
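To make that flattening concrete, here is a small C# example using the standard System.Collections.Hashtable, whose indexer returns null both for a missing key and for a key whose stored value is null:

using System;
using System.Collections;

class FlatteningExample
{
    static void Main()
    {
        var table = new Hashtable();
        table["present-but-empty"] = null;       // the key exists, but holds no value

        // Both lookups yield null: a single null value flattens
        // "key is missing" and "key is present with no value" into one case.
        object a = table["present-but-empty"];   // null
        object b = table["missing"];             // also null
        Console.WriteLine(a == null && b == null);   // True

        // A nested Maybe<Maybe<T>> would keep the two cases apart:
        //   Some(None) -> key present, no value stored
        //   None       -> key not present
    }
}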

Some functional languages use Maybe because null requires either undefined behavior or exception handling, neither of which is an easy concept to map into functional language syntaxes. Maybe fills the role much better in such functional situations, but in procedural languages, null is king. It's not a matter of right and wrong, just a matter of what makes it easier to tell the computer to do what you want it to do.

Cort Ammon
  • 10,840
  • 3
  • 23
  • 32