More generally, an option type. Maybe being a monad has other advantages (lifting, for instance, though being a monad isn't actually required: C# 3.0 lifts operations into Nullable when you're e.g. adding two Nullable&lt;int&gt; [aka int?] values), but monad-ness is not a requirement for a "nullable" that lives in the type system.
> The problem is that unless your language has support baked in (a la Haskell) the syntax has way too much overhead.
Not really. The problem is more around the way your generics work: putting Nothing in a Maybe Int is nice; putting a new Nothing&lt;Int&gt;() in a Maybe&lt;Int&gt; isn't as nice.
And if anything, I think having overhead in pushing null values in the system is a good thing: it avoids people running to it right off the bat, and may make them pause and think about better solutions (a command-type pattern, a Null/SpecialCase object, …)
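For Java readers, here's roughly what "a nullable living in the types" looks like with `java.util.Optional` (which only arrived in Java 8, well after this discussion) — including a hand-rolled lift of `+` over two optional ints, mirroring the automatic lifting C# does for `Nullable<int>`. A sketch, not a claim about any particular library:

```java
import java.util.Optional;

public class LiftedAdd {
    // Lift '+' over two optional ints: the result is present only if both
    // inputs are present, mirroring C#'s automatic lifting for Nullable<int>.
    static Optional<Integer> add(Optional<Integer> a, Optional<Integer> b) {
        return a.flatMap(x -> b.map(y -> x + y));
    }

    public static void main(String[] args) {
        System.out.println(add(Optional.of(2), Optional.of(3)));   // Optional[5]
        System.out.println(add(Optional.of(2), Optional.empty())); // Optional.empty
    }
}
```

Note the grandparent's point shows up here too: `Optional.of(2)` is noisier than Haskell's `Just 2`, and the lifting is manual rather than built in.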
Is he really suggesting to replace all attempts to dereference null with no-ops? Weird.
I think there are two ways to deal with this. Either it is known at compile time that a particular variable must never be null, in which case the compiler should catch violations, like it does with C++ references.
Or the decision is left until runtime, and then the only way to deal with it is to specify or compute a default value, which is very much complicated by the special-cased 'this' parameter in OO languages.
It's kind of funny that basically everything Java removed from C++ turns out to be important for a statically typed language. Java should have been a C++ virtual machine with garbage collection.
> Is he really suggesting to replace all attempts to dereference null with no-ops? Weird.
It is. Some languages work that way though, Objective-C for instance: `nil` is a "message sink", any message can be sent to `nil` without generating any error, it simply returns nil.
The problem with Objective-C is that, after creating this wonderful message-eating nil, they added NSNull, which brings the problem back. And since it's a superset of C, you get NULL for free too.
I don't think you understand what NSNull actually is. It's designed to behave as an empty placeholder object in collections that don't allow nil, nothing more.
C's NULL isn't an issue either: nil == NULL == 0, and it isn't encountered in any of the contexts described by the article.
So, given a collection, how do you identify whether an element is an instance of NSNull? You do what is essentially a type check. To me, this negates the purpose of the message-eating nil. Not everywhere, as you seem to be reading, but wherever NSNull is used, which, actually, can be anywhere, since we use lots of collections.
Strange. Nulls, as the author describes them, are incredibly useful. Being able to test for existence and branch accordingly saves us a ton of extra ".IsNull" properties that we'd otherwise need to tack on to every class so that we could tell whether we were looking at a real instance, or just a dummy that the compiler initialized without asking.
His 'solution' to the Null Pointer Exception trades that easy to identify and fix runtime error for an entire class of subtle logic errors that are pretty much guaranteed to arise when you have a bunch of improperly-initialized-yet-alive objects sprouting up every time you declare them.
No thank you.
My suggestion is to use an IDE that notices potential Null Pointer Exceptions for you and lets you know about them. VS.NET does this, as does every Java IDE worth its salt. This is simply not an issue if you use modern tools. Please don't go adding language features to "fix" a problem that went away ten years ago.
Part of the problem is that NULL is used to convey meaning. e.g. "login" returns NULL if the user isn't logged in.
So NULL can mean dozens of things. You can return something more meaningful (e.g. "Guest", or even "NotApplicable"), but that doesn't get rid of the checks... It can result in more readable code, though.
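The "Guest" idea is essentially the Null Object pattern. A minimal Java sketch (the `User`/`Session` names are made up for illustration): instead of `login` returning null, the session always holds a real object with safe defaults, so callers never test for null:

```java
// Null Object pattern: a GuestUser stands in for "not logged in",
// so callers never need a null check.
interface User {
    String name();
    boolean canEdit();
}

class LoggedInUser implements User {
    private final String name;
    LoggedInUser(String name) { this.name = name; }
    public String name() { return name; }
    public boolean canEdit() { return true; }
}

class GuestUser implements User {
    public String name() { return "Guest"; }
    public boolean canEdit() { return false; } // safe default instead of an NPE
}

class Session {
    private User current = new GuestUser(); // never null
    User login(String name) { current = new LoggedInUser(name); return current; }
    User currentUser() { return current; }
}
```

As the parent says, this doesn't remove the decision point — `canEdit()` still encodes it — but it moves the check into the type instead of sprinkling `if (user != null)` everywhere.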
I almost never have null pointer exceptions. I adhere fairly strongly to Design By Contract, which means that I check for boundary conditions (null being the most common) at method and constructor entry points and throw exceptions there and then. This means nulls are caught early rather than late (as parameters, rather than as state) and so the cause can be found right then and there.
NPE as a "problem" is vastly overrated. Just apply DBC wisely.
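In Java terms, that DBC style looks something like this (class names are made up; `Objects.requireNonNull` is standard library, Java 7+). The point is that a null fails at the boundary, as a parameter with a clear message, rather than later as corrupted state:

```java
import java.util.Objects;

class Account {
    private final String owner;

    Account(String owner) {
        // Precondition checked at the entry point: fails here, immediately.
        this.owner = Objects.requireNonNull(owner, "owner must not be null");
    }

    void transferTo(Account target, long amount) {
        Objects.requireNonNull(target, "target must not be null");
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive: " + amount);
        }
        // ... actual transfer logic elided ...
    }
}
```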
I get that, and that's how I handle it too. However, you now have an IllegalArgumentException to handle for the null pointer, when IMHO null pointers should be something out of the ordinary and therefore not possible in MOST of the code.
Things that are not possible should be asserted to be so. Leaving an assumption in the mind of the programmer only is dangerous and almost worthless. Any assumption reasonably expressible in code should be expressed in code. This goes beyond null checks, it counts for any state.
For instance, in a tool that uses graphs as data structures, some transformations might impose that your graph at input and output is acyclic. Don't put this in comments, assert it in code!
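A sketch of what "assert it in code" might look like for that graph example, using a standard DFS cycle check (the adjacency-list representation and names are just for illustration):

```java
import java.util.*;

class GraphChecks {
    // DFS-based acyclicity check over an adjacency-list graph.
    static boolean isAcyclic(Map<Integer, List<Integer>> adj) {
        Set<Integer> done = new HashSet<>();
        Set<Integer> onPath = new HashSet<>();
        for (Integer v : adj.keySet()) {
            if (hasCycle(v, adj, done, onPath)) return false;
        }
        return true;
    }

    private static boolean hasCycle(Integer v, Map<Integer, List<Integer>> adj,
                                    Set<Integer> done, Set<Integer> onPath) {
        if (onPath.contains(v)) return true;   // back edge: cycle found
        if (done.contains(v)) return false;    // already fully explored
        onPath.add(v);
        for (Integer w : adj.getOrDefault(v, Collections.emptyList())) {
            if (hasCycle(w, adj, done, onPath)) return true;
        }
        onPath.remove(v);
        done.add(v);
        return false;
    }

    static void transform(Map<Integer, List<Integer>> graph) {
        // The contract lives in code, not in a comment:
        assert isAcyclic(graph) : "input graph must be acyclic";
        // ... transformation elided ...
    }
}
```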
The exception is an exception in the strictest sense: something that should not happen. I consider IllegalArgumentException to be an exception you should never catch (except in unit tests, perhaps), but instead fix the root cause. Unlike IOException for instance.
I fail to see the problem: an NPE is thrown if the code tries to access an object but there's no object. Using a standard behavior here would conceal that there's a flaw in the program logic.
Agreed. The flaw in program logic is that there is no object. Very often, it makes no sense to not pass an object somewhere where an object should be passed. Having null pointers makes it easy to live with this logic bug, when it should never have happened in the first place.
Nulls are often used to signify that an object can't be returned for some reason, often not actually worth an exception. (Hence null is actually a valid result).
Sadly, that's not true. They do know Lisp, which has much better ways of dealing with nil. Lisp methods can even declare a nil handler function. Other languages allow defining special functions to handle null; it's not rocket science, but it would complicate Java a lot to fix it now.
> They do know lisp which has much better ways of dealing with nil.
Lisp being mostly a family of dynamically typed languages, their solution to the issue doesn't really apply to Java. For statically typed languages, the solution adopted by the ML family (as well as Haskell) is far superior: encode the notion of "optional" values in an abstract type.
The problem is that unless your language has support baked in (a la Haskell) the syntax has way too much overhead.
If you haven't played with Haskell, I thought this was a pretty good explanation: http://paulbarry.com/articles/2009/07/17/emulating-haskells-...
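For those who'd rather see the idea in Java terms: the core trick behind Maybe chaining can be approximated with `Optional.flatMap` (again, Java 8+, which postdates this thread). Each step may fail, and `flatMap` short-circuits the rest, like `>>=` in Haskell's Maybe monad. The `Employee`/`Department` chain here is invented for illustration, not taken from the linked article:

```java
import java.util.Optional;

class MaybeChain {
    static class Department {
        private final String head;
        Department(String head) { this.head = head; }
        Optional<String> head() { return Optional.ofNullable(head); }
    }

    static class Employee {
        private final Department dept;
        Employee(Department dept) { this.dept = dept; }
        Optional<Department> department() { return Optional.ofNullable(dept); }
    }

    // Instead of: if (e != null && e.department() != null && ...) { ... }
    // each flatMap step short-circuits to empty if anything is missing.
    static String headName(Employee e) {
        return Optional.ofNullable(e)
                .flatMap(Employee::department)
                .flatMap(Department::head)
                .orElse("unknown");
    }
}
```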