“I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.”
However, I do see the value of NULL in a database context even though it makes database interfaces harder -- especially in Go, where the standard marshaling paradigm means anything NULLable has to be a reference and thus have a nil-check every time it's used.
The conceptual match is so awkward that when I write anything database-related in Go, if I have the option I also make everything in the database NOT NULL, even though that warps the schema design.
Ah, NULL. When I think about the pain it causes, balanced against its utility, I sometimes wish I'd never heard of it.
And I'm sure learned people said the same thing about Zero, once upon a time.
Considering Rob Pike is a huge fan of Tony Hoare, and the inevitable mountain of pain caused by null references, that's kinda surprising.
But I guess "Worse is Better" in the sense of "simplicity of implementation is valued over all else" is still the guiding principle of Go. As Tony Hoare himself said: "But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement". It seems like we're doomed to repeating this mistake again and again.
The way it works in Rust is that you can't have a reference without the thing you're referring to. There isn't a default value and because of that it's an error to try to have a reference without something to refer to.
The way Rust works with this is to have a type called Option (the Haskell equivalent, Maybe, is a monad) that lets you say a value can be None or Some, so you have to explicitly handle the None case whenever you use it (by panicking, matching, or some other method).
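To make that concrete, here's a minimal Rust sketch (the lookup function and its names are made up for illustration) showing the explicit handling described above:

```rust
// A lookup that may fail returns Option<T> rather than a nullable pointer.
// (Hypothetical function, just to demonstrate the handling styles.)
fn find_user(id: u32) -> Option<String> {
    if id == 1 {
        Some(String::from("alice"))
    } else {
        None
    }
}

fn main() {
    // 1) Pattern match: the compiler forces you to cover the None case.
    match find_user(1) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"),
    }

    // 2) Provide a fallback instead of matching.
    let name = find_user(42).unwrap_or_else(|| String::from("guest"));
    println!("{}", name);

    // 3) Or opt in to a panic explicitly with expect/unwrap.
    let must_exist = find_user(1).expect("user 1 should exist");
    println!("{}", must_exist);
}
```

The point is that every one of these options is visible in the code: there is no silent nil that slips through.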
data Maybe a = Just a | Nothing
  deriving (Eq, Ord)

return :: a -> Maybe a
return x = Just x

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
(>>=) m g = case m of
  Nothing -> Nothing
  Just x  -> g x
Yes. Ish. In languages that have a null/nil/None, almost anything passed to any function could be null. Every function you write could have null passed into it, and you either have to do checks on everything or trust that other people will never pass those values in.
That's pretty OK until something unexpectedly receives one and hands it off to another function that doesn't deal with it gracefully.
In Haskell (and I assume others) this can only happen to Maybe types; they're the only ones that can have the value Nothing. So the compiler knows which things can and cannot be Nothing, and can therefore throw an error at compile time if you try to pass a Maybe String (something that is either a Just String, from which you can get a String, or a Nothing) into a function which only understands how to process a String.
This feels like it might be restrictive, but there are generic functions you can use to take a function which only understands how to process a String and turn it into one that handles a Maybe String.
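The Rust analogue of that generic lifting is Option::map (Haskell's fmap). A small sketch with a made-up function:

```rust
// A function that only knows how to process a String.
fn shout(s: String) -> String {
    s.to_uppercase()
}

fn main() {
    let present: Option<String> = Some(String::from("hello"));
    let absent: Option<String> = None;

    // `map` lifts `shout` so it can handle an Option<String>:
    // it applies the function inside Some, and passes None through untouched.
    assert_eq!(present.map(shout), Some(String::from("HELLO")));
    assert_eq!(absent.map(shout), None);
    println!("ok");
}
```

So a function written for plain values can be reused on maybe-absent values without rewriting it, and without it ever seeing a null.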
It's quite a nice system, even though I get flashbacks to Java complaining that I haven't dealt with all the exceptions.
Haskell doesn't completely stop you from shooting yourself in the foot though, you can still ask for an item from an empty list and break things. There are interesting approaches to problems like that however: http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/about/
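For comparison, Rust's standard library pushes the empty-list case into the type as well: the non-panicking accessors return Option, so "head of an empty list" becomes a value you must handle rather than a runtime error (a sketch):

```rust
fn main() {
    let empty: Vec<i32> = Vec::new();
    let nums = vec![10, 20, 30];

    // `first` and `get` return Option instead of crashing,
    // so the empty-list failure mode is visible in the type.
    assert_eq!(empty.first(), None);
    assert_eq!(nums.first(), Some(&10));
    assert_eq!(nums.get(99), None); // out of bounds is None, not a panic

    // Indexing with [] keeps the panicking behaviour if you really want it.
    println!("{}", nums[0]);
}
```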
Finally, it's worth pointing out that Maybe and Nothing and Just and all that aren't built into the language, they're defined using the code that gbacon wrote. So in a way, Haskell doesn't have a Maybe type, people have written types that work really well for this kind of problem and everyone uses it because it's so useful.
[disclaimer: I've probably written 'function', 'type', 'value' and other terms with quite specific meanings in a very general way. Apologies if this hurts the understanding, and I would appreciate corrections if that's the case, but just assume I'm not being too precise if things don't make sense]
Yep, and I believe it's the same with Rust: Option has been put into the standard library, but it's not some special thing that only the compiler can make.
And like you said, the whole idea is that the normal case is that you can't have null values. If you need them for something you declare that need explicitly and have to handle it explicitly or it's a compile error. That way it can be statically checked that you've handled things.
Swift's Optional is a library type: https://github.com/apple/swift/blob/master/stdlib/public/cor...
Though the compiler is aware of it and it does have language-level support (nil, !, ?, if let)
It is in fact quite different. String/String? is not an optional annotation requiring the use of a linter and integration within an entire ecosystem which mostly doesn't use or care for it, it's a core part of the language's type system.
> If you don't use this in Java it is just the same as having String? as your type everywhere.
Except using String? everywhere is less convenient than using String everywhere, whereas not using @Nullable and a linter is much easier than doing so.
IIRC this was pre-1.0 Rust, so there's probably a better way to do it now.
1) assign default value to variable
2) check if pointer parameter is nil
3) if pointer is not nil assign pointer value to variable
4) do work with variable
Am I overlooking a simpler way to do this? I feel like lib.LibraryFunction(nil, nil) is nicer than lib.LibraryFunction(lib.LibraryDefaultOne, lib.LibraryDefaultTwo), though admittedly the explicitness of the second option is appealing.
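For what it's worth, in an Option-based language the four-step dance collapses into one expression per parameter. A hedged Rust sketch (the library function and its parameters are invented for illustration):

```rust
// Hypothetical library function taking two optional settings.
// Callers pass None to get the default, mirroring lib.LibraryFunction(nil, nil).
fn library_function(timeout_secs: Option<u64>, retries: Option<u32>) -> String {
    // Steps 1-3 (default value, nil check, conditional assignment)
    // collapse into a single unwrap_or per parameter.
    let timeout = timeout_secs.unwrap_or(30);
    let retries = retries.unwrap_or(3);
    // Step 4: do work with the resolved values.
    format!("timeout={}s retries={}", timeout, retries)
}

fn main() {
    assert_eq!(library_function(None, None), "timeout=30s retries=3");
    assert_eq!(library_function(Some(5), None), "timeout=5s retries=3");
    println!("ok");
}
```

You keep the convenience of passing "nothing" for defaults, but the optionality is declared in the signature instead of hiding behind a pointer.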
Disclaimer: Both libraries are in the pre-alpha stages and are extremely light on tests and/or broken.
While you can do pointer arithmetic there will be null pointers, and while memory is addressable there will be pointer arithmetic.
And while I'm a programmer, I want access to all the functionality the CPU offers.
There is no reason why the concept of a null pointer has to exist. If there were no special null pointers it would be perfectly okay for the operating system to actually allocate memory at 0x00000000. With unrestricted pointer arithmetic you can of course make invalid pointers, but a reasonable restriction is to only allow `pointer+integer -> pointer` and `pointer-pointer -> integer`. You can't make a null pointer with just those.
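Rust's raw-pointer API happens to expose roughly that restricted algebra: `add` is pointer + integer -> pointer and `offset_from` is pointer - pointer -> integer, and neither can conjure a null pointer out of valid inputs. A small sketch:

```rust
fn main() {
    let xs = [10i32, 20, 30, 40];
    let base: *const i32 = xs.as_ptr();

    unsafe {
        // pointer + integer -> pointer
        let third = base.add(2);
        assert_eq!(*third, 30);

        // pointer - pointer -> integer (in units of elements)
        let distance = third.offset_from(base);
        assert_eq!(distance, 2);
    }
    println!("ok");
}
```

Starting from a valid pointer, the only operations available move you relative to it or measure a distance; you never materialise address 0 unless you construct it by some other means.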
It's generally agreed that good code always checks whether function inputs or the results of function calls are null (if they are nullable). Why not make a missing check a compile-time error, rather than hoping that the fallible programmer is vigilant and delaying potential crashes until runtime?
Go is extremely pedantic about a number of things like unused library imports, trailing commas, etc. which have absolutely no bearing on the actual compiled code, but it intentionally leaves things like this up to programmers who have shown that they can't be trusted to deal with it properly.
Having to manually deal with null is much more complicated than having an Option/Optional type in my opinion. We've also seen that it's far less safe.
Trailing commas, agreed. But unused library imports have the (probably unintended) side effect that their `func init()` will execute. Which is also why there is an idiomatic way to import a package without giving it an identifier (the blank identifier, `import _ "pkg"`), just to get this side effect.
The lesson/mitigation was to add a NOT NULL constraint on top of a DEFAULT 0.
NULL in SQL is a very different and largely unrelated concept. Though probably a mistake just as bad - not the concept itself even, but the name. Why did they ever think that "NULL" was a smart name to give to an unknown value? For developers, by the time SQL was a thing, "null" was already a fairly established way to describe null pointers/references. For non-developers, whom SQL presumably targeted, "null" just means zero, which is emphatically not the same thing as "unknown". They really should have called it "UNKNOWN" - then its semantics would make sense, and people wouldn't use it to denote a value that is missing (but known).