
NULL: The worst mistake of computer science? (2015) - BerislavLopac
https://www.lucidchart.com/techblog/2015/08/31/the-worst-mistake-of-computer-science/
======
porpoisely
NULL can mean and be different things in different domains of computer
science. NULL in the database world isn't the same thing in the programming
world. In the programming world, null is a result of the system architecture,
systems programming, etc. In SQL, NULL is a result "lack of data". There have
been debates on whether there should be different types of NULL. A NULL type
for "data that is available but we don't have it yet" \- like car make and
model for a car owner. A NULL type for "data that does not apply" \- like car
make and model for an adult who doesn't own a car. A NULL type for "data that
never applies" \- like car make and model for a child. Then you get into the
philosophical debate on whether a NULL can ever equal a NULL. Does it even
make sense to think of NULL in terms of equality? How can an unknown
entity ever equal another unknown entity? But what if you are just asking
"are they both unknown"? Then you probably can think of two NULLs as "equal".
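SQL actually exposes both readings: `=` treats NULL as unknown, while the `IS` comparison asks exactly "are they both unknown". A quick sketch using Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NULL = NULL evaluates to NULL/unknown (surfaced as None in Python),
# but NULL IS NULL asks "are both unknown?" and answers yes (1).
eq, both_unknown = con.execute("SELECT NULL = NULL, NULL IS NULL").fetchone()
print(eq, both_unknown)  # None 1
```

Some dialects spell the second comparison `IS NOT DISTINCT FROM` instead of `IS`.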

In higher level programming, the consensus seems to be the less nulls the
better. Which is why languages like C++, C#, etc are introducing Option-like
syntax ( mostly to accommodate the database world and their NULLs ).

NULL exists to solve particular problems in computer science. It can also
cause a lot of problems. You can argue it's the best solution and worst
mistake depending on the situation.

~~~
davemp
This is hard to reconcile with type theory for me.

NULL, to me, implies an uninhabited type, i.e. there can never be a value
with a NULL type. Using null for "data that isn't there, doesn't apply, etc."
seems like an abuse of the type system. I see no reason that the former needs
to be supported at the type level. These properties are just responses to
queries, not some mystical, uninhabitable oblivion. Unnecessary type features
just make verification and learning a language much more difficult.

~~~
giornogiovanna
NULL isn't the uninhabited type, that's the bottom type. NULL is a _value_
that inhabits _every_ type.

~~~
yxhuvud
> NULL is a value that inhabits every type.

Not in all type systems. Particularly, type systems that have union types may
choose to simply create a separate type for NULL. There are also variants that
have separate NULL types for different base types, which also invalidate your
claim.

~~~
ArchTypical
> Not in all type systems

I think this is inaccurate. We are talking about computer science, which
imposes an important constraint on general type theory. A type system is
different from how you interact with it, so dispensing with language-specific
symbolic representation further normalizes the discussion.

Fundamentally, a (computer science) type is a representation of, for the most
part, binary data. That representation has to inhabit some part of bounded
memory. When that memory is initialized (empty), it's some form of NULL, for
lack of a better term. It exists for every type system in computer science.

> There are also variants that have separate NULL types for different base
> types, which also invalidate your claim

That's not the same thing. Different NULL types make sense for different sized
discrete (fixed bounds) memory allocation. A unicode character has a fixed
size allocation, while a string might be unbounded allocation (it grows in
some fashion, as needed).

Edit: Kneejerk downvoting, classy.

~~~
Sharlin
This does not make sense. Null is not the same as zero and neither is the same
as an all-zeros bit pattern. A memory word interpreted as an integer has no
meaningful null value. A memory word interpreted as a pointer may or may not
have a special bit pattern (which may or may not be the same as integer zero)
that represents a pointer to nowhere; it all depends on language semantics.
The address 0x0 can be perfectly valid on some architectures. Even though in C
the literal 0 denotes a null pointer constant, it does _not_ mean that the
value of a null pointer is literally zero.

~~~
ArchTypical
> A memory word interpreted as an integer has no meaningful null value.

What a type is, depends on what the runtime operates on. You can make a
runtime that just grabs random bits of data as a type and say "that's an
integer" but it's not a useful construct/example. A runtime keeps track of
types in some way external to the data itself. So I'll disagree that an
all-zeros pattern is not the same as a null, because it's a common way to
initialize the data identified with a type (like in a pointer table). It's
not 1:1, but it's common. There's not always a formalized name, but it (an
uninitialized state) always exists as part of the type system (when not
reusing existing memory allocation, which is an initialized state). Always.

------
bunderbunder
You know who works on a platform with NULL but doesn't have quite so many
problems with it? DBAs.

There's some need to draw a distinction between the basic idea of NULL, and
the way that NULL has been implemented in most high-level programming
languages.

In most RDBMSes, values can't be null unless you say they are. Sometimes
explicitly, as in table definitions, sometimes implicitly, when you select a
JOIN type. Either way, though, the fact that the developer is in control of
when it can and cannot happen means that it always has a knowable meaning. (Or
should, anyway.)
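That opt-in behavior can be seen in miniature with Python's sqlite3: a NOT NULL column rejects the null, a plain column accepts it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Nullability is declared per column in the table definition.
con.execute("CREATE TABLE t (a TEXT NOT NULL, b TEXT)")

con.execute("INSERT INTO t VALUES ('x', NULL)")      # b may be NULL: accepted
try:
    con.execute("INSERT INTO t VALUES (NULL, 'y')")  # a may not: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```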

The problem with many programming languages is, you're given it whether you
want it or not. In a low-level language like C, that's reasonable, because it
takes a sensible approach to how it works: Only pointers can be null, and all
pointers are nullable for obvious (especially in the 1970s) reasons.

More generally, I'm not going to fault languages from that era for trying it
out, because this stuff was new, and things were still being felt out. So I
don't really fault Tony Hoare for giving null references a try in ALGOL W.

What seems much more bothersome is high level languages like Java and C# cargo
culting this behavior. They _could_ have followed the lead from languages like
SQL and let the programmer be in control. They should have. They already throw
exceptions when a memory allocation fails, and they allow inline variable
initialization, and declaring variables at the point of usage, and composite
data types have constructors, so they lack all of (early) C's reasons why
ubiquitous nulls were a good idea. They could have, I think quite easily, made
nullability optional. At which point it'd have basically the same semantics as
optional types from functional programming, so I doubt we'd be worrying about
it anymore.

But they didn't.

~~~
nayuki
NULL in SQL really isn't great. For one, nullable table columns are a bad
default, and you have to explicitly write "NOT NULL" to avoid this
behavior. I'd say that 90% of the time I want not-null table columns, and only
10% of the time do I want a nullable column.

Secondly, NULL has weird arithmetic. It turns out that NULL=NULL is false, and
NULL<>NULL is also false. (This is unlike C/Java/Python/etc. by the way.)

Thirdly, even if you design all your tables to have NOT NULL on all columns,
your queries can still synthesize NULL values in the results. For example,
LEFT JOIN, RIGHT JOIN, and FULL OUTER JOIN (but not INNER JOIN) can introduce
them, as can computing max(column) on a table with zero rows.
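All three behaviors can be checked with Python's sqlite3 (strictly, the comparisons yield NULL, i.e. unknown, which a WHERE clause then treats like false):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER NOT NULL)")  # empty table, no NULLs anywhere

# Both comparisons yield NULL ("unknown"), surfaced as None in Python.
cmp = con.execute("SELECT NULL = NULL, NULL <> NULL").fetchone()
# Aggregating zero rows synthesizes a NULL despite the NOT NULL column.
mx = con.execute("SELECT max(x) FROM t").fetchone()
print(cmp, mx)  # (None, None) (None,)
```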

~~~
dpcx
I can get behind your first statement. Having NULLable as a default on columns
is "probably" a bad idea.

I'm not so sure I can agree with the other two. NULL<>NULL (and NULL=NULL)
both return false for a very simple reason: truly missing data _can't_ be
equal to anything, including missing data... Because it's missing. You cannot
say with certainty that one missing value is or is not equal to another.

For the third point... What should max(column) return when there's no data?
You're telling the engine "give me the maximum value of something that doesn't
exist". That is, in my experience, "missing data."

~~~
smadge
For example, if it were the case that NULL = NULL, really counterintuitive
stuff would happen on joins because a null cell would match with every other
null cell you are joining on:

    
    
            person
       name      home_address
       ---------------------------
       "Alice"   NULL
       "Bob"     "123 Jump Street"
    
                    letter
       return_address     description
       ----------------------------------
       NULL               "Ransom Letter"
       NULL               "Spy Document"
       "123 Jump Street"  "Hello, from Bob"
       
    

Then

    
    
        SELECT name, description FROM person INNER JOIN letter ON home_address = return_address
    

would return

    
    
        name     description
        ---------------------------
        "Alice"  "Ransom Letter"
        "Alice"  "Spy Document"
        "Bob"    "Hello, from Bob"
    

So now Alice is associated with a bunch of letters she didn't necessarily
write because she doesn't have a home address.
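And because NULL = NULL is _not_ treated as true, real engines do the opposite of the hypothetical above: the inner join drops Alice entirely. Verified with Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (name TEXT, home_address TEXT);
    INSERT INTO person VALUES ('Alice', NULL), ('Bob', '123 Jump Street');
    CREATE TABLE letter (return_address TEXT, description TEXT);
    INSERT INTO letter VALUES
        (NULL, 'Ransom Letter'),
        (NULL, 'Spy Document'),
        ('123 Jump Street', 'Hello, from Bob');
""")
rows = con.execute("""
    SELECT name, description FROM person
    JOIN letter ON home_address = return_address
""").fetchall()
print(rows)  # [('Bob', 'Hello, from Bob')] -- NULLs never match, so Alice is absent
```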

------
pdkl95
> NULL is a value that is not a value. And that’s a problem.

The problem isn't NULL, it's languages not enforcing the necessary checks for
the "no data" condition. Option can still be empty ("None" in Rust);
wrapping NULL in a struct doesn't by itself provide any safety. The safety of
Option wrapper types comes from the other language features (like Rust's
"match") and a stricter compiler that forces the programmer to write the
NULL check.

NULL would be fine if C _required_ you to write this:

    
    
        foo_t *maybe_get_foo(/*...*/) {
            if (/*foo_is_available*/) {
                return foo;
            } else {
                return NULL;
            }
        }
    
        foo_t *f = maybe_get_foo();
        if (!f) { /*...*/ }   // REQUIRED or compile error
        do_something(f->bar); // only allowed after NULL check
    

Obviously implementing that requirement would be difficult in C. Languages
like Rust were designed with enforcement features (match + None, much stronger
type/borrow checking), but still let you have "a value that is not a value".

~~~
masklinn
> The problem isn't NULL, it's languages not enforcing the necessary checks
> for the "no data" condition.

Talking about "NULL" pretty much implies that. When Tony Hoare talks about
null references, it's about every reference being nullable in languages like
Java or C#, not about the ability to conceptually wrap/opt non-nullable
references in a nullability thingie.

~~~
danesparza
Psst: I think the thingie you're referring to is called a 'monad':
[https://en.wikipedia.org/wiki/Monad_(functional_programming)](https://en.wikipedia.org/wiki/Monad_\(functional_programming\))

~~~
bfrydl
It isn't. There exists a common monad that solves this problem but a wrapper
type like Option or Maybe need not be a monad. For example, `Nullable<T>` in
C# is not a monad.
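Right: what makes an option type a monad is a lawful `bind` (a.k.a. flatMap) on top of the wrapper. A minimal sketch in Python, using `None` itself as the empty case (which is exactly why it's only a sketch: a real Maybe wraps values so that `None` stays representable; all names here are made up):

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def bind(value: Optional[T], f: Callable[[T], Optional[U]]) -> Optional[U]:
    """Chain a fallible computation, short-circuiting on the empty case."""
    return None if value is None else f(value)

def parse_int(s: str) -> Optional[int]:
    return int(s) if s.isdigit() else None

def half(n: int) -> Optional[int]:
    return n // 2 if n % 2 == 0 else None

print(bind(bind("42", parse_int), half))  # 21
print(bind(bind("41", parse_int), half))  # None -- short-circuits on odd input
```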

~~~
danesparza
Aha! You're right. I misremembered this excellent blog series from Eric
Lippert (a member of the c# design team):
[https://ericlippert.com/2013/02/25/monads-part-two/](https://ericlippert.com/2013/02/25/monads-part-two/)

------
captainmuon
I've made my peace with null. Null is basically just an implicit

    
    
        assert(valid(x))
    

before every time you call a method on x. Similarly, I think of exceptions as
explicit "crash-unless-caught" commands.

If you write your program with the "blow up early" mentality anyway, or use
static checking tools and a bit of discipline, I've found that null looses
it's terror.

~~~
gre
“Looses its terror” here means the opposite of “loses its terror”.

~~~
piyh
Cry havoc and let loose the null pointer exceptions

------
doubletgl
Isn't there an inherent need in programming to express an explicit "nothing"
value? Coming from Python and JS, I never found None/null to be much of a
problem. In fact, I like the distinction between null and undefined in JS.
Using null allows you to distinguish a deliberate "nothing" from an
accidental undefined.
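Python has the opposite gap: there is only `None`, so the usual trick for recovering the null-vs-undefined distinction is a private sentinel. A sketch (the names are hypothetical):

```python
# A private sentinel tells "caller passed None deliberately" apart from
# "caller passed nothing at all".
_MISSING = object()

def update_email(user: dict, email=_MISSING) -> dict:
    if email is _MISSING:
        return user                    # field not mentioned: leave as-is
    return dict(user, email=email)     # None is a real value: clear the field

u = {"name": "Ada", "email": "ada@example.com"}
print(update_email(u))                 # unchanged
print(update_email(u, email=None))     # {'name': 'Ada', 'email': None}
```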

~~~
enriquto
> Isn't there an inherent need in programming to express an explicit "nothing"
> value?

In numerical calculus: yes, most certainly. What do you expect the result of
log(-1) to be? The alternative is to use specially tagged particular numbers
as "no-data", and pray that they do not appear naturally as the result of
computations.
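IEEE 754 floats actually ship with such a tagged number, NaN, and it behaves much like SQL's NULL. A quick Python check (note Python's `math.log` chooses to raise rather than return NaN, so the error case stays explicit):

```python
import math

nan = float("nan")  # IEEE 754's built-in "no-data" value
print(nan == nan)           # False: NaN never equals anything, itself included
print(math.isnan(nan + 1))  # True: it propagates through arithmetic

try:
    math.log(-1)            # raises instead of silently returning NaN
except ValueError as err:
    print("log(-1) ->", err)
```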

~~~
kazagistar
Technically, you could force the value to be one constrained to a valid
range, rather than augmenting the domain, but this is a lot of work and maybe
not worth it for practical use.

------
jmfayard
Progress: it's now a solved problem in modern languages like Rust, Swift or
Kotlin. See for example:
[https://kotlinlang.org/docs/reference/null-safety.html](https://kotlinlang.org/docs/reference/null-safety.html)

~~~
masklinn
I mean it's been a solved problem since the 70s if not earlier; the problem
has always been uptake. And _that_ is not solved. E.g. C++ only recently
introduced std::optional, which is not a type-safe version of a null pointer
but is instead a null-pointer-style wrapper for value types.

------
prmph
It is not possible to have a NULL type that works for all situations and has
stable semantics.

The issue is, NULL should be a concept, not a value. I see no problem with
using sentinel values, so long as they are well designed, and such good design
comes with skill and experience, just as with all other aspects of
architecture. The quest to have a single value that can be used for all the
various possible meanings of NULL, to me, is the root of the problem.

~~~
lisper
> The quest to have a single value that can be used for all the various
> possible meanings of NULL, to me, is the root of the problem.

Exactly right. In particular, the conflation of nulls to indicate both error
and non-error conditions (e.g. out-of-memory vs end-of-linked list) makes it
impossible to distinguish errors from non-errors in many situations, and that
is obviously bad.

Ideally you want nulls/sentinels that carry information about where, when, and
why they were generated. You want separate nulls for numerical
overflow/underflow, end-of-linked-list, out-of-memory, timeout, suppressed
error/exception, unspecified/unknown value (preferably a separate one for each
type) yada yada yada.
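One way to sketch such informative sentinels is distinct singleton objects that carry their own reason and are compared by identity (all names below are hypothetical):

```python
class Sentinel:
    """A null-like marker that remembers why it exists."""
    def __init__(self, reason: str) -> None:
        self.reason = reason
    def __repr__(self) -> str:
        return f"<null: {self.reason}>"

END_OF_LIST = Sentinel("end of linked list")
TIMEOUT = Sentinel("operation timed out")
UNKNOWN = Sentinel("value unspecified")

def next_node(node: dict):
    return node.get("next", END_OF_LIST)

tail = {"value": 3}
result = next_node(tail)
if result is END_OF_LIST:   # each sentinel is distinguishable by identity
    print("walked off the list:", result)
```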

------
nayuki
I agree with pretty much everything in the article. However, I would give
Java a lower score because no one uses java.lang.Optional in practice, and
there are too many legacy libraries and too much application code that cannot
or will not be changed. Also, the @NotNull annotation isn't in Java SE; it is
made available through various third-party libraries.

A language with a null value can dramatically simplify things for a language
designer, though. In the case of Java, we know that every array of objects is
initialized to null references. Thereafter, we can construct and assign
objects to each slot of the array. Otherwise we run into the issues that C++
faces: when we construct the array, the fields of every object are
uninitialized, so the objects are potentially dangerous to read or destruct,
and need the special syntax of placement new to be initialized. The trick to
avoiding null here is
to avoid pre-allocating an array, and instead to grow a vector one element at
a time. The C++ std::vector<E> is very accessible and performant, whereas Java
java.util.List<E> is very clunky to use compared to native arrays.
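The two strategies side by side, sketched with Python lists (null-prefilled slots vs a vector grown one constructed element at a time):

```python
n = 4

# Java-style: pre-allocate, every slot starts as a null reference.
slots = [None] * n
for i in range(n):
    slots[i] = {"id": i}      # construct and assign one object per slot

# Vector-style: never hold a null, append fully-constructed elements.
vec = []
for i in range(n):
    vec.append({"id": i})

print(slots == vec, None not in vec)  # True True
```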

Another case that gets simplified is object construction. When the memory for
an object has been allocated but before the user's constructor code has run,
what values should the fields have, assuming that they are observable? In a
Java constructor, all fields are initially set to null/0, then you simply
assign values to fields in the body code of the constructor. In C++
constructors, however, you should initialize fields in the initializer list,
and then you still have the option to initialize fields in the body.

I still think pervasive null values are bad for the programmer (rather than
the language designer). Now that I have preliminary experience in Rust, I see
that its design is much safer and still practical, so I think this language
shows the way forward.

~~~
arcticbull
Re: Array Initialization

One approach you can take is the Rusty "hang up a technical difficulties sign"
(unsafe) while you mess around with potentially uninitialized memory, which is
valid, but places the burden on you as the library writer. Another would be to
initialize your array of pointers as an array of Option<Box<T>> pre-filled
with None. Because a Box pointer can never be null, Option<Box<T>> can be
optimized into a plain pointer (which I believe is what Rust does) so
that None == null at the machine level, while the language exposes a safe
interface on top. [1]

Re: Object Construction

With object construction in Rust you can either (a) create all fields in
advance and specify them at construction [best], (b) use mem::uninitialized()
[bad], or (c) create a builder which has optional fields for everything and
yields a constructed object via (a) later [most work].

[1] [https://doc.rust-lang.org/std/option/](https://doc.rust-lang.org/std/option/)

------
beardyw
That 'nothing' is inconvenient applies the same in mathematics with zero. Why
do we have to have a number we can't divide by? What is 0 to the power of 0?
It's a special case we always need to worry about. But its inclusion in the
number system is not in question.

And I remember my distress using a financial package being told that my unused
zero value still MUST have a currency! My pocket is empty, how can it have a
currency? If a farmer's field is empty, must I say what it is empty of: cows,
sheep, aardvarks?

I think worrying about inconsistency here is worrying about the inconsistency
of the world we live in. 'Nothing' is a mysterious thing we need to accept and
respect.

~~~
billpg
I recall having a friendly argument with a friend who insisted that 30°C was
exactly twice as hot as 15°C.

~~~
beardyw
It looks as if even in temperature zero is a bit problematic.
[https://en.m.wikipedia.org/wiki/Zero-point_energy](https://en.m.wikipedia.org/wiki/Zero-point_energy)

------
gambler
Rich Hickey's "Maybe Not" should be watched by anyone who thinks
nulls/nils/undefineds are okay. It should also be watched by anyone who thinks
that Optional/Maybe/Nullable are good enough:

[https://www.youtube.com/watch?v=YR5WdGrpoug](https://www.youtube.com/watch?v=YR5WdGrpoug)

------
userbinator
To someone who has been using Asm and C for decades, these arguments just make
no sense. Reading this article reminds me of the arguments against pointers,
another thing that's frequently criticised by those who don't actually
understand how computers work and try to "solve" problems by merely slathering
everything in thicker and thicker layers of leaky abstraction. It's not far
from "goto considered harmful" either.

 _any reference can be null, and calling a method on null produces a
NullPointerException._

...which immediately tells you to go fix the code.

 _There are many times when it doesn’t make sense to have a null.
Unfortunately, if the language permits anything to be null, well, anything can
be null._

That's not an argument. See above.

 _3\. NULL is a special-case_

...because it indicates the absence of a value, which _is_ a special case.

 _though it throws a NullPointerException when run._

...and the cause is obvious. I'm not even a regular Java user (and don't much
like the language myself, but for other reasons) and I know the difference
between the Boxed types and the regular ones.

 _NULL is difficult to debug_

Seriously? A "nullpo crash" is one of the more trivial things to debug,
because it's very distinctive and makes it easy to trace the value back (0
stands out; other addresses, not so much.) What's actually hard to debug?
Extraneous null checks that silently cause failures elsewhere.

The proposed "solution" is straightforward, but if you reserve the special
null value to indicate absence then you can make do with just _one_ value
instead of a pair, half of which is completely useless half the time. If you
can check for absence/null, you will have no problems using
Maybe/Optional. If you can't, Maybe/Optional won't help you anyway --- because
it's ultimately the same thing, using a value without checking for its
absence.

~~~
TickleSteve
Completely agree, this is CS theory gone off the deep end...

~~~
cultus
A simple, safe way of tracking nulls like Option is "CS theory gone off the
deep end"?

Implicitly allowing all code to return nothing, manually trying to remember
what can return null and what can't, and checking that value, is incredibly
error-prone. It's really crazy that this has been the dominant way of handling
the problem for many decades when there is a dead-simple way of ensuring it
can't happen.

Null pointer errors, contrary to many claims, show up in production code all
the time. Eliminating them is of huge value.

~~~
TickleSteve
The NULL pointer errors you're referring to are in most cases resource
issues, i.e. malloc returning NULL.

This is _not_ the source of the vast majority of pointer errors.

Checking for (and trapping) NULL pointer dereferences is trivial, what is more
difficult is the rest of the pointer range that doesn't get checked but is
equally invalid, i.e. the other 4-billion (32-bit) possibilities.

Non-NULL-pointer checks are _much_ more important than NULL checks.

The world of pointer issues is very much greater than "ASSERT(ptr!=NULL)".

...and as for correct error-recovery (not error-detection), well, don't get me
started.

~~~
cultus
> This is not the source of the vast majority of pointer errors.

> Checking for (and trapping) NULL pointer dereferences is trivial, what is
> more difficult is the rest of the pointer range that doesn't get checked but
> is equally invalid, i.e. the other 4-billion (32-bit) possibilities.

I think we write vastly different types of software. I can assure you that
null-related errors are extremely common in situations besides resource
issues. If it were just a resource-related problem, garbage-collected
languages would almost never have issues, yet Java is infamous for NPEs. In
Scala, where Options are ubiquitous, I've literally never had a single NPE.

It is very common for libraries to return null just to represent the absence
of a result (ex: a row returned from a SQL query has no value for a column).
That sort of thing means you have NPEs wholly unrelated to malloc or anything
similar. These nulls are _expected_ under normal program operation. They
aren't errors. So it's crazy not to let the type system assist you in
checking for nulls, so you don't forget and wind up with an NPE.

~~~
TickleSteve
Two things are getting conflated here.

Pointer issues (that I was referring to) and a failure indication.

The most trivial pointer issue is a NULL pointer. This is such a trivial
issue to catch it's hardly even an error, yet people use that case as the
exemplar for NULL issues.

Detecting (and handling) failures, on the other hand, is very much different
and more in the spirit of what the option-type arguments are about. In that
case, the difficulty is not in detecting the error (which option types will
help with) but in the application-level recovery. That is nothing the language
can aid you with; it's system-design and architecture related.

Basically, it's the wrong issue to be thinking about.

~~~
cultus
> The most trivial pointer issue is a NULL pointer. This is such a trivial
> issue to catch its hardly even an error, yet people use that case as the
> exemplar for NULL issues.

How can you claim that NPEs are "hardly ever an error"? NPEs are the most
common error there is! They are indeed easy to catch, but you need to do so
nearly everywhere, obscuring the code and introducing potential for error.
There is no real, conceptual difference between something like a malloc
returning null or a database query result containing a null. It is the same
thing.

A null absolutely is an error if you don't catch it. By not using Options,
it's vastly easier for that to happen.

~~~
TickleSteve
"hardly _even_ an error"

not

"hardly _ever_ an error".

in other words, NULL pointer errors are a trivial error to deal with.

~~~
taco_emoji
They're even easier to deal with if your type system guarantees you can never
get them in the first place.

------
jeromebaek
At least C# has the syntactic sugar to easily check for null references,
which lets you avoid the horrors of code like `if (s != null && s.Length > 0)`.
Instead you can type `s?.Length`. Never have I appreciated syntactic sugar as
much.

~~~
rusk
A lot of higher-level languages use the concept of _truthiness_, such that
null or a 0-length value both evaluate to false:

    
    
        if (s) then do stuff with s;
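Python behaves this way: `None` and the empty string are both falsy, so a single truthiness test covers "missing" and "empty" at once:

```python
def usable(s) -> bool:
    return bool(s)   # false for None and for "", true for a non-empty value

for s in (None, "", "hello"):
    if usable(s):
        print("do stuff with", s)   # reached only for "hello"
```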

------
ohazi
Other candidates:

\- null terminated strings

\- machine dependent integer widths

~~~
saagarjha
> null terminated strings

This is mentioned.

> machine dependent integer widths

What exactly do you dislike about this?

~~~
rusk
_> What exactly do you dislike about this?_

Or what do you even do about it?

A handful of solutions already exist:

\- Use a higher-level language

\- Java

~~~
Dylan16807
> Or what do you even do about it?

All you need is a list of what width each type is.

> A handful of solutions already exist: - Use a higher-level language - Java

It doesn't require being "higher level". If anything it pushes code to a
slightly lower level.

~~~
rusk
_> All you need is a list of what width each type is._

This is exactly the kind of minutiae that GP was bemoaning.

 _> If anything it pushes code to a slightly lower level_

Yeah by way of higher-level abstractions ...

~~~
bfrydl
> This is exactly the kind of minutiae that GP was bemoaning.

How is this “minutiae”? You should always know the possible range of a numeric
variable or field when you create it, so why not just write what size it is?
In Rust the main numeric types look like this: i32, u64, u8. You just pick the
one you want.

~~~
rusk
I'm sorry, I misunderstood you.

These _typedefs_ as I'm used to them do address cross-platform issues.

Storage classes are _"minutiae"_ however when all you want is just a
straight-up number.

Python gives me an Integer type when I want a whole number, or a Float when I
want to represent partials.

I don't really care to be honest how that gets represented in memory in this
case.

~~~
Dylan16807
A float in Python is an implementation-specific size, just like C. So I'm
really confused about using it as an example here.

> Storage classes are "minutiae" however when all you want is just a straight
> up number.

You can have a single "straight up number" _and_ mention the bit width in the
language spec. The mere act of writing it down doesn't force coders to deal
with any more minutiae than they already had to deal with.

> Yeah by way of higher-level abstractions ...

I strongly object to this. "float is at least x bits" and "float is exactly x
bits" are the same level of abstraction, and almost every language, high or
low level, picks one of those options.

~~~
rusk
You can strongly object all you want, but when I'm writing code in python, or
any other high-level language I don't care one jot about storage size.

~~~
Dylan16807
You could apply that same "who cares?" attitude to the size of "double" in C.
Whether you burden yourself with that knowledge is not a feature of the
language. More "C coders" care because they're micro-optimizing, but it's no
more needed in C than Python.

Also you named Java as being on the easy side and that has four different
integer sizes...

~~~
rusk
No ... not really.

Double doesn't behave like a whole number.

Java only has a single int type, which is 32-bit regardless of machine
architecture.

~~~
Dylan16807
> Double doesn't behave like a whole number.

I was suggesting double for your partials, not your whole numbers.

> java only has a single int type, which is 32-bit regardless of machine
> architecture.

I'm so confused.

You said having a "list of what width each type is" is bad because it forces
the user to deal with "minutiae".

But that's exactly what Java does. int is 32 bits, short is 16, long is 64

And then you praise a type in Python that does the same thing as "double" in
C. It's _usually_ 64 bits, but it might be something else.

------
edoo
Nulls in strongly typed languages can get rather weird, but from a C/C++
perspective it is the same as 0. nullptr is just a correctly cast 0.

~~~
saagarjha
nullptr is actually "a prvalue of type std::nullptr_t" or something like
that. Since C++11, NULL may be defined as nullptr (it's an
implementation-defined null pointer constant).

~~~
edoo
And it is directly convertible to a bool. The primary effectiveness seems to
be with operator overloading where a NULL could trigger an integer method
instead of a pointer method.

~~~
ape4
Yes that's the example given here.
[https://en.cppreference.com/w/cpp/language/nullptr](https://en.cppreference.com/w/cpp/language/nullptr)
So it's an improvement but not exactly a game changer.

------
cmrdporcupine
NULL in 'relational' databases in particular is a disaster. Or at least
according to the notorious Fabian Pascal.

[http://www.dbdebunk.com/2017/04/null-value-is-contradiction-...](http://www.dbdebunk.com/2017/04/null-value-is-contradiction-in-terms.html)

Codd never proposed it in his original relational model. For good reason.

~~~
goto11
Disagree that it is a disaster. Nullability is explicit in the column type, so
it doesn't have the "billion dollar mistake".

Furthermore you need to represent missing values somehow if you perform a left
join.

------
pella
Julia Missing Values

_"Julia provides support for representing missing values in the statistical
sense, that is for situations where no value is available for a variable in an
observation, but a valid value theoretically exists. Missing values are
represented via the missing object, which is the singleton instance of the
type Missing. missing is equivalent to NULL in SQL and NA in R, and behaves
like them in most situations."_

[https://docs.julialang.org/en/v1/manual/missing/index.html](https://docs.julialang.org/en/v1/manual/missing/index.html)

+

"First-Class Statistical Missing Values Support in Julia 0.7"

[https://julialang.org/blog/2018/06/missing](https://julialang.org/blog/2018/06/missing)

~~~
ChrisRackauckas
The nice thing about Julia is that it separates `nothing` from `missing`.
nothing<:Nothing is a null that does not propagate, i.e. `1+nothing` is an
error. It's an engineering null, providing a type of null where you don't want
to silently continue when something bad happens. On the other hand, missing
propagates, so `1+missing` outputs a missing. This is a Data Scientist's null,
where you want to calculate as much as possible and see what results are
directly known given the data you have. The two are different concepts and
when conflated it makes computing more difficult. By separating the two, Julia
handles both domains quite elegantly.
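A rough analogue of that split exists even in Python, with `None` as the non-propagating engineering null and IEEE NaN as the propagating statistical one:

```python
import math

try:
    1 + None                      # like Julia's 1 + nothing: a hard error
    engineering_null_errored = False
except TypeError:
    engineering_null_errored = True

propagated = 1 + float("nan")     # like 1 + missing: the "no data" flows through
print(engineering_null_errored, math.isnan(propagated))  # True True
```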

------
axilmar
NULL is certainly a mistake, but even more of a mistake is not allowing
distinct states in variables.

NULL is just another case of a state of a variable. Other states are 1, 15,
0xffffffff, etc.

That mainstream languages don't handle this is the worst mistake of the
computer industry.

~~~
rusk
Most languages do. Java will always initialise a non-initialised value to NULL
for instance (EDIT or 0 or false for primitives).

It's simply a reality of how computers operate that when you allocate a piece
of memory (a variable) it will have something in it that you'll need to clear
or initialise.

In this respect, NULL is doing you a favour.

~~~
zxczxc111
Only for field variables. Local variables most definitely have to be
initialised by the programmer, otherwise it's a compiler error

------
tarkin2
I've been using kotlin and swift. They've partly removed null with the 'maybe'
feature.

So instead of calling methods on null objects the methods are just not called
if the object is null.

This helps when a race condition makes you attempt to call a method on a null
object: the situation then resolves itself when the same code runs again
without the race condition.

But a lot of the time if the object is null and the method is not called you
still have an error, but it's just not a null pointer error now.

This 'nullless' code is nice in some places, especially with UI lifecycles
calling code repeatedly, but other times it just changes the type of error you
debug.

~~~
jillesvangurp
Actually in Kotlin, nullability is part of the type system and denoted with a
?. So String and String? are two different types. Dereferencing a nullable
type is a compile error until you do the null check. Doing that triggers a
smart cast to the non-null type, so the inferred type becomes non-nullable
and you don't have to do any casts. You can force this cast with the !!
operator, which you should avoid for obvious reasons but which is useful with
some legacy code. If you get this wrong you still get an NPE.

It also provides backward compatibility with Java, where Java types are
considered nullable by default unless otherwise annotated with @NonNull.
You also get nice warnings about redundant null checks.

This provides for a lot of extra compile time safety and it largely removes
the need for Maybe, Optional, and other kludges that people have been coming
up with to force programmers to replace null checks with empty checks.
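
Python's gradual typing gives a weaker version of the same idea; under a checker such as mypy, an `is not None` test narrows `Optional[str]` to `str`, loosely analogous to Kotlin's smart cast (a sketch, not Kotlin):

```python
from typing import Optional

def shout(s: Optional[str]) -> str:
    # Like Kotlin's String? vs String: a checker such as mypy flags
    # calling s.upper() here until the None case is handled...
    if s is None:
        return ""
    # ...and after the check, s is narrowed to plain str (the rough
    # analog of the smart cast), so no cast or assertion is needed.
    return s.upper()

print(shout("hello"))  # HELLO
print(shout(None))     # empty string instead of an NPE
```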

------
sifoobar
Nah. There will always be missing values, no matter how many layers of safety
measures we wrap around the fact. Hitting a NULL in C is very unforgiving; but
that's just the spirit of C, there are plenty of ways to provide a less bumpy
ride in higher level languages.

My own baby, Snigl [0], uses the type system to trap runaway missing values
without wrapping. Which means that you get an error as soon as you pass a
NULL, rather than way down the call stack when it's used.

[0]: [https://gitlab.com/sifoo/snigl#types](https://gitlab.com/sifoo/snigl#types)

------
trophycase
I don't write business software that needs many 9s of uptime so I find null to
be fine. Yes, returning null is kind of throwing your hands up but I find that
to be kind of the point. It allows my software to fail fast when it fails and
makes it incredibly obvious where things are going wrong. Generally if a
reference has a value of null where it shouldn't, I can pinpoint the location
of the bug within a few minutes or even seconds.

IMO it makes the program much easier to reason about compared to returning
some sort of empty value and then failing much much later in the program.

------
saagarjha
> it means that C-strings cannot be used for ASCII or extended ASCII. Instead,
> they can only be used for the unusual ASCIIZ.

This is a very pedantic quibble, and I'm not even sure it's correct. ASCII has
NUL as well, and ASCIIZ isn't a character set AFAIK.

> C++ NULL boost::optional, from Boost.Optional

First of all, nullptr, second, std::optional.

> Objective C nil, Nil, NULL, NSNull Maybe, from SVMaybe

Nil is not a thing in Objective-C, to my knowledge.

> Swift Optional

You're looking for nil. So it should be four stars?

> Swift’s UnsafePointer must be used with unsafeUnwrap or !

You're confusing Optional and UnsafePointer.

~~~
saurik
What the comment about ASCII means is that NUL is a valid character in an
ASCII string, but it can't be represented in C's null-terminated string
encoding as the format (sometimes called ASCIIZ, but yeah: not an encoding...
but I mean, come on... the article is clear here) terminates at the first NUL.
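
The truncation is easy to demonstrate; here is a small Python simulation of C's strlen over a byte string with an embedded NUL (illustration only):

```python
def c_strlen(buf: bytes) -> int:
    """Simulate C's strlen: count bytes up to the first NUL."""
    n = 0
    for b in buf:
        if b == 0:
            break
        n += 1
    return n

# "ab" NUL "cd" is a perfectly valid sequence of ASCII characters,
# but a C string ends at the first NUL, losing the tail.
data = b"ab\x00cd"
print(len(data))       # 5 -- the real length
print(c_strlen(data))  # 2 -- what C's string functions see
```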

Also, Nil is absolutely a thing in Objective-C: it is a null pointer of type
Class (whereas nil is a null pointer of type id; you should avoid mixing them
up, though I will admit nothing much bad will happen as Class and id are
generally co-polymorphic due to the type system being kind of lame. I am not
sure they always have to be, though).

(And as someone who has been programming in C++ since before it was
standardized at all, I frankly think listing NULL and boost::optional is
totally acceptable and complaining about it as if C++11 is more canonical is
just being annoying.)

Doing a quick search for how nil works in Swift, it apparently isn't a null
pointer, so you are wrong there as well :(.

> nil means "no value" but is completely distinct in every other sense from
> Objective-C's nil.

> It is assignable only to optional variables. It works with both literals and
> structs (i.e. it works with stack-based items, not just heap-based items).

> Non-optional variables cannot be assigned nil even if they're classes (i.e.
> they live on the heap).

> So it's explicitly not a NULL pointer and not similar to one. It shares the
> name because it is intended to be used for the same semantic reason.

Given that I don't think any of the rest of your comment was legitimate
criticism, I am frankly betting that your comment about UnsafePointer is also
not useful, but I am kind of tired of having to analyze this comment at this
point (I stepped in due to the note about character sets and the floor kept
sinking).

~~~
saagarjha
> What the comment about ASCII means is that NUL is a valid character in an
> ASCII string, but it can't be represented in C's null-terminated string
> encoding as the format (sometimes called ASCIIZ, but yeah: not an
> encoding... but I mean, come on... the article is clear here) terminates at
> the first NUL.

I think ASCIIZ is the more common format, so I replied with all that in
response to ASCIIZ being called "unusual". Most popular languages that
actually allow NUL bytes in strings usually tend to support some encoding of
Unicode anyways…

> Also, Nil is absolutely a thing in Objective-C: it is a null pointer of type
> Class

You got me…in my defense, I didn't know about this to _my_ knowledge. It was
still stupid of me to assume that DuckDuckGo would be case-sensitive when I
searched that. I guess I should use this for Classes now instead of nil.

> I frankly think listing NULL and boost::optional is totally acceptable and
> complaining about it as if C++11 is more canonical is just being annoying.

One ships with C++, one doesn't; that's like saying Joda-Time is the canonical
date library for Java instead of java.time. Although, I should probably ask
you if you consider Joda-Time to be "more canonical" before listing it as an
example…

> Doing a quick search for how nil works in Swift, it apparently isn't a null
> pointer, so you are wrong there as well :(. > I am frankly betting that your
> comment about UnsafePointer is also not useful

I should have been more explicit, since these kind of run together when you
bring pointers into the mix, which I guess I should have realized once I read
the footnote. I'm not really satisfied with the explanation given in the
article, nor your rebuttal of my argument. Swift's nil is overloaded in a
sense: for native Swift structures, it's the whole "Optional-as-an-enum"
abstraction that we know about. For class types, and pointers, it's a bit more
complicated: you just cannot assign nil to a UnsafePointer or a SomeClass
unless it's "Optional", but the "Optional-ness" is completely in the type
system and under the hood: in order to facilitate interoperability with C,
Objective-C et al. you need to actually have the size of the type be
sizeof(void *), have zeroes in it, etc. You cannot actually set either of
these types to nil if they are non-Optional unless you do illegal things. So
when you set a "pointer" (being an Optional<SomeClass>, UnsafePointer?) here
to nil, you are literally shoving a nil in, which also happens to work well
with Swift's type system and Optional.none abstraction because there is no way
to subvert it legally. All of this was basically a long-winded way of saying
that yes, Swift's nil is actually NULL, but the type system makes sure that
you don't get a "bare NULL" which lets you pretend like the Optional
enumeration abstraction works but under the hood, and semantically, it's the
same thing as NULL.

------
jchw
This tidbit gets a ton of mileage but I think it's overrated. There are a lot
of unsafe shortcuts we take to get better ergonomics and NULL is one of them.

I think it's a bit unlikely we'll fully get rid of null, but we can get rid of
some of the pitfalls. TypeScript for example pretty much fixes the problem, by
enforcing you check for null when needed, though TypeScript takes a handful of
other soundness shortcuts. Go makes null less harmful by treating nil pointers
like empty values by convention.

~~~
virtualized
It's the worst mistake because it made you believe that its atrocious
ergonomics are actually superior to more sensible solutions. Implicit
nullability doesn't really save you any null checks. It just makes it possible
to forget necessary checks.

It was fine to design a language with nullable pointers in the 70s. It's
unacceptable nowadays. nil in Go is a major mistake.

~~~
jchw
Okay. So let's say we get rid of nil in Go. Now, structs with pointers have no
zero value. Slices and maps have no zero value. Funcs have no zero value.
Reflect can no longer create objects because it can't possibly enforce that
you initialize the pointers. Functions that return either an error or a value
now need a new pattern, probably requiring generics or another special type.
Map access needs to return this special type.

Did we win? Did that make Go better? Fuck no. Most people aren't frequently
hitting nil pointer errors in Go because unlike C the behavior is a lot more
reasonable and the conventions a lot simpler. And by the way, we didn't fix
all the runtime errors. Nil pointers are just one possible runtime error. How
about out of bounds array access, memory exhaustion, race conditions?

And yeah, I get that you can also fix all of those things, which is then
called Rust. But we don't need another Rust, Rust is a fine Rust. Go has, imo,
much better ergonomics and most of the time it's just fine for what I'm doing.
Like, writing small to medium size servers and utilities in Go has rarely been
a regretful experience. And, even if we had no runtime errors we would still
need unit testing to ensure our components are functioning correctly. So, most
of the time I'm aware of when my code has runtime errors anyways.

Getting rid of null is not magic. It does not get rid of all runtime errors.
And yes, it does impact ergonomics. I will take Go zero values at the cost of
nil pointers, every day.

~~~
orblivion
> Now, structs with pointers have no zero value.

A zero value is much better than undefined value, I'll grant you that. I
prefer the forced initialization approach (Haskell, presumably Rust and many
others). If I add a new field, I want to know where I need to populate it. Or
if you must, maybe a default value defined on the struct (perhaps that's also
"considered harmful" for reasons I can't think of at the moment).

But it seems you prefer the ergonomics of default-zero. I don't get it, but I
can't argue with preference.

~~~
jchw
Easy: default zero is simple. It's predictable behavior. It's consistent.

By convention, you should design your code to also treat zero values as empty.
In Go, the zero value of bytes.Buffer is a ready to use, empty buffer.

If you drop default zero, you lose a lot of convenience and gain a lot of
ceremony. It's not the end of the world, but neither is the null pointer
error. It's just another runtime error. Just like divide by zero.
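
Python offers a loose analog of this convention: calling any built-in type with no arguments yields its "zero value", which is also its empty value (a sketch of the analogy, not Go):

```python
import io

# Each built-in type's no-argument constructor yields a usable
# "empty" value, ready to go without ceremony.
zeros = [int(), float(), str(), list(), dict(), bytes()]
print(zeros)  # [0, 0.0, '', [], {}, b'']

# Like a zero-valued bytes.Buffer in Go, io.BytesIO() starts
# empty and immediately usable.
buf = io.BytesIO()
buf.write(b"ready to use")
print(buf.getvalue())
```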

------
btbuildem
Author forgot about Erlang, and its wonderful lack of NULL

------
laichzeit0
So how does NaN in Python and NA in R relate to NULL? I know Python has the
None type, but it's not the same as NaN. One of the most annoying things in
Numpy is that there is no way to indicate that an integer value is "missing",
similar to NaN for floats. In R both integers and strings can be NA (if I
remember correctly). So for numeric types at least, there is definitely the
need to somehow indicate that a value is "missing".

~~~
hurrrrr
Numpy has masked arrays [1]. Though I can't say how well they work.

[1]
[https://docs.scipy.org/doc/numpy/reference/maskedarray.gener...](https://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html#rationale)
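
For the curious, the rationale can be shown in a few lines (assuming NumPy is installed; exact output formatting may vary by version):

```python
import numpy as np

# An integer array can't hold NaN, but a masked array can mark
# entries as missing without changing the dtype.
a = np.ma.masked_array([1, 2, 3, 4], mask=[False, True, False, False])
print(a)         # the masked entry prints as --
print(a.mean())  # 8/3: reductions skip masked entries
print(a.dtype)   # still an integer dtype
```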

------
burfog
Linux does well with some macros: IS_ERR, IS_ERR_VALUE, PTR_ERR, ERR_PTR,
PTR_ERR_OR_ZERO, ERR_CAST, IS_ERR_OR_NULL

[https://elixir.bootlin.com/linux/latest/source/include/linux...](https://elixir.bootlin.com/linux/latest/source/include/linux/err.h)

No, it doesn't totally replace NULL, but it does solve some of the problems in
a high-performance way.

------
yxhuvud
Honestly, I feel that the problem isn't null but that type systems (at least
earlier on) tended to allow other types to be null, willy-nilly. Null is best
considered a separate type to non-null values, and is basically not a problem
if the type system handles that in some way. Be it option or union types -
both solve it and it mostly stops being an issue.

------
MichaelMoser123
Checking if the optional is present is very similar to checking for NULL
values. Now if you have a nice match statement like rust and lambda functions
for streams, that may make things a bit more readable.

You will still need analysis tools to check that all code paths check for None
before accessing that value.

------
AJRF
Do people generally agree that Java's Optional == Scala's Option / Haskell's Maybe?

Java's Optional seems fundamentally flawed in that Java allows any reference
to hold a null value and Optional can still throw a NPE when calling isPresent
on it so it still gives people a footgun.

~~~
based2
Maybe Not - Rich Hickey (clojure), 29 nov. 2018

[https://www.youtube.com/watch?v=YR5WdGrpoug](https://www.youtube.com/watch?v=YR5WdGrpoug)

[https://dotty.epfl.ch/docs/reference/intersection-
types.html](https://dotty.epfl.ch/docs/reference/intersection-types.html)

~~~
kybernetikos
Yes, 'maybe not' is very relevant to this discussion, but few people seem to
agree with my understanding of what he says about the right solution:

Optionality doesn't fit in the type system / schema, because it's context
dependent. For some functions, one subset of the data is needed, for others a
different subset. Trying to mash it into the type system / schema is just
fundamentally misguided.

~~~
fmjrey
Yes, he's rather explicit in saying Maybe is a poor tool. I'll have to watch
the talk a second time to be sure, but I'm not sure he proposes any solution
at the level of type systems. Not using Maybe or using Union is not what he is
advocating. For him (and me too) types are the wrong thing to put data in
because, among other things, it forces you back into PLOP. His point is to
remove entirely the need to fill slots with nothing. Obviously the talk is
more about specs than types. While tactfully avoiding the debate around types,
he's still starting the talk with types to help those that are only there to
decomplect their thinking.

------
shittyadmin
Conflating NULL with NUL terminated strings seems a stretch... both have
problems, but separate problems. I suppose they're both related in that they
provide a "special" value rather than separating that information though.

------
rusk
NULL is a convenient way to map singularities in your model of the problem.

I have on a few occasions tried to write _NULL-less_ code, and it adds a good
bit of work.

\- model all possible states for a value

\- determine appropriate default actions for all types

\- meaningful place-holder values

It's a good exercise, and I think more code should be written this way, but -
as an Engineer I'm trying to model just enough of the problem to solve it. I'm
not trying to simulate every possible outcome in that domain.

Certain corners of your problem simply don't need to be modeled, and what's
more the effort needed to model them can just be too much.

NULL is a great way to just throw up your hands and go _"I don't know and I
don't care"_. Much as singularities in a model of a physical system typically
represent phenomena that the model doesn't take account of, so it goes with
NULL. It simply says _"Don't Go There"_.

~~~
knocte
Have you ever dealt with the Maybe(Haskell)/Option(F#) types? If not, then you
don't understand what's wrong with NULL and how to easily avoid it without
much work.

~~~
jstimpfle
I find Maybe a bad idea. It forces me to write denormalized code when I _know_
that something is not NULL. It's not possible to specify this knowledge as a
data structure since data structures are static but context is dynamic. I much
prefer the simple NULL sentinel that blows up like an assertion when I made a
mistake. That said, there's not very often a need for NULL at all if you
structure the code correctly.

~~~
cultus
If you _know_ something can't be null, then don't use an option. Simple as
that. For example, a SQL library can return a non-nullable column of String as
just a String, not an Option[String]. Thus, you actually get a solid
distinction that you don't get with null pointers.

There's no reason to include sentinels that will randomly blow up your
program.

~~~
jstimpfle
No. The point is that the data structure can't know if there's a NULL since
the data structure is static. Context is dynamic. Code is dynamic as well, and
it can know that some things must exist based on other dynamic conditions.

So this "solid" distinction often is just noise and actually blurs the
intention of the programmer: An explicit unwrap is required syntactically
while it should not be required semantically because really the option data is
not an option but a requirement in certain contexts.

~~~
cultus
If it is a requirement for something not to be null, unwrap the option before
you send it to the part of the program that can't accept nulls, and deal
with the case of None in a sane way and in a predetermined place. Then you
don't have to worry about unwrapping in the rest of the code. You can escape
from Option. It's not like IO. You just have to check for None if you want to
get something out, as you should.

In this fashion, you have type safety everywhere, and you deal with the case
of a missing value in a predictable way, in a single spot.
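
The "unwrap once, at the boundary" shape might look like this in Python (hypothetical names, Option modeled as `Optional`):

```python
from typing import Optional

def fetch_username(user_id: int) -> Optional[str]:
    # Hypothetical lookup that may come up empty.
    users = {1: "ada", 2: "grace"}
    return users.get(user_id)

def render_greeting(name: str) -> str:
    # Core logic never sees None, so no checks are needed here.
    return f"Hello, {name}!"

def handle_request(user_id: int) -> str:
    # Boundary: unwrap the optional exactly once, in one
    # predetermined place, and decide there what "missing" means.
    name = fetch_username(user_id)
    if name is None:
        return "Hello, guest!"
    return render_greeting(name)

print(handle_request(1))   # Hello, ada!
print(handle_request(99))  # Hello, guest!
```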

------
tabtab
From my practical experience, Null has a use, but is over-used or misused. If
you concatenate a Null string to other non-Null strings, the Null should
usually be treated like a zero-length string instead of nullifying the ENTIRE
expression result. I know this differs from how numbers typically are treated,
but so be it. Strings are not numbers.

Without that behavior, one often has to write verbose statements such as:
denull(stringA,"") || denull(stringB,"") || denull(stringC,"") ||
denull(stringD,"") etc. ("denull" name varies per vendor. "||" is concatenate
here.)
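
In languages without this behavior, the coalescing can at least be factored into a helper; a Python sketch, with `denull` as a hypothetical name mirroring the comment:

```python
def denull(s, default=""):
    # Hypothetical helper mirroring the comment's denull():
    # treat None as the given default (here, an empty string).
    return default if s is None else s

a, b, c = "Hello", None, "world"

# Without null-as-empty semantics, every operand needs wrapping:
verbose = denull(a) + " " + denull(b) + denull(c)
print(verbose)  # Hello world

# The behavior the comment asks for: concatenation treating None as "":
concise = " ".join(s for s in (a, b, c) if s is not None)
print(concise)  # Hello world
```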

------
revskill
The problem is in tooling. If all compilers/builders out there could detect
null for us, those kinds of errors could be taken care of with much more ease.

~~~
giornogiovanna
TypeScript can be easily configured to do this[0], and Kotlin always does
this[1]. The future is now!

[0]: [https://www.typescriptlang.org/docs/handbook/release-
notes/t...](https://www.typescriptlang.org/docs/handbook/release-
notes/typescript-2-0.html) [1]: [https://kotlinlang.org/docs/reference/null-
safety.html](https://kotlinlang.org/docs/reference/null-safety.html)

------
vectorEQ
In C, NULL is just 0. A nullptr in C++ is just a pointer which points to 0, so
it's not an undefined value... it's set to 0 on purpose so you can check it.

consider this: char *ptr; ptr = (char *)0xb8000;

before assigning ptr, ptr can be ANY value from 'random memory' (compiler
trickery aside... because it might initialise it to 0 anyway).

so you want to have: char *ptr = NULL; ptr = (char *)0xb8000;

So you can then do if (ptr != NULL) { do_stuff(); }. You could not otherwise
check the validity of the ptr value or whether it is present. An if(ptr) or
if(!ptr) would only work if it's initialised and reset to NULL each time
before assignment, so you can validate the assignment.

This is not a mistake but a tool.

For a hardcoded offset like this it might be fair to say you could do if(ptr
== 0xb8000) { do_stuff(); }, but what if it was a ptr returned by a new
allocation, or by taking the address of another variable or object? In that
case setting things to NULL and checking them is absolutely essential to
assuring your code works as you intended.

This whole article seems just a bunch of nonsense. For some languages it might
hold true, but I can't believe it would for all. Perhaps the original ALGOL
null... who knows.

~~~
pjc50
Proper typesafe systems wouldn't let you use C-style reinterpretation casts
either.

It's quite instructive to see how the low-level Rust people handle this.

~~~
AnimalMuppet
So if you have memory-mapped IO, how would you write to a specific address? In
C/C++, a reinterpret cast is _exactly_ what you need there. What would you use
in Rust?

~~~
pjc50
There are various alternatives, but generally the approach is to know what
address you need in advance and create an object for it at compile time that
can be accessed type-safely.

e.g.
[https://zinc.rs/apidocs/ioreg/index.html](https://zinc.rs/apidocs/ioreg/index.html)

or the longer but more detailed [http://blog.japaric.io/brave-new-
io/](http://blog.japaric.io/brave-new-io/) , which covers various approaches.
It even points out that you can use the type system to enforce that
peripherals are only accessed from multiple threads or parts of the program in
a safe manner, which you can't do if you can just reinterpret-cast into
anything.

------
cphoover
Option monad fixes null reference problems. Does not fix the ambiguity between
not found vs doesn't exist.

~~~
lmm
If you have multiple possible reasons for a value to be "absent", that sounds
like a case for an "either" or "result" type.

------
kekzzz
Why can't people just get over it and stop blaming the language for their own
sloppy code?

~~~
turbinerneiter
Because everybody is working on the limit of complexity they can comprehend,
so there is no headspace left for dealing with bad ergonomics. We need all the
help we can get.

~~~
jstimpfle
> everybody is working on the limit of complexity they can comprehend

That's the problem right there.

------
mynameishere
I don't care. java.util.Optional is a PITA in a way null never is.

------
jillav
Whether null/NULL is a good idea or not (I like it, just yesterday it saved
my ass), it saddens me that more and more of the articles I stumble upon are
made to criticize technologies rather than to talk about solutions or
innovations.

~~~
saagarjha
The article talks about a solution, suggesting that optionals are a much
better way to handle this.

------
david04
Scala alternative to null: [https://www.scala-academy.com/tutorials/scala-
options-tutori...](https://www.scala-academy.com/tutorials/scala-options-
tutorial)

------
bryanrasmussen
previously:
[https://news.ycombinator.com/item?id=11798518](https://news.ycombinator.com/item?id=11798518)

~~~
merricksb
Also...

[https://news.ycombinator.com/item?id=10148972](https://news.ycombinator.com/item?id=10148972)
(150 points | Aug 31, 2015 | 143 comments)

~~~
bryanrasmussen
just about nothing is as popular as null is.

------
sfilargi
The problem is not the NULL but the lack of types.

------
zyxzevn
No. Undefined behaviour is.

------
jonstaab
Thanks for the article! I've often heard that null is bad, but haven't ever
seen such a thorough, readable explanation.

Just so I can think it fully through for myself, it seems that the problems
with null are:

1\. Its semantics are different from whatever type it is substituted for, so
it can't be used as a value

2\. Superficially, it looks identical to a missing record value. This
difference might be something you want to ignore (isNullOrEmpty), or something
you care about (cache miss or hit with null)

3\. It is used both for missing data, and missing functionality, which
confuses two separate systems.

I agree that null as a type generally works better than null as a value, but I
don't know if you can always articulate it as a type, especially in dynamic
languages. A pragmatic solution seems to be a combination of:

\- A Maybe type or monad. This forces you to unpack the nullable semantics of
the thing, either in the type system or by unwrapping the value. A Maybe monad
is a well designed interface for dealing with the edge cases, but it doesn't
make the edge cases go away. This eliminates problem #1, and manages problem
#2.

\- Nil punning. (concat nil nil) yields an empty list in clojure. Same for +,
string/join, etc. This is really similar to Monads/Types, but switches the
responsibility for handling null intelligently from the data structure to the
standard library. Putting null in the type forces you to opt in to null; nil
punning forces you to opt out. This makes for more terse code, which is nice,
but probably has a slightly narrower scope of application than monads, since
it tackles problem #1 by making it make sense in most cases rather than
eliminating it entirely, and nil punning doesn't always make sense.
Incidentally, this seems closest to PHP's and javascript's strategy; their
real problem is that they extend nil punning to cases where nil isn't involved
(1 + '1' anyone?).

\- Key or attribute errors. This is sort of a fallback to compensate for
failing to handle the null case, but often works well when something just
"shouldn't be null". This is probably just a substitute for a lack of compiler
checks, but works well enough in the python world; sometimes failing hard is
the right thing.

\- Distinction between code and data. I like higher-order functions, so I'll
just say that "sometimes data includes functions". But in most cases, the
function you're calling should be resolved at compile time. Interfaces should
be fully implemented, and (as in python), there should be a distinction
between missing functionality (AttributeError) and missing data (KeyError).

Ultimately, it seems to be a question of language/api/user interface design:
there is a difference between present, present and empty, and absent.
Regardless of what strategy you use to manage the difference, there has to be
one.
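
The present/empty/absent distinction above, and Python's KeyError/AttributeError split, can be shown in a few lines (a sketch):

```python
d = {"present": None, "empty": ""}

# Present, with a null-ish value: the key exists, lookup succeeds.
print("present" in d, d["present"])   # True None

# Present and empty: distinct from both None and absence.
print(d["empty"] == "")               # True

# Absent data fails hard with KeyError...
try:
    d["absent"]
except KeyError:
    print("missing data -> KeyError")

# ...while missing functionality fails with AttributeError.
try:
    d.no_such_method()
except AttributeError:
    print("missing functionality -> AttributeError")
```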

------
bitL
If you have self-discipline and aren't lazy, NULL is totally fine. Maybe
language designers could have made it behave more like a 0-dimensional
sentinel for beginners.

------
skgoa
I wonder whether the author also hates the 0 and 1 elements of natural
numbers. Since they have the same flaw of having weird, special semantics that
all other numbers don't share. In fact 0 is not even a number, but a
placeholder for the concept of the absence of a number. Just like NULL.

~~~
giornogiovanna
Zero's behavior is totally consistent with the other numbers, though - it
doesn't break associativity, commutativity, or any of the other stuff you'd
expect. On the other hand, NULL takes every type I've ever written and adds an
instance whose behavior with every function is, at best, to crash my program,
and at worst, completely undefined. Its behavior is not at all consistent with
the other instances.

~~~
AnimalMuppet
0 breaks division, though...

------
sharpercoder
Yet `Maybe<T> | Option<T> | ...` is not an option (pun intended), as Rich
Hickey explains here:
[https://www.youtube.com/watch?v=YR5WdGrpoug](https://www.youtube.com/watch?v=YR5WdGrpoug).

In effect, his argument is: 1) You have `public X Do(Y y)` changed into
`public X Do(Option<Y> y)`. This will break your API. 2) You have `public X
Do(Y y)` changed into `Option<X> Do(Y y)`. This will break your API.

Thus, do not use Option<T> or equivalent in your API's. Only use a language-
supported construct such as C#8's upcoming `string?` and `string`.

~~~
bunderbunder
This is a spot where I've got to respectfully disagree with Mr. Hickey.

Changing a public API call that used to guarantee that it returned a value so
that it might now return nothing is a breaking change, and, as an API
consumer, I _want_ my APIs to broadcast that change loudly. Compiler errors
are a good (but not the only) way to do that.

Changing a public API member so that its arguments are now `Maybe[T]` is just
silly. There's no need to introduce a breaking change there. Just overload it
so that you now have versions that do and do not take the argument and get on
with life.

If there's an argument to be made here, it's that statically and dynamically
typed languages require different ways of doing things. In a statically typed
language, I expect the compiler to keep an eye on a lot of these things, and
I'm used to leaning on the compiler to catch things like a function's return
value changing. In a dynamic language, I'm not.

I'm also, when working in a dynamic language, used to having to deal with the
possibility that, at all times, any variable could contain data of literally
any type. Removing nullability there changes the set of possible "this
reference does not refer to what I expected" situations from (excuse the hand
waving) a set with infinite cardinality to a set whose cardinality is infinity
minus 1. If you think of NULL as effectively being a special type with a
single value (call it "void"), then eliminating it reduces the number of
_classes_ of errors I have to worry about in a dynamic language by 0. I'm hard
pressed to see any value there.

~~~
tatut
This is backwards. Rich did not advocate for changes that break promises.

The point in the talk is that "strengthening a promise" should not be a
breaking change. Changing return type from "T or NULL" to always returning T.
The case where you previously couldn't guarantee a result, but now you can.

The other case "relaxing a requirement" also should not be a problem. The case
where you previously had to give me a value, but now I don't need it and can
do my calculation without it.

~~~
bunderbunder
TBH, I'm happy with that being a breaking change, too. Just keep returning a
T? that happens to always have a value until the next major version #
increment (or whatever), and then make the breaking change, and then I get a
clear signal that I can delete some lines of code.

The alternative seems like a path that, in any decently complex software
project, ultimately leads to an accumulation of useless cruft that'll probably
continue to grow over time as people keep copy/pasting code that contained the
now-useless null-handling logic.

------
thwy12321
"The key point here is our programmers are Googlers, they’re not researchers.
They’re typically, fairly young, fresh out of school, probably learned Java,
maybe learned C or C++, probably learned Python. They’re not capable of
understanding a brilliant language but we want to use them to build good
software. So, the language that we give them has to be easy for them to
understand and easy to adopt. – Rob Pike"

When did computer science become about hand-holding? Has it always been this
way? Look at React. It was designed to force functional programming concepts
in an OOP manner. Is the future of programming the implementation of tightly
controlled interfaces with extreme type safety? I would argue that's where we
are going. Things are becoming less expressive, not more.

~~~
coldtea
> _When did computer science become about hand holding?_

When people with pragmatic goals want to get large teams of new programmers
productive fast, and can't expect everyone to be able to fend on their own or
can afford the cost of accumulated mistakes.

> _Look at react. It was designed to force functional programming concepts in
> an OOP manner._

Whatever that means, as React has little to do with "OOP manner".

~~~
thwy12321
OOP meaning React.Component, functional meaning immutable html state, property
inheritance, render(), etc

~~~
coldtea
Component hierarchies != OOP. They are an inevitable part of UI, which is
hierarchical.

React has moved to stateless components and functions over classes.

