The Brilliance of “nil” in Objective-C (collindonnell.com)
91 points by ingve on April 30, 2022 | 151 comments



As someone who started writing Objective-C on macOS in 2003, the whole nil messaging syntax is pure hell compared to any modern language that disavows nulls entirely (like Rust, and to a large extent Swift).

You can literally never trust anything - ever - is not nil because `nonnull` is just a hint, and messaging nil will never crash. It'll just return 0 (or {0} or 0.0 depending on context), which is sometimes valid - and sometimes a horrible failure mode. Failing silently like that is the worst option.

As such you end up having a ton of boilerplate all over the place checking the validity of input parameters to methods and initializers for conditions which shouldn't be representable in the first place - even when the method signature warrants that it's not representable, because it still won't crash on ya.

Let's say you have a chain of property accessors, a.b.c.d, where d is a float. They're all supposed to be nonnull. You get 0.0. Is that valid? Shrugasaurus rex. Time for the ol' NSParameterAssert(a.b.c != nil).

[edit] Also, canonically, there are 5+1 'null' values: nil, Nil, NULL, +[NSNull null] and 0 - plus nullptr in Objective-C++.


> Failing silently like that is the worst option.

> As such you end up having a ton of boilerplate all over the place checking the validity of input parameters to methods and initializers for conditions which shouldn't be representable in the first place

Dynamically typed languages (at least, JavaScript and Python) have been gradually learning this lesson.


Actually python fails hard and fast. This is very different from javascript. It actually makes python safer than C++ even though python is dynamically typed.

It is far easier to debug an error caused by out-of-bounds memory writing/reading in python than it is in a statically typed language like C++.

This is because C++ doesn't actually fail hard and fast when you do this.

If you ever programmed extensively in both languages you will actually see that python is by far less error prone than C++ despite being untyped. Javascript is actually the same, though a little worse than python because of how an undefined can get propagated deep into a program.

Still, even though an undefined is hard to debug in javascript, it's traceable through a step-by-step methodology. A C++ seg fault is not always so straightforward.

I would argue that it doesn't depend on whether the language is dynamically typed, but more on the traceability of crashes.


> It actually makes python safer than C++ even though python is dynamically typed.

Yes, Python is safer than C++. C++ makes no pretence of being a safe language. Static typing is not the same thing as safety.

> It is far easier to debug an error caused by out-of-bounds memory writing/reading in python than it is in a statically typed language like C++. This is because C++ doesn't actually fail hard and fast when you do this.

Out-of-bounds array access is undefined behaviour in C++, but that isn't a result of C++ being statically typed. If you want a statically typed language with bounds-checked arrays, look at Java, Ada, or Rust.

More broadly, languages with static type systems aren't precluded from using runtime checks.

> python is by far less error prone than C++ despite being untyped.

Again this doesn't generalise to all statically typed languages.

Also, Python isn't untyped, it's dynamically typed. An untyped language is a language with no concept of type, such as assembly or Forth. [0]

> Javascript is actually the same, though a little worse than python because of how an undefined can get propagated deep into a program.

How is JavaScript different from Python here?

> I would argue that it doesn't depend on whether the language is dynamically typed, but more on the traceability of crashes.

I'm not sure I follow, what doesn't depend?

I agree that for a language to score highly on debugging, there's more to it than being statically typed.

[0] https://en.wikipedia.org/wiki/Programming_language#Typed_ver...


> C++ makes no pretence of being a safe language

Yes it does. Throughout its entire history, C++ has been touted as a "better C", where "better" includes "safer". This was not just empty marketing; the claims were justified.

For instance, std::string is a heck of a lot safer than manipulating character arrays. You can do str1 = str2 + str3 without worrying about memory allocation. Not to mention "return str1" as if it were a scalar value.

There are ways of using C++ that are entirely safe, along those lines. Not just modern C++ either, but old C++98. You carefully develop some classes that are nicely behaved, and then use only those classes. No C arrays, no C pointers.


> Yes it does. Throughout its entire history, C++ has been touted as a "better C", where "better" includes "safer". This was not just empty marketing; the claims were justified.

You're arguing that C++ makes safety improvements over C. I broadly agree, but that's not the topic at hand. C++ is still quite plainly not a safe language.

Whether it's safer than C isn't the point, but this causes undefined behaviour in both C and C++:

    int i;
    int j = i;
as does this statement:

    int i = 1 / 0;
as does:

    int i = INT_MIN * -1;
as does:

    int i = INT_MAX + 1;
> There are ways of using C++ that are entirely safe, along those lines. [...] No C arrays, no C pointers.

Practically speaking, no, there are not. As I showed above, it's not just arrays and pointers that have safety issues in C and C++. I've rambled before about how there's no practical way to entirely avoid the unsafety pitfalls of C and C++ (short of formal verification). [0][1]

Compare this against the Safe Rust subset of Rust. Safe Rust really is a safe language, at least by their (very strong) definition of the term as guaranteed to be free of undefined behaviour. [2] It's not possible to define a practical subset of C or C++ with this property.

[0] https://news.ycombinator.com/item?id=26307709

[1] https://news.ycombinator.com/item?id=30597750

[2] https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html


> You're arguing that C++ makes safety improvements over C. I broadly agree, but that's not the topic at hand. C++ is still quite plainly not a safe language.

The topic at hand is not what you're purporting it to be either. You seem to be thinking that the topic at hand is whether C++ is a language that touts itself to be safer than python. And you are responding to me as if I made that claim. I never did.

Obviously the person you're responding to is making the claim that C++ does tout itself as "safer" than C but not necessarily "safer" than python. But he is responding to an actual claim you made saying that C++ doesn't tout itself as safe.


> Practically speaking, no, there are not.

Yes, there are. C++ has constructors which ensure that uninitialized objects don't happen.

   mynumber i;        // i is initialized by constructor
   mynumber j = i;    


   mynumber x = mynumber(1) / 0;  // reliable exception is thrown

   i * INT_MAX;       // exception or bignum support in mynumber


> Yes, there are.

We're not talking only about read-before-write and integer arithmetic, we're talking about all causes of undefined behaviour.

If it were possible to define a practical subset of C/C++ which guarantees the absence of undefined behaviour, someone would have done so by now, and cybersecurity would be in a much better place. The subset would have a name, would be widely discussed and studied, and we'd be able to point to a large security-sensitive project making successful use of it. I don't believe any of those are the case.

We have subsets like MISRA C and MISRA C++, but they do not offer solid guarantees against UB. We have ambitious formal verification systems such as [0] which are able to reason about C/C++ programs to help the developer produce a solution guaranteed to be free of undefined behaviour.

All that said it would be an interesting project to see how far you could get with a truly safe C++ subset. Like MISRA C, it would be important that there be an automatic way to check conformity.

> C++ has constructors which ensure that uninitialized objects don't happen.

Yes, Boost offers a library for safe integer arithmetic [1] and it's a pity it's so rare for that kind of approach to be taken. A similar approach could presumably be taken to close the door on unsafe union operations, by insisting on using std::variant instead. C++ is particularly amenable to this kind of thing, on account of its operator overloading.

Unfortunately there are many other areas of the C++ language where it's not so straightforward, such as concurrent programming and memory management. Again please see my linked comments where I give a number of examples.

On the memory management front I imagine the way to go would be to ban pointers outright, but permit smart pointers, and have strict rules regarding references.

Also, here's a neat blog post I stumbled across on how std::string_view is unsafe, among other things. [2] (edit: I now see you've already seen it. [3])

[0] http://www.eschertech.com/products/

[1] https://www.boost.org/doc/libs/1_79_0/libs/safe_numerics/doc...

[2] https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/

[3] https://news.ycombinator.com/item?id=26948129


>Yes, Python is safer than C++. C++ makes no pretence of being a safe language. Static typing is not the same thing as safety.

I'm just addressing the poster's comment on statically typed languages. Not saying anything about "pretenses".

>Out-of-bounds array access is undefined behaviour in C++, but that isn't a result of C++ being statically typed. If you want a statically typed language with bounds-checked arrays, look at Java, Ada, or Rust.

Yeah I know.

>Again this doesn't generalise to all statically typed languages.

I never said this. I am just commenting on the opposite generalization the poster I'm replying to made. In general statically typed is safer than dynamically typed, but this doesn't apply to all actual languages with these properties. There are actual statically typed languages like C++ that are less safe than a dynamically typed language like python.

>Also, Python isn't untyped, it's dynamically typed.

This is just a semantic mistake. Clearly I meant dynamically typed. Not untyped. But I hope people get the meaning.

>How is JavaScript different from Python here?

Javascript doesn't crash as much as python. For example, calling someHashTable[nonexistentKey] in javascript returns a value that can actually be propagated far along into the datapath before throwing an error. In python that's an instant crash. This makes python much easier to debug than js. I use the words less error prone, because python in the long run ends up having fewer errors due to its creators implementing failures that happen quickly and early.
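
To make that concrete, here's a minimal Python sketch of the fail-fast behaviour (the dictionary and key are made up):

    prices = {"apple": 1.25}

    try:
        total = prices["banana"] + 1   # missing key
    except KeyError as exc:
        # The failure surfaces at the lookup itself, with a traceback pointing
        # at this exact line, instead of an undefined-like value travelling
        # further down the datapath as described for JS above.
        print("missing key:", exc)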

>I'm not sure I follow, what doesn't depend?

Statically typed languages prevent run time errors related to types. I am saying that while this is great, this property isn't as important as having errors that are easily traceable.

What's by far more important is that even if the programming language has many holes where run time errors can occur it is better just to have those runtime errors easily traceable to the source of the error. The existence of errors that are hard to trace in C++ actually makes it much much less safe than python despite the extensive type system. It indicates that type safety is much less important.


> Statically typed languages prevent run time errors related to types. I am saying that while this is great, this property isn't as important as having errors that are easily traceable.

I don't think I agree, it's very beneficial to catch a bug at compile time (or at least 'type-check time') rather than at runtime. It's particularly valuable for refactoring. Even if it's easy to trace a runtime error, it's still a win to transform it into a compile-time error, or a bright red line in an IDE.

> Statically typed languages prevent run time errors related to types.

The upper-limit on what can be done with type-systems isn't terribly clear, as sophisticated type systems (of the kind we don't see in today's mainstream languages) can prevent errors which aren't simply passing the wrong kind of data. An old Rust example: [0].

> What's by far more important is that even if the programming language has many holes where run time errors can occur it is better just to have those runtime errors easily traceable to the source of the error.

An error might be very rare or might only occur when a malicious action is taken, so it's important for a language to be helpful in writing correct code. Consider a buffer-overflow in a C codebase that only happens when a malicious packet is received.

Of course, type systems aren't the only way for a language to help the programmer write correct/safe/secure code. For example GCC has a flag to have the compiler insert code to explode noisily if the program tries to dereference NULL. That's an improvement in safety when compared to what standard C gives you (undefined behaviour of course), but doesn't relate at all to type systems.

[0] https://yoric.github.io/post/rust-typestate/ (although typestates were since removed from the Rust language)


>I don't think I agree, it's very beneficial to catch a bug at compile time (or at least 'type-check time') rather than at runtime. It's particularly valuable for refactoring. Even if it's easy to trace a runtime error, it's still a win to transform it into a compile-time error, or a bright red line in an IDE.

Well explain python then. Why is python easier to work with than C++? There is literally so much raw data to back up my statement on how python is "easier" than C++. Anyone who has worked extensively with both languages, and I mean very extensively, knows how painful C++ is compared with python.

I think you may not agree. But that's just your own unique opinion. In general, I'm the one who's right.

>An error might be very rare or might only occur when a malicious action is taken, so it's important for a language to be helpful in writing correct code. Consider a buffer-overflow in a C codebase that only happens when a malicious packet is received.

So what's your point? You're stating this because of what?

>Of course, type systems aren't the only way for a language to help the programmer write correct/safe/secure code.

Why are you telling me this? Did I say type systems were the only way?

>That's an improvement in safety when compared to what standard C gives you (undefined behaviour of course), but doesn't relate at all to type systems.

Another improvement is to make a programming language that is impossible to crash. Several languages already have this ability. But this is a tangent. Again I really don't want to dive too deep into this.

The point remains this: Python is more safe than C++ despite having more holes for crashing and no type system. Can you prove otherwise or not?


> explain python then. Why is python easier to work with than C++?

This isn't an apples-to-apples comparison. They're very different languages, in terms of everything from memory management to build models to type systems to concurrency to historical baggage.

Python is certainly more writeable than C++, for countless reasons. In terms of maintainability, Python's dynamic types are a disadvantage, but Python is also a much simpler and safer language than C++, which is very beneficial.

> I think you may not agree. But that's just your own unique opinion. In general, I'm the one who's right.

Please refrain from this kind of thing on HackerNews.

> what's your point?

Traceability of errors isn't everything. It's also important for a language to help the programmer avoid introducing defects in the first place.

Neither Python nor C++ do this particularly well. Neither is anywhere near, say, Ada or Haskell.

> Why are you telling me this? Did I say type systems were the only way?

There's no reason to be defensive. Not everything is a refutation.

> Another improvement is to make a programming language that is impossible to crash. Several languages already have this ability.

I'm not certain what you mean here. Which languages are you thinking of?

It's very often useful for a language to have runtime checks that may result in immediate termination, rather than continue with unaddressed erroneous execution. Examples are when Java throws on an out-of-bounds array access, or on divide-by-zero, or on dereferencing null.

> Python is more safe than C++ despite having more holes for crashing and no type system.

We're agreed on this point, although I'm not sure I'd say Python has more holes for crashing. Python is the safer of the two languages. [0] C++ is a minefield of undefined behaviour, [1] Python is not.

Our discussion has not been limited to safety.

[0] https://news.ycombinator.com/item?id=31223659

[1] https://news.ycombinator.com/item?id=31224310


>This isn't an apples-to-apples comparison

Sure but the comparison or topic at hand was safety. In the case of safety, python is more safe.

>Please refrain from this kind of thing on HackerNews.

Why? I'm saying the general opinion among most people is that my point is correct. Your point is unique and more exclusive to you. Why refrain from stating general opinions and distinctions? I think you should refrain from telling people not to state reasonable things.

>In terms of maintainability, Python's dynamic types are a disadvantage,

Safety is correlated with maintainability. Before I address this, let me say that python now has type annotations and when paired with external type checkers during build time, python effectively becomes a statically typed language with compile-time type checking and a type system more consistent and powerful than C++.
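
For instance, here's a hedged sketch of the kind of thing an external checker such as mypy flags before the code ever runs (the function is made up; the error text is paraphrased from memory):

    def add_one(x: int) -> int:
        return x + 1

    add_one(None)  # mypy: Argument 1 to "add_one" has incompatible type "None"; expected "int"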

That being said, my claim is that EVEN without using type annotations, python is by far more safe and more maintainable than C++, simply because of higher traceability of errors.

>Traceability of errors isn't everything. It's also important for a language to help the programmer avoid introducing defects in the first place.

Yes of course. But again this isn't the point. The point is python vs. C++. That being said, my claim on the side also supports the notion that error traceability is more important than compile-time safety.

>There's no reason to be defensive. Not everything is a refutation.

Not being defensive. Just not seeing the point. There's a topic, but you're introducing topics that are obvious? It's like you saying the sky is blue. Thank you for letting me know, but why?

>I'm not certain what you mean here. Which languages are you thinking of?

Crashes or undefined sections of a language are defined by the language itself. You can always design a language where every hole is filled. Outside of using up too much memory and FFI, there are languages that are defined so that they can never crash and there is zero undefined behavior, period.

>It's very often useful for a language to have runtime checks that may result in immediate termination,

There are languages that won't even compile code with out-of-bounds access. Such languages force you to write handler logic in order to even work.

>We're agreed on this point, although I'm not sure I'd say Python has more holes for crashing.

Python does have more holes for crashing. The thing is those holes aren't "undefined" though. They're very much defined behavior. That's why they're so traceable, but these are still holes in my mind because there are languages so safe that crashing holes don't even exist.
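
For example, this kind of "hole" is fully defined and points straight at the offending line:

    xs = [1, 2, 3]
    xs[10]  # IndexError: list index out of range -- raised immediately,
            # right here, with a full traceback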

>Our discussion has not been limited to safety.

No it has not, but mainly because you've been taking it on tangents. I've been addressing your tangents but the main point is: Python is safer than C++. If we agree then the discussion is finished in my mind. I stated that here.

https://news.ycombinator.com/item?id=31225265

I mean I'm ok with tangents but in a lot of discussions, this one included, the tangents are introduced in a way that obfuscates the main point. There's a discussion where I'm trying to prove a singular point and then suddenly one person introduces 20 different topics and pretty soon the singular point is lost in the weeds. Nobody is definitively right or wrong.


What? No it doesn’t?

Python doesn't stop until you attempt to use None, no different from JS or any other late-bound language.

You might be conflating null/undefined handling with type coercion, which is very different, and much worse.


That wasn't what the OP was talking about. He was talking about failure.

Python doesn't stop you from using a None, it just fails hard and fast.

Javascript on the other hand doesn't fail. It has 3 things equivalent to nil or null, called undefined, NaN, and null. It lets you do some nutty things with undefined: undefined + 1 = NaN. NaN + 1 = NaN. null + 1 = 1. In python None + 1 is an instant hard crash.
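
Concretely, in a Python 3 REPL (traceback trimmed):

    >>> None + 1
    TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'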

Still these types of errors are traceable by following the code path and even easier with a debugger. For C++ a seg fault could trigger an error anywhere and there's no direct path to the source of the error. For nullptr it triggers a runtime error but this is the same problem as javascript... not a huge deal.

In terms of safety the order is: Python, Javascript, C++.

C++ alone with the segfaults is what causes it to be on the bottom of the list despite the extensive type system. One seg fault alone is worse than a python program with 20 type errors because of how hard it is to trace seg faults. There are C++ production code bases where known segfaults exist but no one has ever been able to find the offending line.


It's worth noting that Python also has NaN, and it behaves the same way as in JavaScript when being added to other numbers. This has little to do with the safety of the language, and more to do with the IEEE 754 spec on floating point numbers. For example, `math.nan + 1` in Python should equal NaN as well.
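
A quick check of that claim, using math.isnan since NaN compares unequal even to itself:

    >>> import math
    >>> math.isnan(math.nan + 1)
    True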

While NaN technically is a kind of null, it's one embedded in the spec for floating point numbers, and therefore present in every language that uses them, including far safer languages like Rust.

It's also worth noting that neither language has the behaviour described in this article, which is that doing anything to a null (not NaN) value returns a new null value. In Python, there are a handful of allowed operations on `None` (such as converting it to a string), and everything else will return an error. Javascript is somewhat more lenient, in that it allows type coercion, so in a numeric context, `null` will first be converted to 0, and `undefined` will first be converted to NaN. Likewise, in the presence of other strings, these values will be coerced to string values.

However, if you attempt to access attributes on these objects, call methods on them, or pass them to methods that don't accept null values, then they will throw errors. This is the big difference from a language like Objective-C.


Yes I'm fully aware of the technical details behind most of this stuff. However the high level consequences of it are exactly as I said. IEEE floating point values aren't used as much in python as they are in JS because python chooses to crash the entire program rather than return a NaN. Probably the only thing I use out of that spec is infinity.

In the end for JS, the user ends up seeing 3 values equivalent to null in javascript and these values propagate through the program stealthily. Lenient isn't the right word... error prone is more in line with the consequence of javascript type coercion.

Additionally yes, I am aware of how every method call in Obj-C acts like the Maybe monad, as the article mentions. I'm just addressing the comment. Thank you but there's no need to reiterate the article to me.


Python definitely returns NaNs when required to by the spec (which is to say: whenever maths is happening with NaNs). I think you're getting confused between propagation and coercion.

NaNs are always propagated in both languages when doing operations on numeric values. This is the IEEE specified behaviour. `null`, `None`, and `undefined` are never propagated implicitly in either language (although Javascript now has an optional chaining operator that can propagate nulls explicitly).

In Javascript, the `null` and `undefined` values can be coerced to different values if you apply certain operators to them. This is different to null propagation (although it's still not a good design choice for a language). For example, when added to a number, `undefined` will be coerced to NaN, and `null` will be coerced to 0. (The NaN value will then of course be propagated if you do further operations with it, but, like I say, this is true in both Python and Javascript.)

In Python, there is no implicit type coercion (except arguably between different numeric types e.g. int to float and vice versa), so this does not happen.

NaN is not really equivalent to null in either Javascript or Python, except explicitly in floating point operations. Rather, in both cases, the rest of the language treats NaN as a regular floating point value (which it is).


>Python definitely returns NaNs when required to by the spec (which is to say: whenever maths is happening with NaNs). I think you're getting confused between propagation and coercion.

It returns NaNs only from mathematical operations that are absurdly rare in practice, like:

   x = float('nan')
   x = float('inf') * 0
Javascript returns it on any math operation on an 'undefined', which is another bad value likely to occur in JS. It basically never happens in practice for python. Yes python will propagate a NaN, but the operations that instantiate a NaN are so absurd and rare that effectively NaNs aren't used or ever encountered in python.

I personally use floating point infinity to create initializer values that are always greater than everything else for algos involving min, but effectively I've basically never seen infinity used in code either. There is a hole here in that one day I may encounter the NaN because as shown above... usage of infinity is one of the ways to generate a NaN, but again this is so rare that effectively it never happens.

I am not confused. I believe you're just getting a bit pedantic.


I program in C++ as a hobby and typescript for my day job. I would say if anything it's just as hard trying to figure out why a JS object is undefined as it is finding out why you get a segfault in C++.

I will add it's really easy to have a custom memory allocator that allocates extra bytes at the beginning and end of every allocation to detect heap corruption in C++. It's also super easy to add memory leak detectors. I've written both of those utilities in a couple hundred lines of C++. I honestly can't remember the last time I had a segfault that wasn't immediately obvious. Nowadays my biggest problems are debugging OpenGL global state errors.


>I program in C++ as a hobby and typescript for my day job. I would say if anything it's just as hard trying to figure out why a JS object is undefined as it is finding out why you get a segfault in C++.

I programmed both for my career. At one point I was literally full stack as in device and sensor integration code all the way through application code and databases to react and typescript on the browser.

An undefined can be traced backwards following previous logic. You can also set breakpoints earlier and earlier and eventually find the problem. It's tedious but straightforward. Additionally the error is reproducible.

In C++, if you write to memory out of bounds anything can happen and the behavior is actually random. Changing things in unrelated places can actually affect the outcome and give you false positives.

It might hit an unallocated page, in which case you get a segfault. Or it might hit unused data on a page that is allocated to your process, in which case it won't have any practical effect (unless it is properly initialized afterwards, overwriting your first, illegal, write, and you then try to read from it, expecting the original (invalid) value to still be there). Or it might hit data that's actually in use, in which case you'll get errors later, when the program tries to read that data.

Pretty much the same scenarios exist when reading data. You can be lucky and get a segfault immediately, or you can hit unused and uninitialized memory, and read garbage data out (which will most likely cause an error later, when that data is used), or you can read from memory addresses that are already in use (which will also give you garbage out).

There is no direct backwards path from a segfault to the source of the error. All you know is that a segfault happened here, and the cause of the segfault must've happened somewhere before it.

Sure you can also code safely in a way that avoids these foot gun errors. But you can do the same in any language. People have been doing this with the "Good parts" of javascript prior to ES6 and typescript for years, but that's not the point.

There are also tools like valgrind and asan, but these tools are ALSO not the point.

What we're talking about is the difference in safety intrinsic to the language not intrinsic to your style of coding. C++ is definitively worse. OpenGL is also a lower level C library and tracing errors is also hard for the same reason why C++ has segfaults. Basically there's no intrinsic checks anywhere so although things are faster, things are much more error prone as a result.

If you really want to have fun try vulkan.


I completely agree with all your points except for the fact that tracing a segfault is harder than tracing an undefined object in JS. JavaScript will happily chug along with an undefined object and do random stuff as well. It may not throw any errors or warnings, and you'll only realize something is wrong because your app is behaving weirdly. This is the same problem as a C++ segfault.

And coding styles absolutely have a role to play in how safe a language is. A good developer can work just as efficiently in C or C++ as they can in something like Rust. Especially if they've built tooling to catch nasty bugs like heap corruption or memory leaks (which takes a couple hours at most).

This doesn't mean I'm advocating everyone abandon safe languages :)

I haven't had a chance to try out a language like Rust, but I definitely want to give it a try at some point. Safe languages are trying to solve specific self induced headaches which is awesome. A couple other languages I'm very interested in are Odin and Jai. Anyways, language safety is a great goal, but that doesn't mean a good developer can't avoid a lot of memory related footguns with a bit of forethought and planning (and again, yes, I know a safer language avoids those footguns by default :)


>JavaScript will happily chug along with an undefined object and do random stuff as well.

Javascript's behavior is not random. The exception thrown by an undefined object is deterministic, meaning if you run the program again with the SAME input parameters the program will crash at the same point. The only time it could be random is if you have time as input or a random number generator that affects the logic.

Additionally the undefined can be traced backwards logically by following the datapath to the source. You can do this by reading code or setting break points in the code and looking at the state along each step.

Neither of the above holds for an out-of-bounds memory error. A segfault may not even occur. Your program can run and nothing can happen. Then suddenly your program can run and then segfault at a random location. The segfault happens randomly and at random locations and there is no direct datapath that leads to the source.

>And coding styles absolutely have a role to play in how safe a language is. A good developer can work just as efficiently in C or C++ as they can in something like Rust. Especially if they've built tooling to catch nasty bugs like heap corruption or memory leaks (which takes a couple hours at most).

I don't think you have much experience with C++ if you're unaware how much harder debugging a segfault is than an undefined. Modern C++ doesn't need additional tooling and you can definitely stay within the bounds of features introduced in C++11 and above for safety, this is trivial and easily done. However segfaults can still creep in via libraries or other entry ways. This occurs especially for extremely large code bases.

>but that doesn't mean a good developer can't avoid a lot of memory related footguns with a bit of forethought and planning

Have you worked on large codebases with hundreds of developers? Even when every developer attempts to follow good practices or always uses smart pointers, eventually just via statistics some C++ foot gun accidentally goes off. You are infinitely better off in a language where the foot gun doesn't even exist.

Additionally, some APIs and really optimized code require the use of foot guns. Many safety features like smart pointers just have too high of a performance overhead, so we avoid them and use archaic stuff like 'new'.

CUDA for example doesn't allow you to use the standard library and introduces new foot guns such as dereferencing a device pointer on the host. You can get around this by wrapping device pointers in a special device type, but again this is overhead and the safety abstractions are not zero cost. Safety has a performance cost, and making this performance cost as small as possible is why C++ is the way it is.


You seem to be missing the forest for the trees.

> I don't think you have much experience with C++ if you're unaware how much harder debugging a segfault is than an undefined.

I love how developers' go-to argument is, "you must not be very experienced if you've never encountered X". It's gatekeepy and gross. I have had plenty of experience with C++, including maintaining a large corporate application in C++ professionally. I have had nasty segfaults. Remember, I was qualifying my statements with how "I program in C++ as a hobby" these days, which means I have complete control over how much unsafe C++ code I allow in my codebase. It turns out you can avoid most segfaults using a little forethought and planning, like I said.

> Have you worked on large codebases with hundreds of developers?

Yes, and it's gross. I worked on a 20 year old legacy C++ application that dealt with high frequency trading at my old job. It was nasty and there were several problems including a very large memory leak that was untraceable. You seem to have missed what I said, which was "but that doesn't mean a good developer can't avoid a lot of memory related footguns with a bit of forethought and planning". Key words, "a good _developer_". Once you throw in hundreds of developers into the mix all bets are off, and that's not the claim I was making.

If you're dealing with a system that you know inside and out, it is usually a very simple matter to trace a segfault. There are a lot of things that you can do to avoid untraceable segfaults, some of which you've alluded to. Use modern C++ safety constructs, things like `vector.at(i)` instead of `vector[i]` if you don't know if `i` is going to be in bounds. That is what I'm alluding to. If a solo developer codes with some simple forethought and planning, segfaults are not a big deal.

JavaScript undefined objects can be just as difficult to trace, under the assumption that you're a solo developer in C++ using good practices. Yes you can place breakpoints and try to figure out where the error occurs. You can do the same thing with a C++ segfault. The hard part is figuring out where the error occurs, which I believe we agree on. I know what you're saying about a segfault occurring at random times, but for the hundredth time, if you're following good practices, segfaults can be as predictable as a JavaScript undefined object. There's a qualification there, which is if you're following best practices.

Anyways, I hope you have a good day :). My intent is not to have an argument over an inconsequential problem, so I'm done replying. I hope I've made myself clear, I'm not trying to argue that C++ is better, or that segfaults are the same exact thing as JavaScript undefined objects. All I'm saying is that both are difficult things to debug, and if you're coding by yourself, you can use a few safety features (using built-in constructs, handrolled tools, or third party apps, I don't care what you use) to make a segfault no worse than a JS undefined object.


>I love how developers' go-to argument is, "you must not be very experienced if you've never encountered X". It's gatekeepy and gross.

Ignore it then. I'm wrong. You honestly seem like you don't have much experience, based on what you're telling me about segfaults.

>It turns out you can avoid most segfaults using a little forethought and planning, like I said.

Just "avoid them" lol is what you're saying. That's how I program. Avoid all errors and bugs with a little planning and forethought so my programs are 100% bug and error free. That's how you do it.

Again I don't think you worked on a large corporate application if you think forethought and planning are all it takes to avoid a segfault or memory leak. With enough people and libraries and dependencies in the codebase, statistically it inevitably gets introduced.

The only time it absolutely never gets introduced is if the language doesn't even have that class of errors.

>Yes, and it's gross. I worked on a 20 year old legacy C++ application that dealt with high frequency trading at my old job. It was nasty and there were several problems including a very large memory leak that was untraceable. You seem to have missed what I said, which was "but that doesn't mean a good developer can't avoid a lot of memory related footguns with a bit of forethought and planning". Key words, "a good _developer_". Once you throw in hundreds of developers into the mix all bets are off, and that's not the claim I was making.

Why make this claim at all? My point is C++ is less safe than python. Your attitude made it seem like you're countering my claim but really you're just going off on a tangent on how good developers make a language safer? Why? Off topic. I mean the way you put it, we're basically in agreement. C++ is less safe than python and you need "good developers" in order to make it safe. Yet despite this agreement your attitude is passively hostile, especially near the end where you just say you're "done replying" lol. I think what's going on here is you're changing the goalposts.

>If you're dealing with a system that you know inside and out, it is usually a very simple matter to trace a segfault.

How is this even relevant? When is a code base so simple that you know it inside and out? Have you seriously been dealing with C++ outside of your hobby projects?

>You can do the same thing with a C++ segfault.

No you can't. There's no data to follow backwards from the segfault. The segfault can even occur in a library you don't have source code for and it's very hard to trace this without external tooling. For javascript you just trace the path of the undefined variable. That's it. You don't even technically need a debugger for this.

Are you sure you worked with C++ in a large code base? Because what I stated above is somewhat definitive.

>Anyways, I hope you have a good day :). My intent is not to have an argument over an inconsequential problem, so I'm done replying.

No man. This is a cop out. I'm open to being convinced but you know your arguments are reaching dead ends, so you're going off on strange tangents just to try to stay afloat. So you act polite and pretend you're too above it all. Come on man that's weak.

It is literally definitive that an undefined is easier to trace than a segfault. There's no leeway or opinion here, yet you have inserted so much off-topic and questionable stuff that I'm not sure if you're even being honest.


Ok, clearly you do not know how to trace a "random" segfault. Segfaults are not random, you said so yourself. It's when you write to, and I quote (you), "an unallocated page, in which case you get a segfault". Well, is there a way you can trace that? Yes.

Here's a great article I'll link to in a second, it might help for you to read a thing or two because you seem to think some problems are unsolvable.

> There is no direct backwards path from a segfault to the source of the error. All you know is that a segfault happened here, and the cause of the segfault must've happened somewhere before it.

Really? No direct way? Here's a cool article: https://ourmachinery.com/post/virtual-memory-tricks/

This article says:

> To see why, first note that the term random memory overwrite is actually a misnomer. Just like regular space, address space is mostly empty. With a 64 bit address space and an application size of say 2 GB, the address space is 99.999999988 % empty. This means that if the memory overwrites were truly random, most likely they would hit this empty space and cause a page fault/access violation. That would give us a crash at the point of the bad write, instead of the innocent read, which would make the bug much easier to find and fix.

> But of course, the writes are usually not truly random. Instead they typically fall in one of two categories:

> Writing to memory that has been freed.

> Writing beyond the allocated memory for an object.

And then he goes on to write about how you can use virtual memory to detect exactly when a segfault occurs. When I talk about avoiding segfaults these days, it's because I've written similar tools for myself. It doesn't mean a segfault never occurs, but when it does occur I get an error that says:

"Error: Buffer overrun detected from memory allocated on line 215 in file foo/bar.cpp"

Which means I know exactly where to begin tracing the segfault from.

I'll give you the benefit of the doubt and assume that you have programmed in C++. You just must be a pretty poor C++ programmer if you've never figured out how to make tracing a segfault easier. It's OK, we've all been there and you'll eventually figure out that this is a problem that actually has a solution. I know it's a crazy idea that these things aren't completely random and we shouldn't just throw our hands up in the air and say "oh well, this is unsolvable". Instead of, you know, being smart and using just a tiny bit of forethought and planning by making sure every heap allocation is wired through a custom allocator. That way you can detect stuff like segfaults and prevent them. Just with a bit of forethought and planning. Crazy, I know.

Lastly, a JS undefined object can happen in a library as well. You may or may not have access to the non-minified, non-transpiled source code. You may or may not get a correct stack trace depending on where the error occurred. This is a moot point. If you're using libraries, there will be untraceable bugs. Period. No matter the language.

And by the way your attacks on my experience are great. I feel like you're insinuating that I've lied about my experience because that might be a projection of your own experience? If that's the case, I'm sorry to tell you, most people are honest and don't go lying on hacker news for internet points. Once again, you're probably like 15 or 16, we've all been there. You'll grow out of the clout chasing don't worry :)


>Here's a great article I'll link to in a second, it might help for you to read a thing or two because you seem to think some problems are unsolvable.

There are plenty of ways to trace a segfault. One way is not intrinsic to the language and it involves recompiling the program with ASan. Without this, you can literally have a memory error but no segfault. The segfault just doesn't occur at all, out of pure chance. This basically makes it so there's no direct way.

There are ways, but none of them direct, as I said. Virtual memory tricks aren't a direct way. Creating non-zero-cost abstractions to debug the fault is not direct. I'm sorry.

>You may or may not get a correct stack trace depending on where the error occurred. This is a moot point. If you're using libraries, there will be untraceable bugs. Period. No matter the language.

Minified javascript is still more traceable than a C++ lib. As I said though, the stack trace in JS is the end point of a direct datapath to where the undefined was generated. You can follow the logic backwards via just your brain OR use breakpoints to check the state along each section of the path.

>And by the way your attacks on my experience are great. I feel like you're insinuating that I've lied about my experience because that might be a projection of your own experience? If that's the case, I'm sorry to tell you, most people are honest and don't go lying on hacker news for internet points. Once again, you're probably like 15 or 16, we've all been there. You'll grow out of the clout chasing don't worry :)

Except I'm not. I think you're the one not being honest here.


Last thing, and this is an alt account because of my procrastination settings, go ahead and type this into your browser's inspector Console:

```
const foo = {};
const bar = foo.baz + foo.something;
console.log(bar);
```

What is bar at the end of this? It's NaN. There's no error thrown at all. What does this mean? If you have some complicated math going on somewhere and all of a sudden you use undefined objects because something went wrong, now all your calculations are incorrect. Hmmm, this sounds familiar. Oh yes! It's just like when you read garbage data in C++ because of out of bounds memory. What does this commonly lead to...? Oh yes! It commonly leads to your app seemingly behaving erratically because it's using garbage data now, and it's pretty much untraceable!

In other words, if you had actually programmed in either of these languages for more than a couple months, you probably would have run into problems like this in both languages. What conclusion does that lead a sane person to? It leads them to the conclusion that segfaults (which are caused by writing to an unallocated page and often go undetected and just put garbage data somewhere in your program) can be just as difficult to debug as JavaScript undefined objects (which can happily be used in most calculations in JavaScript without throwing any errors).

Here are a few examples of programmers that are upset because they can't trace the source of their bug in the apparently completely traceable language of JavaScript:

https://stackoverflow.com/questions/2631464/whats-with-the-r...

https://joa.medium.com/hunting-a-javascript-heisenbug-cb13ce...

https://stackoverflow.com/questions/26563329/detecting-when-...

https://levelup.gitconnected.com/dont-fall-into-the-nan-trap...

https://arcade.ly/blog/2020/02/23/how-to-debug-nan-and-nan-p...


No. Because in C++ no segfault may be thrown at all. You can put a for loop around an out-of-bounds memory write and it may hit a segfault at index 300 for no apparent reason, or the segfault may even happen AFTER the for loop.

The examples you posted are typical complaints. Those errors don't hold a candle to out-of-bounds memory errors though. Typical solutions in the examples you posted involved people suggesting breakpoints for really complex datapaths. The solution to the problem is straightforward, however tedious.

One of the bugs you posted involved a bug in the interpreter. If that's the case all bets are off.


>Dynamically typed languages (at least, JavaScript and Python) have been gradually learning this lesson.

Modern JS avoids this issue entirely with destructuring, optional chaining, and default params. You can actually write completely safe Javascript now without using any awkward safety checks.


After starting to use Option<T>, no implementation of null is 'brilliant' any more. Not that any one of them ever was.


I want to specify that Option<T> with exhaustive pattern matching as the only way to extract the value out of the container is the safest way.

Optionals without this feature/restriction are actually the same as a null.

   if(optional_value.has_value())

   if(value != NULL)
Both of the above checks are STILL required or it's an error. Optionals only propagate the null to another method call. There has to be exhaustive pattern matching for this feature to truly shine.
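
A rough sketch of that idea in recent Python (match needs 3.10+, and assert_never is in typing from 3.11, typing_extensions before that); the exhaustiveness check comes from a type checker such as mypy, not from the runtime:

    from typing import assert_never

    def double_or_zero(x: int | None) -> int:
        match x:
            case None:
                return 0
            case int():
                return x * 2
            case _:
                # a type checker errors here if one of the cases above is removed
                assert_never(x)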


Even optionals with bad 'is null / get' accessors are better than nulls, because they come with nullsafe operators like map / flatMap, and the 'get' call usually errors or panics on failure.
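
A minimal, hypothetical sketch of such an optional in Python (not a real library, just to make "the check lives inside map, and get fails loudly" concrete):

    from dataclasses import dataclass
    from typing import Callable, Generic, Optional, TypeVar

    T = TypeVar("T")
    U = TypeVar("U")

    @dataclass
    class Opt(Generic[T]):
        value: Optional[T] = None
        present: bool = False

        def map(self, f: Callable[[T], U]) -> "Opt[U]":
            # the null check happens here, once, instead of at every call site
            return Opt(f(self.value), True) if self.present else Opt()

        def get(self) -> T:
            if not self.present:
                raise ValueError("empty Opt")  # fails loudly instead of silently yielding 0/None
            return self.value

    print(Opt(2, True).map(lambda n: n + 1).get())   # 3
    print(Opt().map(lambda n: n + 1).present)        # False, and the lambda never ran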


It's the same thing, right? flatMap or map must call the has_value method to work without crashing.


It's not, because (I assume) even the shitty optional gives you a signal that the value is nullable and that you have to check.

The important part is moving `null` out of every type in the language, and into its own little box that the compiler can tell you about. It's nice if you can make it a "library" utility without holes, but it's also fine if it's not perfect, or even if you have to make it a builtin magic thing, like C# &co.

> flatMap or Map must call the has_value method to work without crashing.

The implementation details don't really matter to the caller, what matters is that they're always operating in a safe environment without being bothered.


>The implementation details don't really matter to the caller, what matters is that they're always operating in a safe environment without being bothered.

It matters because it's not true safety. flatMap is a higher-level operation; I can make >=> or >>= work for values with null, but that's not true safety.

True safety, as you said, is no null. But an optional that still generates a runtime error is basically isomorphic to a null. The point of getting rid of null is not null itself, but the associated runtime error that comes with it.


Tony Hoare called null his billion dollar mistake[1] for good reason. It’s especially remarkable given that Mr. Hoare is easily one of the finest computing scientists ever. Even the most brilliant can get it wrong. And only the most brilliant will admit it. Hats off to Sir Tony.

[1] https://www.infoq.com/presentations/Null-References-The-Bill...


He said that he knew it was a kludge at the time, but it was seductively easy to implement... just add a special case to the type checker so that if the type being assigned to a reference is wrong, allow it if it's null. Properly working nullable and non-nullable references (or making optionality and references orthogonal concepts) would have been a much deeper change to the language and the compiler.


Optional or Maybe is just “type or null” which is the implicit type of every language with nullable by default types. It's worse in every way than null.

If your language can’t enforce non-nullable references then you have to worry that your Optional might be null. And if your language can enforce non nullable references then you’re boxing your types for no reason because inside if x != null you could just use x unboxed instead of x.getValue(). The only thing you miss is that None<A> and None<B> are distinct types but Go solved that with a typed nil.

For example Python with mypy is so much more ergonomic.

    import random
    from typing import Optional

    def coinflip() -> bool:  # stand-in for any runtime condition
        return random.random() < 0.5

    def foo() -> Optional[int]:
        return None if coinflip() else 4

    x = foo()
    if x is not None:
        print(x + 5)  # no error
    y = x * 2  # error: None doesn't have a __mul__ operator


> Optional or Maybe is just “type or null” which is the implicit type of every language with nullable by default types. It's worse in every way than null.

This is completely backwards, and it's much more basic than ebingdom's explanation. Although, I don't think they're necessarily wrong, but I don't think you need functors specifically or category theory in general to explain this.

When your language allows the presence of null values, then by default you are asserting that null is a member of every type in your language. This means that any value can possibly be null.

When using Optional or Maybe or $My_Language's_Name_for_a_Some_Value_or_No_Value_Type, with a language having a half-decent typechecker, for any potential place where you may have a `Nothing` or what-have-you the typechecker will enforce that you are addressing that.

In a language that doesn't provide the ability to distinguish between null values and anything else via static analysis, you are forced at runtime to explicitly check every possible place that a null value could occur (I have used dynamically-typed languages for the better part of two decades professionally; nobody does this), or you can keep the entire logical structure of your application in your head perfectly such that you know where any nulls could occur and thereby skip writing as much code (no one can do this, and anyone who believes they can is either a junior or a junior) or acknowledge that you are a human and write a bunch of tests to try to mitigate potential failures (you will still have a microservice crash and spend a day or two putting out fires when that null slips through, because something something tests don't prove absence of bugs, thank you Dijkstra).

On the other hand, with a sufficiently sophisticated type system, you are able to write programs that let you flexibly target where optional values are present and where they are not with a minimum of boilerplate, and you never have to worry about this particular--but rather _significant_ when it comes to tolerating incessant firefighting--class of errors at runtime. This is better in every way than null.


How do you feel about Kotlin where nulls are present but by default all types are not nullable? This is pretty much my ideal balance between unwrap hell and safety. You only have to acknowledge the nulls on nullable types.


> How do you feel about Kotlin where nulls are present but by default all types are not nullable?

I have to ask--how is that different from an option type?


I think the GP is probably thinking about Java's Option<T>, which adds an extra layer of indirection and is in some ways the worst of both worlds. In Java, the Option<T> could be null, or it could wrap null. The clue here is that the GP is talking about extra boxing/unboxing related to Option<T>, which you don't get in languages with better Option<T>/Maybe<T> implementations.


> it's much more basic than ebingdom's explanation

Just to contextualize my explanation, I took it as a given that we were talking about the two _safe_ ways to handle missing data: a type system which has a notion of nullable/non-nullable types (e.g., Kotlin) vs. a type system which has Option<T> and no primitive notion of nullability (e.g., Rust).

I thought it was obvious that one would want this to be tracked in the type system _somehow_ (to prevent the billion dollar mistake), and that we were just discussing different approaches to achieving that goal.


I like the way PHP does it, where the return type of a function can be Object, which does not allow null, or ?Object, which does allow null. Although I would probably still consider using an Optional<Object> type if PHP had generics (of course you can still write an Optional-pattern using a mixed return type, but then you lose type guarantees), just so you cannot access the value at all without being reminded that you do need to check that type, even if you never look at the return type


> It's worse in every way than null.

No, Optional is actually better than null because it's functorial. That means it obeys some common sense laws that one might intuitively expect. Instead of reciting the functor laws, I'll give you a concrete example.

Consider a `HashMap<K, V>` type with the following API:

    get(key: K) -> Option<V>
So, `get` returns `None` if the key is not found in the map. That's the proper API one would expect. It forces the caller to acknowledge the possibility that the key might not be in the map.

However, if you try to do that with nulls instead of with Optional, it breaks if the value type is nullable, because nulls don't nest. For example, if your `get` method looks more like this:

    get(key: K) -> V?
and you use it with a nullable value type like HashMap<String, Int?>, then when the `get` method returns null you have no idea if it's because the value was null or the key was not in the map. Then, every time you want to look something up in the map, you have to first check if it's in the map with a different method and then do your lookup. This is error prone, because if you forget to do the check first, your program now has a silent bug that the type checker does not detect.

You might think to yourself, "that's silly, why would I ever want to store nulls in a map?" Well, here's one of many possible use cases: suppose you are using the map as a cache, and you want to cache the fact that something doesn't exist. This is called negative caching, and it's occasionally useful.

Or maybe you're building some generic code (like a collections library) that happens to use hash maps internally. If the hash map's get method uses a nullable type instead of Optional for its result, then it's likely that your library does not work correctly for nullable types, because it's easy to accidentally assume that null indicates that the key was not found in the map. That kind of bug won't be caught by the type checker.
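
To make the nesting point concrete, here's roughly how it looks in Swift, whose Dictionary subscript returns a real Optional (a minimal sketch; the cache example is illustrative):

    let cache: [String: Int?] = ["deleted-user": nil]  // negative-cache entry: key present, stored value is nil

    let hit = cache["deleted-user"]   // Int?? = .some(nil): key found, its value happens to be nil
    let miss = cache["other-user"]    // Int?? = nil: key not in the map at all

    if let stored = hit {
        // runs: the key is present even though its stored value is nil
        print("key present, stored value:", stored as Any)
    }

Because the two layers of Optional don't collapse into each other, "key absent" and "value is nil" stay distinguishable, which is exactly what a flat nullable return type can't give you.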

Nulls are bad. Optional is good.

We really need to teach category theory to programmers so people can stop making this mistake which leads to error-prone APIs and code with hidden bugs.


This assumes that functional is some undisputed good and that we want to encode all errors in the type checker. Which dgmr, there are some definite benefits but it’s a trade off — in this case it’s verbosity and running into cases where you have to write extra code to prove to the type checker that valid code is valid.

    mymap = {
      "hello": "world"
    }

    print(mymap["hello"])
There’s no need to handle the None case because it’s impossible. But the type checker can’t figure this one out, so we have to write:

    x = mymap["hello"]
    if x.some:
        print(x.unwrap)
Every* language with nulls solves for this. Go returns ok, Python has KeyError. They model the same problem in a different way with different trade-offs, the main one being more ergonomic for the programmer and avoiding having to do the same checks anyway and call .unwrap.unwrap.unwrap.

* Java has sinned so I can’t really defend that one.


I disagree strongly.

> print(mymap[“hello”])

There are all kinds of different semantics one might want. For example:

You might want the lookup function to throw an exception on a missing item because you plan to use exception handling correctly. Python does this (and so does C++ with `at`), but their users mostly don’t.

You might “know” the lookup can’t fail, and you don’t care what happens on failure. I hope your debugging output is good when your assumption ends up wrong some day. (Again, Python. Also C++ if you think inserting the item is reasonable on error.)

You might want the lookup to return a default value if the key is not present. I hope you don’t need to distinguish not-present from present-but-the-value-is-the-default. (Go)

You might not care what gets returned if the key is not present because you “know” it’s present. You will not get good debugging output if you’re wrong because the eventual failure will occur later. (Go)

You might “know” the value is there and you’re willing to make it explicit in the code that you’re doing this by writing “unwrap” or something similar (Rust).

You might have a logging library that implements a print function that accepts an optional and does something sensible along with a lookup function that returns optional. Then you get entirely correct semantics with no boilerplate! But null can’t actually do this because null is ambiguous.
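
For what it's worth, Swift's Dictionary lets you pick between several of those semantics explicitly (a rough sketch; the names are made up):

    let scores = ["alice": 3]

    let a = scores["bob"]               // Optional lookup: nil, the caller has to deal with it
    let b = scores["bob", default: 0]   // default value: 0, indistinguishable from a stored 0
    let c = scores["bob"] ?? 0          // explicit fallback chosen at the call site
    let d = scores["alice"]!            // "I know it's there": traps at runtime if you're wrong

    print(a as Any, b, c, d)

which is essentially the "make the choice explicit in the code" option.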


> get(key: K) -> Option<V>

This is also not clear whether the value is missing because it's not set or because it's set to nothing

You might want

> get(key: K) -> Option<Option<V>>

? But you can do that with nulls too, just a matter of having a box around the value


No, the type for `get` would only have a single layer of `Option`. But V is a type parameter, so you can instantiate it with another layer of Option. You don't need to (and shouldn't) modify the type signature of the `get` method. It's already generic.


> "This is probably the least-Swift thing ever, but, if you're used to coding in this style, it's fantastic."

As someone who has been building on Objective-C for >10 years now... it's horrible. Sure, there's less compiler-yelling-at-you-itis, but the code sample the author gives is almost certainly the single biggest source of bugs in Objective-C codebases.

The inclusion of Optionals in Swift is specifically to, in one fell swoop, eliminate the single largest source of bugs in Obj-C codebases.

I get the appeal of getting the compiler to be less strict and yell at you less for just hacking around - but something you use to write production code needs to be very liberal about error detection and yelling at you.

Not to mention the same lack of verbosity is supported in Swift with all of the safeties of optionals:

`if dict?.keys.contains("a") == true`

Which is both semantically and typographically terser, and makes the intention of the code clear (i.e., "dict" is expected to sometimes be nil)


This nil-swallowing behaviour works really well with the "out error parameter" pattern of Cocoa methods. In short, methods that can fail have a signature like:

- (id)thingThatSometimesWorks:(NSError **)error;

You can chain these to your heart's content. The one that fails sets the error and returns nil; following messages are sent to nil which does nothing and a particular characteristic of doing nothing is that it doesn't overwrite the error that was set. So you still get to see the error when the workflow finishes.

As an objective-C programmer, I also have a tool in my toolbox which gives NSNull the same message-swallowing behaviour as nil.

    @implementation NSNull (GJLNilReceiver)

    - (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector {
        return [super methodSignatureForSelector:aSelector]
            ?: [NSMethodSignature signatureWithObjCTypes:"@@:"];
    }

    - (void)forwardInvocation:(NSInvocation *)anInvocation {
        [anInvocation invokeWithTarget:nil];
    }

    @end


I mean this with love, this extension to NSNull gives me a brain aneurism. If I had this in my codebase I wouldn't be able to trust anything in a collection.


You might want to return NSNull instead of nil (easy to do: in fact I extracted this from an object in which the nil-return value is configurable but simplified it for the post), in which case it becomes map-safe because NSNull->NSNull on map.

I use this pattern because then I can trust _everything_ in a collection. If an object, even an object that represents emptiness, receives a message, it will handle it and do something, even where that something represents emptiness.


You're better off having an operator in your language that explicitly chains together methods returning Result<T,E>s. That way, you can easily get this behavior if that's what you want, but it's clear from reading the code that it's intentionally not quite a simple chain of method calls.


In Kotlin you can have a similar behavior but it's explicit which I like more.

    var a : T   // cannot be null
    var b : T?  // can be null

    a.something   // OK
    b.something   // not possible because b could be null
    b!!.something // assert that b is not null, then do the stuff
    b?.something  // if b is null, return null, else do the stuff

So, each time you see the "?." you know to take extra care with that statement. On the other hand it's very short and easy to write chains of function calls that could return null which then falls through.


Note that this is all exactly how Swift works (though it uses one exclamation mark instead of two). The author is doing an extremely limited comparison, and simply doesn't mention these features.
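
Roughly the Swift spelling of the same thing (a minimal sketch):

    let b: String? = nil

    let c = b?.count        // optional chaining: c is Int?, nil here because b is nil
    // let d = b!.count     // force unwrap: traps at runtime when b is nil
    let n = b?.count ?? 0   // nil-coalescing supplies a default; n is a plain Int

    print(c as Any, n)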


Not sure if it's some runtime mode in kotlin but b?.something always threw panic for me when writing kotlin plugins for flutter, so I ended up having to guard the expressions with ifs anyways...


C# too.


> In a lot (most?) cases where there's a possibility that dict could be nil, the thing you want is for nothing to happen.

And therein lies the root error which makes this entire blog post misguided.

Trivial (hopefully) question:

Would you rather fix 1 bug that takes 10 minutes to find, or 10 bugs that take 20 seconds each?

Given that 'my code silently just... does.... nothing? That's odd' is tens to perhaps even a hundred times harder to find than 'you assumed this could not be null, right on this very line right here, here is the full stack trace how we got there - it IS null' - NPE is superior to this silent treatment.

That's not to say runtime-only null checking (e.g. java without nullity annotations for example) is the only right answer. Far from it.

But it's superior to leaving a trail of ticking time bombs like this.

In effect, you make a different choice about the semantic meaning of it.

In Objective C, `nil` means: "A sentinel object that is real, has semantic meaning, and that meaning is that it returns nil for all things". Woe is you if you treat it as "An object that indicates the value is unknown or does not apply (no interaction with it was ever expected)".

In java, `null` means: "A sentinel value that indicates the value is unknown/unset or does not apply - hence any attempt to interact with it will throw an error". Woe is you if you treat it as a sentinel. But less woeful - you simply get an exception pointing at the line where you made the brainfart.

Unfortunately, lots of programming language ecosystems aren't quite as clear about what nil/null is supposed to actually mean (e.g. Java, where it's often used as a sentinel and then leads to code littered with `if (x == null || x.isEmpty())` style lines, a clear sign someone is not sticking to a meaning of `null` that works well with that particular language).


I'm not fluent in either Objective-C or Swift, maybe that's the reason why the brilliance of having four types of "null"/"nil" escapes me? Ok, it probably works well if you're used to it, but it feels more like a kludge than anything else...


FTA: the concept of “nothing” in Objective-C is kind of a mess, since there’s four different versions of it

The article is about nil, not about that.

And yes, part of it is a kludge.

I think that’s only NSNull, and that, technically, isn’t a language issue, but a library one. As https://nshipster.com/nil/ (which explains all of this) says:

NSNull is used throughout Foundation and other frameworks to skirt around the limitations of collections like NSArray and NSDictionary not being able to contain nil values. You can think of NSNull as effectively boxing the NULL or nil value so that it can be used in collections.

I think a better implementation would use a private singleton NSNull inside the container as a sentinel, so that users of the class could store nil values.


> the concept of “nothing” in Objective-C is kind of a mess, since there’s four different versions of it

Sounds like Visual Basic of old. We used to refer to the four constants of the apocalypse: null, nothing, missing, and empty.


The “brilliance” is not having four nulls - that is honestly more of a step down the road toward use-specific null values.

The brilliance is the nil messaging semantics; it allows you to write concise code with a single error check at the end if appropriate, e.g.:

    [[[foo bar:wibble] thing:1 andThing:2].wat again]
Without having to have error check after error check after error check.

These worked together: because returning nil had so little cost at the API level, you could have more error paths return nil instead of trying to create some kind of fake value. And with that many error returns, you would have to check a lot if you didn’t have nil messaging.

Our beliefs about what makes for good language design have changed a tiny bit in the intervening years :)


It tells the tragic story of a language designer fighting to avoid null while building on/interfacing with an existing platform that has null.


"My favorite is always the billion dollar mistake of having null in the language. And since JavaScript has both null and undefined, it's the two billion dollar mistake." -Anders Hejlsberg

"It is by far the most problematic part of language design. And it's a single value that -- ha ha ha ha -- that if only that wasn't there, imagine all the problems we wouldn't have, right? If type systems were designed that way. And some type systems are, and some type systems are getting there, but boy, trying to retrofit that on top of a type system that has null in the first place is quite an undertaking." -Anders Hejlsberg


JS has null, the notion of a reference/binding being undefined, and the notion of a reference/binding being defined but assigned the value undefined.


Kotlin kind of got away with it, in the sense that I (speaking anecdotally, not statistically) don’t hear people complain about null on the Java interoperability calls


I think Kotlin took a more pragmatic approach - basically introducing ? and ! to signify that something could be null or something isn't null even though the compiler cannot see it.


It’s only “brilliant” if you think data corrupting bugs that fail to run expected code and silently spread “nil” throughout your application’s data model are brilliant.

If something can be nil, handle the case. There’s nothing brilliant about objective-C’s nil kludge.


I notice that all the comments that agree with this one point out specific dangers, while those disagreeing basically say "Eh, bugs happen".

Yes, you need to know what you're doing in any language, the difference is whether those things are due to the language or the problem you're actually trying to solve. The more things in the language you have to keep in mind, the less of your domain problems you have room for.


Well said. This feels terrible: just silently ignore logical errors (where some code expects but doesn't receive a fully constructed object) and ... keep going.

I'd rather take a crash (that way I'll know soon that I have a bug and it'll be easy to fix) or, even better, a compiler/type error ("code is expecting a fully constructed object but doesn't always receive one").

In C++ I recently started using a NonNull<> templated type (with specializations for std::unique_ptr<> and std::shared_ptr<>): https://github.com/alefore/edge/blob/af1192a70646f662539bfe6...

It's slightly verbose, but it has allowed me to delete a ton of CHECK(x != nullptr) statements (since the type already carries the information that x can't be null), and this use of types has helped me detect a few mismatches (where, e.g., the consumer had unnecessary complexity to deal with null but the caller always emitted fully constructed objects, or, much worse, the symmetric case).


Like any language feature, you have to know what you are doing.


This is terrible reasoning. You're basically saying that language features have no impact on bugs, putting all the blame on the programmers. But programmers are humans, and as such they make mistakes (even the best of us). We should embrace tools which are designed to help us catch mistakes.

The correct reasoning would be to recognize that certain language features make entire classes of bugs impossible. For example, in my Rust projects, I never have to worry about null pointer errors. Sure, there might be other types of bugs in my code. But at least I don't have bugs due to nulls. Also, there are no data races in my code—yet another thing I don't have to worry about thanks to the design of the language. (I'm not saying Rust is perfect. I'm just using it as an example.)


Programming languages are like shepherds. They offer a compromise between the freedom to do stuff efficiently and the danger of unwanted behavior. You cannot maximize both at the same time. You can, however, minimize both by making bad design decisions. Footguns are more and more considered a fault in the language design and less the responsibility of the programmer.


Sometimes the serious training wheels on the system make sense, especially when navigating the learning curve.

Other times, the need to have direct hardware access from the BIOS on up cannot be avoided.

The bugaboo is the quest for the One True System that is all things to all people.

Emacs is as close as we get to that Nirvana.


It’s a foot-gun, and a significant source of bugs in just about all software of any complexity written in ObjC.


Writing code is a significant source of bugs in just about all software of any complexity.


Good news, then; languages with an Optional<T> type don’t require you to write more potentially buggy code to check for (or forget to check for) nil.


Meh, different languages give you different tools, different degrees of nannying vs freedom. Some may give you a foot gun, but at least you have a gun and you aren’t required to point it at your foot.


I prefer languages that make it easy for me to focus on solving novel problems, rather than wasting my mental energy just trying to not blow my foot off with a rusty musket.


Bugs exist. No question.

What does bother me is where you detect the bug.

Is it at compile-time?

Is it at run-time through a crash?

or is it months later after you notice that many of your users seem to have January 1st 1970 as their birthday with no way of recovering the lost data.

But hey - at least stuff kept running. Right?


If you write bad code then your code will be bad.


That’s a pretty unhelpful attitude.

This year has already seen CVE-2022-25636 (heap out-of-bounds write), CVE-2022-27666 (buffer overflow), and CVE-2022-0847 (uninitialised memory). Three vulnerabilities in the Linux kernel, in pretty important (and therefore, presumably, closely scrutinised) bits of the code base. And this is the Linux kernel, arguably the most important open source project in the world, worked on by some of the most skilled developers out there.

Everybody writes bad code, and everybody misses bad code on review, even in the really important bits, even when they’re well-known bug classes. Criticising a language for leaving these foot guns lying about is perfectly reasonable, and it’s important we talk about avoiding those languages wherever possible.



I found myself using nil in C++ everywhere.

  typedef std::optional maybe;
  using nil = std::nullopt;
 
Not quite the right definition, but I’m on an iPad.

When you combine those with an “either” function, which returns the value of a maybe<T> if present or else a default T, you end up with a nice way of having default args in C++. Everything defaults to nil, then you set it to either(foo, defaultFoo()) in the body of the function.

Of course, C++ already has what it calls default arguments. But lisp has corrupted me. The defaults can’t be a function call, and the values can’t refer to other arguments, so they feel useless.

Heckin C++. I feel like programmers 80 years from now will still be yearning for an escape, whereas C++100 will still be the dominant systems programming language. Golang was a nice attempt, and so is Rust, but empirically the evidence is that (at least in AI) you can’t escape C++. The best thing to do is to just dive in.


Not only in AI, anything GPGPU related (just look how many ISO C++ folks are on the payroll of the big three), and all major language and GUI runtimes.


  using nil = std::nullopt;
I don't think that compiles? The right hand side is a value, not a type.


constexpr auto nil = std::nullopt. I was on an iPad. :)


Option type was my favourite solution to this until I was introduced to nil punning. I still haven’t found a more pragmatic approach.

Some languages allow you to say “this can never be null”, and in general I find that really helpful and ergonomic, but there are cases where it’s not practical and you have to permit null - and at that point, you’ve sunk the boat.

Strict non-nullable with Option types has been moved to second-place solution since I experienced nil punning.

Treat nil as a value, like in this article’s example of the array being terminated by nil. Don’t make it an exception to compare against nil. But don’t ascribe any other meaning to the value - that means no matter the context, nil can semantically fit your use case.

It’s better than option primarily because it means a lot less code. You delete all “is this value present?” type code from your codebase because encountering nil in your happy path is rendered harmless.

Very nice.


>encountering nil in your happy path is rendered harmless.

Only if by "harmless", you mean "creates a bug later rather than a crash now", as compared to traditional null handling. But Optional has neither problem.

>It’s better than option primarily because it means a lot less code. You delete all “is this value present?” type code from your codebase

"It's better because it's less code" is a pretty big red flag if that's the only advantage. Especially when "less code" means `foo.bar` instead of `foo?.bar`, or `if foo.bar` instead of `if foo?.bar == true` (and even in the second example, if you're touching `foo` multiple times, you can unwrap it one time with `if let` instead).


This sounds like an invitation for logic bugs and corruption.

> It’s better than option primarily because it means a lot less code.

Option doesn't need any extra code for checking the presence of a value when it's done right. In Rust, you just place a ? sigil at the end of the option expression. In Haskell, you just use the do syntax, where val <- optional will assign the value to `val` if `optional` is some value rather than none, and will exit the do block otherwise.


>> Option doesn't need any extra code

Ahh, those are both logic errors in your code. You do need the extra code to fix your errors here:

>> In Rust, you just place a ? sigil at the end

That changes the behaviour to bubble up that there was no value: the caller has been delegated responsibility, and further processing stops at this point.

Instead, to accurately implement the required behaviour and replicate the nil punning logic, you’d need ? to behave in such a way that it inserts a suitable “empty” value for whatever type the option is guarding. So if this was an optional collection, for example, the ? would need to figure out what type of collection it is and then how to get an empty one to provide here (and you’d probably want that to be a singleton immutable collection, otherwise you’d risk introducing logic bugs).

In practice, you’d instead write code to provide that empty collection at the call site. At this point you’ve rendered the Option wrapper pointless though.

It’s the exact same story for your Haskell example.

You’ve given a different behaviour which is of course, a logic bug.


> to behave in such a way that it inserts a suitable “empty” value for whatever type the option is guarding.

Some types have no suitable "empty" value. It is dangerous to assume you can always find such a value.

In essence, when you're expecting that such a value exists, you're implicitly relying on your type being a suitable monoid. That's fine when you're expecting this and when it is actually a monoid, but not all types are.

> In practice, you’d instead write code to provide that empty collection at the call site.

Not all types can be modelled as collections. This is the same problem as above.

What I'd actually do is either:

1. Fail the parent operation by again using `?`, or

2. Provide a suitable default/"empty" type, if it is possible to continue.

It's very important that such a decision is explicit on a case-by-case basis, and this is what `?` allows you to easily do without much fluff or noise.

> At this point you’ve rendered the Option wrapper pointless though.

How was it useless? It allowed us to safely write a procedure as if the value existed and otherwise cheaply signal failure to the caller. The caller can then repeat the same process.

What's important is that the procedure stands on its own and is safe out of the box. It clearly says that it can fail (by returning an option) and we know exactly when it has failed and when it has not.

> You’ve given a different behaviour which is of course, a logic bug.

I don't see where the bug is?


> Some types have no suitable "empty" value. It is dangerous to assume you can always find such a value.

Indeed. In Rust you have Option::unwrap_or_default() for this, which is only available if the underlying type implements the Default trait. It's unfortunately more verbose than `?` but it gives the behavior the GP wants, without making unsafe assumptions like 0 always being a safe default for any integer result.


>> I don't see where the bug is?

That appears to be a willful decision on your part i think :-) A'dieu and all the best


I was trying to be deliberately obtuse, it was a good faith question. I guess I just misunderstood what you were trying to say then.

But no hard feelings and all the best to you too.


*wasn't :D Yes, yes, cue the Freudian jokes.


You are acting wrong here, fyi.


Most 'optional' types have a 'map' function that allows you to apply a function to it, if it isn't None. That seems like a decent middle ground, though maybe less useful when writing a few short functions.
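
Swift's Optional, for instance, has exactly that (a minimal sketch):

    let x: Int? = 5
    let doubled = x.map { $0 * 2 }             // Optional(10)

    let nothing: Int? = nil
    let stillNothing = nothing.map { $0 * 2 }  // nil; the closure never runs

    print(doubled as Any, stillNothing as Any)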


>Don’t make it an exception to compare against nil. But don’t ascribe any other meaning to the value

If comparing against nil does not fail, you are ascribing the value of "false" to it. If it works in every use case, you are ascribing an arbitrary value to it in every use case.


I would rather have my program hard crash on unexpected null than to have it silently ignore potential logic errors. I have never used a language with nil punning, but to me Option<T> still feels to be the best approach.


It's a small example and I won't claim it's life changing, but if you get a spare hour some time, I'd definitely vote for broadening your experience horizons and playing with an example.

Specifically, to your point about the hard crash: I'm interpreting this as meaning "I want the programmer to be told they're mistaken". If I've got that right, you'll be pleasantly surprised.


Glad to read an opinion from someone who can think for themselves. Rust/Swift propaganda can be quite effective. These languages solve fake problems.


Big fan of options, but I am curious to see some examples of how it's cleaner, even in pseudo code.


The author seems to not know Swift very well, because this behaviour was ported over to Swift in a very nice, type safe manner:

In Swift, you can call methods on Optional values, and they evaluate to null or the actual value. So the author's example:

if (dict.containsObjectForKey[@"a"]) { ... }

Would look like this:

if dict?.contains("a") { ... }

The `?` tells Swift to only call `contains` if `dict` is not null. The return value of this call would then be another optional. This second point is the improvement over Objective-C. There, you would not know whether the return value of `dict.containsObjectForKey` is a potential null value. This, in turn, could crash the app if you tried to use this return value to, say, index into an array or perform math on it. In Swift, the compiler yells at you to remind you that this value can be null.

So, really, Swift has the same benefits, but better.


Note that it would actually be (also changing to an array to make your abbreviation of the author's example make more sense):

`if arr?.contains("a") == true { ... }`

or

`if arr?.contains("a") ?? false { ... }`

or

`if (arr ?? []).contains("a") { ... }`

or

`if let arr = arr, arr.contains("a") { ... }`

or

`if arr != nil, arr!.contains("a") { ... }`

Optional<Bool> will not automatically unwrap where Bool is expected (since the compiler would have to assume nil is falsey, which is the kind of mushy shenanigans Swift explicitly denies).


Yes, but Objective-C is even more permissive than that. For example,

    [object method]
where method is supposed to return an int, will return 0 if object is nil. This is of course a double-edged sword: it's too easy to make a mistake in your logic, but also you can use it to your advantage if you know what you are doing.

The advantage really being just writing slightly less, i.e. eliminating extra ifs. All this, however, didn't save Objective-C from being overall a very verbose language. At least from my experience of rewriting from ObjC to Swift, the latter can make your code half the size of the original and, as an added bonus, will make it so much safer too.

This is why Objective-C is objectively and hopelessly dead now.


In Swift you do mymethod?.call(...), where mymethod can be nil, and it has the same effect. Similarly, object?.something has the same effect. Use an exclamation mark instead of a question mark to make the Swift runtime panic if the object or method is nil. Obj-C is objectively worse in every way and an easy way to introduce bugs in your code that go unnoticed for a long time.

Another cool thing is that you can use ?? at the end of an expression that can return nil to decide what to do if nil is returned (for example, return something else). So Swift actually reduces the number of ifs required.


> In swift you do mymethod?.call(...) Where method can be nil and it has the same effect.

No, it doesn't. (You probably meant object?.mymethod()). If mymethod returns int, then in Swift the result of this expression is strictly Optional<Int>, whereas in Objective-C it's Int defaulting to zero if object is nil. The implications in terms of safety are significant.


let defaultZero = object?.mymethod() ?? 0

Optional-null + null-coalescing is strictly more flexible than obj-c’s behavior, while more sanely defined, with very minimal syntax


I did mention object?.mymethod as well; method?.call is needed if you want to call a nullable method itself in a safe way.


double fraction = 1.0 / [object method];

huzzah :)


`NSArray` is not `nil` terminated; the argument lists to some of its variadic initializers are, but the array itself simply does not contain `nil`.

The literal syntax `@[ ... ]` gets transformed into an initializer call, which throws an exception if it is passed a `nil` value. And I can't remember, but I would expect the compiler to at least warn, if not error, if you put a literal `nil` in there.


They're just referring to -[NSArray arrayWithObjects:...] which takes a va_list. Same with -[NSDictionary dictionaryWithObjectsAndKeys:...]

These both long predate the array and dictionary literal syntax.


Not crashing on null is part of what led to the remarkable stability of iPhone apps vs Android. Same with javascript apps. Can you imagine how painful the web would be if the entire page 'crashed/disappeared' every time an unhandled exception was hit in javascript.

If programmers today treated software like a car, they would blow up the car every time the air conditioner button didn't work. Desktop and console apps work this way.

Perfect is the enemy of the good, many times your app isn't running a nuclear reactor, and even if it was you might want it actually be able to recover from a failure. Think Erlang.


> Think Erlang.

Conventional Erlang style is to crash a lot, but to limit the amount that crashes by using enough processes, I believe, c.f. "The How and Why of Fitting Things Together" <https://www.youtube.com/watch?v=ed7A7r6DBsM&t=1535s>. But the key part to handling failure is to detect it first, and silently passing through nil/null/etc is not that.


No matter how you do it, avoiding a hard crash is the point. Don’t throw away the user's work because some feature threw an uncaught null exception.


> Mostly used to recodesent null

Is this line the victim of an overeager s/pre/code/g I wonder…


Haha, indeed. I can't think of an explanation for this one though:

> switdhing to Swift “feels like a Jedi who’s trying to switdh from a lightsaber to a blaster,”


I think it's because the author thought the tags you use in tables are <tr> and <tc> (table-row and table-column) and then did a s/tc/td/g.


I guess today the author will learn about word boundaries in regexp, then.


Oh thanks, my mind just hopped over it without even trying. It's nice to see it wasn't a knowledge gap but a well-known word.


    if (dict != nil && dict.containsObjectForKey[@"a"]) { ... }

Can become this:

    if (dict.containsObjectForKey[@"a"]) { ... }

"This is probably the least-Swift thing ever, but, if you're used to coding in this style, it's fantastic."

Why is this better than a language that can guarantee that dict is not null?


ObjC is similar to other things of the time. There used to be wonderful chaining nil/null check semantics. And duck-typed languages only made that better.

if foo || bar || baz || you_do_not_even_know_what_comes_next break

or:

retval = foo || bar || baz || you_do_not_even_know_what_comes return retval

&c

I don't understand why it's notable except that it's not really JS or Pythonic . . .


I don’t see how it’s that different to Swift? All that would be needed is a ? after `dict` and a direct comparison to true.

dict.containsObjectForKey[@"a"]

Becomes roughly

dict?.containsObjectForKey(“a”) == true


This would be cool as a comparison to traditional null handling (i.e. crashing). But the comparison to Swift is ridiculous. This begs for overlooked nulls to silently create logic bugs.

Also, the author admits there are two kinds of null in Swift, as compared to four in Obj-C. Is the second kind he's referring to `NSNull`? Doesn't that only exist to bridge with Obj-C APIs that haven't been updated yet? That's pretty misleading, if so.


NSNull is used to store nulls in NSDictionaries when you really want them to, since they don't allow storing `nil`. An example being decoding JSON, which does allow it.
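
For example, with Foundation's JSONSerialization (a small sketch; the JSON string is made up):

    import Foundation

    let data = #"{"name": null}"#.data(using: .utf8)!
    let obj = try! JSONSerialization.jsonObject(with: data) as! [String: Any]

    if let value = obj["name"], value is NSNull {
        print("JSON null comes back as NSNull, not as a missing key")
    }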


I was thinking that counted as one of the "Obj-C APIs that haven't been updated yet", but I guess it's not actually going away since it's qualitatively different from Swift's Dictionary (being a reference type instead of a value type).


As an ObjC programmer, I concur.


NSArray *array = @[@"a", @"b", nil, @"c"]; // array = [@"a", @"b"]

I sort of prefer Swift:

let array = ["a", "b"]


And the Swift version is even automatically typed as containing Strings using generics!


Though this isn't precisely typed:

  let dict = ["a": [1], "b": ["c"]]
  print(type(of: dict["a"]))
  // prints: Optional<Array<Any>>


My only contention with Optional<T> types as opposed to NULL/nil/null is that in practice they're almost useless. An Optional<T&> is a pointer, plain and simple. If NULL is one of the expected values, it should be handled as expected. This leads to a vast simplification of a number of algorithms and data structures, and no need for `.unwrap().unwrap().expect("unreachable")` in the cases where you can't do anything anyway. I've rarely seen cases where null values were possible at the algorithmic level, and expected, and not handled - because of lazy programmers, I guess? Honestly all I can say for that particular case is that if you write half an algorithm, expect it to fail half the time...

On the other hand, if NULL is not an expected value, then how often do you really find the logic error at the same spot that the panic occurred? Not very often in my experience. We have callstacks for a reason. You usually find the NullPointerException at the bottom of the stacktrace or the None.panic() after 100-something other function calls in significantly complex codebases. No, if a function expects a value 100% of the time, and it was passed a NULL value, that's the caller's problem. And Option<T> won't help you if the user-facing calling code is still passing None or std::nullopt for whatever misguided or erroneous reason they may be doing so.

Certainly I agree there're benefits to encoding this kind of behaviour into the type system, but I don't agree that it's ultimately a solution to the '4 billion dollar mistake', since mistakes ultimately still happen. You just swap NULL for None and you still have a bug in your logic, just no longer a bug in the types. It's kicking the can.


In C, a pointer is already a version of the optional type, so implementing an explicit version of it is pointless. In other languages you can enforce that a pointer can't be null, or even forbid it from being uninitialized; there you need an optional type for the cases where you do want to allow null, so that if you get a plain pointer you can trust it a little more.


nullable types are bad and hopefully everyone knows it by now.

Other things too but I’m too lazy (forgive the clickbait title):

https://vadosware.io/post/how-and-why-haskell-is-better/


Message-eating nils help you write elegant code. But they also make it easier to crash when you don't know what you're doing. I thought crashes like this were one of the reasons to create Swift.

PS: The co-existence of nil, Nil, NULL and NSNull was just unfortunate.


Optional-chaining in other languages gives you the same ergonomics without the funky semantics


There is this weird disconnect going on in this article and therefore the HN comments. The author's first sentence is, "I don’t code much in Objective-C these days but one thing I miss about it compared to Swift is how nil works." He's not just saying "this part of Obj-C's nil is good", he's saying, "this part is good even compared to Swift", which makes no sense because his "before" example in the before-and-after is Obj-C, and not Swift. The HN comments then proceed to enumerate the advantages of modern handlings of null, of which Swift is a prime example. The author's lack of clarity is understandably causing this.


But this is potentially dangerous. E.g.:

    dict.setObjectForKey(@"b", @"a");
    assert(dict.containsObjectForKey[@"a"]);
A bit tangentially related to the article:

In Python, I have often found it helpful to also have a separate "None"-like type in addition to None, which I called NotSpecified. Often in functions you have something like:

    def func(opt: int = None):
       if opt is None:
           opt = get_default_opt()
       ...
This works fine when you know that opt should not be None here. However, if None is allowed for opt and causes some other specific behavior, I now do:

    def func(opt: Optional[int] = NotSpecified):
        if opt is NotSpecified:
           opt = get_default_opt()
        ...
This is also useful for dicts:

    my_dict.get("foo", NotSpecified)
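
For what it's worth, a rough Swift parallel of the same trick falls out of nesting optionals; the names here are made up to mirror the above, and this is only a sketch:

    // Int?? distinguishes "caller didn't specify" (.none)
    // from "caller explicitly passed nil" (.some(nil)).
    func configure(timeout: Int?? = .none) {
        switch timeout {
        case .none:               print("not specified, using default")
        case .some(.none):        print("explicitly disabled")
        case .some(.some(let t)): print("timeout \(t)")
        }
    }

    configure()                     // not specified, using default
    configure(timeout: .some(nil))  // explicitly disabled
    configure(timeout: 30)          // timeout 30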


Very verbose, but at least it's not Java.


Verbose symbol names aren't a problem unless you need your text editor window narrow. I find the problems are verbosity in # of symbols and punctuation, which Java also had.


Could there be a way to eliminate the problem of null values floating through your system?

Maybe.


I see what you did there. And Swift, the very language the author is comparing to, has this functionality.


So does Haskell and Rust.

It's called exhaustive pattern matching on sum types with the Maybe monad (or Optional). I believe Haskell popularized this syntax, and Rust and Swift borrowed it from Haskell. Strictly speaking, Haskell pattern matching isn't required to be exhaustive, but the compiler will give you a warning if your code doesn't handle every case.

Elm is another language that exploits this feature, to the point where, unless you use the FFI, Elm programs logically cannot ever crash.
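
In Swift terms, Optional is literally a two-case enum, so a switch over it has to cover both cases (a minimal sketch):

    let x: Int? = nil

    switch x {
    case .some(let value):
        print("got \(value)")
    case .none:
        print("nothing here")
    }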


Null is a simple, straightforward solution to the problem of missing values.


A programming language without the concept of NULL can be designed in such a way that it is impossible for the program to crash. Elm is an example of this.

Null solves the problem of missing values, but introduces the problem of undefined behavior. What happens when you try to divide an integer by NULL? Often the program just chooses to crash.

To be fair, though, the problem isn't null itself. It's the failure to incorporate null into the type system. Instead, null is used as a value that bypasses the type system, silently propagates through your program, and finally causes an error when you try to do an operation on it.


Simple and straightforward at write-time. A nightmare at run-time, modification-time, and debug-time.


people should really try non-nullable types in typescript and kotlin, in addition to smart-casting. Then they would realize optionals are niche and cumbersome.


I’ve used TypeScript’s handling extensively as well as Rust’s, and I would say that there are actually significant tradeoffs, rather than it being an easy win for either. In particular, the fact that `| null | undefined` is just another union type can be very nice but also makes it annoying/impossible to implement a lot of the niceties like Rust’s `unwrap_or_default()` in a generic way (though some of that is just a function of how different Rust’s trait system is from JS). It also has weird fallout in the ways that it collapses together `map` and `andThen`/`flatMap`/`bind`. And it’s kind of annoying that it is a special case rather than having all the exact same tooling as all other types in the language the way sum type–based optional types do. On balance I very much prefer the Rust/Swift/etc. approach.



