
I disagree with the author here, and while I admit that Null failures are a pain, it's not true to say they don't exist in other languages. "Maybe" and "Option" etc may force you to deal with them before they will compile, but that's not the same thing as saying the concept of Null does not exist in those languages. I'm sure there are some languages out there without any concept of Null, but they would have their own shortcomings for the lack of it.

It's also wrong to suggest that Null has no place in "logic". Boolean logic is one type of logic, but it's not the only type.

Finally, the examples of how Null works in various languages are really poor. "Why doesn't Ruby coerce nil into zero?" Because it doesn't coerce any types implicitly. Why should nil be the exception? How is that expectation "logical"?




I think you misunderstood something about the languages mentioned, like Swift or Haskell. They do have a way to represent the concept of "nothing" or "missing", using an enumeration (called Optional in Swift, Maybe in Haskell).

What they don't have is the global concept of Null that sits at the root of the type hierarchy. This means every type in a language like C# or Javascript has to have Null as one of its members. If you want to define some operators/functions on the members of the type, you have to always consider Null as a member. Like the author listed: how do you compare Null, how do you negate Null? Those questions shouldn't even be asked, because Null is not negatable or comparable. Maybe the problem is that Null shouldn't have been admitted as a member of a type that you consider comparable or negatable.

So in Swift/Haskell, when you define a type you don't consider "nothing" as one of its members.

The mental model in Null-using languages is that a Type is a sort of blueprint that can be stamped out to make instances of that type. From that point of view it's natural to consider that you may be missing an instance.

In languages like Swift or Haskell the concept of a Type is closer to a set in math; in this case, the set of all possible values of that type. This is why I said above that Null has to be considered as a member of a type. I'm using math language to re-interpret the type-as-blueprint world view, to show why it leads to illogical results.

Seriously, Swift's demoting of the concept of "nothing" to just another enum is alone worth the price of admission. This has cascading consequences that lead to safer code. You also get simpler code, as you strive to resolve the uncertainty of a missing value as early as possible in your code. Swift is the most impressive language I've seen in a while, but I think its origins at Apple have overshadowed its incredible technical value.
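For the curious, this is roughly what that demotion looks like; MyOptional below is a simplified stand-in I wrote for the standard library's Optional, which has the same shape:

    // A sketch: "nothing" is just a case of an ordinary generic enum,
    // not a value baked into every type.
    enum MyOptional<Wrapped> {
        case none
        case some(Wrapped)
    }

    let present: MyOptional<Int> = .some(42)
    let missing: MyOptional<Int> = .none

    // A plain Int has no "nothing" member; absence is opt-in.
    let x: Int = 42     // can never be nil
    let y: Int? = nil   // sugar for Optional<Int>.none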


I think using an Option / Maybe over null has the benefit of the caller knowing that the method might not return something.

e.g.

    public Foo GetFooWithId(int id){...}
When I call that, I might get a Foo, but what if I don't? Will it be null? Is Foo a struct or a class? I have to know what Foo is to decide whether I need to check the result for null.

As opposed to

    public Maybe<Foo> GetFooWithId(int id) {...}
I know just from reading that I might not get a Foo back so I had better match over the result.
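The same idea in Swift, as a rough sketch (the types and names here are made up):

    struct Foo {
        let id: Int
    }

    // The Foo? return type tells the caller, right in the signature,
    // that there may be nothing to return.
    func getFoo(withId id: Int) -> Foo? {
        // lookup elided for the sketch; return nil when nothing matches
        return id == 1 ? Foo(id: 1) : nil
    }

    // The caller has to handle both cases before using the value.
    switch getFoo(withId: 2) {
    case .some(let foo):
        print("got \(foo.id)")
    case .none:
        print("no Foo with that id")
    }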


Have you used any language that has a type system powerful enough to support arbitrary sum types (and also has null as a separate type)? If not, then I can recommend playing around some with Crystal.

IMO optionals are just a crutch to make up for a too-weak type system.


Given a sufficiently powerful type system, you should still err towards an Option type, because you don't want to have to make two types for everything you want to represent - the real type, and the real type but also null. Having an Option type lets you compose that behaviour, even if, yes, it's as simple to define as `type Maybe x = Just x | Nothing`.


No, optional really isn't needed in Crystal. You can just add `| Nil` to any type wherever you use it. We even have a shortcut in the type syntax of the language, `?`, to add null to a type union. So in practice, at the type layer, all you're doing is replacing Maybe Type with Type?. But now you have a strictly more powerful construct which behaves like options in some ways (you can call try on any union with nil, because all types including nil implement the try method) but supports flow typing for null checking, removing all of the extra dereferencing syntax of option types.


This is also how Swift implements its support for optionals. So far I don't see a difference in expressive power between the two languages. A user never has to actually type out Optional when adding ? is enough. Swift also has optional chaining and nil coalescing operators as syntactic sugar. Under the hood there is still an Optional type for the type checker to work with.
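As a quick sketch of that sugar (the types here are invented for illustration):

    struct Address { let city: String }
    struct User { let address: Address? }

    let user: User? = User(address: nil)

    // Optional chaining: the whole expression is String? and becomes nil
    // as soon as any link in the chain is nil.
    let city: String? = user?.address?.city

    // Nil coalescing: supply a default to get back a plain String.
    let displayCity: String = user?.address?.city ?? "Unknown"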

I think this is becoming a trend in modern language design. However, as this comments section demonstrates, it's hard to understand its benefits, or why it's an important improvement, if your only experience is with C/C++/C#/Java etc.


Does Swift have flow typing? The expressive power of sum types with nil is only exposed with flow typing. Also, I don't think Swift supports arbitrary sum types; it just has a "fake" sum type syntax that only works with null. In Crystal you can have an `Int32 | String` just the same as you can have an `Int32?`.


Swift doesn't have flow typing, unfortunately. It has a few constructs, like `guard let` and `if let`, which let you unwrap optionals in a somewhat nicer way than other languages and are sort of similar to one aspect of flow typing.
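For anyone unfamiliar, a minimal sketch of those two constructs (the function is made up):

    func greet(_ name: String?) {
        // `if let`: the unwrapped value is only visible inside the branch.
        if let unwrapped = name {
            print("Hello, \(unwrapped)")
        }

        // `guard let`: exit early if nil; the unwrapped value stays in
        // scope for the rest of the function.
        guard let name = name else {
            print("no name supplied")
            return
        }
        print("Goodbye, \(name)")
    }

    greet("Ada")   // Hello, Ada / Goodbye, Ada
    greet(nil)     // no name supplied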


I've seen this come up a couple of times in this discussion, so ELI5: What is "flow typing"?


Flow typing is where the type of a variable changes based on control flow.

Consider:

    x : Int32 | String
    if x.is_a? Int32
      # typeof(x) == Int32
    else
      # typeof(x) == String
    end


Thanks!


You mean "union type" where you say "sum type". AFAICT Swift does support sum types.


Yeah, in Crystal we call them union types. We don't have sum types, so I'm not really certain of the difference.

I should have stuck to terminology I know.


Swift has sum types, not union types.


This was my initial thought reading this post: logic has developed quite a bit over the past few thousand years, with different logics able to evaluate different types of values (e.g. Kleene's logic). But I feel the author's better point is that different languages evaluate unknown/null/nil/none values differently, or in an illogical way. Ideally, if a language were to use a null value, such a value shouldn't ever evaluate to, say, 0 or an empty set, because that's making an unwarranted assumption.


I remember this insight from a paper on SQL's 3VL issues (true/false/null), and I think it somewhat applies to null issues in languages: it's a 3VL logic system that doesn't go all the way. Sometimes it gets forced into 2VL semantics (conditionals), and the conversion is where all the problems lie.

If it stayed true to its word, and fully embraced 3VL, there wouldn't be an issue.


There's one key difference between null as it applies to languages like C and Java versus Option<T>-like renditions of null: null doesn't infect the type-system in the latter.

Type systems can be viewed as a lattice, usually as a bounded lattice. If you look at Java's reference type system (ignore primitive types), there is a top type, aka Object. There is also a bottom type. It doesn't have a name you can spell, but it does have a single value--null. This means that null can satisfy any type, even if the type is impossible to satisfy--and as a result, there is an unsoundness in Java's type system.
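For contrast, a minimal Swift sketch of a type system where nil is not such a bottom value, since only Optional types admit it:

    let a: String = "hi"      // fine
    // let b: String = nil    // compile-time error: nil is not a String
    let c: String? = nil      // fine: nil only inhabits Optional types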


Do you still feel the same way if you consider the option type a container type?


Maybe/Option aren't Null. They don't crash when you perform type-checkingly valid operations, unless your language allows you to write partial functions. And you aren't required to wrap a type in Maybe/Option the way that languages require you to admit Null into every (non-primitive) type.

Obviously "nothingness" exists and needs representation. The problem with Null is that it's not part of the type system but is part of the program, so it undermines the value of having a type system.


I've seen programs that crash given a none. Often explicitly called out with a message saying that a value must be supplied. So, a good thing in those cases.

I'm not optimistic enough to think it would always be a good thing, though. I've actually seen some where the lack of a value was not noticed because people just mapped over the optional and did not code for the missing case, effectively coercing the value to whatever the zero was.

Now, I can't claim empirically that these would outnumber null pointers. I just also can't claim they don't exist.


Yeah, I have this intuition that options create their own horde of problems because of the way that map/<$> propagate None, so that once you get an unexpected None, it becomes harder to trace the source of the problem. A null pointer exception or segfault, on the other hand, will give you a stack trace or similar that will help you pinpoint the source of the problem.
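A minimal Swift sketch of that failure mode (the values here are invented):

    let input: String? = nil                     // imagine this is unexpectedly nil
    let upper = input.map { $0.uppercased() }    // still nil, no error anywhere
    let length = upper.map { $0.count }          // still nil
    print(length ?? -1)                          // prints -1; nothing points back to the source of the nil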


> Finally, the examples of how Null works in various languages are really poor. "Why doesn't Ruby coerce nil into zero?" Because it doesn't coerce any types implicitly. Why should nil be the exception? How is that expectation "logical"?

That was exactly what I would've guessed from those examples, even without knowing Ruby. Likewise, is C# really the one where things "get strange", especially after Javascript? From the example, the C# version acts really close to NaN: a "this is unknown" value that contaminates whatever it touches. I'd call C# and Ruby equally logical, with different intentions, given the examples.


Unrelated to the discussion of null: Ruby the language might not coerce types, but the standard library will happily do things like multiply a string by a number, or parse a malformed date resulting in garbage.


>it's not true to say they don't exist in other languages

Sounds good, waiting for an example to support this...

>It's also wrong to suggest that Null has no place in "logic". Boolean logic is one type of logic, but it's not the only type.

What other types did you have in mind? Given that we work as programmers in a world defined by true/false, 1/0 logic, I think you might want to reconsider this blanket dismissal.


You seem intelligent, so I'm going to drop you into the deep end. In category theory, a topos [0][1] (plural topoi) is a structure-place where logic can be done. Boolean logic corresponds to a particular topos. Actually, there are at least two topoi which do Boolean logic; one of them has the law of excluded middle, and another has the axiom of choice. [2]

And there's an infinite number of other topoi to choose from! Topoi can be custom-made to categorical specifications. We can insist that there are three truth values, and then we can use a topos construction to determine what the resulting logical connectives look like. [3]

Finally, there are logical systems which are too weak to have topoi. These are the fragments, things like regular logic [4] or Presburger arithmetic.

To address your second argument, why do we work in a world with Boolean logic? Well, classical computers are Boolean. Why? Because we invented classical computing in a time where Boolean logic was the dominant logic, and it fits together well with information and signal theory, and most importantly because we discovered a not-quite-magical method for cooking rocks in a specific way which creates a highly-compact semiconductor-powered transistor-laden computer.

Computers could be non-Boolean. If you think that the brain is a creative computer, then the brain's model of computation is undeniably physical and non-classical. It's possible, just different.

Oh, and even if Boolean logic is the way of the world, does that really mean that all propositions are true or false? Gödel, Turing, Quine, etc. would have a word with you!

[0] https://en.wikipedia.org/wiki/Topos#Elementary_topoi_(topoi_...

[1] https://ncatlab.org/nlab/show/topos

[2] https://ncatlab.org/nlab/show/two-valued+logic

[3] https://toposblog.org/2018/01/15/some-simple-examples/

[4] https://arxiv.org/abs/1706.00526


There are a lot of three-valued logics, and lots of programming languages implicitly follow one of them by accident sometimes. For example:

  true || 3 == 3 || panic()
will, through short-circuiting, evaluate to “true” even though the final element doesn’t evaluate to either true or false.

(As you may observe, they also follow a weird logic where AND and OR are not commutative...)



