
JavaScript: The Curious Case of null >= 0 - beefhash
https://blog.campvanilla.com/javascript-the-curious-case-of-null-0-7b131644e274
======
alangpierce
Not to defend JavaScript too much, but I think the three examples seem
reasonable when viewed in the right way, and I think it's hard for JavaScript
to do much better with its existing design philosophy (dynamic typing and
coercion rather than runtime errors to deal with type mismatches).

>= and > are _numerical_ operations, so both sides are interpreted as a
number, and null is 0 when treated as a number. That means `null > 0` becomes
`0 > 0`, which is false, and `null >= 0` becomes `0 >= 0`, which is true. In
contrast, == is a much more general operation that works on numbers as well as
objects, strings, and all sorts of other things, so it certainly doesn't make
sense to always convert both sides to a number first. I'm certainly happy that
`null == 0` is false in JavaScript, and I wouldn't want that changed to be
consistent with numerical comparison (which comes up much more rarely than
equality checks, at least in my code).
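
To spell it out in a console:

        Number(null)   // 0 -- null coerces to the number 0
        null > 0       // false, i.e. 0 > 0
        null >= 0      // true,  i.e. 0 >= 0
        null == 0      // false -- == never coerces null to a number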

This type of issue isn't unique to JavaScript. In Java, if you have boxed
Integers, `==` will do object identity comparison (so two different instances
of the same number will compare as not equal), while `>=` etc will unbox them
and do numerical comparison, which I think stems from the same issue of
equality being more general than numerical comparison.

Also worth noting that it's _not_ always true that `a >= b` means `!(a < b)`
in JavaScript. `'a' >= null` and `'a' < null` are both false.
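
Quick check:

        'a' < null    // false -- Number('a') is NaN, and NaN comparisons are false
        'a' >= null   // false -- the underlying relational comparison is
                      //          undefined (NaN), which >= also maps to false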

~~~
justinpombrio
Let me agree with you but then turn that on its head.

You've argued that having `null >= 0` is the _best that JS could do_ while
having a coercion-based semantics. And I agree: this confusing behavior may be
the best you can possibly do with a coercion-based semantics. And it's not
just this example, either. JavaScript is filled with gotchas, and a lot of
them have to do with automatic coercion. For example, did you know there are
number and string values x, y, and z, such that x<y, y<z, and z<x?
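
One such triple, for the curious (< compares lexicographically when both
operands are strings, numerically otherwise):

        var x = '10', y = '9', z = 9.5;
        x < y   // true  -- both strings, lexicographic: '1' < '9'
        y < z   // true  -- mixed types, numeric: 9 < 9.5
        z < x   // true  -- mixed types, numeric: 9.5 < 10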

But if this is really the best you can do in the presence of automatic
coercion, maybe that's an argument against coercion.

~~~
lisper
> having `null >= 0` is the best that JS could do while having a coercion-
> based semantics. And I agree

Well, I disagree. You have to add a third condition for this to be the best
you can do, and that is that the boolean type has to be two-valued. But that
is not a given. Just as numerical types can include infinities and NaNs, a
boolean type could include a third value that is neither true nor false. The
IF statement would have a third clause to handle this value, something like:

    
    
        IF condition
        THEN [code if condition is true]
        ELSE [code if condition is false]
        OTHERWISE [code if condition is neither true nor false]
    

Like ELSE, the OTHERWISE clause would be optional so this would be a backward-
compatible change. By default, existing code that encountered non-true-non-
false conditions would do nothing.

~~~
fenomas
Are you seriously suggesting that, widgety though JS's implicit type
conversions might be, if its booleans had three values then things would be
better?

~~~
lisper
Yes. Why would you doubt it? (i.e. why would you doubt that I'm seriously
suggesting it, not why would you doubt that what I am suggesting has merit)

(Oh, and BTW, before you completely dismiss the idea that my suggestion might
have merit, you might want to look me up. I didn't just fall off the turnip
truck.)

~~~
eropple
FWIW, for the downvoters: you really should look 'lisper up. I'm not sure I
agree with him but the incredulity shown here is unwise.

~~~
curun1r
The downvotes are likely for trying to invoke an ad hominem argument. No
matter someone's credentials, it's still a logical fallacy and something to be
avoided. If he is an authority on the subject, it should show through in his
comments. I'm sure he's a very smart and knowledgeable person. But there are a
lot of smart and knowledgeable people here, and by invoking his credentials,
he's presuming that the people he's talking to are less knowledgeable,
intelligent or entitled to their opinions. It's insulting and he's earned his
downs.

Full disclosure: I downed his 'look me up' post and upped his more substantive
posts.

~~~
carapace
> If he is an authority on the subject, it should show through in his
> comments.

The comment speculating on the "OTHERWISE" clause is the kind of brilliant
off-beat thinking that challenges the level of understanding of the reader.

The people scoffing at and downvoting it are, however smart and knowledgeable
they may be, ignorant or foolish. If they thought it through they might learn
something.

It's a perfectly valid response to say, "Hey wait a minute, I know what I'm
talking about here." It's not appeal to authority, and it is certainly not an
_ad hominem_ since the people he's talking to have shown that they don't
understand.

~~~
curun1r
> brilliant off-beat thinking

Those words must have new meaning since he was citing as an example a language
that's existed since the 70s. Ternary booleans are far from a unique or novel
concept. The main difference is that for most languages, including Javascript,
that third value exists at the variable binding or expression level, not the
value level. Other languages that don't allow nulls encode that third value in
a Maybe type. SQL, being a thin abstraction over data storage that needs to be
able to encode nulls, doesn't make that distinction. That his idea was worded
poorly, stating that the actual Javascript value would have a third
possibility rather than that the comparison expression would be nullable,
doesn't add to his credibility. The distinction between an expression
evaluation, a variable binding, and a value isn't exactly a newbie concept,
but for someone with the credentials he's claiming, I'd expect him to know it
and be more precise.

> since the people he's talking to have shown that they don't understand.

Perhaps you should go back and read the exchange. The comment he was replying
to was specifically about the mixture of Javascript and ternary booleans, not
just ternary booleans. Also, if you re-read the comment that you're calling
ignorant in the context of the author's original confusion of expression
evaluations and values, there's at least the possibility that he's making a
very valid point, albeit in a way that can be read as confrontational.
Javascript could already add an otherwise block without adding a third boolean
value... all that would be necessary is to allow comparison operators to
evaluate to null or undefined and then execute the otherwise block in that
case. Adding a third value would mean that a boolean variable binding could
now have 5 values, true, false, not_true_or_false, null or undefined. That's
nonsensical
and deserves to be called out. It's bad enough that the language has null and
undefined, since it causes a ton of confusion that could have been avoided.
Adding some special "not null but sort of null" boolean value would just add
more confusion to the language.

~~~
lisper
> Adding a third value would mean that a boolean variable binding could now
> have 5 values, true, false, not_true_or_false, null or undefined. That's
> nonsensical and deserves to be called out.

FWIW, I actually agree with this. There's no need to add a new value. It would
be fine with me (within the context of Javascript's already horribly broken
design) to use an existing value (null or undefined) as the third "boolean"
value.

The part that really matters is that whatever null>=0 returns, it should be
different from what either 1>=0 or -1>=0 returns.

~~~
fenomas
It seems like this thread has gone several different directions. Surely
suggesting that (null>0) should return null or undefined is a separate
topic from suggesting that if() statements have three different control
branches?

FWIW, I totally buy the former. If (null>0) returned null, then most things
would still work as expected, and you could easily check the result of the
comparison to catch unintended behavior. One can easily imagine a world where
JS was defined this way from the start, and it'd be pretty much the same
language.

The latter (three-way IF control structures) strikes me as waaaay more tenuous
- much bigger change, more complexity on the programmer, and no benefits I can
see that you wouldn't get from the former bit.

~~~
lisper
The reason for the three-way IF would be to avoid conflating null and false.
Without that you'd have to write:

    
    
        IF (x<y)==null {
          [otherwise]
        } ELSE IF (x<y) {
          [then]
        } ELSE {
          [else]
        }
    

That seems awkward to me.
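
Spelled out in today's JS, with lt3 as a made-up helper that can report
"incomparable":

        function lt3(x, y) {
          if (x == null || y == null) return null;  // null/undefined: incomparable
          return x < y;
        }

        var r = lt3(x, y);
        if (r === null) { /* otherwise */ }
        else if (r)     { /* then */ }
        else            { /* else */ }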

~~~
fenomas
Yeah, I get that that's the intent, but (per my original comment) I'd think it
would surely create more mess than it cleans up.

For example, I suspect that three-way IFs like you describe would only
_really_ get used in two ways - often the second and third branches would be
identical, and the rest of the time the third branch would be some kind of
"console.warn('bad inputs!')" error handling.

For the latter of those two cases, testing for the error case before
proceeding to the regular logic (as in your code sample above) seems
intuitively like the right thing to do - analogously to how you might check
(isNaN(operand)) before doing some math. And in the former case, if the second
and third branches are identical you'd almost certainly want some syntactic
sugar to avoid writing the same logic twice, like "IF (bool) {...} ELSE-AND-
OTHERWISE {...}" -- which would just be isomorphic to what we have now. You
know what I mean?

------
567arlo
This being a result of JavaScript's weapons-grade weak typing, I've always
wondered what benefit weak typing actually brings, once dynamic typing is
assumed, over strong typing as in Python/Ruby. Or at least strong _er_ typing.
Reasonable coercion from 11434 -> "11434" is one thing, but why not throw an
exception when anything more ambiguous is encountered? It would seem even for
an absolute newcomer to programming, or someone experienced who wants to
rapidly prototype, the bugs resulting from this hidden coercion complexity
outweigh any gains in productivity. Especially considering the alternative is
simply a set of special unambiguous casting functions that could be easily
looked up.
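
For reference, the unambiguous casting functions already exist in JS; they're
just not mandatory:

        String(11434)          // '11434'
        Number('11434')        // 11434
        parseInt('11434', 10)  // 11434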

~~~
alangpierce
> I've always wondered what benefit weak typing actually brings

It makes your code crash less, and I would argue that in many cases that's
desirable. I think it's a bit sad that the default behavior for computers is
often to completely give up at the sight of any error. If you're running code
in development or need to worry about data integrity, it makes sense to fail
quickly and loudly, but if the analytics system or the "like" counter or some
other non-critical feature crashes for a real user, it's a much better UX to
degrade that feature rather than taking down the whole page.

I think the "avoid crashing at all costs" mindset was especially sensible when
the web was viewed more as a collection of documents, with JS existing to give
light optional enhancements rather than drive the core functionality. Imagine
opening a Word doc or a PDF and having it crash because the author made a
mistake. These days, I would certainly prefer that JS be more strict by
default, especially in development.

I think the right way to handle the situation these days should be to define
boundaries where a crash in one part of the code is contained to its boundary,
e.g. React error boundaries
([https://facebook.github.io/react/blog/2017/07/26/error-handling-in-react-16.html](https://facebook.github.io/react/blog/2017/07/26/error-handling-in-react-16.html)).
I also think it's useful to have a variant of "assert" that
just logs a warning and continues in production, since in many cases that's
the desirable behavior.
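
A minimal sketch of such an assert, assuming some IS_PRODUCTION flag that your
build defines (the name is hypothetical):

        function softAssert(condition, message) {
          if (condition) return;
          if (IS_PRODUCTION) {
            console.warn('Assertion failed: ' + message);   // report and keep going
          } else {
            throw new Error('Assertion failed: ' + message); // fail fast in dev
          }
        }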

~~~
landryraccoon
> It makes your code crash less

Wait, can you back this up? What makes you think strongly typed languages
crash more than weakly typed languages? What's a scenario where a strongly
typed language will crash at runtime, but a weakly typed language won't?

I can see an argument that an interpreter of a weakly typed language will let
you run a program with dangerous type conversions without complaining, while a
strongly typed language won't even let you execute it - but are you counting
that as a crash?

~~~
alangpierce
To be clear, I'm talking about strong/weak typing and static/dynamic typing as
distinct concepts. My comment mostly applies to dynamic-typed languages.

In JavaScript, `1 + {}` gives the string `1[object Object]`. In Python, `1 +
{}` crashes. Neither language has a static type system, so neither language is
able to disallow an expression like `a + b`; it needs to actually execute the
code to find out that you're adding two things that don't make sense. I think
most examples of "Python is stronger-typed than JavaScript" are of that form;
Python crashes while JavaScript silently does something that may or may not
make sense. Accessing a missing property, accessing an array out of bounds,
and calling a function with the wrong number of arguments are also examples
where Python crashes and JS doesn't.
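
For example, each of these raises an exception in Python but silently
produces a value in JS:

        ({}).missing                           // undefined -- missing property
        [1, 2, 3][10]                          // undefined -- out of bounds
        (function(a, b) { return a + b; })(1)  // NaN -- b is undefined
        1 + {}                                 // '1[object Object]'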

So "crashing less" is pretty much inherent in my (simplified) definition of
weak typing, and not meant to be anything controversial.

~~~
goatlover
Why is crashing less often better in these cases? When it crashes, you know
there is a problem with the code that needs to be fixed. When it doesn't, you
might have nonsensical results that you're not aware of.

~~~
alangpierce
It depends on the exact context, but I think that in many cases, it's better
to present wrong data to the user than to crash the page, since crashing the
page makes it useless. Let's say Facebook accidentally introduces and ships
some bug when computing the "like" count on about 1 in 5 news feed items. I
could imagine two scenarios:

1.) The JS code crashes and the Facebook news feed page breaks for basically
everyone. The flood of errors is reported to the error monitoring system and
Facebook engineers frantically fix or roll back the problem to limit the
amount of time Facebook is unusable.

2.) 1 in 5 news feed items shows "NaN people liked this", but Facebook is
otherwise usable. The flood of errors is reported to the error monitoring
system and Facebook engineers frantically fix or roll back the problem to
limit the amount of time the weird "NaN people" message is shown.

Scenario #1 is a really bad outage, and scenario #2 is a temporary
curiosity/annoyance that most people don't notice, and hopefully it's clear
that scenario #2 is a better situation for everyone. But it really depends on
context; if the bug is "a bank's website shows incorrect account balances",
then crashing the page is probably a better user experience.

I'm certainly not saying JS got it right; JS doesn't do the part from #2 where
it alerts you if there's a non-fatal error in production. But I think the
basic idea of error resiliency has plenty of merit.

------
rictic
If you're wondering why JavaScript does this, it's because early versions of
the language did not have exceptions. So every operation defined at the time
has to return some value for every possible input, even when those inputs
don't make any sense.

It's likely that the only way we could get rid of this behavior would be
through some `use strict`-like opt-in semantics.

TypeScript helps a lot too:
https://www.typescriptlang.org/play/#src=0%20%3E%3D%20null%3B%0D%0A0%20%3E%3D%20undefined%3B%0D%0Anull%20%3E%3D%200%3B%0D%0A'10'%20%3E%2010%3B%0D%0A'10'%20%2B%2010%20%3E%2010%3B%0D%0A

------
userbinator
Dynamic languages with implicit type conversions are designed to make the easy
cases easy (think of JS operating on webpage input, where everything is
initially a string; would it have the influence and popularity it does today,
if lots of explicit conversions had to be written?), but as a side-effect, the
hard cases can become perplexing.

I have no doubt those steps specified in the standard had plenty of thought
put into them. It has to handle all the types, as well as those special cases
of infinities, NaNs, and +/-0, such that the common cases make sense. However,
I think a flowchart or other graphical means of illustrating those algorithms
would be far easier to understand.

~~~
TheRealPomax
You mean in this blog, or in the spec? (For the spec, having the actual steps
spelled out is pretty much all you need when you're a spec implementation
engineer. The flowchart is basically just eye candy for readers.)

------
acjohnson55
The rule `if null < 0 is false, then null >= 0 is true` seems to rely upon the
law of the excluded middle, which is violated by the comparison algorithms. I
think this summarizes the issue.

~~~
lou1306
More precisely, it relies on the assumption that

(x > y) || (x == y) || (x < y)

is true for all values x, y.

But the semantics of the comparison operators lead to null > 0 || null == 0 ||
null < 0 being false, and the whole house of cards comes crumbling down.
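
Easy to verify:

        (null > 0) || (null == 0) || (null < 0)   // false -- trichotomy fails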

------
Animats
This is what comes from allowing conditionals on undefined values. The result
of the relational operators isn't allowed to be "undefined". Nor is it an
error to apply "if" to "undefined".

A similar problem comes up at the CPU level with IEEE 754 floating point
arithmetic. There's a set of reasonable rules obeyed by FPUs about how results
can yield a NaN. NaN values propagate through the math operations; if any
operand of +, -, *, / is NaN, the result is NaN. That was well thought out.

But it breaks down at comparisons. Comparisons always return True or False.
All ordered comparisons with NaN (==, <, <=, >, >=) return False; only !=
returns True. There's no such thing as "not a Boolean". Logically, there
should be "not a Boolean" in compare condition bits, and attempts to use it to
control a branch should cause an exception.

For historical reasons, FPUs were designed as add-ons to the main CPU. They
were once separate "coprocessor" chips, back when one chip couldn't contain
enough transistors to do both jobs. Early microprocessor FPUs had an arms-
length relationship with the main CPU, and the x86 instruction set still
reflects this. So the FPU comparison results and the branch logic aren't
tightly integrated.

(There's also the fact that few languages can handle a floating-point
exception well. You usually can't just put a try/catch around your number
crunching and get an exception if the computation starts crunching
meaninglessly on NaNs.)
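
(JS numbers are IEEE 754 doubles, so you can see this from a console:

        NaN < 0      // false
        NaN >= 0     // false
        NaN == NaN   // false
        NaN != NaN   // true -- the only comparison with NaN that's true

and `if (NaN)` silently takes the else branch instead of trapping.)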

------
danschumann
In a game of code golf, or something where you purposely make your code as
hard to read as possible, I can imagine doing something like `if (!(x>0) &&
!(x==0) && (x>=0))` instead of doing `if(x===null)`.

~~~
koolba
Not as hard as possible; as short as possible, i.e. a keystroke is a stroke.

The hard-to-read part is a pleasant side effect.

------
bga
Did he seriously claim at the end that that made sense "mathematically"?

~~~
matthewaveryusa
Yes. For X and Y being of the same type, defining the < operator should be
enough to derive all other operators:

Define X < Y, and then derive the rest:

X > Y : Y < X

X == Y: !(X < Y) && !(Y < X)

X != Y: !(X == Y)

X >= Y: !(X < Y)

X <= Y: !(Y < X)

That's why it's the only operator that needs to be defined for std::map/set
(which are rb-trees) in C++

Now, if X and Y aren't the same type, the only thing you can expect to get out
of the system is a cronenberg.
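
Sketched in JS, given some user-supplied lt that implements a strict weak
ordering:

        const gt = (x, y) => lt(y, x);
        const eq = (x, y) => !lt(x, y) && !lt(y, x);
        const ne = (x, y) => !eq(x, y);
        const ge = (x, y) => !lt(x, y);
        const le = (x, y) => !lt(y, x);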

~~~
adwn
> _X == Y: !(X < Y) && !(Y < X)_

Only for totally ordered sets [1]. Floating-point numbers, for example, are
not totally ordered, because !(NaN == NaN).

[1]
[https://en.wikipedia.org/wiki/Total_order](https://en.wikipedia.org/wiki/Total_order)

------
gcb0
bottom line is: if your language allows you to compare apples to oranges, then
you are comparing apples to oranges.

------
recentdarkness
I suspected as much when I started reading this article and saw the > and ==
cases both come out false. I guess that with some experience, one has probably
implemented such logical shortcuts oneself. And now I'm pretty sure this is a
question of backwards compatibility, no longer easily reversible.

------
jmull
I think a lot of you are drawing overly broad conclusions from this, according
to your predispositions.

A good language feature should be intuitive, given a basic understanding of
the feature, and this falls within that.

A JavaScript programmer considering what the relative order of null and 0 is
(and therefore how an inequality operator between them would behave) would
intuitively conclude, "I don't know, 0 and null don't have a natural relative
order. I probably shouldn't be doing that or else I need to go to the spec."

Anyway, you can hardly judge a language feature as bad because it allows you
to do things that don't make sense (in an edge case, no less)... then what
language features in any language would be good?

~~~
catnaroek
> then what language features in any language would be good?

Those that obey general principles without exception. For example:

(0) “Abstraction clients must not rely on implementation details”: parametric
polymorphism, abstract data types.

(1) “Case analyses must be exhaustive”: algebraic data types and pattern
matching.

(2) “Make as few assumptions as possible about the intended meaning of
programs”: principal types.

------
roberttod
I wish in JavaScript that inequality operators would always return false if
either side of the operator needed coercion, at least then this sort of
scenario would be predictable (and a little more JavaScripty than throwing an
error).

As it stands, it's bad practice to rely on this sort of edge case in your
code. If you know that one of the variables can be null, always handle that
case before doing the comparison.
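
E.g.:

        if (x != null && x >= 0) {  // != null rules out both null and undefined
          // safe: x is neither null nor undefined here
        }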

~~~
nebulous1
It does.
[https://developer.mozilla.org/en-US/docs/Web/JavaScript/Equality_comparisons_and_sameness](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Equality_comparisons_and_sameness)

------
alkonaut
See, the biggest problem with JS dynamic types isn't "type uncertainty" but
coercion. Fail instead of limping along, ffs.

------
tejas1mehta
I think the right response for such an operation would be to throw a runtime
error as opposed to coercing null into a number. That's what a lot of other
languages, like Ruby, do. While I understand JS's philosophy of dynamic typing
and coercion rather than runtime errors, I think this is taking it too far.

------
dwenzek
Here is a nice investigation of how such curious javascript behaviors can have
an impact on a real code base.

[http://stedolan.net/incomparable/](http://stedolan.net/incomparable/)

------
spo81rty
The least favorite things of all developers... Nulls and JavaScript.

------
singingfish
I had to analyse some javascript malware the other day. It was quite
impressive, and reminded me about some of the hateful parts of javascript.

------
sAbakumoff
Life is too short to understand why JavaScript acts the way it does.

------
quickthrower2
It ain't no preorder

------
infogulch
Dynamic typing is a nightmare.

~~~
namelost
It's not so much dynamic typing, but a lack of typing. Dynamic typing means
that type checks are made at runtime.

e.g. in Ruby:

    
    
        irb> nil >= 0
        NoMethodError: undefined method `>=' for nil:NilClass
    

and Python 3:

    
    
        >>> None >= 0
        TypeError: '>=' not supported between instances of 'NoneType' and 'int'
    

Obviously JS (and Python 2!) get this wrong, but to be fair JS was designed in
the 90s.

~~~
matt2000
I agree with everything you said, except for the part where you said
JavaScript was "designed".

------
wwwkeykey
Forget JS! WebAssembly is coming and you can use nice languages with it, for
example C#. And if you need compatibility with older browsers, you can still
compile WebAssembly to JS and send it to old browsers.

~~~
dep_b
That's nice. But also a new attack vector.

------
nebulous1
*or

