
You want to set allow_nan=False to be compliant. Otherwise this _will_ annoy someone who has to consume your shoddy pseudo-JSON from a standards-compliant library
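For the Python standard library's `json` module, the difference looks like this: by default the encoder emits the non-standard tokens, while `allow_nan=False` makes it fail loudly instead.

```python
import json

# With the default allow_nan=True, json.dumps happily emits the
# non-standard tokens NaN, Infinity, and -Infinity.
print(json.dumps({"x": float("nan")}))  # {"x": NaN}  -- not valid JSON

# With allow_nan=False, serialization raises instead of producing
# output that a strict parser will reject.
try:
    json.dumps({"x": float("nan")}, allow_nan=False)
except ValueError as e:
    print("refused:", e)
```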

Funny (well, not really) thing is that NaN and Inf are perfectly valid floating point values according to most (?) standards used on computers, to the point that I don't understand why they were left out of JSON. So unless you're 100% sure you won't encounter these values, the choice is between not being able to use JSON, finding hacks around it (and using null isn't one of them, since you have three distinct values to represent), or using non-compliant-yet-often-accepted JSON and possibly annoying someone whose parser doesn't handle it.

And for me there have been quite a lot of cases where I just quickly needed something simple to interface between components, so when I find out they all support JSON+NaN/Inf, the choice is usually made quickly.
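That "they all support it" situation is easy to check for Python-to-Python interfaces: the `json` module accepts the non-standard tokens on input as well, so non-finite values round-trip with no extra work.

```python
import json
import math

# Python's json parser accepts NaN, Infinity, and -Infinity by default
# (they are handled via the parse_constant hook), so two Python
# components can round-trip these values without any convention.
payload = json.dumps([float("inf"), float("-inf"), float("nan")])
values = json.loads(payload)
assert values[0] == math.inf and values[1] == -math.inf
assert math.isnan(values[2])
```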



From a practical standpoint, defining numbers in JSON to be "whatever double precision binary floating point does, or optionally something more precise" would have been good enough, and capture what we end up having anyway.

Still, I prefer Crockford's choice: that JSON numbers are defined to be numbers. Infinity and the flavors of NaN are... not numbers.

In an extensible data interchange format, like [edn][1], people could define conventions about more specific interpretations of numbers, e.g.

    #ieee754/b64 45.6653 ; this is a double
We could build such a format on top of JSON (there are probably multiple), but I again agree with Crockford that this sort of thing does not belong in JSON.

Makes for a bunch of headaches, though, for sure.

One example is a data scientist I used to work with. He was working with lots of machine learning libraries that liked to use NaN to mean "nothing to see here." A fellow developer ended up writing code that used some sort of convention to work around it, e.g. number := decimal | {"magic-uuid": "NaN"}. I can see why some people are of the opinion "this is stupid, just allow NaNs." I disagree.
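A minimal sketch of that kind of sentinel convention, assuming a made-up tag key (the actual convention the commenter's colleague used isn't specified):

```python
import json
import math

SENTINEL_KEY = "__nonfinite__"  # hypothetical tag, purely for illustration

def encode(x):
    # Wrap non-finite floats in a tagged object; pass everything else through.
    if isinstance(x, float) and not math.isfinite(x):
        return {SENTINEL_KEY: repr(x)}  # "nan", "inf", or "-inf"
    return x

def decode(obj):
    # Unwrap the tagged object back into a float.
    if isinstance(obj, dict) and set(obj) == {SENTINEL_KEY}:
        return float(obj[SENTINEL_KEY])
    return obj

payload = json.dumps([1.5, encode(float("nan")), encode(float("inf"))],
                     allow_nan=False)  # strict mode now succeeds
restored = [decode(v) for v in json.loads(payload)]
assert math.isnan(restored[1]) and restored[2] == math.inf
```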

[1]: https://github.com/edn-format/edn


> We could build such a format on top of JSON (there are probably multiple)

Wouldn't a perfectly valid JSON pretty-printer nuke your values, since it could parse a 128-bit value to a double before writing it out again? I wouldn't trust your theoretical format unless it ensured that normal JSON parsers, which are not aware of its requirements, fail to parse it.


The theoretical format would have to represent the more-specific numbers using strings, e.g. if we want to mimic the edn:

    {"#ieee754/b64": "123.45"}
Never mind that this takes up way more space, among other problems.

If an intermediate parser encounters this, we don't have to worry about its representation of JSON numbers, because our numbers don't involve JSON's.
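A quick sketch of why the string carrier survives intermediaries, using a hex representation of a double (the tag key mimics the edn example above and is purely illustrative):

```python
import json

# Carry an exact double as a string, so an intermediate JSON tool never
# touches it as a number. float.hex() gives a lossless text form.
original = {"#ieee754/b64": (0.1 + 0.2).hex()}  # '0x1.3333333333334p-2'

# A round trip through a generic parser and pretty-printer (which may
# reserialize *numbers* however it likes) leaves the string untouched.
reprinted = json.dumps(json.loads(json.dumps(original)), indent=2)
value = float.fromhex(json.loads(reprinted)["#ieee754/b64"])
assert value == 0.1 + 0.2  # bit-exact
```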


> Funny (well, not really) thing is NaN and Inf are perfectly valid floating point numbers acoording to most (?) standards used on computers. To the point that I don’t understand why it was left out of JSON.

There are all kinds of ways to encode that in JSON, but (contrary to JS, where "numbers" are IEEE doubles, which include various things that are either not numbers or not finite) JSON numbers are generic finite numbers (finite both in magnitude and in decimal representation), so "as JSON numbers" is not one of them. (And there's no explicit way defined in JSON, so if you want it to be unambiguous, you need externally defined semantics; but you need that for most real uses anyway.)


> To the point that I don't understand why it was left out of JSON

I think you're forgetting the birthplace of JSON. Who deals with the concept of infinity and NaN in the context of web front ends?


Ranges are pretty common in APIs and both -Infinity and Infinity can naturally arise from one-sided ranges. Since they are absent in JSON, they are frequently replaced with null, ad-hoc sentinel values with uncoded assumptions (e.g. timestamps should be always positive) and missing fields.
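A sketch of the null workaround for one-sided ranges (field names invented for illustration): ±Infinity becomes null on the wire, and the consumer must know by convention which bound a null stands for.

```python
import json
import math

def to_wire(lo, hi):
    # Replace infinite bounds with null; the meaning lives in convention.
    return {"min": None if lo == -math.inf else lo,
            "max": None if hi == math.inf else hi}

def from_wire(r):
    # The consumer re-infers the infinities from the nulls.
    lo = -math.inf if r["min"] is None else r["min"]
    hi = math.inf if r["max"] is None else r["max"]
    return lo, hi

wire = json.loads(json.dumps(to_wire(0, math.inf)))
assert from_wire(wire) == (0, math.inf)
```

The fragility is exactly as described: nothing in the payload itself says whether a null means "unbounded", "unknown", or "absent".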


I get that, but to go from "oh, this won't be very common" to willingly deciding "let's just leave this out" is something else. At least in my mind :) Or was it an oversight?


I suspect it was a bet on worse is better.

Whether it was a good bet is debatable, but given Crockford's focus on "try and leave out as much as possible" I can certainly see it making sense at the time.


> To the point that I don't understand why it was left out of JSON

Because JSON has generic numbers that just happen to be able to represent every numeric IEEE floating point double value. In theory you could have an implementation that uses a BigDecimal class or something similar to represent numeric values. Which is of course completely incompatible with every other JSON implementation and just asks for badly tested edge cases to rear their ugly head.
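Python's standard library actually exposes this directly: `json.loads` takes a `parse_float` hook, so a JSON number can be bound to an arbitrary-precision `Decimal` instead of a double.

```python
import json
from decimal import Decimal

# A JSON number is just a digit string, so a parser is free to bind it
# to an arbitrary-precision type.
exact = json.loads('{"price": 0.1}', parse_float=Decimal)
assert exact["price"] == Decimal("0.1")  # no binary rounding on input

# The interoperability caveat from above: a peer parsing the same
# document into doubles sees the nearest double instead, so the two
# implementations silently disagree about the value.
assert Decimal(float(exact["price"])) != Decimal("0.1")
```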


> every numeric IEEE floating point double value

Well, but not NaN and Inf? Or which IEEE are you referring to?


NaN and Inf are not numbers.



