
0.2 + 0.1 === 0.3

That's not really a JS problem, that's a floating point problem. Plenty of languages will have the same issue.

+!![]

"" - - ""

(null - 0) + "0"

Calling these things weird is fair enough but I can't help thinking this is code you'd never actually write outside of the context of a "Look how weird JS is!" post. It's like picking examples from the International Obfuscated C Code Contest to show how hard it is to understand C. Yes, some languages enable you to write weird garbage code that's hard to reason about, and ideally they wouldn't because enforcing sane code would be great, but come on. Most of these things aren't a big problem that we suffer every day. Just be moderately careful about casting things from one data type to another and all these problems go away.




I think the situation is a bit different in reality. The result may be (null-0)+"0", but the actual code will be foo()-bar()+baz(). And C will at least give you a warning about types, even if NULL-0+"0" could give you an address. Plain JS without extra tooling would happily give you the unexpected result. Some other dynamic languages would at least throw an exception about incompatible types for -/+.
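To make it concrete, a contrived sketch (foo/bar/baz here are just stand-in helpers, not from any real codebase):

    // Stand-in helpers: imagine bar() hits an error path and returns null,
    // and baz() reads a numeric-looking value that is actually a string.
    function foo() { return 10; }
    function bar() { return null; }
    function baz() { return "0"; }

    const result = foo() - bar() + baz();
    console.log(result, typeof result);   // logs: 100 string -- result is the string "100", no warning, no exception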


We've had these exact same sorts of issues in PHP. It can go undetected for a while and cause subtle bugs. A good type system helps a lot. I appreciate that Kotlin is more stringent than Java, with no implicit conversions between Int/Long/Float/Double.


I once lost most of an afternoon debugging an issue where orders in a PHP e-commerce system would very occasionally fail.

Turns out several months before, someone was doing some refactoring, moved some methods around, but also changed a "==" to a "===" in the process. Generally a good idea, but it slipped through to production without anyone noticing or breaking any tests.

The issue ended up being that a rare code path in a tangentially related method would cause that method to return a float instead of an int. This propagated through, eventually causing a check of 0.0 === 0 to fail where previously 0.0 == 0 passed.


The problem here is == was used in the beginning. Always use ===.


Unfortunately "use X from the beginning" is rarely a solution when you're no longer at the beginning.


Because it's more to do with weak typing as opposed to dynamic typing. Many dynamic languages are strongly typed.


> C will at least give you warning about types

For the same reason that, as the saying goes, there is no such thing as a "compiled language", no, "C" doesn't give you a warning about types. The compiler you're using gives you a warning. If you want a typechecking pass over JS, then use a tool that does typechecking. Eschewing a typechecker and then complaining about the lack of type mismatch warnings, however, makes little sense.


Sure, those are true but very specific cases. But in day to day usage: when you're writing plain JS, you have to do extra work to get type checks. When you're writing plain C, you have to use an unusual environment to not get basic type checks.


So? The majority of the business world does their desktop computing with Microsoft Windows, but it doesn't mean you have to. The same principle applies here. If you don't like your environment, fix it. Choosing not to and then complaining about the result makes little sense.


You don't control the whole environment. You'll likely use some libraries where people didn't use type checkers and wrote libraries in a complicated enough way that the analysis cannot give you an answer. This is where you control some of your environment and fixing it involves forking dependencies and more maintenance burden if you really do want to do it.

In this case, complaining about the environment as a whole does make sense.


Yeah guess this is where Typescript comes in


TypeScript won't magically fix type errors. It is not that hard to sanitize any expected parameter for functions and their output; TypeScript won't do that for you. So if you typecheck I/O values yourself, using TypeScript only slows down the dev process, as this can easily be done in vanilla JavaScript. No need for more bloat, only for some defensive programming.


TypeScript "magically" fixes the need for defensive programming by ensuring you won't write unsanitary code or invoke functions with possibly null&undefined values by accident. So then clearly the defensive programming is the bloat, because you could avoid it entirely by putting a type system in place to prevent you ever putting yourself in a situation where null&undefined get passed to functions you don't want it to be.


This falls apart the moment you're pulling remote data at runtime. You're right back to defensive programming since there's no more type system to help you at that point.


That's nowhere near "falling apart". That's just the simple fact there's no silver bullet.

Someone arguing a little defensive programming is equivalently strong to a type system is clearly unaware just how much work a good type system does for you. I of course agree with you: when you're fetching data that you don't know the type of, recklessly casting it to some type is going to cause issues. This is true in every language that has ever existed. It's also why tools like IO-TS[0] exist, and of course you can enforce this with JSON Schema techniques or custom validators or a million other options.

Edit: in case the ultimate point of this comment is not clear, by using a type system and some type validator on your fetches, you are able to reduce the need for defensive programming exclusively to your fetch points. Clearly, defensive programming at the fetch points was already needed, so this is why I do not agree with the claim TypeScript's value add disappears from remote fetches.

[0]: https://github.com/gcanti/io-ts
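The shape of the idea doesn't even require io-ts; a minimal hand-rolled sketch of a validator at the fetch boundary (endpoint and field names here are made up) looks something like:

    // Hypothetical /api/user response we expect: { id: number, name: string }
    function parseUser(data) {
      if (
        typeof data === "object" && data !== null &&
        typeof data.id === "number" &&
        typeof data.name === "string"
      ) {
        return { id: data.id, name: data.name };
      }
      throw new Error("Unexpected /api/user payload: " + JSON.stringify(data));
    }

    // The fetch boundary is the only place that needs the defensive check;
    // everything downstream can trust the shape parseUser returns.
    fetch("/api/user")
      .then((res) => res.json())
      .then(parseUser)
      .then((user) => console.log(user.name.toUpperCase()));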


So you are saying that because we need to do validation in a very specific case, we should just throw in the towel and do validations every time?

The IO entry point of your code will always be unknown no matter what programming language you are using. In TypeScript, you do validations in these cases to make sure outside data fits into your type system; from then on (probably about 99% of the rest of the code), you won't need any validation whatsoever because the compiler is already doing it for you.

Bloat is the amount of time you lose doing code review to check if things are possibly null or doing null checks on stuff that is never null or a bunch of other stuff that the compiler will do for you just by writing some minimal types. The compiler does this stuff automatically without getting tired, can't say the same thing for humans.


So when the transpiled TypeScript is used by the next-door 'I know JavaScript' 1337-hax0r and is fed some arbitrary data, your wonderful conceptual typed world does not exist anymore and that wonderful code eventually fails, because a simple sanity check was too much.


This seems like a pretty bad-faith comment. This user is not proposing they're building a library, and if they were, it would be reasonable to assume they would also extend the IO protections they discuss to the "IO" points of a library: external user input.

Additionally, I think it is safe to say a JavaScript user who grabs a TypeScript library and uses it without importing the types is misusing the library. Imagine someone who had a whole test suite available to them while they develop, and they opted never to run it. And then they complained to you that the tests didn't catch any errors. You would look at them sideways, no? Misuse and poor application (human error) are of course things TypeScript cannot solve.


> no need for more bloat

What bloat are you referring to? TS compiles down to plain JS.


Toolchain bloat is still bloat.


To call TypeScript, a very strong type system which compiles down to terse JS with no extra JS emitted even for complex types and which accordingly removes the need for all sorts of tests, "toolchain bloat" is a fairly one-dimensional view of things.

Edit: I accept the downvotes for my tone and have updated it. However, I do feel that by the exact same argument "toolchain bloat" exists, surely one could seamlessly argue "testing bloat" exists, and it should be transparent from the popularity of TypeScript that it's a good tradeoff


Yup, garbage in garbage out. I'm not a huge fan of JS, but this sort of criticism is absurd.


I'd rather have the language say "Error, this is garbage!" than silently output garbage.


The sad truth is that this stuff was not in the first version of JS. It was added AT THE REQUEST OF DEVS (a decision Eich has said he regrets).

Like most bad things, it's only around because a big company said so.

Like all bad things in JS, there was a push to remove it at ECMA, but Microsoft had reverse-engineered JS into JScript and refused to go along with the changes to fix the weirdness.


The root of the problem is the original intent of Javascript. Javascript was intended to be a small layer of dynamism added to web pages that were mostly defined via HTML which were presented to a human for interpretation. When your user agent is a human trying to look up an address for a restaurant, they can look at a garbled piece of crap where the JS crashed and still maybe find what they were looking for in the rendered text. Limp along the best you can is a great failure strategy for Javascript's original use-case. Only now that we've turned Javascript into a general purpose programming language is this a failure.

Even Javascript's weak typing makes sense in this case. Why automatically convert everything? Because the expectation was that inputs to your Javascript would be HTML attributes, which are all strings. Automating type conversions from strings made sense. But once you move to larger scale programming, Javascript's weak typing is awful.


Which is why we use Typescript


The fact that there has to be a different language on top of your language to make it same days all that needs saying really.


I keep having this argument with my boss but he refuses to let me write machine code.


If the good lord wanted us to code in assembly language, he'd have made transistors operate on mnemonics, not electric currents.


If the good lord had wanted us to interact with transistors based on mnemonics, not electrical currents, he'd have implemented our brains in mnemonics, not electrical currents.


Nitpick: voltage potentials and ion channels. There's not much actual current flowing.


Nitpick over nitpick: synapses are not exactly electrical or ion current based, they have active transporters.

If all electrical activity ceases, does memory survive? (The answer is very likely yes, given cryonics experiments. The brain is protein; ion currents have a tendency to auto-fire on defrost.)


Ah, I was under the mistaken assumption that ion transport was a subset of ion channels but I see now the latter is passive only.


This goes for all compiled languages?


We invented compiled languages because of issues with writing everything in assembly. In other words, we invented C because assembly wasn't very good.

Just like we invented TypeScript because JavaScript wasn't very good.


Pragmatism of a runtime/ecosystem that highly favors backwards compatibility above all else.


Man do I hate my phone's autocorrect. For anyone confused as shit:

The fact that there has to be a different language on top of your language to make it sane says all that needs saying really.


Typescript doesn’t save you from all the weird things happening at runtime. A missing check at a context boundary and you can have a wild time (been there).


Context boundary meaning where it interfaces with javascript?


The points where you parse JSON, for example.


This particular annoyance has made me a huge fan of Elm's (and other languages') JSON decoders, and specifically No Red Ink's JSON decoding pipeline. All the type safety I could ever want and no falling back to writing defensive JS to maintain safety at runtime.


I've been thinking for a while that modern languages shouldn't default to floating point computations. They're exactly the right thing if you do data stuff or scientific computing, but given how much of the internet relies on things like e-commerce and how often floating point is still misused for dealing with money, coupled with the fact that even many senior developers aren't fully aware of the subtleties of floating point, I don't understand why we keep making them the default.

If a user does need to do scientific computing, they could use a "# use-floating-point" pragma or something so that literals get interpreted as floating point. Otherwise, we map them to rationals.

Of course, doing rational arithmetic is much slower (for example, Gaussian elimination is exponential for arbitrary precision rationals, while it's famously O(n^3) for floating point), so there's a danger of people accidentally using it in triply nested loops etc., but I have a feeling that if you need to do that kind of thing you know that you might run into performance issues and you think twice.
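As a rough sketch of what "rationals by default" would buy you (a toy unreduced-fraction type over BigInt, not a proposal for real syntax):

    // Toy rational type: numerator/denominator as BigInts, no reduction or sign handling.
    const rat = (num, den) => ({ num: BigInt(num), den: BigInt(den) });
    const add = (a, b) => rat(a.num * b.den + b.num * a.den, a.den * b.den);
    const eq  = (a, b) => a.num * b.den === b.num * a.den;

    eq(add(rat(1, 10), rat(2, 10)), rat(3, 10));   // true, exactly
    0.1 + 0.2 === 0.3;                             // false with IEEE doubles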


I have been a professional software developer full-time for 12 years and I have only worked on one system that needed an exact decimal representation for money. I just don't deal directly with payments, billing, or account balances. I do sometimes have to represent money, but in simulation or estimation scenarios where nobody gets cheated or audited if it's $0.01 off.

Arbitrary-precision rationals would get extremely hairy extremely quickly and simply break down for a huge number of use cases. People use exponentiation, square roots, and compound interest! If a "senior developer" who actually works with real account balances doesn't understand floating point, why would you expect them to understand why they can take the square root of 4, but the square root of 2 crashes their application? Or why representing the total interest on a 30-year loan at 3.5% takes over 400 bits?

The reality is that software engineers need to understand how computers work for the field they're working in. Many (if not most) programmers will never encounter a situation where they're responsible for tracking real account balances. The ones that do simply need to know how to do their job.


I agree. I think a high level programming language like JavaScript should default to the more "correct" (least surprising) behaviour, and let the programmer opt in to floating point numbers when needed for performance.

In modern JavaScript, you can use BigInt literals by suffixing an n, like this:

  const maxPlusOne = 9007199254740992n;
If I could magically redesign the language, I would make all integer literals be effectively BigInt values, and I would make all decimal literals be effectively decimal values so that 0.1 + 0.2 === 0.3. I would reserve the letter suffixes for more performant types like 64-bit floating point numbers that have surprising behaviour.
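For comparison, here is how the current defaults behave (the n suffix being the BigInt literal mentioned above):

    9007199254740992 + 1 === 9007199254740992     // true: Number silently loses precision above 2^53
    9007199254740992n + 1n === 9007199254740993n  // true: BigInt arithmetic stays exact
    0.1 + 0.2 === 0.3                              // false: decimal literals are binary floats today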


> "# use-floating-point" pragma

I haven't come across the "pragma" directive in a Javascript context. I guess it's another new feature (I have trouble keeping up these days).


I don't think they have it (although, you can do whatever you want with babel nowadays, I guess). It was a suggestion aimed at programming languages in general.


The javascript equivalent would be an expression like 'use strict';


I was being flip; I should have used a smiley, I guess. Sorry.


Having the full number stack supported (natural numbers - integers - rationals - reals) would be, indeed, awesome.

Sadly, most people don't understand the distinctions, so this will never happen. (Even in reply to your post people keep talking about "decimals", as if the number base is at all relevant here.)


"Decimal" usually refers to a data type which is "integer, shifted by a known number of decimal places". So, for example, if you had an amount of money $123.45, you could represent that as a 32-bit floating point number with (sign=0 (positive), exponent=133, mantissa=7792230) which is 123.4499969482421875, but you would probably be better off representing it with a decimal type which represents the number as (integer part=12345, shift=2), or as just a straight integer number of cents.

The number base is relevant, because money is discrete, and measured in units of exactly 1/10^n, and if you try to use floating point numbers to represent that you will cause your future self a world of pain.
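A tiny sketch of that "integer plus shift" idea in JS terms, keeping money as integer cents and only formatting for display (the helper names are made up):

    // Money as an integer number of cents: exact, no binary rounding involved.
    const priceCents = 12345;                      // $123.45
    const addCents = (a, b) => a + b;              // plain integer arithmetic
    const formatDollars = (cents) => (cents / 100).toFixed(2);

    formatDollars(addCents(priceCents, 55));       // "124.00"
    0.1 + 0.2;                                     // 0.30000000000000004 with binary floats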


Non-decimal currencies have existed in the past and there are still some remnants: https://en.wikipedia.org/wiki/Non-decimal_currency


Yes, that's a good point. If you try to use decimal for those currencies, you are in for a very similar world of pain as you are by using floats for decimal currencies.


> reals

This is technically impossible. Almost all of the real numbers can't be represented in a computer. The most you can get is the computable reals. But to be able to represent some of those and to do arithmetic on them, you have to use somewhat complicated representations such as Cauchy sequences.

The use cases for computing with exact representations of (computable) irrational numbers are fairly limited (probably mostly restricted to symbolic algebra). In 99% of cases, if you need the square root of 2 that's probably because this comes from some sort of measurement (e.g. you need the diagonal of a square with sides 1), and if it's a measurement, there's going to be an error associated with it anyway and there's no point in insisting that this number that you measured is "exactly" sqrt(2).

This is different from rational (and, in particular, integer) numbers, in which case there are many valid use cases for representing them exactly, e.g. money.


> This is technically impossible.

It's technically impossible for the integers too.

> Almost all of the real numbers can't be represented in a computer.

Almost all of the integers can't be represented in a computer.

What, exactly, is your point?


These two things are not alike.

For every integer, there exists a computer that can represent it. Even with constant memory, I can right now write a computer program that will eventually output every integer if it runs for long enough.

By contrast, for almost all (i.e. an uncountable number of) real numbers there exists no computer whatsoever that can ever hope to represent any of them.

Another way of seeing that is that, while Z is infinite, any single integer only requires finite amount of memory. But a real number may require an infinite amount of memory.

The integers can also be represented fairly easily as a type. For the naturals, for example, it's as easy as

  data Nat = Z | S Nat
(ML-type languages allow to do this very concisely, but you can do theoretically the same type of thing with e.g. Java and inheritance; if you use Scala or Kotlin, use a sealed class, if you use Swift, use an enum, etc.)

The integers are slightly more complicated (if you just try to add a sign, you'll have to deal with the fact that you now have +0 and -0), but still not hard. Rationals are a bit harder in that now you really have multiple different representations which are equivalent, but you can also deal with that.

By contrast, you won't be able to construct a type that encodes exactly the set of real numbers. The most you can do is to provide e.g. an interface (or typeclass) Real with some associated axioms and let any concrete type implement that interface.


You're trying to explain the difference between countable and uncountable infinities here.

The distinction is irrelevant in the context of computers.

> By contrast, you won't be able to construct a type that encodes exactly the set of real numbers.

You don't need to encode exactly, just like you don't need to encode the integers "exactly".

All you need is a way to guarantee a finite number of significant digits in your real number approximation.

Floating point numbers give you that, problem solved.


> Floating point numbers give you that, problem solved.

No. For example, floating point addition is not necessarily associative. In that sense, floating point numbers aren't even a field and it's wrong to say that they can be used as a stand-in for real numbers.

Floating-point numbers are incredibly useful and it is amazing that we can exactly analyze their error bounds, but it's wrong to treat them as if they were real numbers.


> That's not really a JS problem, that's a floating point problem

More accurately it's a binary problem. 0.1 and 0.3 have non-terminating representations in binary, so it's completely irrelevant whether you're using fixed or floating point.

Any fraction whose denominator (in lowest terms) has only 2 and 5 as prime factors has a terminating decimal representation, whereas only fractions whose denominator is a power of 2 have a terminating representation in binary. The latter set is clearly a subset of the former, so it seems obvious that we should be using decimal types by default in our programming languages.

Oh well.
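You can even watch this from inside JS; toString(2) prints the binary expansion of the (already rounded) double:

    (0.5).toString(2)    // "0.1" -- terminates, since 1/2 is a power of two
    (0.1).toString(2)    // "0.0001100110011001100110011..." -- the repeating pattern, cut off where the 53-bit significand ends
    (0.1 + 0.2).toString(2) === (0.3).toString(2)   // false -- the two sides round to different doubles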


While true, there will be rational numbers you can't represent as floating-point numbers no matter which base you choose. And the moment you start calculating with inexactly represented numbers, there is a risk that the errors might multiply and the result of your computation will be incredibly wrong. This is the much bigger "problem" of floats, not the fact that 0.3 is not "technically" 0.3, but off by some minuscule number.


It's not a binary problem, it's a particular binary representation problem. You can represent 0.1 and 0.3 such that it terminates. In Java for example just use BigDecimal (integer unscaled value + integer scale) and you are ok.


The floating point standard defines various float types, the common float/double types are called binary32 and binary64. It also defines decimal types.

> integer unscaled value + integer scale

binary float does the same, just using 2^exp instead of 10^exp for scale.


In pure mathematics, you can get perfect precision with non-terminating fractions. For example, 0.(6) + 0.(3) = 1 is true. The decimal (or binary) representation is just "syntax sugar" for the actual fraction - in this case, 2/3 + 1/3 = 1; or, if you prefer binary, 10/11 + 1/11 = 1, i.e. 0.(10) + 0.(01) = 1.

Note: I'm using a notation for infinitely repeating decimals that I learned in school - 0.(6) means 0.6666666...; 0.(01) means 0.010101010101...


Yes, it's a theorem that every rational number can be represented by a decimal number that is either terminating or repeating.


Floating bar numbers are an interesting way of giving terminating representations to more commonly used decimals. Each number is essentially a numerator and denominator pair, with some bits to indicate the position of the division bar separating them.

https://iquilezles.org/www/articles/floatingbar/floatingbar....


> 0.1 and 0.3 have non-terminating representations in binary

No. "1", "3" and "10" can all fit easily in just four bits.

Just use rational numbers and solve the problem for good.


No, it is a problem for any base. For example, the decimal system can represent 1/5, 1/4, 1/8 and 1/2 properly. But what about 1/3, 1/7, 1/6, 1/9 as decimal numbers with a finite number of digits?

This will be a problem for any base representation when numbers have to be boxed into a finite number of digits or a finite amount of memory.

One good thing is that decimal is a widely used format, so it is good to go with that for representing stuff. But it is more of an accidental advantage that decimal has. Nothing more.


Did not read the whole parent comment, so my message is redundant. But yes, objectively decimal can represent more numbers (the numbers that are composed of powers of 1/2 and 1/5).

I think there are good arguments for using decimal as the representation; [1] is where I originally came to know about this problem.

[1] - https://www.crockford.com/dec64.html


Discussion on hn about this : https://news.ycombinator.com/item?id=16513717


Bring back BCD hardware (hmm. Does x86 hardware have built-in BCD arithmetic?)


Yeah, a lot of these have nothing to do with JS. I have no idea what the site is trying to accomplish.

I mean:

    !!!true
In what language does that (or its equivalent) not evaluate to false?



Great point, I'll edit my comment to say "(or its equivalent)", instead of "(that precise sequence of characters, no matter what you've redefined true and false to be)". Not sure why I originally wrote it that way.


One might want to add that it does work when you use a variable however. https://play.golang.org/p/6UfNFWm_-JR


The issue isn't that they're constants, but that GP called them "true" and "false" and assigned the opposite values you'd expect. You can break it in exactly the same way with variables: https://play.golang.org/p/EVU84l0A57I

Kinda crazy to me that Go doesn't reserve the words "true" and "false", but ¯\_(ツ)_/¯


Can anyone explain this to me?


Looks like the `true` and `false` _bindings_ in Go are mutable and can be re-assigned. The same thing was possible in Python 2, IIRC:

    False, True = True, False # Have fun debugging!


Ah, I missed that line. Now I feel stupid ;)


!!!true is equal to false in JS, and in Go (and in any other language I can think of.)

In that Go example above, the author is reassigning true and false to be their opposites.


`!` is added to values in most languages as a shorthand for saying "give me the opposite boolean value of this". So `!true` would equal `false`.

Some people add two exclamation points as a shorthand to cast a value to a boolean. So if you wanted to see if something was 'truthy', you could say `!!truthyValue` and it would return `true` instead of the value itself. Literally what you're asking the language is "give me the opposite boolean value of the opposite boolean value of `truthyValue`".

Now you can probably see why three exclamation points is silly, it's not giving you anything that a single exclamation point wouldn't give you. Both `!truthyValue` and `!!!truthyValue` evaluate to false, you are literally saying "give me the opposite boolean value of the opposite boolean value of the opposite boolean value of `truthyValue`"

The example in Go is intentionally misleading because Go lets you reassign the values for `true` and `false`. It's going through all the same steps I described above, but it's starting with a value opposite of what you think it is


They're defining a const named true and false to their inverse values, shadowing the builtin true/false keywords.


The site does say so in the introduction

> Even if you're a JS developer, most of this syntax is probably, and hopefully, not something you use in your daily life.

So I think you should look at this site more as something fun you might not have known if you're a JS developer than as criticism of JS.

That being said, the !!"" isn't that weird of a syntax is it? I see and use the double exclamation mark all the time.


The output is "weird" if you don't know the rules for operator precedence and how things convert to their primitive values.

Most people aren't going to "know" what '+!![]' will resolve to because it makes literally no sense to combine those operators into a single expression in anything approaching normal code.


I have definitely used the +!! "operator" before. It coerces a value into a boolean integer. It's not weird at all, just a mechanical application of the not operator ! and the Number coercion operator +.

The fact that [] is truthy is something everybody learns in their first weeks of JS programming otherwise you would be writing `if (someArray)` and wondering why your code is broken.

A weird one would be to explain why +[] is 0 and +{} is NaN. That is nonsensical.
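For the record, the mechanics behind those results are at least deterministic: unary + converts its operand to a number, and arrays and plain objects are first converted to primitives via their toString:

    String([])      // ""                 -> Number("") is 0, so +[] is 0
    String({})      // "[object Object]"  -> Number("[object Object]") is NaN, so +{} is NaN
    ![]             // false (every object, even an empty array, is truthy)
    !![]            // true
    +!![]           // 1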


This is true; a lot of the questions are about automatic type conversions.


Poor frontend devs.. stuff like this has to result in incredibly painful debugging, because you assume the code will act one way when it does something else entirely.

> the !!"" isn't that weird of a syntax is it?

Why would someone say not not empty string in code somewhere? Or do you mean seeing !!var_that_could_have_empty_string isn't too weird?


That's what tests and code review are for. Truth is that if any of this kinda code gets into prod, then you've got bigger problems than some arcane JS gotchas.


I doubt that anyone who hasn't written JS will recognize !! as an idiom for converting to boolean.


That kind of type coercion predates JS.

  perl -e 'print !!"whee"'
  1
  php -r 'print !!"whee";'
  1
  awk 'BEGIN {print !!"whee"}'
  1


[flagged]



[flagged]


There is no such thing as standard English. The link says only that the usage is informal, and HN comments are not formal writing.


If your comments were comprehensible to most HN readers, I wouldn't see the problem in using Hiberno-English, or any other colloquialisms.


[flagged]


That's completely your opinion, though - there's nothing in the guidelines or FAQs that say that.

When on HN, I engage with things I find interesting, irregardless of how good their ritten. If we intimidate users with an expected level of ability in written English, we'll be excluding a lot of interesting comments.


English is as weird as JavaScript.


no, it's fine - you just need the right education https://publicdomainreview.org/collection/english-as-she-is-...


In my humble and admittedly little experience, it's not what you intentionally write, but what gets unintentionally written and needs to be debugged later.

Carmack-like people utilising this stuff for good are few and far between. For your everyday Joe programmer this is a footgun, a very non-obvious and non-intuitive one (hence the footgun moniker), that they write in crunch; it slips past reviews because reviewers are just Joes with a couple of extra years, if even that, and then when things break after deployment, it's an absolute pain to debug.


Yea.. "clever" devs can be a nightmare to have in teams. "I implemented all of that functionality in less than one hour" is great and all but it is just tech debt that needs to be re-factored later and costs ridiculous amounts of time to support and maintain until its re-written.


> Calling these things weird is fair enough but I can't help thinking this is code you'd never actually write outside of the context of a "Look how weird JS is!" post.

That is the whole premise of the site, though. They even say that these examples aren't common syntax or patterns before you start.


The site is called "JavaScript Is Weird", not "Weird Javascript", even if they tell you that the examples aren't common they're still saying that this weirdness is unique to JS. Which definitely isn't true in the case of basic floating point precision problems


> The site is called "JavaScript Is Weird", not "Weird Javascript"

am i being punkd?


The former is a general statement about Javascript itself as a whole while the latter is describing a set of examples.


The same for “== considered harmful”. I scanned the entire comparison table and the only unobvious or error-prone cases are those you never really do in programming.

https://stackoverflow.com/a/23465314

For me it’s only rows [[]], [0], [1], i.e. array-unfolding related, but all others are regular weak-typed comparisons like in perl and other dynamic semantics. <snip> Edit: just realized “if (array)” is okay, nevermind.


> the only unobvious or error-prone cases are those you never really do in programming.

You never do them on purpose. The problem is when you do them by accident because of a mistake in your code, and the error slips through unnoticed, doing the wrong thing.

> weak-typed comparisons like in perl

Perl has separate operators for working on strings vs numbers, so you are always explicit about performing a numerical vs string comparison etc. Not so for JavaScript.


you are always explicit about performing a numerical vs string comparison

But the values which you compare do not have to be of the same type, and it does not end at scalars (which are really ephemeral in their exact typing even in native API, see perlapi). Basically it has two flavors of ==, each with its own preferred coercion. Perl also has contexts, e.g. boolean and scalar which can operate on lists and hashes (@list == 5). While the form is different, semantics are similar.

The problem is when you do them by accident because of a mistake in your code, and the error slips through unnoticed, doing the wrong thing

If your string array contains some [[0]] by accident, I’d say there is not much left to do anyway. === doesn’t report this either, it only has narrower trueness, which may decrease or increase an error surface depending on how your conditions are spelled. And to be clear, I’m not arguing against using === in places where identity^ or strict equality check is necessary or desired (you may desire it most of the times and that’s valid). My concern is that (a) everyone blames == anywhere, for completely zealous reasons, (b) I have to type a poem to test for null and calm down someone’s anxiety.

^ I know it’s not exactly identity, but is close enough for practical purposes


Agree. The tests are silly, but each test highlights a gotcha that you might bump into while debugging some horrible heap of legacy code.


undefined and null sometimes cause problems. IMHO it's good that undefined == null, but some people don't realize it.
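A small illustration of where that particular loose equality helps (and where it stays strict):

    null == undefined    // true  -- the one deliberate pairing in the == table
    null == 0            // false -- null is not otherwise treated as a falsy number
    null == ""           // false

    const isMissing = (x) => x == null;   // covers both null and undefined in one check
    isMissing(undefined)  // true
    isMissing(null)       // true
    isMissing(0)          // false
    isMissing("")         // false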


I agree. If native APIs didn't return nulls in some cases, and undefined was at least named "undef", null could be ditched. But then again, that's only because JS has the bad habit of treating non-existence and undefinedness the same^. If not for JSON (which has no undefined), we could ditch null. But it's there, and with === it leads to either

  object.someField === null || object.someField === undefined
madness, or to a potential error if a programmer thinks that null can not be there.

We could do an exception for null === undefined, but it’s against its spirit and will not be accepted.

^ languages that treat non-existent key as undefined are usually doomed to introduce Null atom in some form, or to work around that limitation constantly in metaprogramming


null and undefined is one of the things Javascript actually got right imo. Undefined is what lets Javascript turn what other dynamic language would throw as a runtime error into a value. "I do not have a definition for the thing you want" and "the thing you want is known to be unknown" are two totally different concepts that languages with only null to lean on must collapse into a single concept.


I've never found this to be a useful difference in practice. Also, JavaScript really doesn't use it correctly to begin with. Arrays are a mess. Like the poster below, I usually cast to one of them, except I cast to null.


The problem is that undefined is used in places where null makes much more sense (like the result of the find function). I follow the rule to never use null, and convert to/from undefined if needed by some library.


Leave it to javascript to pull defeat from the jaws of victory.


Some of them are stretched examples, others come from other languages/constraints (floating point, octal, ...), but some others are legitimately weird and error prone:

[1, 2, 3] + [4, 5, 6] // -> "1,2,34,5,6"

[,,,].length // -> 3


The first is only weird if you expect the + operator to perform an operation on arrays. It doesn't, so each array becomes a string and those two strings are concatenated.

The second is only weird in that you are constructing an array with implicit undefined values, which is exactly what I would expect to happen if my linter didn't complain about the syntax and I had to guess what might be happening.
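Spelled out step by step, for both examples:

    String([1, 2, 3])          // "1,2,3"
    String([4, 5, 6])          // "4,5,6"
    "1,2,3" + "4,5,6"          // "1,2,34,5,6" -- same result as [1, 2, 3] + [4, 5, 6]

    [,,,].length               // 3 -- trailing comma ignored, three holes
    [1, 2, 3,].length          // 3 -- same trailing-comma rule for ordinary arrays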


> The first is only weird if you expect the + operator to perform an operation on arrays.

Like GP said, comes from other languages. That line can be copy/pasted into python and it performs concatenation.


Both of these examples are well-known (and not unexpected) behaviours. I assume you already know why it behaves like that. If not, I can explain it.

> [1, 2, 3] + [4, 5, 6] // -> "1,2,34,5,6"

What would you expect instead?

> [,,,].length // -> 3

Is there any use case where you would want to deal with sparse arrays?


[1,2,3] + [4,5,6]

An addition operation over two numerical vectors of length 3.

I would expect the result to match its inputs and provide a numerical vector of length 3.

Thus: [5, 7, 9]


The issue there is that arrays aren't first class types in JS, they're just objects with numeric property names.

So if applying an operator to arrays spread the operation across all the elements that way, it would imply that the same should happen generally for all object properties, with whatever weird implications that would entail.

    a.foo = 1
    b.foo = 'there'
    a + b // { foo: '1there' } ?


What language has this behavior by default?


Fortran :)


> [1, 2, 3] + [4, 5, 6] // -> "1,2,34,5,6"

> What would you expect instead?

If I let my first instinct speak:

[1,2,3,4,5,6]

And then if I think a little more, then maybe:

[5,7,9] //with obvious caveats

In no way do I expect what GP actually provided.


I understand where you are coming from. But the addition operator simply does not have any special handling of arrays. The specification [1] clearly defines what it should be used for: "The addition operator either performs string concatenation or numeric addition." As JavaScript is weakly typed, it is the programmer's responsibility to use proper value types with these operators. That limitation (or advantage?) is also well-known and applies to all weakly-typed languages.

[1] https://tc39.es/ecma262/multipage/ecmascript-language-expres...


It seems that by "expected", you mean "expected, by anyone who read the spec" which I don't think is a fair use of that word. Obviously, most JS developers have not and will not read the spec.

I am very happy with JS and TS and I think the coercion rules are easily worked around with linter rules and policies, but they are definitely weird and I think the language would be better if it simply threw exceptions instead. But then, such an issue should not be surprising for a language that was designed in 10 days.


No, I meant "expected by anyone who learned the language". Knowing the addition operator including its limitations is quite basic. I'm not saying you need to be able to solve all this "JavaScript is weird" puzzles as they are mostly non-sense. But you definetely have to know what you can us `+` for.

If someone does not like the ECMAScript specification, that is fine. But at least use proper unofficial documentation like MDN.


Well, I guess the question is then not just "what would you expect instead" but "at what familiarity with the language should one be asking people what they expect of it?"

If you ask experts it is because you want to get an actual correct answer, but if you ask neophytes it is because you want to get an answer that might be obvious even if not correct.


Perl would give you 9 for the equivalent expression of (1,2,3)+(4,5,6) :)


> Is there any use case where you would want to deal with sparse arrays?

Not really. Now explain why [,,,].map((e,i) => i) is [,,,] instead of [0,1,2] please ;)


(assuming you're really asking) It's because JS has a notion of array elements being "empty", and the map operation skips empty elements. Basically "empty" means the element has never had a value assigned to it, but its index is less than the array's length property.

    Array(4)               // [empty × 4]
    a=[]; a.length=4; a    // [empty × 4]
    Array(4).map(n => n)   // [empty × 4]
    [,,1,,].map(n => n)    // [empty × 2, 1, empty]
My go-to way of avoiding this annoyance is "Array.from(Array(N))":

    Array.from(Array(4)).map((n,i) => i)  // [0, 1, 2, 3]
Alternately there's a recent "fill" method, that assigns all elements (including empty ones) to a given value:

    Array(4).fill(1)      // [1, 1, 1, 1]


> [,,,].length // -> 3

I don't think this is too weird if you think about it. JS allows trailing commas, so the last one is ignored. Effectively this is `[ undefined, undefined, undefined, ]`. A syntax error would have made sense here, but the length of three is a result of the usual syntax rules, not a particularly strange quirk of JS.


Sorry to be pedantic, but `[,,,]` creates holes instead of undefined. They are different, because for example `[,,,].forEach(() => console.info(1))` doesn't do anything, but `[undefined,undefined,undefined,].forEach(() => console.info(1))` prints three "1"'s.
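A quick way to see that holes really are missing properties rather than undefined values (the in operator asks whether the index exists at all):

    0 in [,,,]                    // false -- index 0 is a hole; the property doesn't exist
    0 in [undefined, undefined]   // true  -- index 0 exists and holds undefined
    Object.keys([,,,])            // []    -- no own indexed properties at all
    Object.keys([undefined, undefined, undefined])   // ["0", "1", "2"]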


I could also make a php is weird page and say:

"WAT $a .= "World!"; ?"


Yes. You could.


I am sure a reasonable, educated programmer will be able to avoid these traps by adhering to certain standards. However, you often have that other person on your team, who does not care about being careful and writes code as if it is to be written once and never touched again. And that's where the danger creeps in.


Entirely orthogonal to these operator semantics being unreasonable, that person should not be on your team: replace them with someone who cares.


Someone had linked on here a website that showed the 0.3 thing in practically every programming language and its output. I wish I could remember the domain/URL because it is interesting to compare languages' defaults. I know most languages have ways to handle it correctly.



Yes this is it! Thanks for that, I do appreciate that they show multiple approaches in each language to showcase which one gives you the desired result.


I know most languages have ways to handle it correctly.

Including JS - http://mikemcl.github.io/decimal.js/
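Roughly what usage looks like, if I'm remembering the decimal.js API correctly (plus/equals; double-check against the docs):

    const Decimal = require("decimal.js");         // or load it via a <script> tag in the browser

    new Decimal("0.1").plus("0.2").equals("0.3")   // true
    0.1 + 0.2 === 0.3                              // false with plain Number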


There's a proposal to add a decimal type to the spec, too.

https://github.com/tc39/proposal-decimal

Seems like it's a long ways away from being available, though.


Those last examples happen when you have variables containing those values. Without strict type checking, it gets really hard to know in all situations (especially when you're pulling values from a service) what you will have in there. And if a service changes, you won't have a lot of warning in your client code. So yes, these kinds of errors are very common.


Knowing that the underlying representation will vary is CS101 material. It applies to almost every language because that's how the hardware works.


Hardware doesn't exist in any meaningful way for 90% of professional programmers anymore.

Way too much abstraction for that to be an argument.


I don't agree. Even if you aren't programming microcontrollers, even if you're just building brochure websites for mobiles, you'll do it better if you understand hardware.


> you'll do it better if you understand hardware.

Oh for sure. I agree. That's not what we're arguing though.


Also, things like +!![] are because, while JS is dynamic, it does have a very strict type system! ![] becomes a boolean; +true becomes a number.


JS has a very weak type system, full of implicit conversions. In dynamic languages with strong type systems, like Common Lisp, such operations typically result in errors:

  (+ (not (not '()))) 

  The value NIL is not of the expected type NUMBER.
   [Condition of type TYPE-ERROR]

Similarly in Ruby:

  +!![]

  undefined method `+@' for true:TrueClass (NoMethodError)

Interestingly, Python does the same thing as JS in this case, even though it is typically quite strongly typed. Edit: not quite the same, as the empty array is converted to False in Python, whereas in Ruby (and JS) it is truthy (in CL, nil/'() IS the canonical false value); but still, Python outputs 0, it doesn't complain like the other two.


> Interestingly, Python does the same thing as JS in this case, even though it is typically quite strongly typed. [...] > Python outputs 0, it doesn't complain like the other two.

yep, this is one of those Python weird bits. in Python, booleans are ints, True is 1 and False is 0. and i don't mean it in a JS-ish way like "they can be converted to...". no, True is the integer value 1. in fact, the type bool is a subtype of int. if you think of types as sets of values, and subtypes as subsets of their parent set, that `bool < int` relation suddenly makes a lot of sense ;)

  >>> +(not not [])
  0
  >>> (not not [])
  False
  >>> 0 == False
  True
  >>> 1 == True
  True
  >>> True + True
  2
  >>> True - False + True * 0.5
  1.5
  >>> isinstance(False, bool)
  True
  >>> isinstance(False, int)
  True
  >>> bool < int
  True
so, if you accept that the operation `not []` makes sense to be defined as True (because `bool([])` is False), and that it makes sense for False to be the integer 0, then `+(not not [])` being 0 is just a logical consequence of that :)

for the record, i do think it's weird for Python to define bools as ints, and to make all values define a boolean semantics via __bool__().


Python was created February 1991, but it didn't have a boolean type until 2002 per PEP 285 <https://www.python.org/dev/peps/pep-0285/>

The rationale for making bool a subtype of int is ease of implementation and substitutability (which aids backwards compatibility), as explained here:

> In an ideal world, bool might be better implemented as a separate integer type that knows how to perform mixed-mode arithmetic. However, inheriting bool from int eases the implementation enormously (in part since all C code that calls PyInt_Check() will continue to work -- this returns true for subclasses of int). Also, I believe this is right in terms of substitutability: code that requires an int can be fed a bool and it will behave the same as 0 or 1.

I have some Python code where there are still a number of uses of 0 and 1 for false and true, because it was written before Python added a boolean type.


This is a result of the implicit type conversion feature rather than a strict type system.


We have this on our interview test. It was the one almost everyone got wrong except for a couple people who ended up being really detail oriented and had deep knowledge (as opposed to broad).

We consider >70% passing.


I'd honestly consider these types of questions one of the poorest ways to test front end developers.

The only reason I have learned some of that oddball stuff with JavaScript is because of some job interviews e.g. when I was earlier in my career, I used to Google things like "top 30 questions asked in a JS interview", etc, but I forget it after that until I'm about to look for another job. However I wouldn't do this type of learning any longer, since I wouldn't want to apply for a company asking these types of questions.

In the end, at work you should use ES6/TypeScript with linting and proper tests, and these cases would never occur.


you're right in both points technically, but if that means that somehow these things don't make JS "weird" for people to learn/program, then i disagree

> that's a floating point problem. Plenty of languages will have the same issue.

yes, many (most) other languages do have the same floating point issues, but that doesn't make them less weird. JS numbers are IEEE floating point numbers, therefore floating point issues/weirdness are also JS issues/weirdness :)

> Calling these things weird is fair enough but I can't help thinking this is code you'd never actually write outside of the context of a "Look how weird JS is!" post.

the verbatim code snippets in particular, yes, you're completely right. but the underlying problems that these snippets exemplify are still there on actual "real" running code. they just look less suspicious on the surface.

> Just be moderately careful about casting things from one data type to another and all these problems go away.

true. but still, these things can happen, and when they happen, they tend to sneakily manifest as weird UI bugs, like rendering "NaN" or "undefined" on the screen (we have all seen those... there's plenty of meme images about them too), instead of noisily breaking with an exception, which is way more noticeable and actionable for us programmers when introducing these bugs in the first place hehe.

it's true that they are not the most common kind of bugs, i'll give you that, but when they happen, they can be incredibly frustrating in my experience, because you may only realize after they have been affecting users for a looong time (maybe years), but you just didn't know because JS decided it was a good idea to carry on after one of these nonsense operations like adding an array to a number, giving you nonsense results instead of useful (runtime) type errors.

story time!

i remember an ugly case of this which involved some search filters on a big-ish system. the JS code was quite generic in how it handled filters, and looked fine. for some time, all filters were single-value, as simple strings, but at some point the system started handling multi-value filters, which were encoded as arrays of strings. well, the programmers who implemented the UI for multi-valued filters just tried sending arrays of strings as filters to the existing filtering system, and it seemed to work correctly in the results it yielded, so they assumed the code was prepared for that too (it looked generic enough), and so they shipped the feature that way.

it was only years later, when i was porting some of that code to typescript, that typescript complained about invalid type operations. i was converting between languages pretty willy-nilly and assuming the existing system worked correctly, so i suspected typescript was being dumb with that type error. but no, it was actually complaining about an actual bug. when passing arrays of strings as filters, the underlying filtering system was at some point coercing those arrays to strings by accident. so ['apples', 'oranges'] became 'apples,oranges'.

the search results were always right when the multi-valued filters had only one value (because ['apples'] coerced to string is 'apples') and kinda right in some naive cases of multi-valued filters (because of how the text search engine worked), which was probably why the original programmers thought it was working correctly and didn't give it much more thought or more thorough testing. but the search results were definitely not correct in most non-naive cases of multi-valued filters.

so our users had been affected by this bug of multi-value search filters basically not working for years, and we only discovered it by accident when porting some of the JS code to typescript because typescript was kind enough to tell us about this nonsense operation statically, without having to even run the code. i wish that JS were also kind enough to complain about these nonsense operations, albeit at runtime. it would have made this problem obvious from the get go, and would have prevented the original programmers from shipping a broken feature to thousands of users.

plot twist: although the story may make it seem that i was the competent programmer that figured it all out when porting the code to typescript, in reality "the original programmers" also included me (probably, i don't remember really), just some time before the typescript port :/


In C# you don't have such a problem and 0.2 + 0.1 is exactly 0.3. Proof:

https://dotnetfiddle.net/qTiq6U


In every language where you can use a decimal the result will match exactly. That's... not what's discussed here and it's not a default in c# either.


Unlike other languages, in JS you don't have decimals... So you're stuck writing garbage code that multiplies all floating point numbers by a factor to avoid rounding errors.


The last couple generations of POWER chips from IBM have implementations of decimal32, decimal64, and decimal128. To my knowledge, no other big-name, general-purpose CPU has these.

To "implement decimal numbers" in .net on x86 hardware simply means writing a custom decimal implementation in software.

JS has had implicit integers since the beginning. It has had typed arrays of integers (what's necessary for fast decimal implementations) since BEFORE the release of WebGL a decade ago. It also added BigInt this year. Just like .NET, there are decimal libraries available if you know to use them.

https://github.com/MikeMcl/decimal.js/

The real takeaway is that modern processors should definitely add hardware support for decimal numbers.


I don't think that's fair: C# has decimals but everyone uses floating point numbers by default so C# developers still need to know that 0.1+0.2 != 0.3


Exactly. Finding flaws in a language doesn't mean the language is bad, just that it has... flaws. The proof that JS is actually pretty good is that it has been used to build so many things. Like the old adage about economics, these criticisms are taking something that works in practice and trying to see if it works in theory.


Just because something is popular isn't reason for it to be good. Examples: fossil fuels, (over)fishing, rage-based engagement, hard drugs.


On the contrary - the number of people taking hard drugs to get high is fantastic evidence that they are good for getting high. Otherwise, why would people be buying and taking them?


People use hard drugs to cope with their problems, and they're rubbish for that.


Uh Fossil Fuels powered the Industrial Revolution and literally created the modern economy.

It's true that being popular doesn't necessarily make something good. But it's also true that having some flaws discovered later doesn't make something that revolutionized the world and massively uplifted the standard of living of basically everyone in it bad.


Just to add to your list: Ed Sheeran


Excepting fossil fuels, none of those things are popular.


No. Popularity does not indicate quality. Judging the quality of something by its popularity is a form of circular reasoning. Especially when there is an obvious alternative explanation, namely that JS is the only language you can use in the browser without transpiling, and for a long time was the only one, period.

While the fact JS can be used to build all these things puts a floor on its quality, that floor is uselessly low.


I'm not using popularity as the metric. "1 million identical websites were built with JavaScript" wouldn't say much beyond the first website.

But that there is a very broad range of successful applications partly relying on JavaScript for their success undercuts the idea that JavaScript is inherently rubbish. Whether it has some subjective "quality" is a conversation best left for art galleries.


"The proof that JS is actually pretty good is that it has been used to build so many things."

[French Narrator] 5 minutes later...

"I'm not using popularity as the metric."


If you say JS is "pretty good" how is that not a statement about its quality?


Fair point. I guess I could rephrase it as "JavaScript provides a lot of value".



