I wish I could just write that I'm expecting to receive a User object in a function without having to resort to TypeScript. It would make code so much more readable.
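For what it's worth, JSDoc annotations plus a type-aware editor get partway there without a compile step (though this still leans on TypeScript's checker under the hood, so it only partly answers the wish). A sketch; the `User` shape here is hypothetical:

```javascript
// @ts-check  (opts a plain .js file into editor type-checking via JSDoc)

/**
 * Hypothetical User shape, for illustration only.
 * @typedef {Object} User
 * @property {number} id
 * @property {string} name
 */

/**
 * Declares "I expect a User" in plain JS; editors flag mismatched calls.
 * @param {User} user
 * @returns {string}
 */
function greet(user) {
  return `Hello, ${user.name}`;
}

console.log(greet({ id: 1, name: "Ada" })); // "Hello, Ada"
```

The annotations are erased at runtime, so nothing is enforced when the code actually runs; it's purely tooling-level readability.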
On the zoomed-in scale we see things like tricky semantics for regexp or Math.round(), which major implementations still get wrong.
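A concrete instance of the Math.round() trickiness: the spec rounds ties toward +Infinity, not away from zero (as C's round() does) or to even (as Python 3's round() does), which regularly surprises people:

```javascript
console.log(Math.round(0.5));  // 1
console.log(Math.round(-0.5)); // -0  (negative zero, not -1)
console.log(Math.round(-2.5)); // -2  (the tie rounds toward +Infinity)
console.log(Math.round(2.5));  // 3
```

Note the -0: `Math.round(-0.5) === 0` is true under `===`, but `Object.is(Math.round(-0.5), -0)` is also true, which is exactly the kind of edge an implementation can get wrong.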
On the zoomed-out scale, we see things like property iteration order. This touches on array semantics and hidden classes. Why does V8 permit two objects to have the same properties but different hidden classes? Because JS requires that properties be enumerated in their definition order - something no other language requires, and which puts serious constraints on implementations.
TypeScript can layer types on top, but can't make arrays sane or fix the other deep language issues.
But initial implementations just stored properties in a flat list, so in practice all had iteration order matching definition order. And then websites were created depending on that behavior.
So now you had a spec saying "the order can be anything" but real-world compat constraints meant that in any implementation shipping in a web browser the order had to be definition order. At some point this compat constraint was simply documented in the spec, so there wouldn't be a gotcha lurking for a new implementation.
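The order the spec eventually documented is actually a hybrid, observable in any current engine: array-index-like keys come first in ascending numeric order, then the remaining string keys in insertion order:

```javascript
const obj = { b: 1, a: 2, 10: 3, 2: 4 };

// Integer-like keys are sorted ascending; string keys keep insertion order.
console.log(Object.keys(obj)); // [ '2', '10', 'b', 'a' ]
```

So "definition order" is only guaranteed for non-index keys, which is its own little gotcha for code that mixes numeric and string properties.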
So it's hard to blame this problem on JS, really. It's more like Postel's Law (or a failure thereof, depending on who you're talking to). I feel like in this case the output is the iteration order, and having it be fixed corresponds to being conservative in what you do. The pages that failed on different iteration orders correspond to _not_ being liberal in what you accept from others.
This made my morning. :)
I will quote you on this.
Of course it helps that Lua has much simpler semantics.
Prototypical inheritance goes back to the damn Actor pattern, and modern versions (like JS) derive from Smalltalk.
The particular implementation of this in JS is idiosyncratic to JS but not conceptually unique. They improved it a LOT with strict mode too.
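One concrete strict-mode improvement: a bare function call no longer silently binds `this` to the global object. A quick demo (the sloppy-mode result assumes the file itself isn't running as a module, since modules are strict throughout):

```javascript
function sloppyThis() {
  return this; // in sloppy mode, a bare call gets the global object
}

function strictThis() {
  "use strict";
  return this; // in strict mode, a bare call gets undefined
}

console.log(strictThis() === undefined);  // true
console.log(sloppyThis() === globalThis); // true in a sloppy-mode script
```

That one change eliminates a whole class of bugs where a method detached from its object would quietly scribble properties onto the global object.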
I swear, most JS criticism on HN sounds like developers who only know C++/Java/Ruby/Python critiquing things that come from the Lisp/ALGOL/Smalltalk idioms, without understanding that the way they're used to isn't the One True Way.
As for things like prototypical inheritance, `this`, and other language features not being distinct to JS, I'll concede that point. However, I still think my main point stands, namely that criticisms of JS aren't just blanket criticisms of "interpreted languages".
EDIT - Good from a performance perspective: things like static typing, being symbolically executable, and being pre-compiled.
All those other dynamic scripting languages? They're DOA because nobody's going to download the entire VM every time.
What about all those nice functional languages? Very problematic, because the toolchain relies on LLVM-like semantics, which don't play well with good garbage collectors or functional programming in general.
We then get down to C-like languages, Rust, and more esoteric languages (and promptly discard the esoteric ones for lack of a decent ecosystem).
Who in their right mind wants to write a front-end in C++ or Rust? By the time you get anything done, the web has changed and you're stuck with a pile of dated code that takes too much time and costs too much money to update.
The web had a shot at a decent language with Dart (it was/is even an ECMA standard). It didn't die because of other browsers. It died because of poor web dev adoption rates.
If only Eich had been allowed to implement Scheme, then none of this would have been an issue.
Hosted on a CDN in compact form, it'd be feasible; the application can just link against it, and it doesn't have to be recompiled on the user's machine.
> Very problematic because the toolchain relies on LLVM-like semantics which don't like good garbage collectors or functional programming in general.
That will indeed be a problem. The design restrictions imposed by the environment (contiguous heap, emulated concurrency) will bite us in the ass hard. Instead of layering a lot of very leaky, performance-killing "security" features over other half-assed features, a better way would be to step back and use a tiny hypervisor. That way you get hardware-accelerated memory safety, but it would require the application to be able to run on a microkernel; recent IncludeOS + ukvm has a boot time of 11 ms. (Great, now I want to write a plugin that embeds qemu in Chromium and uses a virtual device to communicate with the host.)
This has been repeated again and again, but the proof that you program faster in JS than in Rust (or whatever language you want) is very lacking. At least if you want to use your code for longer than two days, because that's about when many of those "I just write it and it will work!" JS programs stop being readable. But even then I have my doubts. Sure, if you've never used a language before you will be slower, but that doesn't make the "JS can be programmed faster" assumption correct.
Every AAA Game Dev.
When WASM gets mature, expect the return of the plugins, maybe even a Flash to WASM compiler.
In two to three years' time, Flash will be back.
Because as soon as advertising kicked off, JS usage did too. Even before then, the DHTML movement had successfully pushed JS into spaces people didn't know it could occupy on the web.
You’re romanticizing an era I lived and worked through: it wasn’t really like that.