How JavaScript engines achieve great performance (bekk.no)
144 points by EntICOnc 2 days ago | 123 comments





This article is a part of a larger series that tries to figure out how Elm's runtime performance can be improved: https://blogg.bekk.no/successes-and-failures-in-optimizing-e...

I'm the author of the articles, btw.


Huge thanks for writing all of this up. This is great content.

I still think that http://steve-yegge.blogspot.com/2008/05/dynamic-languages-st... is one of the best overviews ever of how JavaScript managed to become as fast as traditional compiled languages despite being filled with features to make it slow.

What's amazing is that that talk came out several months before V8, the first JavaScript engine that actually used the ideas in that talk. But, of course, that talk was possible because V8 was already under development.


Eh, with the benefit of 14 years of hindsight, I want to push back on some of the things in that talk. (Context: I work on SpiderMonkey.)

First, all the stuff about tracing and trace trees is kind of obsolete. SpiderMonkey abandoned TraceMonkey a long time ago. (To the best of my knowledge, V8 never implemented a tracing JIT at all.) The problem with tracing is that you can get really good performance when everything goes right, but it's brittle. There's a reference in the talk to how implementations of the Game of Life can have exponential blow-up, for example. You can usually fix any individual pathological case, but the inherently exponential number of possible paths through a program makes it difficult to completely eliminate weird performance cliffs.

If your goal is to maximize performance on a known set of benchmarks, go wild. If you want a robust engine that can handle whatever weird code the web is throwing at you, tracing JITs are (as far as I can tell) a dead end.

(Counterpoint: LuaJIT seems to be doing alright with tracing, although it may just solve the problem by punting to the programmer: https://github.com/lukego/blog/issues/29. That's more feasible when you don't have multiple engines with performance cliffs in subtly different places.)

Second, the idea that JIT-compiled code can be faster than AOT-compiled code has been floating around for a long time, but I don't think it really holds in the general case. Doing work at runtime isn't free: not just time spent compiling, but also time spent profiling and validating that your speculative optimizations continue to be correct.

SpiderMonkey had a top-tier optimizing compiler, IonMonkey, that got pretty darn close to native code on hot benchmark loops. We tracked whole-program information to ensure that type checks could be elided in inner loops. (For example, if the `x` property of a certain set of objects only ever contained 32-bit integers, then we could unbox it without checking the type. If any code elsewhere ever stored a non-integer value in that property, we would notice and invalidate the optimized code.)
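
To illustrate the sort of thing that meant in practice, here is a hedged sketch (written as TypeScript, with hypothetical names; the int32-vs-double distinction lives inside the engine, not in the source-level types):

    interface Point { x: number }

    function sumX(points: Point[]): number {
      let total = 0;
      for (const p of points) {
        // If every `x` ever stored so far has been a 32-bit integer, the
        // optimized code can load the raw int and use integer addition here,
        // with no per-iteration type check.
        total += p.x;
      }
      return total;
    }

    // Anywhere else in the program:
    const p: Point = { x: 1 };
    p.x = 0.5; // storing a non-int32 value invalidates the optimized sumX code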

We threw IonMonkey away, because it was too brittle. In practice, real-world code falls off the happy path often enough that we got better performance by accepting that even your highly optimized JS code will include some runtime checks. Invalidation and recompilation are real costs. So is the upkeep of all the global data necessary to support Ion. There's an engineering tradeoff between pushing up the performance ceiling and bringing up the performance floor; we've been happy with our choice to shift focus more towards the latter. Our numbers are down on artificial benchmarks, but it seems to have paid off in real-world performance. (Also, bugs in the optimizing compiler are significantly less likely to be exploitable.)

A lot of really smart people have done some incredible work on the JVM. Nevertheless, I'm still not aware of any code written in Java instead of (say) C++ or Rust because Java was faster. I think it's more accurate to say that JIT compilation can be fast enough that the other advantages of the language can make it the right choice.


> There's an engineering tradeoff between pushing up the performance ceiling and bringing up the performance floor

I hadn't thought about this, but I love this sentiment


Do you have some good examples of how this "real world" perf is tested?

I often write micro benchmarks to see if something is faster one way or another. I often find one browser or another to be 3x to 10x faster in a certain micro benchmark.

A big issue is, most websites don't need perf AFAICT. Of the sites I use regularly (HN, Stack Overflow, GMail, Gdocs, GSheets, Facebook, Messenger, Slack, Reddit, github), maybe only gsheets and gdocs need any real perf.

The place where perf is needed is things like three.js, playcanvas, babylon.js, unity -> html, etc... And on those AFAIK, Chrome almost always wins, probably because its GPU support is multi-process.


There's no magic answer here. We track various metrics (page load time, responsiveness, gc pause time, and so on) in telemetry. (You can poke around at telemetry.mozilla.org.) We have some page load benchmarks that load a recorded copy of various pages and track how long it takes to load. Sometimes capturing a profile of a particular site can help. Of the big JS benchmarks, we've found Speedometer to be the least bad, because there was at least an effort to mimic actual websites.

Microbenchmarks are tricky because it's very easy to measure something other than what you intend: differences in inlining heuristics, say.
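
A hypothetical example of the kind of thing that goes wrong:

    // If the engine inlines add() and can prove the result is unused, the loop
    // body can collapse to almost nothing, so two browsers end up being compared
    // on inlining and dead-code-elimination heuristics rather than on how fast
    // they add numbers.
    function add(a: number, b: number): number {
      return a + b;
    }

    let sink = 0;
    const start = performance.now();
    for (let i = 0; i < 1e8; i++) {
      sink += add(i, i + 1); // keeping and printing the result makes elimination
                             // harder, but heuristics can still dominate the number
    }
    console.log(performance.now() - start, sink);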

In the long run, I expect performance-critical websites to migrate to webassembly.


I have to second this, we have a JS based game engine and Firefox is very much noticeably slower than Safari and Chrome in terms of JS performance. I agree with Iain below that in the long run this will mean moving more code to wasm so hopefully the tooling there improves. Right now all it does is move users of performance critical web-first software away from Firefox.

Although I’d note that performance is utterly important for normal applications as well. Being noticeably snappy and prompt is great for UX.


A couple theoretical questions.

Has there been any consideration to create AI to work around pathological cases in tracing?

If compiling is done on a secondary thread, aren't you then reaping all the benefits of JIT optimization while still not paying the penalty on the actual JS execution thread?

Why throw away optimized code? Why not add a type check on the parameters and then dispatch to the highly-optimized code version based on the parameter count and types?


> Has there been any consideration to create AI to work around pathological cases in tracing?

None of which I'm aware.

> If compiling is done on a secondary thread, aren't you then reaping all the benefits of JIT optimization while still not paying the penalty on the actual JS execution thread?

Compilation can be done off-thread, but to decide what to compile you have to do some on-thread data collection, and until you're done compiling you have to run in a lower tier. Also, you usually have to do some work to make sure that your speculative optimizations are correct: for example, an inline cache needs to verify that you've got a cache hit.

> Why throw away optimized code? Why not add a type check on the parameters and then dispatch to the highly-optimized code version based on the parameter count and types?

Optimized code is optimized under a certain set of assumptions, which can be propagated through a function. For example, if the value that we read from an object's property is always an int, then we can do integer arithmetic with it afterwards. If we suddenly load a string or a double, then a bunch of downstream code will also be invalid. It's cheaper and less error prone to start over, rather than trying to track all the places that depend on a particular piece of information.

If you only specialize individual operations based on the input types, and don't propagate type information between operations, then you've basically got polymorphic inline caches. That's what we use in our baseline compiler.
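
Roughly speaking (a hypothetical sketch, not actual engine code), a polymorphic inline cache for a property read boils down to something like this:

    // Hypothetical model: a "shape" maps property names to slot indices, and
    // objects with the same layout share a shape.
    type Shape = Map<string, number>;
    interface ObjectModel { shape: Shape; slots: unknown[] }

    // The IC remembers a few (shape, slot) pairs it has already seen.
    interface PropertyIC { entries: { shape: Shape; slot: number }[] }

    function getProperty(ic: PropertyIC, obj: ObjectModel, name: string): unknown {
      // Fast path: the shape comparison is the "verify the cache hit" work
      // that still has to run at runtime, even in compiled code.
      for (const entry of ic.entries) {
        if (entry.shape === obj.shape) return obj.slots[entry.slot];
      }
      // Slow path: generic lookup, then remember the result for next time.
      const slot = obj.shape.get(name);
      if (slot === undefined) return undefined;
      if (ic.entries.length < 4) ic.entries.push({ shape: obj.shape, slot });
      return obj.slots[slot];
    }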


I had the impression that Firefox's real-world JS performance went up briefly 3-4 years ago and then down again 1-2 years ago. Totally subjective, of course. But I went back to a V8 based browser after using Firefox for 2 years or so.

> as fast as traditional compiled languages

I don't think JS is as fast as traditional compiled languages. Java/C# are faster, Go is faster, OCaml/Haskell are faster, Common Lisp is faster, C/C++/Rust are faster.


In my experience JS is not as fast as compiled languages.

It's not, but it's much closer than other dynamic languages like Python, PHP, Ruby, etc. For numerical code (which avoids allocation) it's often pretty close.

Microbenchmarks, I know, but it looks like an order of magnitude slower than C++ to me:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I see a factor of 2 in the closest benchmarks. A factor of 4-5 for the middlish ones, and a factor of 10 for the least close. But it's worth noting that a lot of those C++ examples are making heavy use of SIMD.

Can you do SIMD with JS? If not, then C++ has the real advantage.

SIMD.js was abandoned in browsers but you can use SIMD in WASM.

I was about to write that I'd be disappointed if V8 didn't vectorize simple loops, but I googled it and apparently it doesn't so maybe I should be disappointed.

Is there any JIT that does auto-vectorization?

I kinda got the feeling from C/C++ compilers that auto-vectorization is brittle and unpredictable even there, and that's with all the time an AoT compiler can invest into optimization. A JIT always has to tread the line (or multiple lines in case of tiered compilation) between optimization effort and benefit. And the usual trade-off between larger and smaller code. So speculatively unrolling short loops just to see whether anything could be vectorized is perhaps out of scope for most JITs. Even in AoT-compiled languages, the preferred way of getting the most out of vector instructions is to explicitly vectorize hot loops as a programmer, as the result is often better than what the compiler could do.


Yeah the JVM does autovectorization, but this feature is very finicky (everywhere, not only in JIT context). That’s why in Java now you have the Vector API, which allows one to create very low-level, but reliably vectorizable code (with the added benefit of sane fallback to for loops on non-supported CPUs)

Those benchmarks run the program once from a cold start, right? When people talk about JS nearing c++ performance I think they generally mean cases like the steady-state performance of a game loop, after the JS engine has had a chance to observe the hot code for a little while.

And some of those runtimes are 45 cpu secs. If that’s not enough chance to observe the hot code…

How about vs Java? How do JS engines compare against the JVM?

Java is a bit closer to native and works really nicely for AoT with GraalVM. It has the advantage of being more predictable so a JIT can make far more assumptions and doesn't need to revert as many optimizations.

Yeah, in some informal sorting tests I noticed that Node was roughly as fast as dmd, the D compiler which is basically optimized for fast builds. Compared with LLVM and GCC it wasn't even close.

I wonder if the browser cache could be used to effectively speed up compilation.

I'd imagine that most scripts in websites are static assets, i.e. change rarely and have cache headers that support storage. Couldn't you make use of that to save a compiled version of the script together with the cached file? So when the page is loaded again, scripts are already compiled.

Is something like this done in practice?


SpiderMonkey caches pre-parsed bytecode for scripts, which can immediately start executing in the interpreter. (I believe roughly the same is true for other engines. See [1], for example.) The tricky part of caching compiled scripts is that they generally refer to various runtime state. For example, a compiled script might include a check to verify the shape / hidden class of an object. If you want to cache that code, you also have to cache the shape tree. Multiply that out by all the runtime state that is referenced from compiled code, and it's a hard problem. We've talked about simpler mechanisms like caching function hotness / the number of iterations before our profiling data stabilizes, to trigger compilation earlier, but we haven't gotten around to implementing it yet.

[1]: https://v8.dev/blog/code-caching-for-devs


All browsers have JS byte code caching, but in my experience building web apps it doesn't make a huge difference in the real world. A user will hit a site, download and compile the JS bundle, and then that tab will be kept open for a day or two. They might close the tab and reopen it the following day. In any modern app that uses continuous deployment that means they'll certainly be downloading a new client bundle, and a lot of the time they'll also get a new vendor bundle (client bundle is the app code, vendor bundle is the NPM packages). I have no doubt code could be split to optimize for fewest changes but JS tooling is complicated enough already so I doubt many sites would bother.
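
For what it's worth, the splitting itself isn't much work with modern bundlers. A rough sketch of how it might look, assuming webpack 5 purely as an example (other bundlers have equivalents); whether it's worth the extra tooling complexity is another question:

    // Hedged sketch: keep NPM dependencies in a long-lived "vendors" chunk so
    // app-only deploys don't invalidate the cached vendor bundle.
    import type { Configuration } from "webpack";

    const config: Configuration = {
      entry: "./src/index.ts",
      output: { filename: "[name].[contenthash].js" },
      optimization: {
        moduleIds: "deterministic", // keep hashes stable across app-only edits
        splitChunks: {
          cacheGroups: {
            vendors: {
              test: /[\\/]node_modules[\\/]/,
              name: "vendors",
              chunks: "all",
            },
          },
        },
      },
    };

    export default config;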

With JavaScript being as fast as Java, and coupled with TypeScript, the future of Java doesn't look very bright.

TypeScript is awful. Like, I shit on Java all the time, and I would WAY rather work in Java than TypeScript.

TypeScript's type system is so utterly broken that I honestly don't know if my code is ANY more robust than if I had written it in JavaScript.

Record<> is broken/unsound: https://github.com/microsoft/TypeScript/issues/45335

Generics are wonky and sometimes wrong: https://github.com/microsoft/TypeScript/issues/31006

The `readonly` keyword does absolutely nothing: https://github.com/microsoft/TypeScript/issues/13347

Arrays are covariant in their type param, so I can pass a `Dog[]` into a function that accepts `Animal[]`. If that function adds a `Cat` to the passed array, the compiler is perfectly happy, but we'll see a runtime error.
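
To make that last one concrete (hypothetical Animal/Dog/Cat types):

    interface Animal { name: string }
    interface Dog extends Animal { bark(): void }
    interface Cat extends Animal { meow(): void }

    function addCat(animals: Animal[]): void {
      const felix: Cat = { name: "Felix", meow() {} };
      animals.push(felix); // fine: a Cat is an Animal
    }

    const dogs: Dog[] = [{ name: "Rex", bark() {} }];
    addCat(dogs);   // accepted: Dog[] is treated as assignable to Animal[]
    dogs[1].bark(); // type-checks, but throws at runtime: felix has no bark()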

TypeScript is actually so bad that it might have honestly made JavaScript worse, if that's even possible.


Wild how people can have such different experiences. Typescript is hands-down one of my favorite languages, and I would choose its structural type system over Java's any day. I also use Python at work, and it's like night and day in terms of quality. Every day I have to deal with fundamentally unresolvable issues due to Python's half-baked type hints, when I could be writing much more stable, functional, and maintainable Typescript.

Calling it "broken" is complete hyperbole, and no, it's not "worse than Javascript". There's a reason why the entire ecosystem is switching to Typescript, and that's because it is extremely effective at helping teams manage complex stateful applications.


Well, sure, compared to Python, I guess TypeScript's type system is pretty great...

Its type system is quite broken and I'm not going to walk back on using that term. Sometimes the TypeScript designers/devs will explain that the brokenness is intentional. But some of the issues have nothing to do with JavaScript compatibility. For example, we all know about the conundrum of `type` vs `interface`, but did you know that an object-shaped `type` will automatically conform to a mapped type, but an `interface` won't? Playground: https://www.typescriptlang.org/play?ssl=20&ssc=1&pln=21&pc=1...

Or how about the fact that basic inheritance sub-typing doesn't work?: https://www.typescriptlang.org/play?#code/MYGwhgzhAECCB2BLAt...


Both your examples are very interesting, in that I can't figure out how you would actually get hurt by these issues when writing code. For your `type` vs `interface` example, is the fear that you won't be able to use a type when a function takes a `Record` type? It's an inconvenience for sure, but that won't inadvertently introduce a bug.

And for your 'inheritance sub-typing' example, can you explain what the "right" behavior would be? It's a function that mutates properties on an object, how is the type system supposed to detect that? That's already a code smell and should be avoided.


> For your `type` vs `interface` example, is the fear that you won't be able to use a type when a function takes a `Record` type? It's an inconvenience for sure, but that won't inadvertently introduce a bug.

Right. That one, by itself, won't introduce a bug. But it's just another weird thing to remember about TypeScript's type system being inconsistent. I bumped into it because I wanted to define my own JSON-like type for use as query params being sent to a particular API that has a custom syntax for certain things (like passing an object as a query param, etc). So I wrote a type something like this:

type ExtendedJson = boolean | number | string | null | undefined | OurCustomTimestampType | SomeOtherCustomTypeWeUse | ExtendedJson[] | ExtendedJsonObject

type ExtendedJsonObject = Record<string, ExtendedJson>

And then in my function that sends requests to the API in question, I require a parameter like `queryParams?: ExtendedJsonObject`, and I have a function that knows exactly how to format this ExtendedJsonObject into the right string for the request. Neato.

Except then, I realized that most of the specific query types I wrote were interfaces. But I only got compile errors in some places and not others. I had no idea why some of my types worked and some didn't. The error message was clear enough, I guess. It said that the type didn't conform to `{ [string]: ExtendedJson }`, but I couldn't figure out why some of my types did conform and some didn't.

What's worse is that some of my query types that were defined as interfaces DID work without a compile error! In hindsight, I now understand that the reason those worked was because I was actually "wrapping" those interfaces into types with things like Readonly<FooQueryParams> or Omit<BarQueryParams, 'someKey'>. So those were now types instead of interfaces, but it wasn't an obvious enough clue for me, so I spent hours trying to figure it all out.
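
In case it helps anyone else hitting this, the behavior boils down to roughly the following (a simplified sketch with made-up names):

    type ExtendedJson = string | number | boolean | null; // simplified
    type ExtendedJsonObject = Record<string, ExtendedJson>;

    declare function sendQuery(params: ExtendedJsonObject): void;

    interface FooQueryParams { id: number; name: string }
    type BarQueryParams = { id: number; name: string };

    declare const foo: FooQueryParams;
    declare const bar: BarQueryParams;
    declare const wrapped: Readonly<FooQueryParams>;

    sendQuery(bar);     // OK: object type aliases get an implicit index signature
    sendQuery(foo);     // error: interfaces don't, so this doesn't conform
    sendQuery(wrapped); // OK again: Readonly<...> yields a mapped type, not an interface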

If that were the only weird thing, I wouldn't still be bitching about it, but in the last year that I've been focusing on this frontend project, I feel like I've spent countless hours learning about holes and inconsistencies in TypeScript. It feels like every single week I learn about some other broken or inconsistent nonsense that I just think I shouldn't have to deal with.

> And for your 'inheritance sub-typing' example, can you explain what the "right" behavior would be? It's a function that mutates properties on an object, how is the type system supposed to detect that? That's already a code smell and should be avoided.

The correct behavior is that it should be an error to pass a HasDog to a function that requires a HasAnimal. { pet: Dog } is NOT a sub-type of { pet: Animal } just because Dog is a sub-type of Animal. The type theory concept is called "variance". You have "covariant", "contravariant", and "invariant". Mutable object and array types are invariant in their field types. So you can't pass a Array<Dog> for an Array<Animal> because adding an Animal to an Array<Dog> is a type error: an Animal is not necessarily a Dog. However, you CAN pass an immutable Array<Dog> for an Array<Animal> because you are guaranteed to only be reading the elements as Animals, and a Dog is an Animal. Return types often have the inverse type relationships to input params: You can return an Array<Dog> for an Array<Animal> return type because the caller who receives the returned value will treat it as an Array<Animal>, so it's okay to add a Cat to it, even though it was originally only Dogs.
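
In TypeScript terms, the read-only half of this does hold up; it's the mutable case where the compiler lets it slide. A quick sketch with hypothetical types:

    interface Animal { name: string }
    interface Dog extends Animal { bark(): void }

    function countAnimals(animals: readonly Animal[]): number {
      // animals.push({ name: "Garfield" }); // rejected: no push() on readonly Animal[]
      return animals.length;
    }

    const dogs: Dog[] = [];
    countAnimals(dogs); // accepted, and safe: the callee can only read Animals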

And, while I agree that mutating an input param is generally a bad idea, that's no excuse for the type system to not be correct. Every other statically typed language is going to handle this correctly (except built-in arrays in Java. They are also unsound, like TS, but, in Java's defense, it's only built-in arrays, and you almost never use arrays directly in Java; you use List<T>, which is not wrong/unsound). When I say every other statically typed language, I'm literally talking about every statically typed language I've ever used: C++, Java (again, except arrays), Go, Swift, Rust, Kotlin, Scala, even PHP. They all handle this correctly, because it's really basic type theory.


> And, while I agree that mutating an input param is generally a bad idea, that's no excuse for the type system to not be correct.

Yes there is: it's more useful. I don't want to have to write a version of my function for every possible subtype. If I'm not mutating properties, then it's more useful for objects and lists to be covariant. It's not "incorrect" for the type system to do so, that is a deliberate design choice that makes functional patterns easier to express.

This is part of what makes TypeScript so great: they made careful tradeoffs given that they were working off of an existing language. JavaScript doesn't really have immutable objects or lists, but invariance would have caused far more problems than it would have solved.


Who cares if TypeScript's type system is unsound if it lets you express things that you cannot express in a nominal system like Java's? I get so much more productivity out of TypeScript's type system than any other language including "serious" type systems like Haskell's.

Moreover TypeScript's type system is so powerful it actually catches a good number of bugs that Haskell would never catch, because it does control flow analysis by default. For many languages you would need to install a separate static analyser (if one even exists), having CFA as part of type checking in your editor is godlike.

Sure if you are writing software for a radiotherapy machine, don't write it in typescript! Use whatever proof assistant takes your fancy. But if bugs are not literally going to kill people, TypeScript's type system is about the best in existence.


Because type system unsoundness leads to real bugs when people (quite reasonably) make assumptions based on the belief that it is sound. I think on the whole TypeScript does an OK job (considering that most languages have unsound type systems too, including Java) because it’s certainly better than nothing but it’s still something to watch out for.

It would be nice if TypeScript had an option to compile with (obviously limited) runtime checks.

> Who cares if TypeScript's type system is unsound if it lets you express things that you cannot express in a nominal system like Java's? I get so much more productivity out of TypeScript's type system than any other language including "serious" type systems like Haskell's.

If you enjoy your type system being more expressive at the cost of soundness, you should try this language called JavaScript! It lets you express things that you can't in TypeScript!

> Moreover TypeScript's type system is so powerful it actually catches a good number of bugs that Haskell would never catch, because it does control flow analysis by default. For many languages you would need to install a separate static analyser (if one even exists), having CFA as part of type checking in your editor is godlike.

The control flow analysis has nothing at all to do with the soundness of the type system. Kotlin has something similar with its "smart casting," but that doesn't require it to have an intentionally unsound type system.


I’ve been using TypeScript for years, and I’ve rarely run into these unsoundness issues.

I agree that it has its limits, and that there are better type systems, but compared to JavaScript it’s a night and day experience.

The amount of help and clarity it brings to large codebases has no equivalent in the JS world, and even though I love Scala and Elm, they are simply not realistic alternatives to JS for most of the industry.


There are so many choices, though. I haven't tried any of them myself because I hate JavaScript, I hate front-end stuff, and I hate Node, so I try to avoid all of this.

But I was told that TypeScript was awesome, and the whole industry is moving toward it, so that's what we chose for this project.

Next time I won't choose TypeScript. I'll either stick with vanilla JS and know that I don't have a type system to catch my mistakes, or I'll try one of: Elm, PureScript, Flow, ReScript, ReasonML, Dart, etc.

There are so many choices, and I'm sure the ecosystems for these other choices are absolutely abysmal compared to TypeScript, but I'd rather reinvent a few wheels than use wheels that look round at first, but aren't.


Why is Scala not a realistic alternative to JS? It has perfect JS interop.

If TS is an alternative to JS, Scala is too. It's hard to think of an argument that doesn't apply equally to both languages.

The only real issue is market reach at this point in time.


> Why is Scala not a realistic alternative to JS?

- Userbase. Hiring Scala developers is insanely harder than hiring JS developers in most of the world.

- Worse ecosystem. This ranges from anything like finding libraries to finding help online.

- Generated bundles and performance are very subpar compared to most compile-to-JS languages.


>"Calling it "broken" is complete hyperbole"

Well, when comparing it with strongly typed languages it does look broken for sure.


This reply is a good example of how "strongly typed" ends up meaning practically nothing -- except possibly "the kind of type system I prefer".

I once attended a talk where the speaker had identified half a dozen axes that papers or projects were calling "strong"/"weak" with relation to type systems. Some were not even consistent with themselves, switching definitions halfway.

Type systems are tools, whose formal properties can be described and analyzed in precise detail. Unfortunately, that kind of precision is hard, so semantically empty words like "strong" get used a lot instead. This message is intended to raise awareness about that fact.


Well, one can analyze type systems till hell freezes over. I personally do not care, as this precise knowledge (assuming it is formalized and exists) is of zero value to me. When I want to walk I just do. I do not dwell on the details of the walking process.

Anyway, you most likely know well what I meant. JS vs C++, for example.


Neither the type system of C++ nor the type system of TS is sound, in the sense of "the type system rejects all incorrect programs".

If what you meant by "strong" is just "you can declare types on variables", then both TS and C++ qualify as strong, but JS doesn't.

Your use of the word is based on feeling, not fact. Which was my point.


TypeScript compiles down to JavaScript. And this is the problem. When I say uint16_t in C++ I know how it will be kept in RAM and what I can do with it, how I can pack things together etc. etc. TS lacks these abilities. You can play with terminology all you want but it does not change simple facts.

You could (and probably can) compile C++ to JS. What will that be? In the end, code is just data operating on data with no semantics. Your uint16_t is only meaningful to the compiler, the same way as TypeScript types are (with a few caveats).

Then you don't understand the purpose of TypeScript - which is to retain JS syntax while grabbing the lower hanging benefits of a type system. There are many tools to catch errors (for example, unit tests), and the type system is just one of those tools.

The proof is in the pudding. I've been writing JS (and a bunch of statically typed languages) for over twenty years. Large JS codebases have become possible now thanks to TypeScript (an example is VSCode). If you look at any large JS ecosystem project today, chances are that it's written in TypeScript rather than in vanilla JS. People are adopting it because they see benefits.

If you've used TypeScript over the years, you'll also see that many major releases catch a new set of bugs in "strict" mode (code which used to compile successfully earlier). That's the team fixing some of the issues you mentioned.


> Then you don't understand the purpose of TypeScript - which is to retain JS syntax while grabbing the lower hanging benefits of a type system. There are many tools to catch errors (for example, unit tests), and the type system is just one of those tools.

I understand the point. I just don't think it's a good point. If you tell me that I have a type system, then I'd expect that I could actually lean on it to catch type errors, and I'd expect its own language features (like `readonly`) to actually do what they claim they do. I would NOT expect someone to add `readonly` to their language, but have it not actually make something a `readonly` type.

> The proof is in the pudding. I've been writing JS (and a bunch of statically typed languages) for over twenty years. Large JS codebases have become possible now thanks to TypeScript (an example is VSCode). If you look at any large JS ecosystem project today, chances are that it's written in TypeScript rather than in vanilla JS. People are adopting it because they see benefits.

I don't buy the argument that something being popular means it's actually good. There's a lot of group-think that goes into things being adopted. Not to mention that it's a Microsoft product and they have zillions of dollars to make sure the editor tools are good, lots of popular JS libraries get TypeScript types added on top, and just plain old PR advertising.

And yes, the ability to gradually switch from JS to TS makes TS an easier choice for people who are already working with JS or who are used to JS. But, that doesn't mean that Flow, Elm, ReasonML, ReScript, ClojureScript, PureScript, etc, aren't actually better tools with bigger benefits. It just means that TS got the critical mass mindshare amongst JS devs, and now has the advantage of "everyone else is writing libraries for TS, so if I pick Flow for my project, I won't have as big an ecosystem to pull from."


Your experience couldn't differ more from mine. I wonder, do you think there are bad parts of Java that, if you had bought into them, would make you hate Java and your time with it?

To me, each of your examples of brokenness is completely outside the day-to-day which I work in (and honestly my comprehension vis a vis why you would use them), so I can't speak too much to them, but as someone who has gone from untyped JS (obviously) to strictly typing each key of each property in my program, I am finding it pretty funny to see you conclude

> TypeScript is actually so bad that it might have honestly made JavaScript worse

I couldn't disagree more. The quality of the programming, the ability to share the mental model more easily with others, the developer experience of autocomplete and declaration files; each of these things is an order of magnitude improved, and the emergent efficiency of individuals and teams follows suit.


> Your experience couldn't differ more from mine. I wonder, do you think there are bad parts of Java that, if you had bought into them, would make you hate Java and your time with it?

Well, I literally said that I shit on Java all the time, so yeah... I hate most of Java, and my blood pressure goes up just thinking about the shitty null handling, lack of any concept of immutability, weak as hell type system, and the nightmarish combination of commonplace runtime reflection paired with type-erased generics.

> To me, each of your examples of brokenness is completely outside the day-to-day which I work in (and honestly my comprehension vis a vis why you would use them), so I can't speak too much to them, but as someone who has gone from untyped JS (obviously) to strictly typing each key of each property in my program, I am finding it pretty funny to see you conclude

It's not outside my day-to-day. I'm complaining because I'm being bitten by TypeScript's brokenness all the time. If you try to use the advanced TypeScript features, you'll hit all manner of inconsistencies and brokenness, too. I'm sure of it. And if you're only using the very simple typing, are you really doing anything that JSDoc wouldn't do for you?

I'm fairly convinced (and I don't mean this in a condescending way) that the main group of devs that really likes TypeScript must have done mostly JavaScript and/or Python before. Because I can't imagine that someone who has spent a couple of years doing C# or C++ or Rust or Swift or even Go would be comfortable with all of the holes in TypeScript's type system.


> I'm fairly convinced (and I don't mean this in a condescending way) that the main group of devs that really likes TypeScript must have done mostly JavaScript and/or Python before

I don't think this is entirely wrong, it's exactly TypeScript's retained dynamism that makes it familiar and "easier"; but in my case I've spent significant time writing Java, including production scheduling software, and I just fully disagree that TS is "weak as hell" and has "type-erased generics". Like I said, I find it in me to strictly type programs including the boundary-crossing data (GraphQL w/ codegen or io-ts are leaders in their respective paradigms) and including a library of generic components which can correctly give you access to any data T you passed through to implement content generators etc.

For example, there were complaints about Record<>, but I simply use the object syntax { [key: string]: number | string }, so I can't speak to your use case, but I don't really understand why Record would be used.

I suppose I agree immutability doesn't exist without explicit use of Object.freeze, but I can't believe I would hear someone say "shitty null handling" as a TS/JS complaint when they come from Java land. To reduce my implementations to "JSDoc" is pretty funny as well

For the example about generics, I suppose I'd have to understand why it was using `K extends keyof T` instead of `K in keyof T`, I would agree the "extends" keyword is horrible for generics but I also find it difficult to conceptualize why someone would care about the keys in the superset that aren't in the default set (i.e. K extends keyof T - K in keyof T seems like a useless set to me)

To conclude, I guess I would love to see you show me what you are using "advanced TypeScript features" for, because I write fairly complex dataflow / model / graph editors & capabilities, and go to lengths to provide the capabilities generically as a component library, and I've never found an issue providing type hints. Period. I of course haven't implemented everything ever though, and I do avoid doing too much in the world of union types, so maybe this is why


It has holes, mainly because it has to be able to cover existing JavaScript codebases, which for many reasons can be an order of magnitude harder to type than code that's written from the beginning to be statically typed.

The biggest holes can be filled via compiler options, and most of the rest can be papered over with the right coding practices (use Maps instead of Records, lint against `any` and type-casting, etc).

I haven't encountered the situation you described, but you could probably solve it by making the function generic over TArray extends Animal[].

The rest of what you've said doesn't line up with my experience. TypeScript is an imperfect tool that requires some elbow-grease to fully benefit from. I would prefer a typed language that didn't have to work under its real-world constraints (but not Java, because of null-checking at the very least), but over the years it's caught more bugs for me than I could possibly hope to count.


Many of the worst holes are indeed because of JavaScript compat. But, not all of them. See some of my other comments in this thread.

There's no earthly reason that a `type` should automatically implement Record<string, unknown> and an `interface` shouldn't. That's just dumb and annoying, not necessarily "wrong". However, it's definitely wrong to allow me to pass `const o = {}` into a function that wants `Record<string, string>`. How in the world does TypeScript (with the strict flag set!) think that an empty object is able to return a string value for any arbitrary string key? That's absurd.

The issue I raised about arrays isn't just arrays, actually. It happens with object fields as well. See https://www.typescriptlang.org/play?#code/MYGwhgzhAECCB2BLAt...

And, the `readonly` keyword has nothing at all to do with JavaScript. Yet, they added this keyword that is completely and utterly useless. To the point that I actually introduced a bug in my own code because I put `readonly` and `Readonly<>` everywhere and expected it to actually help me. It did help me... most of the time. And then it didn't. Because it turns out that you can always take a Readonly value, assign it to a non-Readonly variable and then just mutate the hell out of it, and the compiler won't even bat an eye: https://www.typescriptlang.org/play?#code/JYOwLgpgTgZghgYwgA...
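
For anyone who doesn't want to click through, the hole looks roughly like this (hypothetical Point type):

    interface Point { x: number; y: number }

    const frozen: Readonly<Point> = { x: 1, y: 2 };

    // No error: readonly modifiers on properties are ignored when checking
    // assignability between object types (microsoft/TypeScript#13347).
    const mutable: Point = frozen;
    mutable.x = 99; // happily mutates the "readonly" object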

That's just... so bad. I don't even know what else to say. Sure, it caught some of my mistakes, but if it won't even catch all of the class of errors that it's supposed to, then I have to basically be just as careful as if I didn't use the feature at all. So, it's not reducing my mental burden at all. If anything, it's just giving us/me a false sense of security and tricking me into thinking that I DON'T need to be as careful.


> However, it's definitely wrong to allow me to pass `const o = {}` into a function that wants `Record<string, string>`. How in the world does TypeScript (with the strict flag set!) think that an empty object is able to return a string value for any arbitrary string key? That's absurd.

It's not. `Record<string, string>` doesn't mean it has a mapping for every string key (that's not possible), it means it might or might not have a mapping for any given string key; it really is analogous to a Map, and it's perfectly valid to pass an empty map to an argument that takes a map of a particular type. The only thing wrong with it is that `MyRecord[someKey]` will by default have type `string`, not type `string|undefined`. This is presumably to help with JS code that commonly indexes objects by strings without checking that the keys return values. Luckily, there's a compiler option to make it return `|undefined` (and it works on arrays too, which have the same problem by default): https://www.typescriptlang.org/tsconfig#noUncheckedIndexedAcc...
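
Concretely (a small sketch):

    const dict: Record<string, string> = {}; // an empty object is accepted

    const v = dict["missing"];
    // By default, v is typed as string, even though it's undefined at runtime.
    // With noUncheckedIndexedAccess, v is string | undefined, and the next line
    // becomes a compile error until you narrow it.
    console.log(v.length);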

> Because it turns out that you can always take a Readonly value, assign it to a non-Readonly variable and then just mutate the hell out of it

I am surprised and disappointed that this works without casting, but I think with casting it's reasonable. TypeScript has to give you trap-doors to work around its checks when they don't serve your purposes; this is part of the "being overlaid on an imperfect language" thing. The important thing is that those circumventions are explicit and stand out.

Though I will say, you definitely can't pass a readonly object/array to a non-readonly object/array function argument, so I'm doubly surprised your example works. I may dig into it further later just out of curiosity.


> It's not. `Record<string, string>` doesn't mean it has a mapping for every string key (that's not possible)

Yes, it absolutely does mean that. Just because you can't correctly implement an object that satisfies the type doesn't mean that the type means something else (`never` is also a type that you can't implement).

> The only thing wrong with it is that `MyRecord[someKey]` will by default have type `string`, not type `string|undefined`.

Um, yeah. That's what's wrong. And therefore Record<string, string> literally does mean a string value for every string key.

The problem here is that some of these "utility types" have a specific intention behind them, and you aren't "supposed to" use Record this way. You're "supposed to" give it more specific types, like Record<'a' | 'b', string>.

But the compiler should be smart enough to know that nothing can possibly satisfy Record<string, string>, so if you write the type, it should be impossible to assign to it.

> Luckily, there's a compiler option to make it return `|undefined` (and it works on arrays too, which have the same problem by default): https://www.typescriptlang.org/tsconfig#noUncheckedIndexedAc...

That's not the solution here, because it will cause a Record<'a' | 'b', string> to return string|undefined when I do `o['a']`, which is also wrong.

The solution is for TypeScript to handle Record correctly.

> I am surprised and disappointed that this works without casting, but I think with casting it's reasonable. TypeScript has to give you trap-doors to work around its checks when they don't serve your purposes; this is part of the "being overlaid on an imperfect language" thing. The important thing is that those circumventions are explicit and stand out.

I have no problem with casting. Having hard type casts does not make the language imperfect or incorrect.

> Though I will say, you definitely can't pass a readonly object/array to a non-readonly object/array function argument, so I'm doubly surprised your example works. I may dig into it further later just out of curiosity.

Definitely?

https://www.typescriptlang.org/play?#code/JYOwLgpgTgZghgYwgA...


For the Record thing, I couldn't find a clear answer one way or the other in the docs on the "intended meaning" of that type. But I will say that in practice it behaves almost exactly like the meaning I described. So you can either take that meaning and run with it and be mostly happy, or insist on a different meaning that the implementation contradicts at nearly every turn and be upset about it (or you can just not use Record).

> Definitely?

I suspect one of the compiler options is to blame, and I just don't have time to dig into it right now. But I promise you, I've (gladly) had to convert many functions to take `readonly` arrays for their arguments because otherwise I couldn't pass a readonly array to them.


> But I will say that in practice it behaves almost exactly like the meaning I described. So you can either take that meaning and run with it and be mostly happy, or insist on a different meaning that the implementation contradicts at nearly every turn and be upset about it (or you can just not use Record).

What does this even mean? "In practice"? In practice, it behaves exactly as it actually behaves. And it behaves by letting me access ANY string key and ALWAYS giving me something that it claims is a string. Then, I WILL get a runtime error when I write something like `obj['your mom'].length`, but the compiler will let it compile. So, how does it behave as you described? You said it behaves as though there "may" be a string at those keys, but if it did that, it would return a string|undefined. It doesn't. TypeScript is literally incorrect in how it handles this type. You have to either not allow me to assign a regular object to a Record<string, string> OR you have to always return a string|undefined when I do access a Record<string, string> by index.


> it would return a string|undefined. It doesn't.

It does do that, with the `noUncheckedIndexedAccess` flag I mentioned. The same is true with arrays: `arr[index]` in reality may or may not return an item (the index may be out of bounds), but by default TypeScript gives it type `el` and not `el|undefined` so that every for-loop in the world doesn't break as soon as TypeScript is added. But it lets you enable this check optionally via the compiler flag. This is a good compromise IMO.


I mentioned above that I feel like the noUncheckedIndexedAccess flag isn't a proper solution either. If I pass a Record<'foo', string>, then the compiler should allow me to get 'foo' out of the object as a string, not a string|undefined.

But you're right that arrays have a similar issue. But I feel like the object-like types have more information for the compiler. An Array<T> doesn't have its length as part of the type (but thankfully, TS does have tuple-types, which is neat). A Record does have information about its keys, so we should be able to infer more information about it. I feel like the Record issue is more akin to if TypeScript had an Array<T, N> type, but still let me access index 3 of an Array<string, 3> and treat it as a string, even though it should be able to figure out that only indexes 0-2 have strings.


Update: I dug into your example, and you're right about `readonly` on objects, which is really unfortunate (but I'm glad to learn about it!). Looks like it's on the community's radar at least, so hopefully they fix it some day: https://github.com/Microsoft/TypeScript/issues/13347

Arrays are treated correctly though, at least: https://www.typescriptlang.org/play?#code/C4TwDgpgBAShCGATA9...

And despite the above, I still don't think `readonly` is useless or harmful. I benefit from it and I'm glad it exists, even if it could be better.


So, yes, arrays are treated correctly. But think about this: JavaScript has eight or so fundamental types: undefined, null, number, string, boolean, symbol, object, and array. All but two of those are primitive and therefore already immutable/readonly. So the readonly/Readonly<> feature is specifically for two of JavaScript's types. It only works on one of the two. That's 50%. We have a feature that only works for 50% of the cases it's supposed to.

It's unacceptable that TypeScript even allows us to write `readonly` on object types. It should be an array-only feature until it actually does what it says it does. Is it really unreasonable for me to expect them to not offer a feature that literally does nothing when I use it?

> And despite the above, I still don't think `readonly` is useless or harmful. I benefit from it and I'm glad it exists, even if it could be better.

I disagree and I think you're wrong. And here's why.

You just told me a few comments above that TypeScript has caught mistakes for you when you passed a readonly array (I'm assuming an array, because that's the only way it works) to a function that took a non-readonly array parameter. That sounds like a glass-half-full situation. Great! TypeScript helped you...

But. You just learned today, from some arrogant jerk on HackerNews, that readonly does NOT work on object types. So, if I assume that you've also used readonly on object types (because why wouldn't you? You thought it worked.), it means that there is a GOOD chance that you've made the very same mistakes with objects as you admitted to making with arrays. However, TypeScript did NOT catch those mistakes for you. Instead, you thought your code was correctly handling readonly and non-readonly objects.

So, did TypeScript do you a favor here or not? I sincerely believe that TypeScript did you a DISSERVICE. It tricked you. You probably relaxed the part of your brain that worried about that kind of thing when you wrote naked JavaScript. You relaxed it because you thought that TypeScript was actually helpful. And it DID catch some things, so why would you stress about it? I don't sit here and stress over whether I'm passing a number to a number field, because I know that TypeScript will tell me if I pass something that isn't a number. But what if it just didn't sometimes?

TypeScript would literally be better if it didn't allow readonly on object types. The code that you and I, both, wrote would likely be MORE correct if we knew that TypeScript wasn't going to help us. We'd both sit at our desks and say "Oh shit, I'm passing an object to this function. I better be very careful to see if the function is going to mess up if I mutate the object after passing it in." Instead, you and I got duped, and thought that readonly actually did stuff that it says it does. So we didn't pause and think carefully.

Fuck that. I'm becoming more and more convinced that everyone in this thread has Stockholm Syndrome. Everyone has just decided that TypeScript is awesome and I could point out 1,000 issues with its type system, and people are just going to say "well, don't use that" or "just pretend this type really means something other than what the compiler literally tells us". TypeScript could murder your cat and I'm pretty sure I'd get 10 replies saying "Well, I don't have a cat, so TypeScript is great!" or "Just don't have a cat. Duh."


I understand your reasoning. I think a case could be made that TypeScript shouldn't allow `readonly` as it's currently implemented on object types (though I also think the opposite case could be made).

But here's the thing: at least in the real-world TypeScript code that I've written and seen over the years, `readonly` is usually not of crucial importance. It's a nice-to-have. I use it to remind myself "oh yeah, best not to mutate that in this section of the code". In most of those cases it's also obvious from context that we're working with the value immutably. Finally, I'm not usually mixing-and-matching mutable and immutable versions of the same type, which is where you're most likely to run into this problem. If a thing is readonly, it's probably readonly everywhere. I've had very few cases where it's "the world will blow up if this gets mutated in this one narrow situation which might accidentally happen because I passed an immutable value into a mutable argument".

That's all very hand-wavy and anecdotal. I acknowledge that. But if you want to make good technical decisions you have to weigh the tradeoffs, not just the principles, and the reality of any usable type system layered on top of JavaScript is going to be messy and have compromises. I've used Flow - which tends to be stricter about these things (to a general disregard for JavaScript's idioms and syntaxes) - and it's frankly horrible. We migrated a huge codebase at one point from Flow to TypeScript because it was simply not helping us be productive. The other alternative is to use a language designed from the get-go to support a perfect type system - which can be a great option in some scenarios - but that comes with its own trade-offs if you're targeting the web ecosystem, especially if you have an existing codebase in JavaScript. You're going to have a hard time porting a huge existing codebase to Elm, for example.

I personally believe that TypeScript is pretty darn close to a local optimum for "static type system that can be gradually applied to existing real-world JavaScript codebases".

Your individual complaints are not technically wrong, but you're taking a hyper black-and-white stance that purposely focuses on details that aren't as significant and ignores real-world tradeoffs. If you don't like TypeScript, don't use it! The rest of us will continue to enjoy and benefit from it.


I think you're overthinking this. TypeScript definitely helps catch issues - there have been large migrations to TS that were well received by engineers and project managers alike. Yes there are issues in the type system but you can find them in any language. Haskell has very strong claims but then you come across 1/0 != 1`div`0

Your Haskell example doesn't actually violate its own type system, though. 1/0 is Fractional or Double or whatever, so it gives Infinity. 1`div`0 is Integer and Infinity is not an Integer, so it explodes, instead.

Definitely confusing. Perhaps poorly designed. But it's not type-theoretically incorrect. I'm complaining about TypeScript being literally incorrect in its handling of types. Not "strange" or "weird" or "surprising", but incorrect.


> Arrays are covariant in their type param, so I can pass a `Dog[]` into a function that accepts `Animal[]`. If that function adds a `Cat` to the passed array, the compiler is perfectly happy, but we'll see a runtime error.

The same is true for Java arrays. I thought the Liskov substitution principle was tautological when I first heard about it, but it seems like the original designers of Java disagree. Another example (albeit from the standard library, not the type system itself) is that if you try to modify a `List` or similar collection, it may throw an `UnsupportedOperationException` because it's immutable - instead of having a superinterface with only non-mutating methods and only implementing that.


Ugh. You're going to make me defend Java? I'm going to need a drink...

First point: Yes, Java's arrays are broken and it offends me greatly. However, in practice, it's much more common to use Lists, which are not broken. In JavaScript/TypeScript "Array" is the equivalent of Java's List, so if you want a dynamically sized, contiguous, collection, you are stuck with the broken thing.

Then, the other issue is that, despite my comment's wording, this problem isn't ONLY present in Arrays. It's also a problem with objects, and (I assume) every other non-primitive type. See here: https://www.typescriptlang.org/play?#code/MYGwhgzhAECCB2BLAt...

So even a basic object field is broken in TypeScript's type system. What the hell are we supposed to do with this?

I do have thoughts and opinions about Java's Collection interfaces, the immutable implementations, etc, but that feels like a whole other topic.


Yes, TypeScript’s type system is unsound. Yes.

Flow handles some of these cases better, but the invisible keyboard of the market has decided that the extra conceptual/syntactical overhead of introducing co/contravariant types is not worth it for these corner cases. You may disagree, that’s fine. But I’d suggest treating the problem as the engineering tradeoff that it is rather than an absolute fact that every type system must not have any holes and those that do are worthless.


The invisible keyboard of the market... guided by millions of dollars from Microsoft, I assume.

> rather than an absolute fact that every type system must not have any holes and those that do are worthless.

Also, don't straw man me, please.


> So even a basic object field is broken in TypeScript's type system. What the hell are we supposed to do with this?

> So a hole exists in TypeScript's type system. What the hell are we supposed to do with this?

> So a hole exists in TypeScript's type system. How can this be useful?

> A hole exists in TypeScript's type system, therefore it is not useful.

> Every type system must not have any holes and those that do are worthless.

Where's the straw?

Regardless, the issue you pose is solved via generics, which would be the appropriate TS way to model this: https://www.typescriptlang.org/play?#code/MYGwhgzhAECCB2BLAt...

Similar constructs would solve your proposed Array issues too: https://www.typescriptlang.org/play?#code/MYGwhgzhAECCB2BLAt...

TS is explicitly designed to be easy to apply to existing codebases without terribly much work. This means it must allow unsound code in some cases. However, given it also provides mechanisms to make the code sound, I do not see why this must be a dealbreaker to you. I think if you took the time to learn it you'd find the type system is actually quite powerful.

(Disclaimer: I work at MSFT on their TS editor)


> Where's the straw?

Seriously? The very first transformation you did was a straw man.

There are two concepts here. Do you understand that two things can have different importance? Do you also understand that some numbers can be larger than others? For example, 10 > 1.

So, "basic object field sub-types" is a pretty damn big error. Furthermore, I didn't say that this one issue made it worthless. I do think TypeScript is nearly worthless, but if this issue were the single issue with TypeScript, I might not say it's worthless. However, we have this issue, which is a big deal, and we have MORE than one of these type issues.

So, the statement "every type system must not have any holes and those that do are worthless" is not even what I think at all. A small, niche, edge case of a type hole would not make a language as shitty as TypeScript, and therefore would not qualify as a worthless type system, automatically. No, it's the fact that TypeScript's type system has MANY (>1) type soundness/correctness issues, and some of them are quite serious and fundamental.

Using generics to "fix" the problem is a work around. It does not somehow make TypeScript's compiler correct. The fact is that the type system is broken/incorrect. There should be no need to use generics, and generics aren't somehow more correct in a type-theory sense.

> I do not see why this must be a dealbreaker to you.

You don't have to. But, I've had to research strange, inconsistent, incorrect behavior in TypeScript way too many times in the last few months. This is a bad language where a significant amount of the type system is only giving a false sense of security/correctness.

> I think if you took the time to learn it you'd find the type system is actually quite powerful.

I've spent the last year learning WAY more about TypeScript's type system than I should need to. It isn't powerful at all if it's incorrect. Readonly types don't work, sub-type relationships don't work, and generics sometimes don't work as expected; where's the "power" in that?


Well we both know there are only two types of languages, those that no one uses and those that people waste their time bitching about online. Glad to see TS is in the second category or else my day job would be useless :)

Those ad hominems do a good job of convincing me this isn’t a thread worth continuing. I wish you well in finding a powerful, complete, and consistent type system. I’m sure you’ll succeed if you try hard enough.


> The same is true for Java arrays.

This is an unfortunate relic of pre-1.5 Java. The rest of the Collections API is typed correctly. A type-safe generic alternative is Arrays::setAll.

> if you try to modify a `List` or similar collection, it may throw an `UnsupportedOperationException`

Again, an historical artifact. `List` is from Java 1.2; you would effectively have to deprecate List<T> for that to work.


Most of these things in TS are due to historical artifacts too, namely code patterns that would not work well without them.

I generally agree with everything you're saying, after extensive use of Typescript. Its safety guarantees usually only help within extremely tight bounds, and there's surprisingly little that it catches that couldn't be inferred directly.

However, a question:

> Arrays are covariant in their type param, so I can pass a `Dog[]` into a function that accepts `Animal[]`. If that function adds a `Cat` to the passed array, the compiler is perfectly happy, but we'll see a runtime error.

I don't think this one is true anymore, assuming you type it correctly. I've provided an example[1].
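To make the quoted scenario concrete for other readers (the type names are assumed, and the playground link above is truncated, so the exact code there is a guess), the problematic version and a guess at the "typed correctly" one look roughly like this:

  interface Animal { name: string }
  interface Dog extends Animal { bark(): void }
  interface Cat extends Animal { meow(): void }

  // The hole from the quote: Dog[] is accepted where Animal[] is expected.
  function addCat(animals: Animal[]): void {
    const felix: Cat = { name: "Felix", meow() {} };
    animals.push(felix);                // fine per the signature
  }

  const dogs: Dog[] = [{ name: "Rex", bark() {} }];
  addCat(dogs);                         // accepted: TS arrays are covariant
  for (const d of dogs) d.bark();       // compiles, but throws at runtime on the Cat

  // A guess at the "typed correctly" version: generic in the element type,
  // so the function can only push elements of the array it was given.
  function addAnother<A extends Animal>(animals: A[], extra: A): void {
    animals.push(extra);
  }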

---

Separately however, I think it's worth noting that I've been playing with the idea of doing a project in plain Javascript again, to see if I feel any serious productivity losses.

- [1]: https://www.typescriptlang.org/play?#code/C4TwDgpgBAwghsKBeK...


> I don't think this one is true anymore, assuming you type it correctly. I've provided an example[1].

You are correct that using a generic does fix the issue, and you called that "correct". But why, as a code author, should it be my responsibility to understand type theory better than my compiler? According to basic type theory (to the extent that I understand it), TypeScript is allowing an operation that is literally incorrect in the first example: (mutable) array types must be invariant in their type parameter. It should bitch at us for both examples.

And this won't happen with just arrays. It'll happen if you do the same thing with an object with a field. E.g., instead of `Animal[]`, we could have the function take a `{ pet: Animal }`, and pass it a `{ pet: Dog }`, and replace the Dog with a Cat, causing a runtime error for the poor schmoe who thought he still had a Dog.
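A minimal sketch of that scenario (the replacePet/kennel names are mine); it compiles under all the strict flags but fails at runtime:

  interface Animal { name: string }
  interface Dog extends Animal { bark(): void }
  interface Cat extends Animal { meow(): void }

  function replacePet(holder: { pet: Animal }): void {
    const felix: Cat = { name: "Felix", meow() {} };
    holder.pet = felix;           // fine per the signature
  }

  const kennel = { pet: { name: "Rex", bark() {} } };
  replacePet(kennel);             // accepted: object properties are covariant
  kennel.pet.bark();              // compiles, but throws: bark is not a function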


> TypeScript's type system is so utterly broken that I honestly don't know if my code is ANY more robust than if I had written it in JavaScript.

You gave three examples that seem relatively rare. At work we use TypeScript to type a codebase that was mostly plain JavaScript, and we write TypeScript as plain JavaScript. It does its job very well, offering us better tooling and compile-time guarantees. If you consider it as a way to statically type existing JS codebases, it's good. If you consider it as a language on its own, it's not great.


I only gave three examples because it was a single Hacker News comment and I didn't think I should keep a running catalog of every single hole/issue/inconsistency that I've encountered in the last year of working on a TypeScript project.

I gave a few more examples in other comments in this thread. I can go look at the code comments I wrote in my project and come up with a longer list for you if you like.

> You gave three examples that seem relatively rare. At work we use TypeScript to type a codebase that was mostly plain JavaScript, and we write TypeScript as plain JavaScript. It does its job very well, offering us better tooling and compile-time guarantees. If you consider it as a way to statically type existing JS codebases, it's good. If you consider it as a language on its own, it's not great.

This is a legitimately genuine question: if you're treating TypeScript as just type annotations on an otherwise regular-old-JavaScript codebase, then why are you using TypeScript rather than just JavaScript with JSDoc comments?

Also, I can't quite tell. Are you defending TypeScript's brokenness by suggesting we just don't use all of its advertised features? Isn't that... kind of weak? It's like the joke about going to the doctor: "Doctor, my elbow hurts when I do this." "Well, don't do that!" -- If TypeScript advertises a feature, shouldn't that feature, like, actually work?


> This is a legitimately genuine question: if you're treating TypeScript as just type annotations on an otherwise regular-old-JavaScript codebase, then why are you using TypeScript rather than just JavaScript with JSDoc comments?

TypeScript is way easier to write, and it allows us to annotate more stuff. We do have some JS that will stay as JS and is annotated with JSDoc, and it's a worse experience than TS.

> Also, I can't quite tell. Are you defending TypeScript's brokenness by suggesting we just don't use all of its advertised features?

I'm defending the usefulness of the tool, but not its marketing. I think the approach of trying to cover every single usage of JS is not the best, and it leads to some complex features and weird edge cases like the ones you mentioned. But for typing plain JS, it works very well.

I guess we get a lot of value from TS because static typing is easy when your code wasn't that dynamic in the first place.

> If TypeScript advertises a feature, shouldn't that feature, like, actually work?

Fair point. I honestly don't know what a codebase that doesn't really benefit from TS looks like, but from your description I can see that their claims are a bit much. I consider TS part of a campaign from Microsoft to regain mindshare and/or control of developers, and thus I always took their claims with a grain of salt.


It's a shame that something like TypeScript does and will continue to dominate the statically typed JS ecosystem/industry, instead of something simpler and saner like Rescript.

Rescript is sane, but I wouldn’t call it simpler. It can be really hard to do trivial stuff with it. I felt I had to study OCaml to understand it.

I think it gets pretty close, but admittedly its handling of unknown objects is just downright bad.

Also, somehow, the tooling for it has one of the best features around (compiling to readable JS), but all the other tooling is either completely absent or, frankly, bad.


I have worked with Java in the past and I work with TypeScript now. My experience is the reverse. TypeScript's type system is limited because it has to be compatible with JavaScript, but working with algebraic data types is a joy. Java, by contrast, is verbose and cumbersome.

This seems like such an odd complaint, because TS is very loudly, purposely unsound. So much of TS is purposely unsound. Unless you turn on an option (strictFunctionTypes), function parameters are checked bivariantly! If you write a function that takes a Dog, you can pass it where a function that takes an Animal is expected and it will type check!
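A small illustration (the Shelter name is mine): method-style signatures are checked bivariantly even under strictFunctionTypes, so a handler that assumes a Dog is accepted where one taking any Animal is required.

  interface Animal { name: string }
  interface Dog extends Animal { bark(): void }

  interface Shelter {
    admit(a: Animal): void;       // method syntax: parameters checked bivariantly
  }

  const dogShelter: Shelter = {
    admit(d: Dog) { d.bark(); },  // accepted, even though it assumes a Dog
  };

  dogShelter.admit({ name: "Felix" }); // type checks, but throws at runtime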

It can be as loud and purposely unsound as it wants. That won't stop me from thinking it's a bad tool.

The problem is that function parameter variance is still wrong, even with all the strict flags. You just have to wrap the Dog/Animal in an Array or object. Pass in an { pet: Dog } to a function that accepts a { pet: Animal }, and the function can replace your Dog with a Cat, and TypeScript is perfectly happy to let you call `o.pet.bark()` afterwards.


Like, I get you, but I genuinely cannot think of a single time I’ve ever hit this problem organically. I can’t even think of a time I’ve ever even thought to modify a parameter when writing JS. I think if the language made every type immutable when passed to a function, I wouldn’t even notice.

It’s not an excuse, because where possible I think TS should be stricter, but I don’t think stuff like this really gets in the way of it providing me value when writing code.


What's an example of a language that you like?

I don't know how to say this politely, but I'm afraid to answer this question because I suspect that it will just turn into either a language war or an argument along the lines of "Touché! I can point out a flaw in your favorite language, so obviously TypeScript and LanguageX are exactly equal and you're not allowed to criticize TypeScript!"

um, no. TypeScript is fantastic.

JavaScript is most definitely not as fast as Java. Try doing some CPU- and memory-intensive Project Euler problems in both languages and you'll see a severalfold difference.

Java has a much better standard library - abstract data type interfaces, common and fast data structures, I/O facilities, concurrency, and more.


True, but for a dynamic language it's very close[0], and if your application lives in the Java collections framework (as most do) instead of using raw arrays, the performance gap is even smaller.

[0] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Well, let’s just add that TruffleJS, a JS implementation in Java that is part of the Graal project (a JIT compiler written in Java, running on top of the JVM), can achieve speeds comparable to V8 for long-running tasks. (Java’s JIT compilers “turn on” later than JS’s, because a JS engine has to be quite fast as soon as possible, while the JVM has to be really fast but can warm up a bit more.)

There is a slight trick here, because this interpreter uses a few special JIT optimizations, but it is still 100% Java code (and a clever reuse of the engineering marvel that the JVM is). (Also, do check out the Graal project; its AOT compilation may be the least interesting part! It can even inline Python code into JS and optimize them together!)

Also, Java will soon get virtual threads that automagically become non-blocking, even when they are written with blocking code (similarly to Go). Currently in incubator mode is the Vector API, which lets people write really low-level SIMD code that can cleverly query the processor’s capabilities and will even safely fall back to plain for loops on ineligible CPUs. FFI will also greatly improve with Project Panama, so libraries like TensorFlow can get better integration into the JVM.

And last but not least, Valhalla is coming with value types. Yeah, it won’t happen for a few more years, but once it does, there will hardly be a platform as performant as the JVM. So, while we have heard plenty of times that Java will soon die, I think its future is brighter than ever.


> Javascript being as fast as Java

Perhaps for some specific use cases, but certainly not for others. I also suspect moving from a fairly broad "stdlib" and somewhat curated 3rd party packages to the wild west of npm might be a barrier for many.


In the world of multicore processors, when "Moore's law" has been dead for a decade? Rather, the future of JS outside of browsers doesn't look very bright to me.

You can't create lock-free data structures in Javascript (as of this comment), but you can do so in Java.

Javascript is single-threaded. There are no locks, and therefore no lock-free data structures are necessary. Even web workers are only communicated with via copied messages.

  Javascript is singlethreaded
… which is why the root comment looks like pure trolling.

Every time I read about the insane achievements of a language like JavaScript, I am reminded of how disappointing it was to watch. Maturing JavaScript was a lateral move that cost, what, a billion man-hours?

The new cross-plat runtimes and languages we could have had by now if JavaScript didn't have a stranglehold on the available browser-side language space... wow.


> The new cross-plat runtimes and languages we could have had by now if JavaScript didn't have a stranglehold on the available browser-side language space

Probably barely working, super fragmented garbage. A standard that isn't optimal but works well enough is fantastic in comparison to everybody trying to make their own thing. And realistically, if something else had emerged, people would whine about it just as much as they do about JS.

Nowadays, we can use pretty much any language we want and compile it down to JS/WebASM. And most just add some compile checks with TypeScript. Because that's good enough, and it works.


That's one way to look at it.

Another way is to appreciate the fact that we have an open platform with a standards body that actually works (cue the responses about how it actually _doesn't_ work). I can write an app once and access it from any device, and it will work today, and it will work 10 years from now.

No other platform has achieved what the web browsers have, so I think we might be doing something right.


In Eich’s defense, I’ve read before that he originally wanted to make a Scheme dialect instead of JS but was given the finger by higher-ups (and only two weeks to create JS 1.0 to begin with!).

Don't forget that the most complained-about feature -- type coercion -- was added later at the explicit request of developers and then left in due to pressure from Microsoft.

...who then created TypeScript


Missing "How" from title

Pretty cool article. I've been dealing with JS for years without ever thinking too hard about how it all works


It's always surprising reading articles like this.

The hard truth is that Javascript does not actually achieve great performance. It's squarely in the back-middle of the pack. It's just that it's vastly improved versus its old self, and the methodology to get there has been wildly complex.

Of the languages tested in the alioth benchmark, more than half outperform node.

This should not be surprising, given its lack of basic containers and algorithms, and the inability to make the real thing. (No, an array containing an item and another array isn't a linked list. No, Okasaki containers aren't the real thing. Your complexity guarantees undermine your confident statements.)

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


So yes, but C++ is probably not the comparison most development teams will be presented with. You’re probably looking more at Python, Ruby, Go, and Java as the main alternatives, and there JS suddenly starts tending toward the front of the pack.

Yet the title says "great performance", not "great performance compared to python".

Go and Java have quite good performance.

If you can think it you are a great programmer. If you can implement it you are a great engineer. If you can share it you are a scientist.

Is Javascript considered to be fast?

It might be close to as fast as it could be, given the inherent constraints of the language, but I would not call it fast in a general way.


Javascript is actually quite fast, on par with Java[0], but it is DOM manipulation that makes it seem so slow.

[0]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I mean there are a number of benchmarks where Node outperforms java if you look at the benchmark comparison: https://benchmarksgame-team.pages.debian.net/benchmarksgame/.... The optimization work that goes into JS is actually really impressive and comes from most of the biggest tech companies.

As per that list Java is still faster for most of the benchmarks. Massive effort has gone into the JVM too, for the last 20+ years.

Note also that these numbers include startup time, which is noticeably slower for the JVM compared to node. If you're not restarting the server on every request (like the "serverless" nonsense does), a second more or less is not that important. I'd like to see a sustained-throughput comparison between those two.

For those tiny tiny programs, 1/10th of a second of JVM startup --

“Wtf kind of benchmark counts the jvm startup time?”

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I’ve been using JavaScript since 1997. The fact that you can ask this question makes me think you weren’t using JS in the first decade of the WWW. Frankly, until Google came in and shook things up, JS was incredibly slow. Think of the difference between a 14.4k modem and 1-gigabit fiber.

Yes, it is quite fast for a managed language. For comparison, Python is roughly 10x slower.

It's interesting to see where they fail. I tried to parse the Stack Overflow database (XML) using node. It was so slow as to be unusable (multiple hours). I did the same in Python and it took minutes. Of course, I'm basically comparing libraries, and I have no idea how poorly the node library was implemented, but I'd be super curious to know whether this is something Python excels at because of language features or whether it's just poorly implemented libraries in node that make the difference.

I'd bet it's because the Python library is actually just a thin wrapper around a C library.

You can do that in node too via NAPI or you could compile to wasm and run there instead.



