Hacker News
Skeptical of rewriting JavaScript tools in "faster" languages (nolanlawson.com)
168 points by todsacerdoti 49 days ago | 322 comments



> I don’t think that JavaScript is inherently slow

It is.

Brilliant engineers have spent decades making it faster than you might expect, subject to many caveats, and after the JIT has had plenty of time to warm up, and if you're careful to write your code in such a way that it doesn't fall off the JIT's optimization paths, etc.

Meanwhile, any typical statically typed language with a rudimentary ahead-of-time compiler will generally be faster than anything a JS VM will ever approach. And you don't have to wait for the JIT to warm up.

There are a lot of good things about dynamically typed languages, but if you're writing a large program that must start up quickly and where performance is critical, I think the right answer is a sound typed language.


I spent years tuning a JS game engine to play nice with the JIT for best performance. Then I rewrote the engine in Rust/WASM over a few weekends (turns out JIT-friendly code is straightforward to port to a statically typed language) & things are an order of magnitude faster now, with the benefits of static type checking & no spooky JIT perf to optimize for.

Just because JS can be fast doesn't mean it's a pleasure to write fast JS


It mostly depends on the application. If you're doing complex transforms over hundreds of GBs of data- yeah, use a workhorse for that.

But the vast majority of slow JS I've encountered was slow because of an insane dependency tree or wildly inefficient call stacks. Faster languages cannot fix polynomial or above complexity issues.


> If you're doing complex transforms over hundreds of GBs of data-

It's not always about giant datasets. Latency matters too for software that is run frequently by users.

I maintain a code formatter. In a typical invocation, it's processing only a few hundred kilobytes of input data. But it is invoked every time a user hits control-S, thousands of times a day. If I make it a few hundred milliseconds slower, it materially impacts their flow and productivity.

> Faster languages cannot fix polynomial or above complexity issues.

This is true. But once you have fixed all of your algorithmic issues, if your language is slow, you're completely stuck at that point. You have hit a performance ceiling.

Personally, I would rather work in a language where that ceiling is higher than it is in JavaScript.


Yeah but the application we're talking about here is JavaScript tools, or more generally "language processors" / AST-based workloads

These are very different from your average JavaScript program.

And that's exactly where it starts to be the case that JavaScript semantics are the issue

Take it from Lars Bak and Emery Berger (based on their actions, not just opinions): https://lobste.rs/s/ytjc8x/why_i_m_skeptical_rewriting_javas... :)


> I think the right answer is a sound typed language.

What do you mean by a "sound typed language"? Go and Java have unsound type systems, and run circles around JS and Dart. Considering your involvement with Dart, I find this contradictory [1].

[1] - https://github.com/dart-lang/language/issues/1461


> What do you mean by a "sound typed language".

I mean that if the type checker concludes that an expression or variable has type T, then no execution of the program will ever lead to a value not of type T being observed in that variable or expression.

In most languages today, this property is enforced with a combination of static and runtime checks. Mostly the former, but things like checked casts, runtime array covariance checks, etc. are common.

That in turn means that a compiler can safely rely on the type system to generate more efficient code.

Java intended to have a sound type system, but a hole or two have been found (which are fortunately caught at runtime by the VM). Go's type system is sound as far as I know. Dart's type system is sound and we certainly rely on that fact in the compiler.

There is no contradictory information as far as I know, but many people seem to falsely believe that soundness requires zero runtime checks, which isn't the case.
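
To illustrate what a soundness-preserving runtime check looks like, here's a minimal Rust sketch (Rust rather than Java or Dart, since it's the language most of this thread is about): a checked downcast can fail at runtime, but it can never let a value of the wrong type be observed at a given static type.

    use std::any::Any;

    fn main() {
        let value: Box<dyn Any> = Box::new(42_i32);

        // A checked cast: the runtime test means a variable of type
        // &String can never observe a non-String value, so the static
        // types are never violated even though the check happens at runtime.
        match value.downcast_ref::<String>() {
            Some(s) => println!("got a String: {s}"),
            None => println!("not a String; the check preserved soundness"),
        }
    }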


> which are fortunately caught at runtime by the VM

At least one of them isn't, but that one is in a crufty old area of the code that most people don't care too much about:

https://blog.devgenius.io/java-106-why-does-sneakythrows-wor...


Wow, that's a new one for me. But as far as I can tell, that isn't violating soundness, because I don't think Java uses checked exceptions as static types in the type system. All this means is that some methods might unwind with an exception type you wouldn't expect them to unwind with.


Would you mind expanding on what you mean by "as static types"?

As I see it, it's definitely a part of Java's type system that things which throw a checked exception will require that you handle those checked exceptions. But now here comes `sneakyThrows` and the next thing you know your `Function#apply` (`R apply(I)`, not a `throws` to be seen in that declaration) is throwing a checked exception (not an `UndeclaredThrowableException` wrapping the checked exception, literally the checked exception ... unchecked).

But I'll admit I'm rather an amateur when it comes to such things.


To violate soundness, you'd need to get into a situation where an unexpected checked exception leads an expression or variable to end up with a value of some type that doesn't match the expression or variable's static type.

This would be easy if the hole worked in the other direction where an exception should be thrown but isn't. Because then you could probably do something like confuse the definite assignment analysis in a constructor body to make it think all fields are initialized by the end of the constructor, like:

    class C {
      final int x;

      C(boolean b) {
        if (b) {
          thingThatMustThrow();
        } else {
          x = 1;
        }
      }
    }
Here, if the control flow analysis expected that `thingThatMustThrow()` will always throw an exception, then it might conclude that all paths that reach the end of the constructor body will definitely initialize `x`. But if that method doesn't throw an exception, then execution could proceed merrily along and `x` never get initialized.

(In practice, this isn't an issue because field definite initialization analysis is already unsound so the VM default initializes every field anyway.)

But in this case, it goes in the other direction. You have code that isn't expected to throw that does. I can't think of a situation where that would lead Java to end up with a value of the wrong type flowing somewhere unexpected.


Ah, "static" in the sense of "statically analyzable" because it's part of the typing phase. And "sound" here meaning only that the program can only assign values that match the statically determined types. Interesting that "soundness" doesn't include typing effects (at least in the literature), but I guess general typed effects are relatively new and we aren't yet at the point where Koka-like effect types are "expected functionality" when building a new programming language.

Thank you so much!


I’ve written quite a bit of tooling in JS, and I genuinely enjoy the language, but I feel like Rust and Go are a godsend for these types of tools. I will sometimes prototype with TypeScript, but if something requires massive concurrency and parallelism, it’s unlikely I’ll stick with it.

I wonder if the author would feel differently if they spent more time writing in more languages on tooling like this. My life got a lot easier when I stopped trying to write TypeScript everywhere and leveraged other languages for their strengths where it made sense. I really wanted to stick to one language I felt most capable with, but seeing how much easier it could be made me change my mind in an instant.

The desire for stronger duck typing is confusing to me, but to each their own. I find Rust allows me to feel far, far more confident in tooling specifically because of its type system. I love that about it. I wish Go’s was a bit more sane, but there are tons of people who disagree with me.


> The desire for stronger duck typing is confusing to me, but to each their own

I really like duck typing when I'm working on small programs - under 10,000 lines of code. Don't make me worry about stupid details like that, you know what I mean so just do the $%^#@ thing I want and get out of my way.

When I work with large programs (more than 50k lines of code - I work with some programs with more than 10 million lines, and I know of several other projects that are much larger - and there is reason to believe many other large programs exist whose maintainers are not allowed to talk about them), I'm glad for the discipline that strong typing forces on me. You quickly reach a point in code where types save you from far more problems than their annoyance costs.


A language that goes from prototype-quality (duck typing, dynamic, and interpreted) to strict static compile checks would be nice.

I can’t think of any in the mainstream, however.


Isn't that TypeScript or more generically "type hints" in other languages? If all you care about is that your program is valid for some gradually increasing subset of your code at compile time then these work great.


I was envisioning a systems language or some other compiled language, but you're right. I overlooked TypeScript and it indeed comes close.


I don't do JavaScript, but I know from experience that adding const to existing C++ is nearly impossible; I expect the same for retrofitting types.


Common Lisp comes to mind, but I guess it is debatable to consider it mainstream.


> you know what I mean just do the $%^#@ thing I want

Yeah, it's just that at about 10k LoC, as I've also noticed, you don't actually know what you yourself mean! It's probably because that amount of code is almost never written in one sitting, so you end up forgetting that e.g. you've switched, for this particular field, from a stack of strings to just a single string (you manage the stacking elsewhere) and now your foo[-1] gives you hilarious results.


Types, in the end, are contracts you make with yourself, enforced by the compiler.

A weak type system gives you the freedom to trick yourself.

I don't feel it's a feature.


I just skimmed the article but the author had a statement about JS being "working class" in that it didn't enforce types and that he dislikes TS for that reason. Rust is completely anathema to that attitude, you have to make a LOT of decisions up front. People who don't see the value in a compiler are never going to like working in Rust. The author is completely satisfied with optimizing hacks in the toolchain.


I also thought rewrites of JS projects were only in part motivated by perf gains. Sure, those were the most advertised benefits of the rewritten tools, as that communicates a benefit to the users of those tools.

> I find Rust allows me to feel far, far more confident in tooling specifically because of its type system.

Usually JS projects become really hard to work on as they grow. Good JS needs a lot of discipline from the team of devs working on it: it gets messy easily and refactoring becomes very hard. Type systems help with that. TypeScript helps, but only so much... Going with a language that both has a sound type system (like Rust) and allows lots of perf improvements (like Rust) becomes an attractive option.


> The desire for stronger duck typing

That's what C++ templates have always been, and they got way, way tighter with concepts in C++20.

Rust's traits are also strong duck typing if you squint a little.

The idea in both cases is simple: write the algorithm first, figure out what can go into it later — which allows you to write the code as if all the parts have the types you need.

But then, have the compiler examine the ducks before the program runs, and if something doesn't quack, the compiler will.
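
A minimal Rust sketch of that idea, with the trait bound as the compile-time duck check (using Display as a stand-in for "can quack"):

    use std::fmt::Display;

    // Write the algorithm first; declare what the duck must be able
    // to do (here: implement Display). The check happens at compile time.
    fn announce<T: Display>(duck: T) {
        println!("the duck says: {duck}");
    }

    fn main() {
        announce(42);      // i32 quacks (implements Display)
        announce("quack"); // &str quacks too
        // announce(vec![1]); // Vec<i32> doesn't implement Display,
        // so the compiler squawks before the program ever runs.
    }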


"Skeptical" doesn't mean completely against their usage. From the author in the comment section:

> “Embarrassingly parallel” tasks definitely make a lot of sense to do in Rust.


I noticed that, and I was left kind of confused. My experience might be limited, but operating on thousands of files and performing multiple reads or writes on them over and over seems exactly like the type of thing a lot of JS tooling does, so I’m not really sure where the author decides that JS is no longer the right fit. I don’t see why that doesn’t dispel skepticism right off of the bat.


As someone who has written a lot of Go and trained a couple of people in it:

The "more difficult" in this quote makes me somewhat angry.

`This breaks down if JavaScript library authors are using languages that are different (and more difficult!) than JavaScript.`

JS is absolutely not easy!

It is not class-oriented but uses funky prototypes, with classes slapped on top PHP-style.

Types are bonkers, so someone bolted on TypeScript.

It has a dual-wield footgun in the form of null/undefined: a repeat of the billion-dollar mistake, but twice!

The whole JavaScript tooling and ecosystem is a giant mess with no fix in sight (hence all the rewrites).

The whole JavaScript ecosystem is ludicrously complicated with lots of opinions on everything.

Tooling is especially bad because you need a VM to run stuff (so lots of rewrites).

This is why Java never got much traction in that space either.

Go for example is way easier to learn than Javascript.

Here I mean to a level of proficiency that goes beyond making some buttons blink or loading a bit of stuff from some database.

Tooling just works. There is no thought to spend on how to format stuff or which tool to use to run things.

And even somewhat difficult (and in my opinion useless) features like classes are absent.

Want to do concurrency? Just do `go whatever()`. Want it to communicate across threads? Use a channel; it makes stuff go from A -> B.

Try this in JS and you have to know concepts like Promises, WebWorkers, and a VM which is not really multithreaded to begin with.


Agree that JS is not easier these days. It only seems easier because we all already know it.


I think you're misunderstanding the billion-dollar mistake. It was the failure to statically track null references. In JS, everything is dynamic so it doesn't apply, and TS does check them.

> My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference

https://en.wikipedia.org/wiki/Tony_Hoare

(Emphasis mine)


This author already has JS Stockholm syndrome and at least acknowledges that up front.


I don't buy the argument that a lot of the performance jump from rewrites comes from developers writing more optimised code.

I've worked on multiple rewrites of existing systems in both JS and PHP to Go and those projects were usually re-written strictly 1:1 (bugs becoming features and all that). It was pretty typical to see an 8-10x performance improvement by just switching language.


Exactly the same experience.

For a smallish batch-processing script I had written in Node, I just fed it to ChatGPT and got the Go version. It went from being unusable with over 100K records to handling 1M on exactly the same machine.

And only then did I start adding things like channels, parallelism, and other smart things.


Now that there's no perfect parity to maintain we've started optimising the Go versions as well. Multiple 2x performance improvements once we started doing this, on top of the original performance improvements. This translates to insane cost savings when you're working at scale.


Yeah, it reminds me about all those old "Haskell can be faster than C!" posts that used to be very popular. Sure, some exquisite, finely-crafted Haskell code can be faster than a plain, dumb, straightforward, boring C code. But if you compare plain, dumb, straightforward, boring Haskell code with plain, dumb, straightforward, boring C code, the latter will be faster pretty much always.


Plus it will be readable by a much larger percentage of working programmers.


I'd rather have a code base I'm going to be working on be in a language I haven't learned yet than have it be in C or C++, if it's of any significant size. Learning a new language is a small thing, all things considered, especially if it's a well-designed one like Haskell.

Spending a week or two getting familiar with the way things are done in a language, and then gradually becoming effective in it and in the specific codebase I'd be working on, would for me at least beat having to work in an environment with 50 years' worth of irreconcilable technical debt inherent to the language.


I agree with you fully, but I've also tried to onboard people to F# and Haskell and... unless you're the self selecting person that enjoys (typed) functional programming, the pushback you get from the other ~95% of developers is extremely strong.

If your stack is FP-ish, and you hire FP-ish developers, it's fine. But having non-FP devs write Haskell? Maybe I've been unlucky, but it's near impossible in my experience.


I think the importance of parallelism has been overlooked by the OP and most commenters here. Even laptops these days have at least 8 cores; a good scalable parallel implementation of a tool will crush the performance of any single-threaded implementation. JS is not a great language for writing scalable parallel code. Rust is. Not only do Rust and its ecosystem (e.g. Rayon) make a lot of parallel idioms easy, Rust's thread safety guarantees let you write shared-memory code without creating a maintenance nightmare. In JS you can't write such code at all.

So yes, you can do clever tricks with ArrayBuffers, and the JS VMs will do incredibly clever optimizations for you, but as long as your code is running on one core you cannot be competitive. (Unless your problem is inherently serial, but very few "tool"-type problems are.)


OTOH in production settings you can run 6-8 copies of your Node app to utilize the 8 physical CPU cores without (much) contention; the JS engine runs additional threads for GC and other housekeeping. Or you can use multiple "web workers", which are VM copies in disguise. The async nature of the JS runtime allows for a lot of extra concurrency (waiting on I/O completion) on one core if your load is I/O-bound, as is the case for most web backends.

The same does not hold for frontend use, as it's for one user, and latency trumps throughput in the perception of being fast. You need great single-thread performance, and an ability to offload stuff to parallel threads where possible, to keep the overall latency low. That's basically the approach of game engines.


When talking about tooling: around five years ago, I introduced parallelism to a Node.js-based build system, in a section that was embarrassingly parallel. It was really painful to implement, made it noticeably harder to maintain, and my vague memory is that on a 4-core/8-thread machine I only got something like a 3–4× speedup. I think workers are mature enough now that it wouldn’t be quite so bad, but it would still be fairly painful.

In Rust, I’d have added Rayon as a dependency to my Cargo.toml, inserted `use rayon::prelude::*;` (or a more specific import, if I preferred) into my file, changed one `.iter()` to `.par_iter()`, and voilà, it’d have compiled (all the types would have satisfied Send) and given probably at least a 6–7× speedup.

Seriously, when you get to talking about a lot of performance tricks and such (I’m thinking things like the bit maps referred to at the end), even when they’re *possible* in JavaScript, they’re frequently—I suspect even normally—way easier to implement in Rust.
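
For anyone who hasn't seen Rayon, the change described above looks roughly like this; the workload here is a hypothetical stand-in, but `par_iter` is the real Rayon API:

    // Cargo.toml: rayon = "1"
    use rayon::prelude::*;

    // Hypothetical stand-in for the real per-item work.
    fn process(n: u64) -> u64 {
        (0..n % 1_000).sum()
    }

    fn main() {
        let inputs: Vec<u64> = (0..1_000_000).collect();

        // Serial: inputs.iter().map(|&n| process(n)).sum()
        // Parallel is the one-word change described above:
        let total: u64 = inputs.par_iter().map(|&n| process(n)).sum();
        println!("{total}");
    }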


As someone who hasn’t used Rust, I found your comment very helpful for understanding the advantages compared to how JS does parallelism.


This is very correct for new development.

I only mean that utilizing extra CPU cores in JS is a bit easier for an API server, with tons of identical parallel requests running, where the question is usually one of RPS and tail latency, than for single-task use cases like parallelized builds.


Rayon abstracts away the parallelism for you. Yes, parallelism is easier in compiled languages with a concept of threads, but your example doesn't do it much justice.

Someone could write "rayon" for webworkers.


> Someone could write "rayon" for webworkers.

Not true. JavaScript’s threading model is entirely insufficient for something like Rayon. At best, you could get either something that only worked with shared byte arrays, or something that was vastly less efficient due to structured clone. Either way, you end up with something far more manual and somewhat slower, or something a little more manual and much slower.

Rayon is a magnificent example of making something that is impossible in scripting languages easy. Of making something that you can only feebly imitate with some difficulty, trivial.


> only feebly imitate with some difficulty, trivial.

My point was the API simplicity, not the technical correctness, which is why my post discussed threading in the first place.

Yes, Rayon isn't possible in JS, but a multi-threaded library with a "rayon"-like API that you can reach for in cases where it makes sense is absolutely doable.


Having thought further about this, I’m going to double down on it, because it’s worse than I was even contemplating. What you describe would be as like Rayon as a propped-up, life-size cardboard cutout of a house is like the house it depicts.

Rayon’s approach lets you write code that will run in arbitrary other threads, inline and wherever you want to. That’s absolutely essential to Rayon’s API, but you can’t do that in JavaScript, at all: workers don’t execute the same code (it’s not based on forking), and interaction between workers is limited to transferable objects, or things that work with structured clone, which excludes functions.

No, you can’t get anything even vaguely like Rayon in JavaScript. You could get a feeble and hobbled imitation with untenable limitations or extra compilation step requirements (and still nasty limitations), and that’s about it.

With Rayon, you can add parallelism to existing code trivially. With JavaScript, the best you can manage, which is nowhere near as powerful or effective even then, requires that you architect your entire program differently, significantly differently in many cases, and in ways that are generally quite a bit harder to maintain.

If you wish to contest this, if you reckon I’ve overlooked something, I’m open to hearing. I’m looking for something along these lines to work:

  import { f1, f2 } from "./f.js";
  let n1 = Math.random();
  let n2 = Math.random();

  await par_iter([1, 2, 3, 4, 5])
      .map(n => f1(n + n1))
      .filter(n => f2(n + n2))
      .collect();
Where the mapping and filtering will be executed in a different worker, and collect() gives you back a Promise<Array>. The fact that f1 and f2 are defined elsewhere is deliberate—if it didn’t close over any variables, you could just stringify the function and recompile it in the worker.


They acknowledge it in the comments:

> “Embarrassingly parallel” tasks definitely make a lot of sense to do in Rust.


The sentiment of the article is 90% right. In all fairness there are opportunities for making tools faster by writing them in faster languages, but these tend to be extreme scenarios, like whether you really need to send 10 million WebSocket messages in the fastest burst possible. Aside from arithmetic operations, JavaScript has been roughly as fast as Java and only 2-4x slower than C++ for several years now, and it compiles almost instantly.

Really though, my entire career has taught me to never ever talk about performance with other developers... especially JavaScript developers or other developers working on the web. Everybody seems to want performance but only within the most narrow confines of their comfort zone, otherwise cowardice is the giant in the room and everything goes off the rails.

The bottom line is that if you want to go faster then you need to step outside your comfort zone, and most developers are hostile to such. For example if you want to drive faster than 20 miles per hour you have to be willing to take some risks. You can easily drive 120 miles per hour, but even the mere mention of increased speed sends most people into anxiety apocalypse chaos.

The reactions about performance from other developers tend to be so absolutely over the top extreme that I switched careers. I just got tired of all the crying from such extremely insecure people who claim to want something when they clearly want something entirely different. You cannot reasonably claim to want to go faster and simultaneously expect an adult to hold your hand the entire way through it.


But build tools in JS land are quite slow. Especially when you start throwing behemoths like Nx into the mix.

Look at build-tool land (esbuild specifically) and you'll see the performance gains from native languages.

For most webservers and UIs it’s plenty fast though.


Much of esbuild's performance gains are in throwing out a lot of cruft. It definitely benefits from the "fresh rewrite can avoid the cruft of an organic project" effect, including specifically benefiting a lot from ESM as the winning end-goal format, and from the hindsight of watching Webpack's massive organic ecosystem of plugins converge on a core set of "best practices" over a lot of versions and ecosystem churn.

esbuild versus webpack performance is never a fair fight. Most of the other behemoths are still "just" webpack configurations plus bundles of plugins. It will take a while for the build tools in that model to settle down/slim down.

(esbuild versus TypeScript for "TypeScript is the only build tool" workflows is a much more interesting fight. esbuild doesn't do type checking, only type stripping, so it is also not a fair fight, and you really most often want both, but "type strip-only" modes in TypeScript are iterating to compete with esbuild in fun ways, so it is also good for the ecosystem to see the fight happening.)

I appreciate esbuild, but I also appreciate that esbuild had the benefit of a lot of hindsight, not developing in the open as an ecosystem of plugins like webpack did, but rather baking the known best practices into one core tool.


> Much of esbuild's performance gains are in throwing out a lot of cruft.

I don’t think there’s a great way to be sure of this. Parcel 2 (my personal favorite), for example, doesn’t include, by default, much of the cruft from mid-2010s JavaScript, but esbuild is still faster.

Theoretically, being able to use multiple cores would bring speed improvements to a lot of the tree manipulation tasks involved in building js projects.

> esbuild versus webpack performance is never a fair fight.

Yeah webpack is just the worst. Bloated from day 1


The build on my last large project took about 12 seconds total. That included installing self-signed certificates into the OS trust store, installing certificates into the browsers in Linux, compiling from TypeScript, via SWC, creating a universal command available from the command path, consolidating JS and CSS assets into single files, and some other things. I bet it could be faster, but I was happy with 12 seconds considering the project was quite large.

More than 90% of performance in JavaScript comes down to:

* comfort with events and callbacks

* avoiding string parsing: queryStrings, innerHTML, and so on

* a solid understanding of transmission and messaging. I wrote my own WebSocket library

None of that, except figuring out your own home-grown WebSocket engine, is complicated, but it takes some trial and error to get it right universally.


It's maybe 90% right in general, but it's 10% right (90% wrong) for the workload of language processors in particular. Lots of tiny objects are a terrible workload for JavaScript and Python.

Related comments: https://news.ycombinator.com/item?id=35045520

Direct comparison I did between Python and C++ semantics - Oil's Parser is 160x to 200x Faster Than It Was 2 Years Ago - https://www.oilshell.org/blog/2020/01/parser-benchmarks.html

This is the same realistic program in both Python and C++ -- no amount of "optimizing Python" is going to get you C++ speed.

---

FWIW I agree with you about the debates -- many people can't seem to hold 2 ideas in their head at once.

Like that C++ unordered_map is atrociously slow, but C++ is a great language for writing hash tables.

And that Python was faster than Go for hash table based workloads when Go first came out, but also Python is slow for AST workloads.

Performance is extremely multi-dimensional, and nuanced, but especially with programming languages people often want to summarize/compress that info in inaccurate ways.



So... about performance. Performance is the difference between two or more measures. When not compared against something, everything is itself fast. Out of a list of 1 item, that one item will always be the 100% fastest item in the list. So what's really important is not whether something is fast, but whether that thing is faster than something else, and by how much.

I suspect Java is fast. JavaScript is also fast. They are both fast. Without comparing measures, the only significant distinction between the two is the time to compile. In that case Java is slow, or at least substantially slower than JavaScript.

Fortunately there are comparative benchmarks: the Computer Language Benchmarks Game. It is not always the best, but it is certainly better than naught.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> Whereas if it’s written in a native language, I’d need to check out the source code and compile it myself – a big barrier to entry.

Is it though? Rust/Zig/Go programs are pretty much all incredibly easy to check out and compile; it's one of the big selling points of those languages. And at the end of the day, how often are JavaScript developers fixing the tooling they use, even when it's written in JavaScript?

I've always felt learning new languages gives me not only new tools to use but also shapes the way I think about solving problems.


It's definitely more of a pain to figure out rustup than to use the Node.js environment that's already installed. As noted in the article, you can quite literally go edit the source of your NPM packages without downloading or compiling a single thing.

Minor speedbumps like installing Rust don't stop me now, and probably don't stop you either, but they might have at the start of my career. You have to think about the marginal developers here: how many people are able to debug the simple thing who would be unable or unwilling to do it for the complicated thing? As you note, it's already quite rare to fix up one's tooling, so we can't afford to lose too many potential contributors.

I like learning new languages too, but not to the extent that I'd choose to debug my toolchain in Zig while under time pressure. This is something I've actually done before, most notably for FontCustom, which was a lovably janky Ruby tool for generating font icons popular about a decade ago.


That’s not objectively true at all. I learned to use Rust long before I ever touched a Node setup, and the first time I wanted to run a JS app it took me a lot longer to figure out how to do it than it did to type `cargo run foo`.

Neither is easier than the other. Whichever one you already know will be easier for you, and that’s it.


Sorry, I think we might be talking across each other. I am saying from the perspective of someone who is already using a full Node.js environment, adding Rust must necessarily increase complexity. I am taking this perspective because in the article we're talking about, the examples are exclusively JavaScript tooling like Rollup, Prettier, and ESLint, where the only people using those tools are JavaScript developers who are already running node.

I have absolutely no interest in getting into a pissing match about whose language and ecosystem is better, and I in fact agree that the Rust tooling is less complicated than JS to start with. Nevertheless, the article is not about choosing either JS or Rust, it's about rewriting tools for working with JS in Rust, which necessarily makes you learn Rust on top of JS if you want to modify them.


That's true of any tools, and most tools that a developer uses (e.g. `grep`) are written in C, C++, or more recently Rust. Or if they want to understand the details of how some part of JS actually works, they'll need to delve into C++. So that requirement already exists.


In theory, yes; in practice, the native libraries and tools that most JS developers used to use were stable, widely used, and well tested. Think pg, hash functions, POSIX basics like cp if they didn't need to build on windows, V8 and node, or the OS itself. Practically speaking, the native libpq is less likely to break than its wrapper, which is in turn less likely to break than a newfangled npm package with relatively few users and low test coverage.

I've dipped into V8 to understand a bug...exactly once. Even then, I didn't have to build it, which is good because building node and V8 from source used to take hours and probably still does. It's just a more stable piece of software, because Google has a very strong incentive to keep it that way.

The thing is, there is no requirement to ever touch lower level languages in order to work as a JS developer. I would hazard a guess that most JavaScript developers don't. If you need to touch C++ in order to do certain things, then most JS developers will choose not to do them. Expanding the number of tools that can't be fixed by most of their own users has downsides.


I agree with you on the easy part, but it’s definitely not as fast. In JS you get instant hot code reload and even without that the interpreter starts up pretty fast. In comparison Rust takes a while to recompile even with simple changes, and if you have more changes in many files (eg. switching between branches) then it’s really slow.


I didn't say compiling was fast, though compiling Go is pretty fast. I also don't think anyone is arguing that tools need to be written in AOT languages; if you or anyone else wants to use JS and JS tools, go for it.

I think having more choices is a good thing, and sometimes rewriting something from scratch will result in a cleaner/better version. The community at large is going to decide which tooling becomes the standard way to do it, so the author should make an argument for why the JS tooling is better instead of weak statements like the one I quoted.


This matters, a lot, for contributing to the tool authored in Rust. But for merely installing and benefiting from it, compile-time is largely a one-time cost that pays returns quite soon.


I wonder if the author is aware that Node.js is not written in JavaScript.


Probably: he is a contributor to Servo


It depends. A lot of it absolutely is, I've been through a ton of that source.

Low-level stuff is mostly C++ to talk to V8 or do system calls, talk to libuv, etc... but even that stuff has a bunch of JS to wrap and abstract and provide a clean DX.


> One reason for my skepticism is that I just don’t think we’ve exhausted all the possibilities of making JavaScript tools faster.

Exhausting all the possibilities is very exhausting; at a fraction of that effort you could build on better foundations.


I recently discovered Rspack, which is a compatible rewrite of Webpack in Rust by a guy at ByteDance. It is genuinely 5x-10x faster across the board on my large/complicated project. I've been using Webpack for 8 years, and I was absolutely blown away to be able to easily swap Webpack out for something so similar (written in Rust) and get such a massive performance improvement. This has made my life so much better.


> For another thing: it’s straightforward to modify JavaScript dependencies locally. I’ve often tweaked something in my local node_modules folder when I’m trying to track down a bug or work on a feature in a library I depend on. Whereas if it’s written in a native language, I’d need to check out the source code and compile it myself – a big barrier to entry.

Anecdotally I have had to do this in js a few times. I have never had to do this in Rust. Probably because Rust projects are likely to ship with fewer bugs.

Also, Rust is harder to pick up, but what are you going to do, use the most accessible tool to solve every problem regardless of its efficacy? I am not a Rust expert by any means, but just reading the Rust book and doing a couple of projects made me a better programmer in my daily-driver languages (JS and Python).

I think speed is less important here than correctness. Every time you ship a buggy library you are wasting the time of every single end user. The correctness alone probably saves more time in total than any performance gains.


> Anecdotally I have had to do this in js a few times. I have never had to do this in Rust. Probably because Rust projects are likely to ship with fewer bugs.

Still anecdotal, but I have worked on a large Rust codebase (Materialize) for six years, worked professionally in JavaScript before that, and I definitely wouldn’t say that Rust projects have fewer bugs than JavaScript projects. Rust projects have plenty of bugs. Just not memory safety bugs—but then you don’t have those in JavaScript either. And with the advent of TypeScript, many JS projects now have all the correctness benefits of using a language with a powerful type system.

We’ve forked dozens of Rust libraries over the years to fix bugs and add missing features. And I know individual Materialize developers have had to patch log lines into our dependencies while debugging locally many a time—no record of that makes it into the commit log, though.


It could be that I just haven't written enough Rust to encounter this issue. Thanks for the insight!


> It’s very forgiving of types

I lost you here. JavaScript doesn't work around type issues; no language really can. It just pushes the type issues to a later time.


Indeed, and later on in the same sentence:

> this is one reason I’m not a huge TypeScript fan

Sorry but TS is the only thing making the JS ecosystem palatable. So many bugs caught thanks to typing...


I have sympathy with this viewpoint... but only to a certain limit. I've optimized JS codebases before now and ended up reaching for typed arrays, ArrayBuffers, and even workers to get things running in parallel, and yeah, it's possible. But I'd much rather just be doing it in Rust or a similar language. And now that WASM is a realistic possibility, I can.


The interesting part is that often you don't even need to parallelize when using a language like Go or Rust to see a speedup when compared to JS. You could start by writing everything super simple and straightforward, completely sync and already see a multiplier on the speed just due to the language being that much faster.


I was recently watching a talk about uv by one of the developers. One thing he said has really stayed with me after the talk - having tooling that works near-instantaneously unlocks a whole new category of experiences. Inspired by that, I gave Bun a try to see how modern front-end development might look without all the cruft, and it's just insane how much complexity you can shave off that way. For example, I would never have imagined just bundling your whole project in a server handler. But it just works, and if not for the lack of client auto-refresh when files change (backend rebuilds just fine), I wouldn't even be able to notice the difference between that and a dev server.


> a rewrite is often faster just because it’s a rewrite – you know more the second time around

I think people often overlook this factor when doing rewrites and making big claims about the results.

Chances are if you’d done the rewrite in the same language you’d get similar results.

I don’t know if it’d be possible to empirically prove that. I’ve only seen it happen a few times.


It’s like saying you’ll rewrite Romeo and Juliet because you already know they die at the end. It’s a little more nuanced than that!


Yes, I agree it is nuanced.

I have a particular stereotypical programmer in mind. The one that rewrites their entire program in X, because it's fast. Not because they understand the data dependencies and run-time performance characteristics of their program.

Typically these folks misattribute the performance gains they experience in such projects to the language itself rather than the tacit knowledge they have of the original program.


At the end of the day switching from an interpreted language to a natively compiled language will result in a faster program. Of course there are performance gains to be had refactoring with a deeper understanding of the problem. That might be enough in many cases, but if the primary goal is speed the language cannot be ruled out.


The big issue here is debuggability, which comes from having all your dependencies in the same language. And it's not even like these rewrites will all be in the same performant language for you to learn. So if you are using a wasm-compiled dependency, you are not likely to be able to go into that dependency's code and figure out where the library author has messed up or what you have misunderstood from the documentation.


The solution to that is dependencies that work


Does the dependency that always works and has no bugs also come with a free rainbow and unicorn?


Dependencies should be small and focused. When scope is limited, it's not rocket science to write implementations that have very few bugs.


I sure hope so - otherwise how would I know the dependency I installed was going to always work?


Ask the unicorn.


exactly, which is why I was hoping I would get one?


We used to rewrite Python code in C++ "for performance". We stopped when the equivalent rewritten version appeared 3 times slower than the original.

The very notion of "fast and slow languages" is nonsense. A language is just an interface for a compiler, translator, or interpreter of some sort. A language is only a steering wheel and pedals, not the whole car, so the whole argument about which one is faster is stupid.

In our case, AOT compilation backfired. We had to support older architectures (contractually), and our Eigen build with meager SSE2 support couldn't possibly outrun Numpy built with AVX-512.

So we stopped rewriting. And then Numba (built on the same LLVM as clang) came along. And then not one but several AOT Python compilers. And now a JIT compiler is in standard Python.


And Numpy can't come anywhere close to the performance of my C++ code. Last time I benchmarked my CPU matrix multiplication algorithm, it went 27x faster than Numpy. Mostly because Numpy only used a single core. But this was a machine with eight cores. So my code went at least 3x faster per core. Moral of the story: C++ isn't something you can just foray into whenever Python is slow. It's the most complicated language there is, and the language itself is really just the tip of the iceberg of what we mean when we talk about C++. Do not underestimate the amount of devotion it takes to get results out of C++ that are better than what high-level libraries like Numpy can already provide you. https://justine.lol/c.jpg


Yes but the amount of devotion is orthogonal to the language per se.

You can use Numba that uses the same LLVM clang does and write all the computation kernels yourself instead of using what Numpy provides. The only difference there would be JIT vs AOT compilation.

Or you can use Codon, that uses the same LLVM clang does and then there will be no difference at all.

Language is just an interface for a compiler.


Yeah that's what I was getting at with the tip of the iceberg thing.

You can of course choose a different tip for your iceberg.


It is nonsense that numpy can't use multiple cores. Matrix multiplication in numpy is largely C/Fortran code (BLAS, etc.) where the GIL can be released.

https://stackoverflow.com/questions/75029322/does-numpy-use-...


I’m continually surprised at JavaScript’s speed. Seeing JS sometimes nipping at the heels of C/rust/etc in performance benchmarks blows me away. V8 is such an incredible piece of engineering.

In my work, it’s hard to justify using something other than JS/TS — incredible type system, fast, unified code base for server/mobile/web/desktop, world’s biggest package ecosystem for anything you need, biggest hiring pool from being the best known language, etc.

It’s just such a joy to work with, ime. Full-stack JS has been such a superpower for me, especially on smaller teams.

The dissonance between how the silent majority feels about JS (see, e.g the SO yearly survey), vs the animus it receives on platforms like HN is sad. So here’s my attempt at bringing a little positivity and appreciation to the comments haha.


I want more people to understand how much work is going into JS performance. Google, Apple, Microsoft, Mozilla - all have an incentive to make it as fast as possible.

Too bad JS is not the best candidate for many optimizations.

I wonder if we'll get to the point of having a compiled version of JS that allows more static optimizations to be done.

WebAssembly might occupy that niche if it gets a nice standardized runtime.



It's important to recognize how seldom this pans out in practice.

I've rewritten around 10 small Node servers in Go, Java, and C#, and they've always been >10x faster without changing algorithms.

Even in the few cases where dynamic languages catch up, they're often written in an unidiomatic style (read: optimized) and compete with unoptimized/naive C/C++.


This is often overlooked but is actually a big deal:

> ...it’s straightforward to modify JavaScript dependencies locally. I’ve often tweaked something in my local node_modules folder when I’m trying to track down a bug or work on a feature in a library I depend on. Whereas if it’s written in a native language, I’d need to check out the source code and compile it myself – a big barrier to entry.

I too often find myself inserting `console.log` inside node_modules to figure out why the toolchain doesn't work as I'm expecting it to. It has gotten me out of some very nasty situations when StackOverflow/Google/GPT didn't help at all.

Had it been written in Rust, I wouldn't have had a chance.


> I should also acknowledge: there is a perf hit from using Wasm versus pure-native tools. So this could be another reason native tools are taking the CLI world by storm, but not necessarily the browser frontend.

I didn't know about this before, I wonder how much overhead?

The reason I am reluctant to rely on JS tools for anything CLI is Node.js instability due to version sensitivity, and impossible-to-fix-without-reinstalling-the-OS low-level libc errors.

Compared to Go, Rust, or Python, the odds that any given CLI.js program will run across my (small) fleet of machines are very low, by a factor of 10x or more compared to the alternatives. Some boxes I don't want to reinstall from scratch every 4 years; they're not public facing and life is too short.


wasm itself is a bit slower than code compiled for a particular CPU. However, there is significant overhead when it comes to the browser. This is because, for now, wasm has to go through JavaScript to talk to the browser. The performance gain/loss will depend on the type of operations you are doing. There are plans, however, for wasm to have direct access.


Deno is fixing this with their standard library and JSR.


Sure, but ESBuild is here, it works, and the subjective speed improvement is just fucking massive.

I'm sure you could get something with similar performance in JS. I've messed around with JS daemons, so that you don't care about startup time for programs like tsc and whatnot. The problem is that it's just a pain in the ass to get any of this to work, whereas ESBuild is just fast.

Maybe these problems with JS will get solved at some point, because we haven’t exhausted all of the possibilities for making JS faster (like the author says). However, when you write the tools in Rust or Go or whatever, you get a fast tool without trying very hard.


One part I took issue with is “elite priesthood of Rust and Zig developers”... I love Rust and hope everyone working on interpreted / high-level / “easy” languages knows Rust is accessible and doable for most developers, especially in sync land.

You can benefit from 1000x (!) speed ups just rewriting sync Python in sync Rust, in my measured experience, because the compiler helps exponentially more the more abstract your code is, and Rust can absolutely do high level systems.

The main blocker is when you’re missing some library because it doesn’t exist in Rust, but that’s almost always a big opportunity for open source innovation


> exponentially more

No.

The word means something.

It's bad enough when it gets misused colloquially e.g. by folks on Twitter and clueless podcasters trying to spice up their talking points, but in a thread like this one, it has no place getting dropped into the discussion except if talking about something that actually fits an exponential curve.


Most of the Python code that would benefit from going faster is already written to use libraries written in C anyway. I doubt you would get 1000x out of them like that (I'd be happy to be proven wrong if you have examples though).


I kind of see the points he's making; however, I think there's something subtle here that's worth talking about:

> Rather than empowering the next generation of web developers to achieve more, we might be training them for a career of learned helplessness. Imagine what it will feel like for the average junior developer to face a segfault rather than a familiar JavaScript Error.

I feel this slightly misses the point. We should be making sure that the next generation of software engineers have a solid grounding in programming machines that aren't just Google's V8 JavaScript engine, so that they are empowered to do more and make better software.

We should be pushing people to be more than just Chrome developers.

Also, while I understand what the author is getting at, referring to lower-level developers as demigods is a little unhelpful. As someone who switched careers from high-level languages to being a C++ engineer, I can attest to the fact that this stuff is learnable if you are willing to put the time and effort into learning it. It's not magic knowledge. It just takes time to learn.


In my experience with Node, these "familiar JavaScript errors" were extremely cryptic and had little to do with the actual issue most of the time. And talking about segfaults is pure FUD - it's not as if a build tool written in another language will throw a segfault at you if your JS code is broken. And if it does, you should file a bug report - same as you would for Node (maybe some JS developers have the knowledge to hunt for bugs in JS-written build tools, but I doubt even those have a particular desire to actually do it).


Segfaults are the easy bugs! You get a handy stack trace right where the problem is, all the variables intact; it's great! It's the ones where you mess up a data structure, or overwrite an array, that kill you, with the symptom occurring long after the problem was caused. Much like React, in fact, where problems in React usage don't get reported until the event loop, long after your function creating your component has finished executing. So maybe those developers will be right at home after all.


Don't bother telling me you're rewriting something for performance if you haven't profiled the existing solution and optimized based on that.


The biggest reason to be skeptical is that these tools are not open to extension in the same way that JavaScript is.

Webpack has an enormous community of third-party plugins, it would be very hard to do something similar with e.g. Go or Zig.


Right, because tooling is standardized in eg Go. There’s no custom build pipeline, transpilation hell, or experimental language features that are selectively enabled randomly. I’m not even against JS, like at all, and I think the majority of perf issues can be resolved. However, JS tooling is the prime example of where things get truly nightmarish from a software development perspective. Webpack being a perfect example of this horror.


You can ship a 20MB Go program and no one blinks.

Go programs start at 20MB. The Go AWS Terraform provider is something like 300MB.

A massive amount of the complexity/difficulty in the webdev build-tools space has to do with optimizing delivery sizes on the web platform.

Node.js tooling is straightforward comparatively.


To be clear: server-side Node.js tooling is relatively simple. It's the web tooling (Webpack, etc.) that is complicated.


It's also a breeding ground where the best ideas often end up becoming a sort of standard, not only for JavaScript devs but for other languages as well.


Well, I'm curious as to what some of these ideas might be.

NPM has done a pretty great job of showing everyone else what to avoid doing.

The mere mention of “web pack” sends most of the FE devs I’ve met into borderline trauma flashbacks.

There are seemingly half a dozen package management tools, some of which also seem to be compilers? There are also bundlers, but again some of these seem integrated. Half of the frameworks seem to ship their own tools?


This is funny to me. Go and Zig are built with the Unix shell in mind - the most extensible and modular system around.

The webpack ecosystem, on the other hand, is its own OS.


Maybe for some the appeal of JS is in (hopefully) never having to learn Unix?

I’ve heard several folks say that about Kubernetes, but in my experience the *nix core always resurfaces the second things get weird.


That certainly can be a benefit. But as we see here it also limits your thinking to that ecosystem.


To the article's point, many/most JavaScript projects are not optimised and better performance can be achieved with just JavaScript, and yes, JavaScript engines are becoming faster. However, no matter how much faster JavaScript gets, you can still always get faster with systems languages.

I work on high-performance stuff as a C++ engineer, currently working on an ultra fast JSON Schema validator. We are benchmarking against AJV (https://ajv.js.org), a project with a TON of VERY crazy optimisations to squeeze out as much performance as possible (REALLY well optimised compared to other JavaScript libraries), and we still get numbers like 200x faster than it with C++.


JavaScript is a terrific language: more ubiquitous than BASIC ever was; nearly as easy to learn and use as Python; syntax that is close to Java/C/C++. And it only uses 10x the CPU and memory of C or C++.


Running a blank .js file in Node took 66 milliseconds. An optimized binary I wrote in Rust takes 2 milliseconds to execute. So I think there's a cap there on how fast JavaScript tools can be.


This is addressed in the article. Node doesn't cache any JS compilation data by default. There's an environment flag to turn that on, and the startup time drastically reduces with it on (on the second+ run of the file).

Also both Deno and Bun have more optimized startup times in general by default, some of that startup time is just Node, not a reflection of the language itself.


Which env flag are you referring to? "--experimental-vm-modules"? Or maybe "--experimental-policy"?


`export NODE_COMPILE_CACHE=~/.cache/nodejs-compile-cache`


Had no idea about this, thanks!


This post doesn't make any sense. If a rewrite in another lang being faster were just due to the rewrite, then surely someone would have attempted a rewrite in JavaScript and come out just as fast. So far there hasn't been any case of that happening.


> I just don’t think we’ve exhausted all the possibilities of making JavaScript tools faster

Rewriting in more performant languages spares you from the pain of optimization. These tools written in Rust are somehow 100× as fast despite not being optimized at all.

JavaScript is so slow that you have to optimize stuff, with Rust (and other performant languages) you don't even need to because performance just doesn't bubble up as a problem at all, letting you focus on building the actual tool.


I think there’s a lot of bias in the samples one tends to see:

- you’re less likely to hear about a failed rewrite

- rewrites often gain from having a much better understanding of the problem/requirements than the existing solution which was likely developed more incrementally

- if you know you will care about performance a lot, you hopefully will think about how to architect things in a way that is capable of achieving good performance. (Non-cpu example: if you are gluing streams of data with processing steps together, you may not think much about buffering; if you know you will care about throughput, you will probably have to think about batching and maybe also some kind of fan-out->map->fan-in; if you know you will care about latency you will probably think about each extra hop or batch-building step)

- hopefully people do a bit of napkin math to decide if rewriting something to be faster will achieve the goals, and so you only see the rewrites that people thought would be beneficial (because eg you’re touching a lot of memory so a better memory layout could help)

I feel like you’re much more likely to see ‘we found some JavaScript that was too useful for its own good, figured out how to rewrite it with better algorithms/data structures, concurrency, and SIMD instructions, which we used Rust to get’ than ‘our service receives one request, sends 10 requests to 5 different services, collects the results and responds; we rewrote it in Rust but the performance is the same because it turns out most of what our service did was waiting’.


Only semi-relevant, but there's also the fact that lower level languages can auto-optimize more deeply -- but that's also more my intuition (would love to get learnt if I'm wrong).

For example, I'd expect that Rust (or rustc I guess) can auto-vectorize more than Node/Deno/etc.


Ahead of Time, perhaps. (Of course the benefit of AOT is that you can take all the time in the world and only slow down the developer cycle without impacting users. In theory you can always build a slower AOT compiler with more optimizations, even/especially for higher level languages like JS. You can almost always trade off more build time and built executable size for more runtime optimizations. High level languages can almost always use Profiler Guided Optimization to do most things low level languages use low level data type optimization paths for.)

A benefit to a good JIT, though, is that you can converge to such optimizations over time based on practical usage information. You trade off less optimized startup paths for Profiler Guided Optimization on the live running application, in real time based on real data structures.

JS has some incredible JITs, very well optimized for browser tab life-cycles. They can eventually optimize things at a low level far further than you might expect. The “eventually” of a JIT is of course the rough trade-off, but this too is well optimized for much of the browser tab life-cycle: you generally have an interesting balance of short-lived tabs where performance isn't critical and download size is worth minimizing, versus tabs that are short-lived but that you return to often and can cache compiled output for, so each new visit is slightly faster than the last, versus a few long-lived tabs where performance matters and which generally have plenty of time to run and optimize.

This is why Node/Deno/et al excel in long-running server applications/services (including `--watch` modes), while "one-off"/"single run" build tools can be a bit of a worst case: they may not give the JIT enough time or warning to fully optimize things, especially when they start with no previous compilation cache every time. (The article points out that this is something you can turn on now in Node.)


Javascript should introduce integers and structs and it will have 10-100x the performance it has today without spending another $100 billion on VM optimization.
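For illustration, the closest you can get today is packing would-be struct fields into typed arrays, so the engine sees real int32 storage instead of tagged heap values. A minimal sketch (the Point layout here is made up):

    // Struct-of-arrays stand-in for a hypothetical Point { x: i32, y: i32 }
    const capacity = 1024;
    const xs = new Int32Array(capacity);
    const ys = new Int32Array(capacity);

    function movePoint(i: number, dx: number, dy: number): void {
      xs[i] += dx; // Int32Array cells hold real 32-bit ints, no boxing
      ys[i] += dy;
    }

    movePoint(0, 3, 4);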



I really don't think the problem is JavaScript in all these cases. I've seen codebases using webpack where the JS was being run through babel twice in a row, because webpack is a complicated nuisance and nobody on the team had gotten around to fixing it. You can't blame that on V8 or node being slow.


It's also fascinating how many developers got so burnt on IE11-era compatibility issues that they still feel a need to use Babel as a comfort blanket. Babel does very little now with reasonable, up-to-date Browserslist defaults (but still takes a lot of time to do that very little), yet the number of developers unwilling to drop Babel from their build pipelines is still surprisingly high to me. Babel was a great tool for what it did in the "IE11 is still an allowed browser" era, but you most probably don't really need it today.


Being a statically-typed compiled language has its perks (especially when doing systems programming). Regardless, JS runtimes can and will push forward (like JVM / ART did), given there's healthy competition for both v8 & Node.


JavaScript, Python, Lua, I don't see any dynamic language with good performance. Do you have examples?


Javascript is screamingly fast compared to the vast majority of other dynamic languages (scripting type, not something like Objective C). This is with the V8 engine of course. I’m not sure where you’re getting that it’s slow?



"Good" compared to what? All the mentioned languages keep getting more performant year-over-year, but in the medium future scripting languages are unlikely to reach the performance levels of C, Rust or other low-level languages.

Wouldn't it be amazing though? Maybe some combination of JIT and runtime static analysis could do it.

Personally, I never assign different types to the same variable unless it's part of a union (e.g. string | HTMLObject | null, in JS).

It would probably require getting rid of `eval' though, which I am fine with. On average, eval() tends to be naughty and those needs could be better met in other ways than blindly executing a string.
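For example, the most common eval use case, turning a string into data, already has a safer replacement. A tiny sketch:

    const payload = '{"x": 1, "y": 2}';
    // const point = eval("(" + payload + ")"); // executes arbitrary code
    const point = JSON.parse(payload);          // parses data, nothing else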


Lua with LuaJIT has pretty good performance. With that said, I spent today writing in C++, so I do agree with the overall sentiment.


Then normal Lua by itself would probably be fastest for you. Nothing makes me happier than writing native extensions for Lua. Its C API is such a pleasure to work with. With Python and JavaScript, writing native extensions is the worst most dismal toil imaginable, but with Lua it's like eating dessert. So if you know both Lua and C++ really well, then you're going to be a meat eating super programmer who builds things that go really fast.


I prefer using Python with pybind11. It makes writing new modules or embedding the whole interpreter quite simple.


Common Lisp (SBCL)?


Don't forget Scheme (Gambit, Chez, Racket).


That's true, though I'd argue these are not as dynamic.


Someday soon I hope webasm gets another decent compiled language targeted for JS speedups. Something interoperable with JS.

For analogies, look no further than ASM in the early days and the motivations that brought us C, but with the lessons learned as well.

Rust is fine for this, except for interoperability.


It looks like Rust can interop with JS via WebASM?

https://stackoverflow.com/questions/65000209/how-to-call-rus...
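The JS side is just the standard WebAssembly API. A minimal sketch (add.wasm and its exported add function are hypothetical, e.g. built from Rust with wasm-bindgen):

    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("add.wasm")
    );
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5, assuming an i32 add export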


Of course, that's why we're discussing it. I'm referring to stronger interop, like how you can embed ASM in C, and therefore C++.

Like a JS/TS that can have compiled blocks specified in the same language, preferably inline? I'm reaching here.


But why use JavaScript at all if you can just compile it all to WebAssembly?


Because I have better use of my time than waiting for 2 minute compile times on every little change. And believe it or not - I actually like JS with object destructuring, Promises and async/await.


>JavaScript is so slow that you have to optimize stuff

This raises the question, is JavaScript more prone to premature optimization?


Well, can we really call it premature optimization if it's needed?


Reminds me of using Ruby ten years ago and having to contend with folks who wanted to default to one string literal style over another because it was known to be more performant at scale. That awkward stuff surfaces earlier with some languages than with others.


I guess if folks write JS with the idea in mind that optimisations are needed, then the chances of premature optimisation may go up along with those of the required kind.


IMO the biggest win when Phoenix switched to esbuild wasn't about _speed_ exactly, it was about not having to install & debug things like node-gyp just to get basic asset bundling going.


> For another thing: it’s straightforward to modify JavaScript dependencies locally. I’ve often tweaked something in my local node_modules folder when I’m trying to track down a bug or work on a feature in a library I depend on. Whereas if it’s written in a native language, I’d need to check out the source code and compile it myself – a big barrier to entry.

Yeah, JavaScript is sloppy, but you can always monkey-patch it by modifying tool-controlled files. Great idea. Not.

JS is just not a good language. The JIT and the web of packages made it slightly more usable, but it's still Not Good. There's no real way to do real parallel processing, async/await are hellish to debug, etc.

It's unavoidable in browsers, but we _can_ avoid using it for tools. Look at Python: a native pip replacement improved build times for HomeAssistant by an order of magnitude: https://developers.home-assistant.io/blog/2024/04/03/build-i...


> Whereas if it’s written in a native language, I’d need to check out the source code and compile it myself – a big barrier to entry.

Or you could use the source code already downloaded by a package manager and do similar tweaks locally with the build manager picking them up and compiling for you


I also love JavaScript.

It's true, it has some really bad parts but you can avoid them.

If I could design the perfect language for myself, it would have the syntax of JavaScript and the portability of JavaScript but it would use Python's strong duck typing approach.


What have static type systems ever done to you, that you avoid them so much?


Not the OP, but the appeal of languages like JS has a lot to do with developer productivity. I write gobs of JS and Python code and the finished programs and libraries can be strongly and statically typed end-to-end. I just don't want to be forced to do it in cases when it doesn't really make a difference, and I don't want to waste time on it when I'm still figuring out the design.

My hope is one of the Next Big Things in programming languages is the widespread adoption of incremental typing systems.

So during the early stages of dev you get the productivity benefits of dynamic and loose/duck typing as much as you want, and then as the code matures - as the design firms up - you begin layering in the type information on different parts of the program (and hopefully the toolset gives you a jump start by suggesting a lot of this type info for you, or maybe you specify it only in places where the type info can't be deduced).

Then those parts of the program (and hopefully eventually the entire program) are strongly and statically typed, and you get all of the associated goodies.


most static type systems are verbose, probably due to linguistic verbosity, so one obvious thing static type systems have done to a lot of people is give them pain from typing so much.


I don't feel it's so much typing. Especially for the clarity and, most importantly, safety and correctness I get back. I'd rather type 3 1/2 seconds more than debug a dumb type issue for half an hour.

It gets really old to get something like "NoneType does not have blah" in a deeply nested, complicated data structure in python, but obviously only at runtime and only in that hard to hit corner case, when all you did is forget to wrap something in the right number of square brackets in some other part of the code.

I haven't fully given up on Python, but anymore I only deal with it through mypy, which adds static typing.


A bit of extra verbosity as added by static typing can also be immensely helpful for trawling through and/or learning an unfamiliar codebase, especially in the absence of an IDE or debugging environment (e.g. browsing code on GitHub or in a filemanager).

For instance, take function definitions. By just adding types to the function's arguments, you're potentially saving the reader a ton of time and mental overhead, since they don't have to chase down the chain of function calls to figure out what exactly is (or is supposed to be) getting passed in.
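A minimal sketch of the difference (the render function is made up):

    // Untyped: the reader has to chase call sites to learn what `opts` holds.
    // function render(el, opts) { ... }

    // Typed: the signature itself documents the contract.
    interface RenderOptions {
      width: number;
      height: number;
      title?: string;
    }

    function render(el: HTMLElement, opts: RenderOptions): void {
      /* ... */
    }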


Not sure what languages you are thinking of with "most static type systems", but in languages like TypeScript or Rust (and I guess modern Java/C#, haven't touched those in a while), most of the types are inferred by the system such that you don't need to write them. You type your function arguments (and return values in Rust) and that's about it.
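A small sketch of how far TypeScript's inference goes:

    // Only the function boundary is annotated; everything else is inferred.
    function total(prices: number[]): number {
      return prices.reduce((sum, p) => sum + p, 0); // sum, p inferred as number
    }

    const cart = [9.99, 4.5]; // inferred as number[]
    const owed = total(cart); // inferred as number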


OK, I was thinking of Java / C#, which I haven't touched in a while either, and they were verbose. TypeScript may be able to infer types, but every place I've used it we write everything out, and as such there is quite a lot of extra declaring of things that could be inferred. That may be cultural, but it seems pretty ingrained.


When it comes to static typing itself, I only have a minor philosophical objection to it, and it's subtle enough that it wouldn't (on its own) prevent me from embracing it. The issue I have is that it generally doesn't align with my coding philosophy, which revolves around simple function/method interfaces.

My focus is message-passing, as opposed to instance-passing. Passing around instances can lead to 'spooky action at a distance' if multiple parts of the code hold on to a reference of the same instance, so I avoid it as much as possible.

The main advantage of static typing is that it helps you to safely pass around complex instances, which I happen to avoid doing anyway. So while I don't see static typing as inherently harmful, it offers me diminishing returns as my coding style improves.

In JavaScript land though, TypeScript forces me to add a transpilation step, which forces bundling of my code and adds complexity which causes a range of really annoying problems in various situations. As people like DHH (creator of Ruby on Rails) have shown, we have the opportunity to move away from bundling and it yields a lot of benefits... but it's not possible to do with TypeScript in its current form.

It's particularly difficult for me because I actually like the syntax of TypeScript and its concept of interfaces. Interfaces can be consistent with the idea of passing simple objects which serve as structured messages between functions/components, rather than live instances instantiated from a class. I can treat the object as named parameters and not hold on to it.
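Roughly this kind of thing, sketched with a made-up message type:

    interface MoveMessage {
      type: "move";
      x: number;
      y: number;
    }

    function handle(msg: MoveMessage): void {
      // msg is plain structured data; no live instance is shared or retained
      console.log(`moving to ${msg.x},${msg.y}`);
    }

    handle({ type: "move", x: 10, y: 20 });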


JavaScript isn't really all that portable? Heck just making it run on the different JS engines and runtimes is a big pain sometimes


I can't even figure out how to write typescript that conditionally uses browser-only or node-only libraries depending on which environment it's in. My current best guess is to write 2 completely independent typescript projects that happen to point to the same source files?

Let me cross-compile a C++ project any day ...
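The workaround I've usually seen is runtime detection plus dynamic import, roughly like this (module names are hypothetical, and the `process` global assumes @types/node):

    const isNode =
      typeof process !== "undefined" && process.versions?.node != null;

    // Each file is free to use node-only or browser-only APIs internally.
    const { readData } = isNode
      ? await import("./read-data.node.js")
      : await import("./read-data.browser.js");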


I'm not sure if JavaScript supports it, but some Python libraries allow you to choose whether to install a more optimized binary version or the pure Python implementation.

For example, if you install psycopg you'll get a pure Python implementation which is easy to debug and hack. But you can also install psycopg[binary] to obtain a faster, compiled version of the library. https://www.psycopg.org/psycopg3/docs/basic/install.html


That typically means two totally different implementations, and pure Python versions are often unusably slow, so it doesn't help much to hack that.


This is a huge boon for Python.

I would blame numpy for Python's popularity today. Writing code as fast as C or Fortran in Python is awesome (and keeps me employed).


With WebAssembly getting better both on the client side and the server side, when WebAssembly achieves better performance than JavaScript, and when with WebAssembly you have the opportunity to use almost any other language, why would you use JavaScript?


Because it has terrific data notation, pleasant to use with Promises + async/await, has a ton of modern QoL improvements and slapping TS on top of it automagically turns it into one of the most modern and advanced languages out there with adequate performance.

Add to that amazing tooling with hot reload (bye bye 2-5 minutes compile times), billions of investments from Big tech to make it better and faster, ability to reuse same code between mobile/backend/frontend, integration into browser and you’ll quickly find that JS literally has no rival.


Except after billions of investments from Big Tech, existing JavaScript applications get double the performance by switching to WebAssembly:

https://web.dev/case-studies/google-sheets-wasmgc

And with the advent of WebAssembly, any language integrates with the browser.

So why am I using JavaScript again?


Oh God, a narrow use case within Sheets that fits WASM perfectly is only twice as fast as JS, JS in shambles. Why won’t they rewrite everything in WASM?

I’m not sure what you were aiming for here, but you’ve only reinforced my view that JS is amazing, if rewriting a calculation worker yields only a 2x improvement.

> So why am I using JavaScript again?

Re-read my comment, it’s all there.


A spreadsheet is a narrow use case? Excel won't die: https://www.economist.com/business/2024/10/15/why-microsoft-...

Try not to worry about it. Welcome your WebAssembly overlords and be happy.


> A spreadsheet is a narrow use case?

No, it’s not a narrow use case. I wake up my phone to spreadsheet calculation, I open HN - a little bit more spreadsheet, my kettle heats water via power of spreadsheet algorithm. Amazon purchase? Only via spreadsheets.


On the server, you wouldn't choose wasm, you'd choose a language that compiles to wasm. Which is really just saying "choose another language": there's no point in compiling (e.g.) Rust to wasm just to run it with Node on the server.


You'd choose wasm on the server if you're using a framework that supports it. For example:

https://blog.nginx.org/blog/server-side-webassembly-nginx-un...

https://github.com/WebAssembly/wasi-http

Write in any language, compile to WebAssembly, have it run on the server no matter what the server's CPU architecture, achieve better performance with high compatibility.


Why in the world would you compile to wasm and make your code slower instead of just compiling to native code? To your example: you don't need nginx to talk to wasm, it already talks to your code running on a local socket.

The only reason for wasm is portability. If you can't compile your code for the server you're going to be running it on, then the original argument of choosing wasm over JavaScript is already moot.


> instead of just compiling to native code?

Because the managed service you signed up to doesn't offer that to you. They like WebAssembly modules because it's a sandboxed runtime.


It's an awfully silly choice to use a managed service that only offers JavaScript and wasm if you don't want to use JavaScript and you care enough about performance that you're willing to accept the fairly meager benefits of wasm over JS. The real reason you'd choose wasm in this case is "I don't want to use JavaScript and I'm otherwise forced to use this service".


Or the price is right.

Come on down! Use WebAssembly. You'll love it.


Use Plinko https://github.com/jart/cosmopolitan/tree/master/tool/plinko

Back in the 1980s it was my greatest ambition to go on The Price Is Right and play Plinko. However all I could accomplish was making this cursed programming language instead. You'll love it.


A quick skim suggests that this framework re-initializes the sandbox on each request (so there's no shared context across requests). That's not going to achieve better performance.


Phew! Thank goodness you skimmed it! For a minute there I thought the NGINX developers knew something you didn't.


I'm basing my statement on what they wrote:

> The WebAssembly sandbox’s linear memory is initialized with the HTTP context of the current request and the finalized response is sent back to the router for transmission to the client.

They can feel free to clarify that multiple requests can concurrently use a shared context as well if that's true. Or if that's not true, then the thing will of course be slow assuming it needs to do some kind of IO like a database request.

Note that major FaaS implementations like AWS Lambda don't let you have concurrent requests that share context, so it's not exactly crazy to think this wouldn't either.


i share some of this sentiment as well and i think a lot of my hesitance is that these solutions seem born of the popularity of rust. we have had c and c++ for as long as javascript has been a full-stack workhorse. is it just that the barrier of entry / novelty of rust has prompted longtime js devs to make the leap into building tooling? along with it, it seems the "new framework every week" jab at javascript can be applied to the build system as well. in any case, i welcome the speed improvements and this certainly does not preclude me from using these new tools where i'm able.


Yes. The somewhat nice property of Rust having guaranteed memory safety has been blown out of proportion so much that, even though C++ with smart pointers and a bit of bounds checking is quite likely not to have memory safety issues, the community has decided that anything less than a guarantee means the language is unfit for any purpose and no new projects should ever be started in it. As if Java/JS/C# don't have null reference exceptions occurring all the time, and to me those seem quite similar to segfaults. But I guess people are only specifically allergic to memory unsafety.


If you want to steel man _for_ writing in Rust or Zig or Go, previous discussion here: https://news.ycombinator.com/item?id=35043720


Honestly if you’ve used esbuild (you have if you use Vite) once, you cannot legitimately be skeptical of rewriting JS tools in faster languages. The difference is so huge it’s not even funny.


Seems to me that the article and many of the comments conflate JS with Node. Personally I abhor Node and work with both Bun and Deno. In both cases avoiding the Node compatible bits.


Yeah, I agree. I think there is a time when rewriting in a faster language is useful (just like how handcrafted assembly is still a thing), but most of the time you are very far away from the point where that is necessary.

I also think there is an element of "rewrite in rust" just being easy to say, whereas changing data structures or whatever requires analysis of the problem at hand.


It is a discussion analogous to C vs Rust. Sure, Rust is memory safe, but the whole ecosystem I am using today is C based: compiler, SDK, drivers, RTOS, ... Nobody sane is going to rewrite it for the sake of rewriting it in a different language.


I would disagree with that comparison. Rust really does provide an improvement in memory safety that is hard to achieve by other means. That's not to say you should always rewrite in Rust; there are plenty of situations where that doesn't make sense. However, it's not analogous to the performance situation, in my opinion.


Sometimes it seems that people who write these kinds of pieces forget that not everyone in the world does web or even web-adjacent work, and node.js is something we don’t even consider to be part of our ecosystem. Rewriting useful things in non-JS has the benefit of letting folks like me who avoid JS like the plague use useful tools. Stop assuming everyone wants to get anywhere near the JS ecosystem: I’ve gone 30 years without touching it, and plan to continue that streak. Rewriting stuff is great from my perspective.


> Stop assuming everyone wants to get anywhere near the JS ecosystem

I have been dragged, through straight misrepresentation, into the Node.js world.

OMG, awful hardly begins to touch it.

I have not used Go, but as far as I can tell every thing the Node.js people do is done better in Go.

I do not recommend Rust. I have a lot of experience with Rust, and unless you actually need the real-time responsiveness, it will bog you down.


> Stop assuming everyone wants to get anywhere near the JS ecosystem

The author is writing about JS ecosystem tools.


I can partly understand the poster above: Some people (including me) for example want and have to use JavaScript, but simply don't want to get dragged into that whole node.js/npm ecosystem for various reasons.

I avoid any tool which forces me to pull in a gazillion npm packages, while I gladly use esbuild for example because it looks and feels like a nice little compact tool.


I feel the same way about python, but I don’t blame python authors for using the python ecosystem to write python tools when I am forced to use a little python. I consider python to be an essential part of any build system since it’s used in so many places, as much as I don’t like it.

Maybe the problem people have is that node/npm are becoming a similarly “essential” build system piece much like python. That much I can certainly understand.


As I have (admittedly snarkily) put it many times "every line of JS is tech debt"


You can erase JavaScript from this title and have equally valid points.


I find PyPy gives me enough of a speedup for my Python scripts to leave them in Python.


Honestly I think the future of languages is strong and sound type inference. Writing Dart, F#, Swift, Crystal, and even modern C# has shown me to varying degrees how good a language can be at type inference, and what the tradeoffs are. I much prefer it to the gradually typed approach, but I've found that library authors in gradually typed languages usually type the entire library and as a developer I always appreciate it.


Can you now write client and server js code without installing npm? I guess you'd be reinventing a lot of wheels that the packages provide.


You can already do that if you don’t need bundling and transpilation.


> I’ve written a lot of JavaScript. I like JavaScript. And more importantly, I’ve built up a set of skills in understanding, optimizing, and debugging JavaScript that I’m reluctant to give up on.

It's not that hard to do the same for a less terrible language. Choose something markedly different, i.e. a low level language like rust, and you will learn a lot in the process. More so because now you can see and understand the programming world from two different vantage points. Plus, it never hurts to understand what's going on on a lower level, without an interpreter and eco-system abstracting things away so much. This can then feed back into your skills and understanding of JS.


I think we’re reading too far into the author’s impostor syndrome.

He’s making contributions in Rust already. His opinion isn’t invalid just because he has a bias; he opens by acknowledging his bias.


> It's not that hard to do the same for a less terrible language.

I miss that brief era when coding culture had a moment of trying to be nice, of not crudely shooting our mouths off at each other's stuff.

JS, particularly with TypeScript, is a pretty fine language. There are a lot of bad developers and many bad organizations not doing their part to enable & tend to their codebases, but any very popular language will likely have that problem & it's not the language's fault.

It's a weakness & a strength that JS is so flexible, can be so many different things to different people. Even though the language is very much the same as it was a decade & even two ago, how we use it has gone through multiple cycles of diversification & consolidation. Like perl, it is a post-modern language; adaptable & changing, not prescriptive. http://www.wall.org/~larry/pm.html

If you do have negative words to say, at least have the courage & ownership to say something distinct & specific, with some arguments about what it is you are feeling.


I’d normally agree with you, but JS is more or less designed to be terrible. It was hacked together by Brendan Eich in literally 10 days, who originally wanted to do something more Scheme-like. It was a quick and dirty hack that got stretched way beyond what it was even meant for.

It then literally had decades of ECMAscript committee effort to shape it into something more useable.

I could repeat the numerous criticisms, but there’s enough funny videos about it that make a much better job pointing out its shortcomings and, sometimes, downright craziness of it.

> but any very popular language will likely have that problem & it's not the languages fault.

No, sorry, just no. I get where you are coming from, but in the case of JavaScript, its history and idiosyncrasies alone set it apart from many (most?) other languages.

Perl for example was made with love and with purpose, I don’t think it’s comparable.


JS wasn’t created in 10 days. It was prototyped in 10 days, and the prototype contained very little of the stuff people complain about.

Hillel Wayne posted about this recently:

https://www.linkedin.com/posts/hillel-wayne_pet-peeve-people...


Okay, I stand corrected. So this prototype didn’t ship, or did it ship and evolve?

Brendan Eich himself calls JS a “rush job” with many warts though, having had to add aspects that in retrospect he wouldn’t have. This snippet from your link is consistent with that:

    Also, most of JavaScript's modern flaws do *not* come from the prototyping phase. The prototype didn't have implicit type conversion (`"1" == 1`), which was added due to user feedback. And it didn't have `null`, which was added to 1.0 for better Java Interop.

    Like many people, I find JS super frustrating to use.
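For reference, what that conversion looks like in practice (the value is typed as any so TypeScript itself doesn't reject the comparison):

    const a: any = "1";
    console.log(a == 1);  // true  - implicit string-to-number coercion
    console.log(a === 1); // false - strict equality, no coercion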


The implicit type conversion is good for a very funny conference video ("wat") but man, it's just so overplayed as a weakness, especially relative to how much real-world impact it has on anyone.

And with TypeScript or linting, many of the strange comparison/conversion issues go away.

I struggle to find any substantial arguments against the js language, in spite of a lot of strong & vocal disdainful attitudes against it.


> I struggle to find any substantial arguments against the js language

The biggest problem with JavaScript is that it's an extremely footgunny language. IMO, of the C++ variety, but probably worse.

1. The type system is unsound and complicated. Oftentimes things "work" but silently do something unexpected. The implicit type conversion thing is just one example, but I know you've seen "NaN" or "[object Object]" on a page. Things can pass through and produce zero errors, but give weird results.

2. JS has two NULLs - null and undefined. The error checking around these is fragile and inherently more complex than what you'd find in even C++.

3. JS has an awful standard library. This is footgunny because then basic functionality needs to be reimplemented, so now basic container types have bugs.

4. JS has no sane error handling. Exceptions are half-baked and barely used, which sounds good until you remember you can't reliably do errors-as-values because JS has no sane type system. So it's mostly the wild wild west of error handling.

5. The APIs for interacting with the DOM are verbose and footgunny. Again things can look as though they work but they won't quite. We develop tools like JSX to get around this, but that means we take all the downsides of that too.

6. Typescript is not a savior. Typescript has an okay-ish type system but it's overly complex. Languages with nominal typing like C# are often safer (no ducks slipping through), but they're also easier to work with. You don't need to do type Olympics for most languages that are statically typed, but you do in TS. This also doesn't address the problem of libraries not properly supporting typescript (footgun), so you often mix highly typed code with free-for-all code, and that's asking for trouble. And it will become trouble, because TS has no runtime constraints.


The implicit coercion and its weird behavior is absolutely a major footgun, not just fodder for the “wat” video. It’s something that can get you into serious trouble quite easily if left unchecked, for example by just looking at a list wrong. For someone to say that it has never caused them surprising pain in plain JavaScript is probably disingenuous. This is something that most other languages plainly don’t have as a problem, at least not as baffling.

Other things worth mentioning are the unusual scoping (by default at least), prototypes, “undefined”, and its role versus "null"... the list goes on.

I give TypeScript a lot of credit for cleaning up at least some of that mess, maybe more. But TypeScript is effectively another language on top of JS, not everyone in the ecosystem has the luxury of only dealing with it, and across all layers and components.

Is my knowledge about JavaScript outdated and obsolete? Certainly. Is the above stuff deprecated and turned off by default now? Probably. I left web development more than 10 years ago and never looked back. I’m a bit of a programming language geek, so I’ve used quite a few languages productively, and looked at many more. But not many serious programming languages have left quite the impression that JavaScript and PHP have.

In the meantime, I have always remembered that one conversation I had with someone who was an ECMAscript committee member at that time: They were working really hard to shape this language into something that makes sense and compiles well. Maybe against its will.

EDIT: Dear god, I completely forgot about JavaScript’s non-integerness, and its choice of using IEEE 754 as its basic Number type. Is that still a thing?
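(It is still a thing: Number remains an IEEE 754 double, though BigInt, added in ES2020, now covers exact integers. A quick check:)

    console.log(0.1 + 0.2);               // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);       // false
    console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2^53 - 1)
    console.log(2n ** 64n);               // 18446744073709551616n, exact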


This anecdote about the double equality operator might have originated from Eich's chat with Lex Fridman where he states (at about 5 minutes and 26 seconds) that during the original 10 day sprint JavaScript didn't support loose equality between numbers and strings: https://www.youtube.com/watch?v=S0ZWtsYyX8E&t=326s

The type system was weakened after the 10 day prototyping phase when he was pressured by user feedback to allow implicit conversions for comparisons between numbers and serialized values from a database. So it wasn't because he was rushing, it was because he caved to some early user feedback.


I swear some JS devs will go out of their way to avoid learning anything else, whilst simultaneously and breathlessly espousing that we rewrite everything else in JS.


> Any application that can be written in JavaScript, will eventually be written in JavaScript. - Jeff Atwood (2007)


The game is changing with WebAssembly though. Large JavaScript applications are replacing JavaScript bits with wasm:

https://web.dev/case-studies/google-sheets-wasmgc

Any application that is written in JavaScript will have more and more of it replaced with WebAssembly.


> Any application that is written in JavaScript will have more and more of it replaced with WebAssembly.

Which is not a lot when it comes to the web. Sure, some algo-heavy stuff like Figma will benefit from it, but the GUI around it is still written in what?


Just do the whole thing in WebAssembly. Minimal dependencies and pixel perfect in all browsers. The distinction is applications versus documents.

Works for Dart with Flutter:

https://flutter.dev/

https://www.youtube.com/watch?v=Nkjc9r0WDNo

https://www.youtube.com/watch?v=qx42r29HhcM

https://wonderous.app/web/

Works for C# with Avalonia UI:

https://avaloniaui.net/

https://www.youtube.com/watch?v=6mwQDPlbF5Y

https://solitaire.xaml.live/

And so on.


Amazing, just awesome. Long standing applause.

Let’s regress to the level of native apps without the benefits of said native apps. No standardization, no performance, no unified integration. Let’s get rid of browser plugins that allow us to fight invasive ads and malicious JS scripts, let’s dump decades of expertise and optimizations, let’s undo all the advancements of the web just to be able to write the same old <div> in C#. Nothing better than a single blob of <canvas>.

It’s ironic that some of the people in this thread accuse JS devs of using only JS, and then you use those “frameworks” as an example of a good thing when they don’t even have a separation of presentation (like HTML and JS) that would allow other languages to tap into it.

And all of this is with much worse performance and stability.*

* - for now


> No standardization, no performance, no unified integration.

WebAssembly has all of these things. WebAssembly is already there, lurking in your browser. That's why it will succeed.

It's interesting how threatened you are by WebAssembly. But change is normal. Embrace the change.


> WebAssembly has all of these things.

Is that why Flutter demo you’ve linked takes 7 seconds to load on Firefox on iPhone 14 Pro and then barely works skipping frames?

I can’t even select text on the page, since it’s just a big canvas, lmao.

> WebAssembly has all of these things. WebAssembly already there, lurking your browser. That's why it will succeed.

You mean how every one of those frameworks you’ve listed has to reimplement a11y every time, since WASM is pure logic? How all of them have to reimplement OS shortcuts and OS integrations? Is that what you call “unified integration”?

> It's interesting how threatened you are by WebAssembly. But change is normal. Embrace the change.

Why did I even bother replying to you, sigh.


Ah, so what you want is Uno. It uses WebAssembly and the DOM: https://platform.uno/

> Why did I even bother replying to you, sigh.

I think it's because you're overwrought. Don't fear WebAssembly.


That’s actually good. If only it didn’t use XAML, but I guess you have to appeal to C# devs.

> I think it's because you're overwrought. Don't fear WebAssembly.

Not sure what your deal is with these comments, as I’m not even a JS dev by trade, but okay.


I can only judge you by your actions. You have a lot of anxiety.


i swear some non js devs will go to extreme lengths to demonstrate solutions that will never run on another machine instead of writing js


Why would they never run on another machine? It's not that hard to write portable code, and done very often. Nowadays for example, you rarely ever think about whether you're on arm or x86.

If you write non-portable code, there might be an important reason (like writing OS components, which you won't do in JS).


almost every time code doesn’t run on my machine, the root cause is a political disagreement with a c-compiler author three layers below my actual problem.

javascript doesn’t have a compiler is my main point.


Bit rich to complain about that when all the major browsers have just as significant differences, and that's before we bring node into the equation, let alone that a good 30% of websites I visit with any quantity of JS in them are either perpetually broken in some way or so janky as to be effectively broken.


totally agreed about all of the above and i take credit for none of that code

i write plaintext at uris, progressively enhance that to hypertext using a port with a deno service, a runtime that unifies browser js with non browser js.

that hypertext can optionally load javascript and at no point was a compiler required aside from the versioned browser i can ask my customers to inspect or a version of deno we get on freebsd using pkg install.

node is not javascript would be my biggest point if i had to conclude why i responded.

microsoft failed at killing the web with internet explorer and only switched to google’s engine after securing node’s package manager overtly through github and covertly through typescript.

microsoft is not javascript is my final point, after circling back to my original point that microsoft is also one of the aforementioned reasons c-compilers are politically fought over instead of being things that just work.


I encounter far more issues compiling C code than JS issues in the web, just saying.


It's usually the opposite. And the post is specifically about making JavaScript tools, why would you not expect them to be written in JS? I guess not making tools for say, c# devs in c# would also be bad?


> It's usually the opposite. And the post is specifically about making JavaScript tools, why would you not expect them to be written in JS?

Take a look at rollup, vite, etc. These tools are essentially replacing webpack, which is written in JS. Modern Rollup (^4) uses SWC (a Rust-based compiler), and Vite is currently using a mix of esbuild (Go) and Rollup. I think they're switching to SWC in v6 though.

The point here is that for certain operations JS is not nearly as fast as lower-level languages like the aforementioned. Bundling is one of those performance-critical areas where every second counts.

That said, as a TypeScript developer I agree with the sentiment that JS tools should be written in JS, but this isn't a hard and fast rule. Sometimes performance matters more. I think the reasonable approach is to prefer JS – or TS, same difference – for writing JS tools. If that doesn't work, reach for something with more performance like Rust, Go, or C++. So far I've only had to do the latter for 2 use cases, one of which is hardware profiling.


Presumably because, apart from Python (see Ruff, uv, etc) most languages aren’t running into such major issues with their own “self hosted” tooling that it’s worthwhile to rewrite several of them in a completely different language.


Yes I agree! And JavaScript also isn't really at that point yet. Python is really in a class of its own here... sadly enough.

Though I don't see an issue with tools for JS built without JS. It's just that I don't think it's a bad thing for a JavaScript dev to want the ecosystem around JavaScript to be written in JS. JS is orders of magnitude faster than Python in any case.


God, what I’d do if someone wrote a build system in Rust for the JVM and freed us from Maven and Gradle.


It's funny you mention C# since VS Code is a perfect example of JS devs rewriting existing tools in JS.


> I swear some JS devs will go out of their way to avoid learning anything else, whilst simultaneously and breathlessly espousing that we rewrite everything else in JS.

The JStockholm syndrome.


OP is a Servo contributor


Hopefully the lot includes that writing stuff in low level languages isn't worth the pain most of the time.


Curious what you mean by "most" (I'm agnostic/unlearned on the statistics tbh). I "feel" like it doesn't happen too often when it's not either already low-level or the supposed extra performance is likely worth it.

Like, I can't imagine most people using Javascript would want to rewrite in Rust without some decent reason.


This guy is not competent to talk about what he's talking about.

>"JavaScript is, in my opinion, a working-class language. It’s very forgiving of types (this is one reason I’m not a huge TypeScript fan)."

Being "forgiving of types" is not a good thing. There's a reason most "type-less" languages have added type hints and the like (Python, Typescript, etc) and it's because the job of a programming language is to make it easier for me to tell the CPU what to do. Not having types is detrimental to that.


> There's a reason most "type-less" languages have added type hints and the like (Python, Typescript, etc)

I would like to clarify that even without typing python is a LOT less "forgiving of types" than javascript. It has none of the "One plus object is NaN" shenanigans you run into with javascript.
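For instance, the kind of arithmetic JavaScript happily evaluates where Python raises a TypeError (values typed as any so TypeScript itself doesn't flag them):

    const obj: any = {};
    const arr: any = [];
    console.log(1 + obj); // "1[object Object]" - + falls back to string concat
    console.log(1 * obj); // NaN - arithmetic coerces the object to NaN
    console.log(1 + arr); // "1" - the empty array coerces to ""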


sure. one is strongly typed and the other weakly typed.


Types are guidelines, strictly useful and a good thing. That said, one can wonder why languages like BASIC, Python, Scheme or PHP (dynamic, implicit types) have grown popular. Maybe for bad reasons, but there IS added value in implicit types. C++ (maybe even C!) has grown the auto keyword, and other typed languages have type inference. Which is not the same as "typeless" (it is always typed), but it defeats one of the "double check" securities of types. And it's sometimes not needed (yes, if I initialize it with "abc" it may be a string).


It's not forgiving of types at all. Reality is not forgiving of type errors. The only thing JavaScript does is move the moment where you find out reality is not forgiving of type errors to when your code is running in prod rather than at compile time, and makes them more implicit. That doesn't make it a bad thing per se to be forgiving of type errors. For example, if you really like fixing errors in production rather than before pushing them to production, this faux forgiveness is precisely what you should be looking for. It's all up to personal preference. Personally, I prefer knowing early on if there's problems with my code, and having a higher degree of certainty regarding it working or not.

All of this is under the assumption that whatever you're writing has some degree of complexity to it (an assumption which is satisfied very quickly). Five line python glue scripts don't necessarily benefit from static typing.


Python is not "type-less" it is strongly typed. It will raise a TypeError if you do something like 1 + "1".


> the job of a programming language is to make it easier for me to tell the CPU what to do. Not having types is detrimental to that.

JavaScript and Python have types, and Python has always been strongly typed (type hints have not changed that). Neither TypeScript or Python use type hints at runtime to help tell the CPU what to do.

What type hints in these languages do is make it easier for you to describe more specifics of what your code does to your tooling, your future self, and other programmers.


Please, preferring dynamic typing is not a sign of "incompetence". Stop this nonsense. Also, I won't question your competence because you called Python and JavaScript "type-less". The type-less languages (other than assembly) that were ever used were BCPL and B (predecessors of C).


while I'm a fan of TypeScript and using type hints in Python from an autocomplete and linting perspective, I am curious...

... has either language leveraged these to better tell the CPU what to do? presumably for perf.


PHP does but the types actually mean something. If your types can be stripped out to make the program run, I have a hard time believing that there is any optimization occurring there.


python ignores type hints at runtime


There are plenty of good things written in languages with weaker type systems than TypeScript (Linux, your browser, HN). Using C/C++ or a dynamic language doesn't immediately make you incompetent.


This, like most articles dealing with "JS", is really more about the things you'll find yourself futzing around with when you're in the NodeJS and NPM ecosystem.

You wouldn't conflate Windows development with "C" (and completely discount UNIX along the way) just because of Win32. But that's about how bonkers it is when it comes to JS and people do the same with its relationship to Node—not only was JS not created to serve the Node ecosystem, the prescriptions that NPM and Node programmers insist on often cut against the grain of the language. And that's just when we're focused on convention and haven't even gotten to the outright incompatibilities between Node and the language standard (or Node's proprietary APIs).

node_modules, for example? That has fuck-all to do with ECMA262/JS. Tailwind, Rollup, Prettier, all this other stuff—even the misleadingly named ESLint? Same. You're having a terrible experience because you're interacting with terrible software. It doesn't matter that it's written in JS (or quasi-JS). Rewrite these implementations all in other languages, and the terrible experience will remain.

Besides, anyone who's staking out the position that a language is slow, and that JS is one of them, is wrong in two ways, and you don't have to listen to or legitimize them.


Anyone who has done a programming contest, advent of code, etc knows that the language doesn’t matter so much as your algorithm.

Yes, the language can bring a nice speed up, or might give you better control of allocations which can save a lot of time. But in many cases, simply picking the correct algorithm will deliver you most of the performance.

As someone who doesn’t JavaScript a lot, I’d definitely prefer a tool written in go and available on brew over something I need to invoke node and its environment for.


> Anyone who has done a programming contest, advent of code, etc knows that the language doesn’t matter so much as your algorithm.

This is one of the biggest falsehoods in the software engineering I know.

Language is collaboration glue and influences ways of thinking, guiding solution development. As an analogy: you can make a statue from glass or from ice, and while both can be of the same shape and be equally awed upon, the process and qualities will differ.

For the prototypes and throwaways context doesn’t matter - That’s why all short lived contests, golfs and puzzles ignore it. Yet, when software is to be developed not over the week but over the decades and (hopefully) delivered to thousands if not millions of computers it’s the technological context (language, architecture, etc.) that matters the most.


Lots of very smart people have worked very hard on Python tools written in Python, yet the rust rewrites of those tools are so much faster. Sometimes it really is the programming language.


In the JavaScript world a lot of speed up comes from 3 major things as far as I can tell:

- easier concurrency

- the fact that things are actually getting rewritten with the purpose of speeding them up

- a lot of the JS tooling getting speedups deals heavily with string parsing, tokenizing, and generating and manipulating ASTs. Being able to have shared references to slices of strings, carefully manage when strings are copied, and have strict typing of the AST nodes enables things to be much faster than in JavaScript.


Python is really really slow compared to JS though.


Node is so slow to start that a Python script can complete before the JavaScript even begins to execute.


For extremely simple scripts maybe. I get around 70 ms difference in startup time.

  $ time python3 -c "print('Hello world')"
  Hello world

  real 0m0.017s

  $ time node -e "console.log('Hello world')"
  Hello world
  
  real 0m0.084s


I once worked on a Python system that had 50 machines dedicated to it. We were able to rewrite it in a more performant language such that it easily ran on one machine. This also allowed us to avoid all the issues distributed systems have.

So yeah, Python is not great for systems programming


CPython is (though it's slowly getting better). PyPy is amazingly fast.


This would be a very nice counterexample, but it's not actually a counterexample without an example.

Also, this was a thing before Rust. I've rewritten several things in C or C++ for Python back ends, and most performance-critical Python code is already an API to a shared library. You'd be surprised to run OR tools and find Fortran libraries loaded by your Python code.


Ruff is one example https://astral.sh/ruff


But can I write plugins for it? My understanding is that it only implements a subset of the common plugins (and does not do any of the linting that pylint is useful for), so that it avoids scanning the filesystem for plugins?


> Lots of very smart people have worked very hard on Python tools written in Python

Yes, I agree that is very sad

Python is achingly slow. I know the Python people want to address this; I do not understand why. Python makes sense as a scripting/job-control language, where execution speed does not matter.

As an application development language it is diabolical. For a lot of reasons, not just speed


Choosing the right algorithm effectively means optimizing runtime complexity. Then, once runtime complexity is fixed with the right algorithm, you're still left with a lot of constant factors that O-notation deliberately ignores (it's only about growth of the runtime). Sometimes, optimizing those constant factors can be significant, and then the choice of language matters. And even some details about the CPU you are targeting, and overall system architecture.


Often languages like Javascript and Python don't allow optimal runtime complexity, because the types baked in to external interfaces fundamentally disallow the desired operation. And these languages are too slow to rewrite the core logic in the language itself.

(but of course, the vast majority of the code, even in widely used tools, isn't properly designed for optimization in the first place)

I only dabble in javascript, but `tsc` is abominable.


> Lots of very smart people have worked very hard on Python tools written in Python, yet the rust rewrites of those tools are so much faster.

So?

Some tool got written and did its job sufficiently well that it became a bottleneck worth optimizing.

That's a win.

"Finishing the task" is, by far, the most difficult thing in programming. And the two biggest contributors to that are 1) simplicity of programming language and 2) convenience of ecosystem.

Python and Javascript are so popular because they tick both boxes.


Don’t disagree about finishing the task, but personally I don’t find more performant languages any less productive for the sort of programming I tend to do.


Congratulations on being a programming god. This discussion isn't for you.

From my point of view, I'm happy if I can convince my juniors to learn a scripting language. Okay? I don't care which one--any one. I'd prefer that they learn one of the portable ones but even PowerShell is fine.

I have seen sooooo many junior folks struggle for days to do something that is 10 lines in any scripting language.

Those folks who program but don't know a scripting language far outnumber the rest of us.


> I have seen sooooo many junior folks struggle for days to do something that is 10 lines in any scripting language.

> Those folks who program but don't know a scripting language far outnumber the rest of us.

What domain are you in? This sounds like the complete inverse of every company I've ever worked at.

Entire products are built on Python, Node etc, and the time after the initial honeymoon phase (if it exists) is spent retrofitting types on top in order to get a handle, any handle, on the complexity that arises without static analysis and compile-time errors.

At around the same time, services start OOMing left and right, parallelism=1 becomes a giant bottleneck, the JIT fails in one path bringing the service performance down an order of magnitude every now and then, etc...

> Congratulations on being a programming god. This discussion isn't for you.

On the behalf of mediocre developers everywhere, a lot of us prefer statically typed languages because we are mediocre; I cannot hold thousands of implicit types and heuristics in my head at the same time. Luckily, the type system can.


Quite the opposite, for most cases you don't hit the scale where asymptotic algorithmic performance really makes a big impact (e.g., for many small set comparisons, iterating over a list is faster than a hash set, but only by 10-50% or so), vs switching to a compiled language which instantly gets you 10x to 100x performance basically for free.

Or perhaps another way to look at it, if you care enough about performance to choose a particular algorithm, you shouldn't be using a slow language in the first place unless you're forced to due to functional requirements.


> knows that the language doesn’t matter so much as your algorithm.

I know what you’re referring to but these problems have also taught me a lot about language performance. python and JS array access is just 100x slower than C. Some difficult problems become much harder due to this limitation.


JS array access is a lot faster than Python array access. JS is easily within an order of magnitude of C, and can be even about as fast with typed arrays or well-JITable code.


> JS is easily within magnitude of C

Typed arrays help a lot, but I’m still doubtful. Maybe if all the processing is restricted to idioms in the asm.js subset? And even then you’re getting bounds checking.


In benchmarks JS is usually well within an order of magnitude (i.e. under 10x slower).

E.g. C++ vs Node.js here: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Couldn't find C vs JS easily with the new benchmarksgame UI.


> Couldn't find C vs JS easily

Try:

A) Find JS in the box plot charts

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

or

B) Find JS in the detail

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I guess so. I clicked on the code for the first one. It’s using a C library to do the computation:

> mpzjs. This library wraps around libgmp's integer functions to perform infinite-precision arithmetic

And then the “array”:

> Buffer.allocUnsafe

So is this a good JavaScript benchmark?


That's the only benchmark on the page that uses such a wrapper.

Buffer.allocUnsafe just allocates the memory without zero-initializing it, just like e.g. malloc does. Probably usually not worth it, but in a benchmark it's comparable to malloc vs calloc.
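
For concreteness, the two Node.js allocation calls side by side:

    // Buffer is a Node.js global; no import needed.
    const zeroed = Buffer.alloc(1024);    // zero-filled, like calloc
    const raw = Buffer.allocUnsafe(1024); // uninitialized, like malloc
    // raw may contain stale memory contents, so it must be fully
    // overwritten before being read.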


Yeah, and using byte buffers isn’t JavaScript array access. But it is for C.

The n-body benchmark looks most like canonical JS to me. It’s a small array, but it’s accessed many times.

Unfortunately the C++ version is SIMD-optimized, so I don’t think that’s a fair comparison.


There are plain C++ programs: n-body C++ g++ #3

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I'd guess using typed arrays or even normal arrays wouldn't slow the code down much; the slowdown would probably be a small constant factor.

If the JIT detects the array as homogeneous, it will compile accesses down to low-level array access. JS JITs are very good.
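
A sketch of what staying homogeneous buys you (V8 terminology; actual speedups depend on the engine):

    const xs = new Float64Array(1e6);  // always unboxed doubles

    const ys = [];
    for (let i = 0; i < 1e6; i++) ys.push(i * 0.5);  // stays a packed-double array
    // ys.push("oops");  // a single string would transition ys to a
    //                   // generic (slower) element kind for all later accesses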


> I don’t think that’s a fair comparison

Shouldn't SIMD use make it less likely that JS stays within an order of magnitude? So if the JS is compared against SIMD programs and is still less than 10x slower, that strengthens jampekka's point.


In higher-level languages your code may allocate memory, trigger a GC pass, or do other smart things in unexpected places. This may cause slowdowns you don't have control over, and which may change from version to version or vendor to vendor. That is often easier to manage in "faster" languages. A good algorithm may not be enough.


Here’s the thing: languages like C#, Java and Rust all have extensive libraries and packages that implement many common data structures and algorithms well. With all due respect to the incredible work that goes into projects like lodash, JavaScript does not. (Nor does C, for that matter.)


The types of problems in those contests are meant to highlight algorithms. In the real world you might have a trivial algorithm but a huge input size, where the constant factor matters much more.


And anyone who expands the horizon to the real world, instead of focusing on the artificial one of contests, knows that the language matters a great deal.


I have done so, and one has to be out of their mind to attempt those contests in Java instead of C++.


That's why almost everything important in Python is written in C.


It's not just a matter of “picking the correct algorithm”. Algorithmic-interview exercises are algorithmic-interview exercises. They are barely related to real-world software engineering.


While picking the right algorithm seldom comes up in most programmers' day-to-day activities, being aware of big-O and the performance guarantees/characteristics of the libraries you use most certainly should.

I don't care if you don't know how to write a merge sort from scratch. I do care about you knowing not to write an O(n^2) loop when it can be avoided.
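
A common JS shape of that mistake, and the fix:

    const listA = [1, 2, 3];  // imagine thousands of ids
    const listB = [2, 3, 4];

    // O(n*m): includes() scans listB linearly for every element of listA.
    const slow = listA.filter(x => listB.includes(x));

    // O(n + m): build a Set once; each lookup is amortized O(1).
    const setB = new Set(listB);
    const fast = listA.filter(x => setB.has(x));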


I don't.

Let me rephrase that.

I do, but only in very, very rare circumstances. Basically only when you a) know that the typical use case is going to involve large values of n, like millions to billions, b) know the loop body takes a long time per invocation, or c) have profiled a performance issue and found that improving it would help.

If you're working with sets of 10 items, just write the damn nested loop and move on. Code jockeying is unlikely to be faster, and even if it is, it doesn't help enough to matter anyway.

Computer science theory likes to ignore constants. Big-O notation does that explicitly. But in the real world, it's usually the constants that kill you. Constants, and time to market.


> If you're working with sets of 10 items

If you are working with a hardcoded 10 items, and you are certain that won't change significantly, sure.

If not I strongly disagree, because I've seen way too often such cases blow up due to circumstances changing.

Now, if it is very difficult to avoid a nested loop then we can discuss.

But it can simply be due to being unaware that some indexed library call is in fact O(n) or something like that, and avoiding it by using a dictionary or some other approach is not hard.

While constants matter to some degree, the point of big-O is that they don't matter so much once you get handed two orders of magnitude more data than you expected.

I'll gladly sacrifice a tiny bit of performance for code that doesn't suddenly result in the user not being able to use the application.


It doesn't need to be hardcoded. You should have a reasonable sense of how much data is going to go through a certain piece of code.

Optimising cases that don't occur is just wasteful. Go solve something that's a real problem, not some imagined performance problem.


> You should have a reasonable sense of how much data is going to go through a certain piece of code.

I've been in several past-midnight war rooms due to exactly that mindset.

Customer suddenly gets a new client which results in 10x as much data as anyone imagined, and boom, there you are getting dragged out of bed at 2am and it's not even your fault.

I'm not saying you should spend time optimizing prematurely.

I'm saying you should be aware of what you're doing, and avoid writing bad code. Because almost always it does not take any significant time to avoid writing bad code.

If you know that an indexed library function is O(n) and you need to check all items, don't write an indexed for loop; use a while loop with its .first() and .next() functions, which are O(1).

Or reach for a dictionary to cache items, as sketched below. Simple stuff; just be aware of it so someone isn't dragged out of bed at 2am.
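
A JS sketch of the caching idea (the backing store and lookup function are hypothetical stand-ins):

    // Hypothetical backing store whose lookup is an O(n) scan.
    const users = [{ id: 1, name: "ada" }, { id: 2, name: "grace" }];
    const lookupUser = id => users.find(u => u.id === id);

    // Cache results in a Map so repeated lookups are amortized O(1).
    const userCache = new Map();
    function getUser(id) {
      if (!userCache.has(id)) userCache.set(id, lookupUser(id));
      return userCache.get(id);
    }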


We're probably talking at cross purposes. Being aware is exactly my message as well.

I've been on way too many overdue projects where people went "but this doesn't scale" for stuff that's just not going to happen. Don't waste time on avoiding so called bad algorithms if there's just not that much data going through it. But yeah, don't write a badly scaling algorithm if it does.

Most lists are just way too small to care about the difference: a literal single-digit number of items, and a small loop body. You can go up 3, 4, 5 orders of magnitude before you can even measure the nested loop being slower than lower big-O solutions, and a few more before it becomes a problem.

But if you have that one loop that goes into the millions of items and/or has a big body, you'd better be thinking about what you're doing.


Choosing the right algorithm is usually the prerequisite for fast code. Optimizing the constant factors is often pretty useless if you pick an algorithm with a runtime that grows quadratically, when there are much better options available.


What makes you think that the sluggishness of these tools is in any way related to not “choosing the right algorithm”?


What makes you think they're not? I don't know why these tools are sluggish, but I disagree with the notion that algorithms don't matter for "real-world software engineering".

The world is full of slow software because one chose the wrong algorithm: https://randomascii.wordpress.com/2019/04/21/on2-in-createpr... https://randomascii.wordpress.com/2019/12/08/on2-again-now-i... https://randomascii.wordpress.com/2021/02/16/arranging-invis... ...


Exactly. What do you do when you have the right algorithm and it’s too slow? (Very typical for linear problems that require visiting each item.)


You optimize the constant factors, e.g. the runtime of the inner loops. But this requires you to choose a sane algorithm in the first place.

Some problems are much more complicated, where you have to take, for example, locality (cache hierarchy etc.) and concurrency considerations like lock contention into account. This may affect your choice of algorithm, but by the time you reach that point, you've almost certainly thought about the algorithm a lot already.


What makes me skeptical is reading

> I just don’t think we’ve exhausted all the possibilities of making JavaScript tools faster

and then

> Sometimes I look at truly perf-focused JavaScript, such as the recent improvements to the Chromium DevTools using mind-blowing techniques like using Uint8Arrays as bit vectors, and I feel that we’ve barely scratched the surface.

Bit vectors are trivial?

I think the author doesn't know enough about those "faster languages". Sure, maybe you can optimize JavaScript code, but the space of optimizations is only a small subset of what is possible in those other languages (e.g. control over allocations, struct layout, SIMD, ...)
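
For reference, the Uint8Array-as-bit-vector trick really is only a few lines (a minimal sketch):

    const n = 1_000_000;                        // number of flags
    const bits = new Uint8Array((n + 7) >> 3);  // one bit per flag
    const setBit = i => { bits[i >> 3] |= 1 << (i & 7); };
    const getBit = i => (bits[i >> 3] >> (i & 7)) & 1;

    setBit(42);
    getBit(42);  // 1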


If JS runtimes took type hints into consideration, they could be much faster. SIMD is not really hard to support, though value types can be hard to retrofit.


> That said, I don’t think that JavaScript is inherently slow,

That said, I don't think the author understands performance when it comes to language details. There are several layers of untapped performance, all of which JS makes hard to access: optimal vectorization, multi-threading, memory/cache usage, etc.


Here you have a benchmark of programming languages: https://benchmarksgame-team.pages.debian.net/benchmarksgame/

On top of that, some languages don't have support for SIMD/NEON, parallel libraries, or GPU-processing libraries; those things can significantly improve performance.


Or, you know, use a language that doesn't require all of that tooling? It might seem like blasphemy, but hear me out: I've worked in web dev for 10 years, and have now been developing a game in the Odin lang for a few months.

It's incredible how I don't need tooling at all, except for a basic IDE-integrated language server. No package manager, no transpiler, no linter/formatter, no extensive configuration files. Need to add a dependency? Just copy-paste the code you need from a GitHub repo. It's still readable and editable if you need it, since it's the source code, not some transpiled/minified/optimized mess.

Ever had ESM/CommonJS dependencies conflicting with your tsconfig.json, right when you need to deploy an urgent hotfix? Forget about that madness. It is such a great and simple DX compared to JS.

Edit: Before I'm dismissed, I'll add that my Odin project is becoming as complex as any other JS website I've worked on, and it can run in a browser thanks to WASM compilation. So I'm not even comparing apples and oranges.


Do you use WebGL/WebGPU or the DOM?


WebGPU


[flagged]


I’ve worked on some huge Node.js projects; it’s a great language for “serious projects”. It has costs and benefits like any other language. If you use TypeScript then IME many of the costs are mitigated.

What exactly makes JavaScript so unsuitable?


> What exactly makes JavaScript so unsuitable?

If you look at JavaScript's history (especially for backend development), it reads like a series of accidents: First, the JS language was hacked together at Netscape in the space of a few months in 1995, and after that it was quickly baked into all web browsers and therefore became very hard to change in a meaningful way. Then, Google developed the V8 engine for Chrome, and someone thought it would be a great idea to use it for running JS in the backend. My thoughts on that were always: "just because you can do something doesn't mean that you should"...


You didn’t list any issues with JS.


> What exactly makes JavaScript so unsuitable?

Pretty much all the usual, boring offenders everyone's familiar with: truthy/falsey coercion, errors passing silently, exceptions, and differences in importing behaviour between bundlers and runtimes. These things are admittedly quite simple to fix when it's your code, but when you multiply that by 1000 dependencies (a conservative number for a JS project), a whole host of difficult-to-detect issues will rear their heads over time.
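
A few of the classic silent failures, for illustration:

    const count = 0;
    if (!count) { /* taken for 0, "", null, undefined, NaN alike */ }

    "5" + 1;      // "51": silent string concatenation
    "5" - 1;      // 4: silent numeric coercion
    [] == false;  // true: loose-equality surprise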

> If you use TypeScript then IME many of the costs are mitigated.

TS meaningfully helps, but it still falls short of the mark IMHO. Turning 99% of TS lints up to errors is the only solid way I've found to prevent a lot of the issues I've encountered. But that's really hard to introduce into existing codebases. It's doable, but with a lot of friction and effort.


In addition to what is mentioned by the other comment: threading and I/O.


While the skeptical's stall in decision making, new comers who do not think twice about rewriting. Rewrite software on a smaller budget in experimental attempts. Then management decides to chose the cheaper option available, because lean startup is promising. Then in next 10 years the new comers become the new skepticals. I don't know if there's a case that this cycle is not repeated, at any time frame.


I've read your comment several times and still don't know what it means. Can you clarify?


I've rewritten it for clarity: "While the skeptics stall in decision making, newcomers will not think twice about rewriting software on a smaller budget as an experiment. Then management will decide to choose the cheapest option available, because the lean startup was more competitive. Then in the next 10 years, the newcomers become the new skeptics. In my experience this cycle is always repeated, albeit across different time frames."

I made the difficult choice to rewrite it in English again, even though French might have been more performant.


Thank you so much; you taught me how to express this: "albeit across different time frames". It is difficult to use any language to simplify complex narration.


Imagine developers as the ones doing the choosing (the management part), and suppose you've been involved with webdev in the last decade. Compare webpack and Vite. As the pricing parameter, consider the time to bundle (faster = cheaper). So, what webpack did not offer by declining to rewrite itself, Vite came in offering.



