It will be awesome when this gains support for custom rules as I have a bunch of custom ESLint rules. The thing that annoys me the most about ESLint is that it has too many NPM dependencies.
This feels like the most important thing about new linters (including the one Bun has and others).
If you just use linting for checking a bit of stylistic policy, any replacement might be fine. However, linting is much more than that and if you are depending on third party rules or writing your own (https://kristiandupont.medium.com/are-you-using-types-when-y...), there is no way around ESLint.
Not sure if I’d be comfortable taking it as far as your example.
Adding logic into linters blurs separation of concerns, adding unnecessary complexity akin to an extra programming language.
Linting in essence should be orthogonal to development — a layer that enhances code quality without being fundamental to the code’s functionality. By overextending linting, we risk creating a maintenance burden and an additional learning curve for developers.
Linting is a great tool, but as with any great hammer it’s easy for lots of things to start to look like nails.
eslint and typescript are the de-facto static analysis tools for JavaScript. TypeScript isn't extensible. So if you want to do any custom static analysis, you're doing it as a custom eslint plugin.
It might be better to have some other tool to do pluggable static analysis, but the fact is that there isn't one. And eschewing project-specific static analysis entirely would be giving up far too much.
I use custom linters as a “continuous codemod” that transform old code and engineer habit in form X to new form Y. Combined with a way to “ratchet” the number of rule violations towards zero, we can gradually and relatively painlessly roll out any number of whole codebase migrations in parallel over weeks or months.
Two examples:
- we have an API like dbModel.getValue() that subscribes the current view to any change in an entire database row. We noticed this lead to UI performance issues from components over-rendering. To deprecate, I wrote a rule to transform dbModel.getValue().specificProp to dbModel.getSpecificProp(). We can’t remove the getValue method since there’s times you really do need it, but we can automatically switch new code written to a more performant specific call for many cases.
- We use a lint rule to enforce that API endpoints and queue worker jobs have ownership and monitoring rules specified. We could use the type system to strictly enforce this, but we want to support gradual migration as well as suggest some inferred values based on the identity of the author. Using a lint with ratcheting means newly added cases are enforced but easy to add, and old cases can/will adopt over time.
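For the first example, here is a rough sketch of what such a rule could look like -- not the commenter's actual rule; dbModel, getValue() and the generated getter names are just the hypothetical API described above:

    // Hypothetical "continuous codemod" rule: rewrite dbModel.getValue().someProp
    // into dbModel.getSomeProp(). All names follow the example above, not a real API.
    module.exports = {
      meta: {
        type: 'suggestion',
        fixable: 'code',
        schema: [],
        messages: {
          preferSpecificGetter: 'Prefer a specific getter over getValue().{{prop}}.',
        },
      },
      create(context) {
        const sourceCode = context.sourceCode || context.getSourceCode();
        return {
          // esquery selector: a member access whose object is a call to `.getValue()`
          'MemberExpression[object.callee.property.name="getValue"]'(node) {
            if (node.computed) return; // skip getValue()[dynamicKey]
            const prop = node.property.name;
            const receiver = node.object.callee.object; // the `dbModel` part
            const getter = 'get' + prop[0].toUpperCase() + prop.slice(1);
            context.report({
              node,
              messageId: 'preferSpecificGetter',
              data: { prop },
              fix: (fixer) =>
                fixer.replaceText(node, sourceCode.getText(receiver) + '.' + getter + '()'),
            });
          },
        };
      },
    };

An autofix like this rewrites new code automatically, and the remaining violation count can then be ratcheted towards zero over time.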
I want to find the time to write a blog post about this, I think it’s a pretty handy pattern.
>Linting in essence should be orthogonal to development
I guess that's what I disagree with. Yes, it adds complexity of its own, just like types do. And I still favor solutions that are based on types for most things, but more and more I try to go this route. They are surprisingly easy to write.
Isn’t that an NPM/Node thing? I mean, I sometimes look at two React Native projects and a web project. The dependency situation there is downright anxiety inducing, and I am saying that as an Android developer, so please know that I am well acquainted with dependency mess.
Not only that, but you also need deps to get eslint to support your specific flavour of pre-transpiled JS. Not only typescript, but new standard JS syntax (like ?. or ??) often requires updating the eslint parser.
I would really like to speed up my workflow with a faster ESLint alternative, but my ESLint configs are often very customized, with rules and plugins that are not available (yet) in the alternative solutions, making them a non-starter for me. It'll take a while for these alternatives to reach plugin/rule parity.
Yeah. There have been lots of Rust or Go linters popping up with impressive benchmarks, but I don't think any will take over the world until they have drop-in parity
Would you consider removing your customisations to be closer to the workflows supported by these tools? One of the great things about go is that you're free to have an opinion, but if you disagree with go fmt or go build, your opinion is wrong.
No. A linter does more than formatting. Besides, some rules may simply not be relevant to what I'm working on while other rules are. Prettier works well enough for most people because it only covers syntax, and not whether or not you can use await in a loop, or should add tracks to your video element, or if jsx should be in scope, etc.
Python and JavaScript have similarly good formatters (as long as your idiot colleagues don't insist on using yapf instead of Black, despite yapf producing non-deterministic output!). In fact I would say Rust is probably behind Prettier in terms of auto formatting. The rustfmt output is less pretty (subjective I know), the devs have made several strange decisions and it seems to be semi-abandoned (maybe partly because the devs were ... shall we say not as friendly and welcoming as the Rust community likes to bleat on about).
I'd like to highlight dprint [0]. It is not as opinionated as Prettier, and its AST-node-specific configuration is awesome [1]. Deno uses it under the hood for `deno fmt` (and switched from Prettier [2]), and the TypeScript team uses it for formatting their code base (switched from formatting by ESLint [3]).
> Our previous linting setup took 75 minutes to run, so we were fanning it out across 40+ workers in CI. By comparison, oxlint takes around 10 seconds to lint the same codebase on a single worker[…]
So it's in fact 18000 times faster on this embarrassingly parallel problem (but doing less for now).
Do you have some source files that are somehow exempt from bugs and would be a waste of the linter's time?
Probably not, but it's a trick question: if you try to look for exceptions to the rule, you have already wasted so much time that running a linter on all files would be faster.
What if the diff adds a new linter rule, should we only run it on the linter config file?
What if the linter uses more context than a single file, a type-checker for example or even just checking the correct number of arguments (regardless of type) are passed to an imported function - or that that symbol is indeed even callable? Should we only run the linter on the caller's file, or the callee's, when they haven't both changed?
Run the linter on the whole code base then, when you make that change? Not on every check-in on the off chance a rule changed. Or add some logic so that CI runs it against the whole code base only when the rules changed, and otherwise just the files relevant to the commit/PR.
Also, ESLint doesn't do type checking. That's TypeScript's job, and apparently TypeScript's runtime isn't an issue.
If a different (unchanged) file depends on the one you changed, you could have changed the API in a way that makes the unchanged file unacceptable to your linter.
Yes, because you lint everything in CI. Otherwise, linter warnings will start creeping into your codebase immediately, and the tool becomes much less useful.
I think if my CI was taking 45 mins to lint I'd look at linting only the files changed since the previous build instead of splitting it across 40+ workers. Or writing a new linter in Rust.
But I'm generally working in a (human & financially) resource-constrained environment.
Typescript lints are type-aware so you can’t just lint changed files, you have to relint the entire codebase to check if any type changes have impacted the unchanged code.
I'm not sure if Eslint has this, but there could be cross-file lints (eg. unused variables). If some file changes, you may need to relint dependencies and dependent files. This could recursively trickle.
I'm not sure if Eslint does this either, but indices or some incremental static analysis sounds like it could help the linter minimize rechecks or use previous state.
You can tell eslint about globals in its config. But if you're using variables that aren't declared in the file somehow, that might be an issue you want to look at in general. That's a potential footgun a linter should be balking at.
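As a hedged illustration (legacy .eslintrc.js shown; flat config puts the same thing under languageOptions.globals), declaring the globals keeps no-undef from flagging variables injected at runtime -- the names below are made up:

    // .eslintrc.js
    module.exports = {
      globals: {
        analyticsSdk: 'readonly',      // e.g. injected by a third-party <script>
        __BUILD_VERSION__: 'readonly', // e.g. defined by the bundler at build time
      },
    };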
If you have one file that every single file across the repo imports in some way and you make changes to that file, you might run the linter for the entire repo. But again, how likely is this scenario?
If the index or incremental static analysis object was designed well enough, I don't think you would need to lint every file, you would just need to look at files that consume that variable. Maybe you would look at every index?
I'm not sure how well this could scale across (600- 1000?) different lints though. I should look into static analysis a bit more.
As the sibling comment mentions, you may have lint rules that depend on checking for the existence of, or properties of, another file. A popular set of rules comes from https://www.npmjs.com/package/eslint-plugin-import which validates imports, prevents circular dependencies, etc
If a file has been linted and is unchanged since it was linted, there's literally no need to lint it again. Much like if you only need to process one record, you don't query the whole table to get the record.
Disagree on ESLint vs TypeScript. ESLint's and TypeScript's jobs should have minimal overlap.
ESLint's primary job is linting. It should be finding 'footguns' and code style issues. Things that are absolutely valid in the language, but could lead to potential issues. Because of that, it's totally valid that you're not finding as much value in it. It depends on the rules you enable in it, etc. And yeah, it can feel super nitpicky when it's yelling at you for not having a radix in parseInt().
TypeScript's 'compile' step, or whatever you call it, does the type checking and makes sure your code is valid. If you're using bare JS, your IDE should be doing this job, not eslint.
(but yes, anything more than a few minutes to lint even a large code base is insane.)
I think the typescript-eslint plugin in particular has some high value eslint rules that complement TypeScript.
For example, the no-floating-promises[0] rule catches some easily-made mistakes involving promises in a way that TypeScript doesn't on its own.
Other rules can be used to increase type safety further. There are various rules relating to `any`, like no-unsafe-argument[1], which can be helpful to prevent such types sneaking into your code without realising it; TS has `noImplicitAny`, but it'll still let you run something like `JSON.parse()` and pass the resulting any-typed value around without checking it.
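A hedged sketch of both situations (illustrative names only; `save` and `takesNumber` aren't from any real codebase):

    async function save(): Promise<void> { /* ... */ }

    function onClick() {
      // tsc compiles this happily, but any rejection is silently dropped.
      // @typescript-eslint/no-floating-promises reports it; `void save()` or
      // `await save()` makes the intent explicit.
      save();
    }

    // Even with noImplicitAny, JSON.parse returns `any`, so `parsed` is `any`.
    const parsed = JSON.parse('{"id": 1}');

    function takesNumber(n: number) { return n * 2; }

    // tsc accepts passing `any` here; @typescript-eslint/no-unsafe-argument flags it.
    takesNumber(parsed.id);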
> For example, the no-floating-promises[0] rule catches some easily-made mistakes involving promises in a way that TypeScript doesn't on its own.
Is there a fast linter that checks for this? I find this error easy to make as well, and it usually causes weird runtime behaviour that's hard to track down.
I get a ton of value from ESLint with TypeScript, and in particular from @typescript-eslint. And yes, 75 minutes is absolutely bonkers. It would have me rethinking a lot of things well short of that time. But automated quality checks wouldn’t be anywhere near the top of that rethinking list. And partly, but not only, because of irrelevant nitpicks. Having humans do those nitpicks is vastly worse in time elapsed, and likely in compute time in many scenarios as well. The more human time is spent on the things linters help with, the more that time is not spent on reviewing and ensuring correctness, performance, design, maintainability, user- and business-implications, etc.
If you leave dependencies off a hook by mistake you’re creating bugs, unintended behaviour and making your code a nightmare to modify. That’s always going to be worth linting for. Can you over lint a codebase, sure I guess so, if your lint stage is taking hours it probably needs optimization, but your assertion that type checking is enough is incorrect.
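A small illustration of the kind of bug meant here -- the classic stale closure that react-hooks/exhaustive-deps warns about (component invented for the example):

    import { useEffect, useState } from 'react';

    function Timer() {
      const [count, setCount] = useState(0);
      useEffect(() => {
        const id = setInterval(() => setCount(count + 1), 1000);
        return () => clearInterval(id);
      }, []); // `count` is missing: the closure always sees 0, so count sticks at 1
      return <span>{count}</span>;
    }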
Having to specify every single closure in dependencies does nothing. Not every dependency has to be specified. Not specifying a dependency is not always a bug.
Honestly if not exhaustively putting some closures in your useEffect deps list is your idea of a “maintenance nightmare” then maybe you should stay away from real production code bases? There are plenty of hairier mistakes and patterns out there than that.
This can only be good news. Normally I, like anyone else experienced with the JS ecosystem, despair when new tools come out like this. However, consider:
- setting up eslint isn't actually that simple
- if you're using typescript you need eslint-typescript too
- there are sets of rules in both eslint and eslint-typescript that conflict with each other, so I have countless rules in my config like this (see the sketch after this list):
- then if you're doing React there's another set of JS and TS rules to apply, I still never figured out how to correctly apply airbnb rules
- this is a pretty garbage developer experience
- you can quite literally spend hours or days getting a "good" linting/formatting configuration setup, and you often can only use pieces of the configs you wrote for other repos because over time the rules and settings seem to change
- I hope this will eventually support things such as .astro files which is actually a combination of TypeScript and TSX blocks
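For the rule conflicts mentioned in the list above, this is a guess at the kind of pairing meant: the core ESLint rule gets turned off so its @typescript-eslint extension can take over (the rule choices here are just examples):

    // .eslintrc.js
    module.exports = {
      rules: {
        'no-unused-vars': 'off',
        '@typescript-eslint/no-unused-vars': 'error',
        'no-shadow': 'off',
        '@typescript-eslint/no-shadow': 'error',
      },
    };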
> At this stage, oxlint is not intended to fully replace ESLint; it serves as an enhancement when ESLint's slowness becomes a bottleneck in your workflow.
I also hope that eventually it does become a full replacement. I like eslint, but holy shit, I cannot bring myself to create a new config from scratch that wrestles all the required extras and the frequently changing dependencies.
Also, wanted to give a sort of shout out to Deno here. Deno comes with a linter/formatter built in that is barely configurable (just double vs single quote, 2 or 4 space indentation, minor things) and it too is very fast and simply "just works".
---
Update: I just gave it a quick try and I am immediately impressed by it. Not only was it incredibly fast like it claims, it appears to already have all of the rules I was complaining about built in.
eslint-plugin-react(jsx-no-useless-fragment): Fragments should contain more than one child.
╭─[src/design/site/preact/MobileNavigationMenu.tsx:18:1]
18 │ return (
19 │ <>
· ──
20 │ <MenuButton isOpen={isOpen} onChange={setIsOpen} />
Finished in 17ms on 90 files with 70 rules using 16 threads.
Found 13 warnings and 0 errors.
You're right that initially setting up the rules takes time; that won't go away with any linter, though. But once I set up my company's rules 4 years ago, it's just been adding the odd rule every year or so, and upgrading various dependencies, then publishing. I use it across work and personal projects, and never really noticed “only use pieces of the configs you wrote for other repos because over time the rules and settings seem to change”.
Anyone who picks and chooses from the entire list of rules is doing it wrong.
Pick a config that roughly matches your ideals and just use it. On older projects you’ll have to customize it a bit, on new ones you’ll probably just adapt to it.
I’ve been using eslint-config-xo-typescript for several years, plus some plugins with their “recommended” presets.
editorconfig is mostly about formatting. Parts of the JavaScript ecosystem have converged on prettier for that.
These linters do checks on the abstract syntax tree, and so they can statically analyze that e.g. you don’t use certain unsafe APIs or do things that might introduce performance issues or bugs.
"ruff" for Python which is displacing the flake8 linter (and in fact the "black" code formatter too) shows that this kind of thing can work fantastically well.
Have you by any chance used Pyright? If not, I can highly recommend it. The VS Code extension makes writing Python almost as if it's a statically typed language (+ there is a CLI if you want to check types in CI). The docs are claiming that it's 3-5x faster than mypy - I haven't run performance benchmarks myself, all I can say is that for all my code bases it is very fast after the first cold start.
they've successfully replaced pylama and black so yeah I really hope it's their next target (though handling types is a whole different beast altogether)
Ruff is better than flake8 for reasons other than speed.
1) it works better as an lsp/vscode plugin, so I don't need to save to get errors popping up.
2) it respects pyproject.toml and doesn't need to litter my root dir with another dot file.
3) as an intangible, its errors just feel better.
I want to complain, but this abundance of developers willing to implement the same thing many times is why we get a linter that’s 100x faster or a webpack alternative that’s 50x faster.
Sure, but if you don't know what all the ways are you'll be prone to “just follow instructions”, and you may not notice that, a few years apart, you followed instructions for different ways of installing or uninstalling things and now your system is a mess
Becoming aware of different ways to do things costs time (to read about it and form opinions on things like which ones are/might be useful to you) and space (in your brain to remember these options and opinions). It's not necessarily bad, but it's a cost.
Biome implements more ESLint rules than OXC: Biome implements about 90 ESLint rules [0], while OXC implements about 60 rules. This brings Biome closer to parity with ESLint. However, Biome has changed some rule names, uses camel-case rule names instead of kebab-case names, doesn't provide some rule configurations (to avoid configuration nightmares), and slightly changes some rule behavior (as OXC does).
If I understand it right, we have 3 large projects that aim to replace most of JS tools on their own: Bun[0], Oxc[1] and Biome[2]. Bun's package manager is great, Biome formatter recently reached 96% compatibility with Prettier, and now Oxlint is apparently good enough to replace ESLint at Shopify. Exciting times ahead.
But it's giving the impression that these projects perhaps could be better off collaborating instead of each of them aiming to eat the world on their own?
EDIT: I'm not saying it's wrong to write competing tools, it's open source anyway, so please do whatever you like with your time and have fun. But it looks like out of these 3 projects, 1 has a startup behind it, and 1 receives funding from bigger company. I assume that money will stop coming in if these tools don't gain adoption fast enough, and nobody would want to see that happen, especially with so much potential here.
To clarify: I'm also not advocating for merging the codebases together, that would be mostly counterproductive (especially since Bun is in Zig, and Oxc and Biome in Rust).
When I think why Rust was successful at establishing community-accepted standard tooling (clippy, rust-lsp), 2 things come to mind:
- Project developers were always promoting each other's tools, pointing them out in docs or blog posts
- Good tools were being pulled into rust-lang GH org (for visibility) and rustup CLI distribution (for ease of system-wide installation)
Both of these things are not technical challenges, they are rather more "political" (require agreements between parties). In JS ecosystem, what would it take for Oxc to say on their website "we are not writing a formatter, please install Biome" and for Biome homepage to say "we are not writing a linter, please install Oxlint"?
Biome is the continuation of Rome Tools. It has existed for several years and has always featured a linter and a formatter.
If I remember correctly, OXC was born out of its author's desire to learn Rust and his feeling that Rome Tools/Biome had made complex technological decisions (mainly the use of CST instead of an AST). Rome Tools/Biome chose a CST to bring first-class IDE support: you can format and lint malformed code as you are writing it.
I hope for more collaboration between Biome and OXC in the future. However, the inherent difference comes from technological choices.
Happens loads of times. There is some built-in human tendency where folk see a thing that they could improve but then decide to go off and build their own moon base rather than work on someone else's project.
In my experience, project maintainers are frequently uninterested in changes to their project, especially if those changes are a significant departure from their current vision or if it involves pivoting away from tools that they like. You're often expected to make years of contributions to the project to earn the rapport to bring significant suggestions before the maintainers. It's often just easier to 'build your own moonbase' instead of politicking.
Just a couple days ago, the curl maintainer published a blog post about why he wouldn't rewrite curl in Rust, and a big part of the reason was that he and the other maintainers weren't good at it and weren't the right people to lead a project that used it--he said that he encouraged other people to start their own project in Rust. But then when people follow that advice, they're chided for not contributing to the more established project! To be clear, I'm not a "just rewrite it in Rust" guy, but I think people underestimate the difficulty and frustration involved in petitioning an established project to make the reforms necessary for significant improvements.
Except that this is not as good as the original, by their own admission. If they had collaborated they could likely get more done in the same amount of time. (Not twice as much, but more.)
Maybe this is a better design than the other projects. Maybe people cannot get along and so they are forced to fork. There are many other good reasons not to contribute to an existing project. However, we should always look with skepticism at such claims: it is easy to start your own project, and you are in control, so the amount of work you get done is higher. However, working together, while it makes everyone slower, normally results in many more features and higher-quality code over the long term.
So please when you have an itch technology can solve look to see if you can contribute to someone else's project first. It won't be as fun, but the world and you will be better for it.
I'm its author and focus solely on the collaboration picture. I don't generate much press because I only build internal APIs for tooling and language authors, where the projects you've shared all opted to prioritize fulfilling specific real use cases over generalizing their core technology.
Cruel as it is, I think all of them have planted the seeds of their own failure by failing to protect their organization's mission and day-to-day work from being jailed by a set of specific opinions about code style, which cannot possibly be "right" or "wrong" but must instead be argued about forever.
I see the core challenge as shifting all editors and tools to share a common DOM representation and be interoperable in a per-node way, where the current solution is to use siloed and reimplemented tools which interoperate mostly in a per-file way, with each tool parsing the text, doing some work, then emitting text for some other tool to parse...
"The Oxc AST differs slightly from the estree AST by removing ambiguous nodes and introducing distinct types. For example, instead of using a generic estree Identifier, the Oxc AST provides specific types such as BindingIdentifier, IdentifierReference, and IdentifierName."
Already this is getting into matters of style! It is one style, yes, but Javascript's shorthand syntax `({ foo })` already breaks the mental model: the identifier `foo` is technically doing the work of both an IdentifierName and an IdentifierReference. OXC chooses IdentifierReference so any system built on top of it would need to contain additional logic in order to be able to identify all sites in code that are used as identifier names.
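A small illustration of the shorthand in question:

    const foo = 1;
    const shorthand = { foo };      // one `foo` token acts as both the key and the value
    const longhand  = { foo: foo }; // key (a name) and value (a reference) are separate nodes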
They exist because it's significantly easier to distribute js apps than it is to distribute a compiled app. npm install works on Linux, Mac and windows, regardless of what libc, Msys, crt, you have installed. It could be python, but pip is a usability nightmare
Not particularly true especially in this case. You can get a rust binary and run it anywhere regardless of libc or having cargo installed on the users machine. A Javascript CLI requires nodejs and npm to be installed before running it.
Same goes for Go BTW. I even find it easier to install Go (haven't done it for Rust that often yet) and compile a binary (of a "pure" project that doesn't involve C libraries or other complications) than installing node/npm/nvm/whatever to get something up and running...
For their main use case they do package it up for npm, but the crates folder has each portion available to build/distribute as a standalone binary you can run against JavaScript without node or npm installed.
I've had significantly fewer issues with `cargo [b]install`ed compiled Rust programs than `npm install`ed ones. Getting nodejs/npm installed (and at an appropriate version) is not always trivial, especially when programs require different versions.
OTOH, precompiled Rust binaries have the libc version issue only if you're distributing binaries to unknown/all distributions, but that's pretty trivially solved by just compiling against an old glibc (or MUSL). Whereas `cargo install` (and targeting specific distributions) does the actual compiling and uses the current glibc, so it's not an issue.
I wish the nix programming language wasn't so rough because it can be pretty great at this problem. Being able to compile from source while just listing out package dependencies is powerful.
Cargo and crates.io is easily as simple as npm for installation and distribution. I find it to be more reliable than npm in general. Generally it’s very easy to write system agnostic software in Rust, as most of the foundational libraries abstract that away.
So when you say “compiled app” you might be referring instead to C or C++ apps, which don’t generally have as simple and common a distribution model. Rust is entirely different, and incorporated a lot of design decisions about how to package software from npm and other languages.
> I just tell people, first install rust, then just `cargo install`
local compilation may work for you and other individuals, but "just cargo install" can immediately run into issues if you're trying to deploy something to things that aren't dev workstations
> npm and cargo are absolutely the same category of tool
as a dev tool? absolutely. as a production distribution solution? definitely not
> as a production distribution solution? definitely not
If you’re talking about distributing Rust projects, sure it’s fine. Generally though, if you’re orchestrating a bunch of other things outside the rust software itself, I’d turn to just.
npm is still mainly used in JavaScript and Typescript scenarios, so I think you’re kinda splitting hairs if you’re suggesting it’s a general purpose tool.
I actually recommend cargo install cargo-binstall first, then cargo binstall <crate>. This is because it is quite annoying to compile packages every time you want to install something new, whereas binstall distributes binaries instead, which is much faster.
I think it’s more that tools for a language tend to be written in that language. Obviously the author needs to care enough about the target language, and if they support plugins then it’s also desirable for the plugins to be written in the target lang.
Bun, Oxc and Biome are all great, but a TypeScript compiler written in Rust is something I'm really looking forward to. Right now the web application I've been building just crossed 25k lines of TS code and running `tsc` is becoming a pain point. What used to take 2-3 seconds now takes upwards of 10s, even with incremental compilation enabled in some cases.
Semantic nit: STC is a type checker, SWC already compiles TypeScript well. TSC does both (unless flagged to do one or the other) so it depends on what needs replacing.
Why it matters: in GP’s case it sounds like compiling is the problem, so migrating to using SWC as the compiler but keeping TSC as the checker (noEmit flag) in a lint step may ease that pain a bit. Though it might be nicer to migrate both in parallel.
Can you elaborate? Typescript has existed for a long time and has been the standard over vanilla js for a long time. Bun, oxlint, and biome are all replacing existing tools with build steps. How could it be that their popularity signifies some new appreciation of compiled languages?
Typescript is not a compiled language. It is a "transpiled" language. Transpiled to another interpreted language Javascript which in turn again is not a compiled language.
If going with that lax definition and concept wrangling, Python is also a compiled language. Python source code can be compiled and byte code can be cached and then Python runtime can load it.
Just like Typescript compiles the source to Javascript which is then loaded by the V8/Node etc.
And thus programming languages can be only of one type - Compiled.
Being compiled or not isn't a property of the language. It's a property of whether you compile it or not. Pure interpreters can exist. They're not very common for "practical" languages. Parse to AST, then call evaluate(ast). No target language necessary.
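A minimal sketch of that idea, using a toy AST shape invented for the example (not ESTree):

    // Tree-walking interpreter: no compilation, no target language, just walk the AST.
    function evaluate(node) {
      switch (node.type) {
        case 'Literal':
          return node.value;
        case 'Binary': {
          const left = evaluate(node.left);
          const right = evaluate(node.right);
          return node.op === '+' ? left + right : left * right;
        }
        default:
          throw new Error('unknown node type: ' + node.type);
      }
    }

    // evaluate({ type: 'Binary', op: '+',
    //            left: { type: 'Literal', value: 1 },
    //            right: { type: 'Literal', value: 2 } }) // => 3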
JS seems to be a great language for discovering what's worth writing. I think rewriting stuff in some compiled language is a sweet spot of "build one to throw away".
The spate of rewrites of JS tools in compiled languages continues. Here's my problems with them:
1. The need for a 50-100x perf bump is indicative of average projects reaching a level of complexity and abstraction that's statistically likely to be tech debt. This community needs complexity analysis tools (and performant alternative libraries) more than it needs accelerated parsers that sweep the complexity problem under a rug.
2. (more oft cited) The most commonly and deeply understood language in any language community is that language. By extension, any tools written in that language are going to be considerably more accessible for a broader range of would be contributors. Learning new languages is cool but diverging on language choices for core language tooling is a recipe for maintainer burnout.
> The need for a 50-100x perf bump is indicative of average projects reaching a level of complexity and abstraction that's statistically likely to be tech debt.
I don’t think this is the right way to look at it. The issue is that JavaScript developers have been writing servers, build tools, dev ops tools, etc, in JavaScript because that’s the language they are expert in, but JavaScript was never the right choice of language for those types of programs. The whole industry is caught in a giant case of “If all you have is a hammer…”.
I do web development in JavaScript because JavaScript is the language of the browser. But I write all of my own build and devops tools in Java, including SaSS compiling, bundling, whatever you want. There’s no contest between the Java runtime vs the JavaScript runtime for that kind of work.
I think it’s backwards to see this as a 50-100x performance boost because Rust was used. That same performance increase could be had in a number of languages. The real issue is a 50-100x performance hit was taken at the outset simply by using JavaScript to write tooling.
Edit: just to put it in perspective, a 50-100x speed up in build time means that what would currently take a minute and a half using JS tooling could be accomplished in a second using a fast runtime. A minute and a half of webpack in the blink of an eye.
As I almost always think to myself whenever I see some program braying about its 25x speed improvement in some task, the reason you can have a 25x speed improvement is because you left that much on the table in the first place.
I don't want to be too hard on such projects; nobody writes perfect code the first time, and stuff happens. But this does in my mind tend to tune down my amazement level for such announcements.
And your last edit is really the important point. That level of performance improvement means that you are virtually certain to move up in the UI latency numbers: https://slhenty.medium.com/ui-response-times-acec744f3157 Unless everything you were doing is already in the highest tier, this kind of move is significant.
> There’s no contest between the Java runtime vs the JavaScript runtime for that kind of work.
I don't mean to be facetious here, but... citation needed.
There are a lot of assumptions about language performance being made throughout comments threads on this page that seem more based on age-old mythology rather than being grounded in reality.
JavaScript is ~8x slower and Python ~30x slower on average vs Java / Go / C++ that are all quite close.
A funny aside: I always believed that Java is slow because I heard it repeated so many times. I internalized that bit of age-old mythology. But lately as I’ve gotten more focused on performance, I’ve come across a lot of hints in various talks and articles that Java has become one of the go-to languages for high-performance programming (e.g. high frequency trading). So, I hear you about the mythology point.
How often does an average X developer delve down to compiler details and contribute to static analysis tooling?
Metaprogramming and compilers/language analysis tooling is a jump above your run of the mill frontend code or CRUD backends.
Sort of elitist, but IMO devs capable of tackling that complexity level won't be hindered by a different language much.
And Rust is really tame compared to, say, C/C++. The borrow checker is a PITA, but it's also really good at providing guardrails in the manual memory management land, and the build tooling is really good. I don't know enough about Zig, but I get the impression that Rust's guardrails would help developers without a C/C++ background contribute safe code.
You could argue Go is an alternative for this use case (and similar languages), but it brings its own runtime/GC, which complicates things significantly when you're dealing with multi-language projects. There's real value in having simple C FFI and minimal dependencies.
> Sort of elitist, but IMO devs capable of tackling that complexity level won't be hindered by a different language much.
Not elitism, just an honest appraisal, though I think a flawed one, as competency isn't linear, it's heterogeneous - you'll find the most surprising limitations accompanying the most monumental talent. Language fixation is a common enough one, but even beyond that, the beginner-expert curve on each language shouldn't be underestimated, regardless of talent or experience.
In particular, when it comes to JavaScript there's a tendency to believe the above by virtue of the community being very large & accessible - bringing in a lot of inexpert contributors, especially from the web design field. This isn't fully representative of the whole, though: there are significant solid minorities of hard JS experts in most areas.
> How often does an average X developer delve down to compiler details and contribute to static analysis tooling?
I've done this a few times for Go. One of the nice things about Go is that this is actually pretty easy. I've written some pretty useful things with this and gotten good mileage out of it. Any competent Go programmer could do this in an afternoon.
I don't really know what the state of JS tooling on this is, but my impression is that it's a lot harder, partly because JS is just so much more complex of a language, even just on the syntax/AST level. And TypeScript is even more complex.
To be fair the AST structure can also be implemented more efficiently without better control over memory layout. The JS ecosystem standardized on polymorphic ASTs, which in retrospect seems dumb, but is not a result of any fundamental limitation in JS.
E.g. in ESTree evaluating such a common expression as `node.type` is actually really expensive -- it incurs the costs of a hashmap lookup (more or less) where you'd expect it to be able to be implemented using pointer arithmetic.
I get what you're saying but you've missed my point.
You're optimising your execution but there's trade-offs: you need to think about optimising your software development model holistically. There's little point in having the most efficient abandonware.
A JS tool may be technically suboptimal but that's not a problem unless AST size is a bottleneck.
> AST data structures can be implemented much more efficiently with better control over memory layout
I assume you're right but I'm not sure I fully understand why this is the case - can you give examples of how a data structure can be implemented in ways that aren't possible in JS?
Disagree with 1. Most large JS projects I’ve worked on have been relatively high in necessary complexity; probably because many JS projects are relatively simple applications and relatively new (by the standards of enterprise software).
There is also abundant complexity analysis tooling for JS too. When I worked as an architect at a large telco we had this tooling in CI. It revealed some code smells and areas needing refactoring but didn’t really signal anything especially terrible.
Software tooling is more productive than ever and product requirements have grown to use that capacity. It’s definitely not a load of tech debt.
Not sure where you've worked or what you've worked with but everything you've described is the opposite of the JS projects I've encountered (multiple companies, multiple 100s JS projects).
> There is also abundant complexity analysis tooling for JS too.
I would highly appreciate recommendations here; I wonder whether your review indicates the projects being analysed had little wrong, or that the tools were not very good at identifying problems.
> it serves as an enhancement when ESLint's slowness becomes a bottleneck in your workflow
Well, when I need to batch fix errors in files, yes it can take a while to run eslint. But that almost never happens. I have the plugin and fix errors as I go (which I believe is what most people do), and I never feel performance is an issue in this workflow. I really doubt how (actually) useful this is.
which is a really weird problem to have. Only lint files that have changed? How hard is that? Our monorepo is 3m lines of code and running lint is not a bottleneck by any means...
And once in a while that we have to run lint for entire repo (ESLint upgrade for example) we can afford to wait 1 hour ONCE
> Only lint files that have changed? How hard is that?
Quite hard, especially since type-aware rules from e.g. https://typescript-eslint.io/ mean that changing the type of a variable in file A can break your code in file B, even if file B hasn't changed.
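A hedged two-file sketch of the problem (file names and types made up for illustration):

    // util.ts -- the *changed* file: return type widened to string | undefined
    const labels = new Map<number, string>();
    export function getLabel(id: number): string | undefined {
      return labels.get(id);
    }

    // consumer.ts -- *unchanged*, but the call below is now unsound; a type-aware
    // rule (or tsc itself) only catches it if this unchanged file is re-checked too.
    import { getLabel } from './util';
    console.log(getLabel(1).toUpperCase());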
True, but eslint energy use would be one of the last things I worry about if I am looking for a longer battery life. Chances are that TypeScript service used for Intellisense costs more electricity.
Faster by how much in absolute time? Currently I'm not feeling ANY delay in the IDE, so I assume for a regular size file linting takes less than 50ms -- likely much shorter than that. Let's say it reduces 50ms to 2ms. Guess what? It still has absolutely no effect on my everyday work.
Say what you want about the "rewrite in rust" meme, but it really seems that rust started a trend of really caring about performance in everyday tools.
I think it's amazing that all these tools being rewritten in Rust are likely being done by people who do not typically code in C/C++ but did code something like this in Rust.
To me that says Rust is more easily accessible as a language than C/C++, and that's great for our environment (speed is green) and our joy in computing (speed is happiness).
Why would a team of talented engineers focus on solving ESLint's performance issues? Where is the value in this? If your project is small, ESLint is fast enough. If it's super large like ours (3 million LOC) then you spend a little time making local and CI linters smarter, to run only on changed files. Rewriting in Rust seems cool and novel, but now you've lost the entire wealth of the ESLint plugin ecosystem, and you have to keep up with maintaining this new linter, which has to be updated very frequently for at least new syntaxes and so on...
We could put this effort into looking into why ESLint is not fast enough and fix the bottlenecks IF we had extra time in our hand...
If it was my team, I would not let them spend time on this. I don't see the value to be honest.
They developed a new tool that reduces their CI from 75 minutes to 10 seconds and are offering it for free and open source and you really don’t see the value? I know you warned this is HN but I find this posture ridiculous. If you don’t find value for yourself, that’s one thing but I honestly don’t get this place sometimes.
It’s because every time someone says “performance”, something clicks in most developers’ brains, forcing them to respond with weird things like “being 50x slower is actually fast enough”.
I'm not complaining about free software offered to the world for free. I'm curious how a leader would justify an investment like this. I have engineers reaching out to me and asking me for all sorts of things. My job is to justify it for the business. Making this costs more than a million dollars if their engineers are paid like ours. So how do you do this? How do you get budget for this?
I'm not a fan of trying to put hard numbers on unknowns like this because it biases against uncertainty, but if they shaved ~74 minutes off their CI time, and assuming it runs multiple times a day, that very quickly equates to a small team's cost in savings over a year.
However, I think trying to find the actual numbers is dumb because there are also intangibles, such as the marketing and brand-recognition bump from doing this, both for the company and the individuals involved.
That's not to say all greenfield endeavors should be actioned, but ones with substantial gains like this seem fine given the company is big enough to absorb the initial up front cost of development.
How big is your business? Facebook has poured immense resources into speeding up PHP. It makes sense for them. It doesn't even remotely make sense for me.
However, people tend to underestimate this sort of thing in general. Even since before programming... we have adages about the importance of sharpening the axe precisely because people have been hacking away with metaphorical and literal dull axes hoping to avoid needing to sharpen them for a long time. Sometimes you just won't be able to convince a business person of the importance of stopping work for a moment to sharpen the axe, because all they see is the work stopping. I don't have a solution to that level of lack of wisdom in a leader. These are the people who save $200 per programmer on computer hardware at the cost of 5 hours of productivity lost... per week. Some battles just come pre-lost.
75 minutes to 10 seconds is no joke in terms of speedups. Imagine that this time is saved for a small team of 4? 10? people who can then inspect/qa/iterate on the build in a PR-preview staging environment. Imagine this kind of time saving across many teams at Shopify’s scale.
Imagine that your pushes to production can happen an hour faster. At Shopify’s scale.
That's not the point. It has value, but it's in comparison to rewriting the actual codebase in an actually appropriate language rather than fixing tooling for a language that was a mistake to use in the first place
Right, but it's only recently that better alternatives have become widely available. Previously your options were:
- Something like Java/C#, but that would have required users of the tool to manage a second toolchain / runtime just for developer tools - quite a hard sell for widespread adoption
- C/C++ which are quite hard to learn (and use correctly once learnt) for users of interpreted languages which ends up being a barrier to their use.
Now we have Go and Rust which are fast, compile to a single binary, and are much easier to learn than C/C++, which is leading to a whole new generation of tools.
The main problem I had when I tried to learn C++ was getting started with the build system. I could compile a single source file easily enough but adding others, and especially integrating with 3rd party libraries was difficult. And there was extra pain if I wanted it to work portably across platforms. Whereas with Rust, building is a simple `cargo build`. And integrating libraries is as simple as adding a line to the manifest file.
The borrow checker wasn't trivial to learn, but I could at least bash my head against the compiler, and be pretty sure that once it compiled my code was correct. With C++ it is much harder to get that feedback while learning as it's very easy to compile something that segfaults, crashes, or has Undefined Behaviour.
It's more that, after years of programming in C++, when I run a program I'm still not confident it's not going to have major bugs, and if a bug shows up, I know I'll be in for a world of pain trying to track it down.
After a few months of coding in Rust, I stopped having that problem altogether. I still ran into bugs, mind you, but solving them usually became painless.
Really? Since about C++14, the number of memory errors I've experienced in practice is asymptotically trending towards 0. When they do come up, it's usually in some awkward C library that has been badly wrapped, and would likely necessitate unsafe in Rust anyway. I genuinely can't remember the last time I introduced a memory stomp, use after free, double delete or slicing issue in a modern C++ codebase.
It's not about memory safety, exactly. It's hard to convey the difference in how I experienced coding in Rust vs in C++/JS/Python/etc.
The big thing is that the absence of shared mutability means you can be very confident that when you "have" a reference to a thing, you "own" it, and it's not going to change value under you. When a bug came up in a C++ codebase I often felt like it could come from anywhere. In Rust the suspect list was much shorter, which made for a more tranquil debug experience.
(I'm told this is also the experience of people writing Haskell, but I've never managed to read any Haskell code, so Rust it is for me.)
TBF, when a 1.0 is released it doesn't mean it's viable right away for things like this. It takes a certain level of market adoption and ecosystem buy-in first.
Also, Zig still isn't 1.0 so if we're measuring languages from when they first became public, I believe those others in your list are much older as well.
Sure, but when these languages first came out they didn't have anything like the library ecosystems they have today. In 2023, you can add a JavaScript or CSS parser to your app with a single line of code. Back then you'd have had to write your own.
No, it’s the other way around. Write everything in a safe language so that you don’t have to worry too much about crashes and other problems, but choose a language that is _fast to write_ for your first attempt. When you write that first program you will not know all of the answers yet, or even all of the questions. You want to explore the space of possible solutions quickly and efficiently, so a dynamic language like Javascript (or Lisp, Python, whatever) is the best choice.
Later once you have figured out how the program should be written, that’s when you go back and rewrite it in a language that is _fast to run_, like Rust. Sure, if you had written it in Rust to start with it would have been fast to start with. The problem is that the exploration would have been far slower, taking years instead of months. And because you haven’t done the exploration, it is unlikely that you will start with the right architecture. That means a lot of factoring and refactoring once you figure out what the right architecture is.
In most cases you can gain far more by writing the program quickly than you can by writing a quick program.
It has always been possible to do this kind of thing (we have C, C++, Java), but I don't think people have been this semi-successful with reimplementing a bunch of common tools before. Where are all the X reimplemented in C/C++/Java?
Languages aren't "optimized compiled" or "interpreted". This is nonsense classification.
The words you are looking for are "language runtime". And even if you used that, you'd still be wrong. Java is exactly that: "even if with a dynamic JIT", and it does perfectly fine and is even sometimes the fastest solution for a problem (I think Java beat everyone on fastest HTTP server with the largest number of simultaneous connections, where second in class was an Erlang program iirc).
Because you don't understand the problem, you are trying to offer a wrong solution: compiling doesn't do anything to speed up programs, for example. What people who want better performance need is:
* Tools to analyze program performance.
* Tools to alter program runtime ahead of time and during the program execution.
* Access to runtime primitives as far down to the "metal" as possible with as little undue effort as possible.
---
The problems with current JavaScript runtimes are that they aren't designed for performance-minded developers. The developers are given highly engineered "primitives" to work with, which make them commit to certain solutions which in turn will make automatic or manual optimizations very hard, next to impossible.
But it doesn't have to be like that. For example, a variant of JavaScript, the version 4 a.k.a. ActionScript had an "escape hatch" -- if you wanted to optimize a program you had simple unchecked memory access with primitive memory operations, which would allow you to side-step all the "bloat" of object lifecycle management. This library was often used to implement various online tools for dealing with a lot of data-processing (eg. image or video compression) and they did just fine.
Current version of JavaScript doesn't have anything like that. But it could as the evidence of its previous version doing it successfully shows.
I have an Informatics Engineering degree with a major in compiler design and systems programming, thanks for the lesson regarding how languages are supposed to be.
Well, it just means that your study was bad... if all you have to say about the subject is that you've completed it.
A person competent in the subject would've had something relevant to say about it.
The problem is that CS studies are, in general, bad. Not just your specific case. In other fields something would've burst your bubble by now and you'd start wondering what other things you might have possibly missed, but because CS studies are so universally bad, virtually everyone you interact with professionally will share your misconceptions.
Ironically, the "hard sciences" as well as math like to pat themselves on the back about how these disciplines open students to critical thinking, requiring proofs and soundness of definitions, and yet your whole taxonomy of the things you interact with professionally is ridiculously wrong, contorted and full of magical thinking.
During your studies, I'm sure, you were given a straight up definition of what a programming language is. You must've taken at least one semester of automata theory -- it's hard to imagine a CS degree w/o it. The premise of this discipline is that there are languages, and throughout the course you discuss their properties, various ways to define them, operate on them etc.
And then one day you take an "intro to CS with language X 101" course. And that b/s course tells you with a straight face that language X is "object oriented" or "functional" or "compiled" or "dynamically typed". And you just eat it up. You never connect the dots between what you've studied in automata course and this intro b/s course. You never ask the question like "so how do I get from states, transitions, initial and final state to... objects?.. or w/e other b/s property the course ascribes to that language.
And now you are waving your diploma in my face and making a fool of yourself... you should probably ask the academic institution who handed you this diploma to reimburse you for the time you wasted there instead. Alas, they won't do it. They won't so much as understand the reason why what they did was a disservice to you. Well... life's unfair.
This is cool, of course. But so was Rome. Which only existed for about two years. It’s one thing to build a cool tool; it’s something else entirely to sustain one over time. I need a bit more proof that this is sustainable before I rebuild our toolchain, _again_.