
Also - for anyone writing code where performance matters, the question isn't when browsers support the syntax, it's when each JS engine's optimizations support it.

E.g.: until a few months ago, just putting `let foo` into a function would cause V8 to bail out (meaning the whole function gets executed slowly, even if the actual `let` statement gets removed as dead code).
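To illustrate the pattern (a sketch of the historical behavior; exact bailout triggers depended on the V8 version):

```javascript
// Sketch of the historical pattern: the mere presence of a `let`
// declaration -- even one that's dead code -- could force older V8
// versions to run the *whole* function on the slow path.
function sumVar(arr) {
  var total = 0;                 // pure ES5 style: optimized fine
  for (var i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

function sumLet(arr) {
  let unused;                    // dead, but historically enough to trigger a bailout
  var total = 0;
  for (var i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

console.log(sumVar([1, 2, 3]), sumLet([1, 2, 3])); // both compute 6
```

At the time you could confirm this sort of thing by running node with V8's `--trace-opt` / `--trace-deopt` flags and watching for the function name in the output.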

Unfortunately I've never found any good references on the optimizability of ES5+ features, so I've been avoiding them so far.



This is a real concern, but it carries the usual caveats about premature optimization: you need to measure regularly to confirm that it is still a real concern, and that the performance landscape hasn't shifted since the last time you measured.

The best suite I've seen is https://kpdecker.github.io/six-speed/ which measures Node and the various modern browsers that Sauce Labs supports, and appears to be run semi-regularly.


Be careful with the results on that page. It shows map-string as being slower in ES6 (using `new Map()`) than in ES5 (using `{}`), and yet I found the opposite in my benchmark[1]: ES6's `Map` is faster.

[1] https://gorhill.github.io/obj-vs-set-vs-map/
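For anyone wanting to sanity-check results like that themselves, here's a rough sketch of such a microbenchmark (timings and conclusions vary wildly by engine and version, so treat it as a starting point, not an answer):

```javascript
// Rough microbenchmark sketch: string-keyed lookups in a plain
// object vs. an ES6 Map. Measure in your own target environment.
const N = 100000;
const keys = [];
for (let i = 0; i < N; i++) keys.push('k' + i);

const obj = {};
const map = new Map();
for (const k of keys) { obj[k] = 1; map.set(k, 1); }

function time(label, fn) {
  const t0 = Date.now();
  const result = fn();
  console.log(label, Date.now() - t0, 'ms');
  return result;
}

const objSum = time('object', () => { let s = 0; for (const k of keys) s += obj[k]; return s; });
const mapSum = time('map',    () => { let s = 0; for (const k of keys) s += map.get(k); return s; });
console.log(objSum === N && mapSum === N); // sanity check: both lookups found every key
```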


That is a great reference, but in general I don't find myself caring much about the raw performance of individual statements that way. My concern is that this or that new syntax will prevent a function from getting inlined, or prevent the engine from inferring type information it otherwise would have inferred, or whatever - just because those bits of the optimizing compiler are newer and less robust.


I agree that this is a valid concern, and I do not trust the six-speed suite to do the right thing here. See for example https://github.com/kpdecker/six-speed/pull/42, where the test claims to measure the speed of destructuring, but in Firefox the result is entirely due to destructuring's effect on the engine's ability to eliminate dead code. While that is relevant to performance, all it means in the end is that if you destructure something and then pointlessly throw away the result, it will run much slower than using an ES5 assignment to pull out the field and then pointlessly throw it away. It says nothing about actual code that destructures and then uses the result vs ES5 code that pulls out the field and then uses the result.

And that PR was closed because it shows up an optimization gap, and kpdecker wants to push vendors to implement optimizations -- which is fine, except this is an optimization for something that is irrelevant to production code.

This might just be an isolated incident, but it shakes my confidence in the utility of the six-speed suite. I actually do want to know whether there's a speed difference between `const { a } = obj` and `const a = obj.a`, and the suite does not test that. (Worse, it kind of claims that it does, but reports something else instead.)
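A sketch of what a fairer test of that question might look like (hypothetical harness; the important part is that each variant actually uses the extracted value, so dead-code elimination can't skew the result):

```javascript
// Compare destructuring vs. ES5 property access while *using*
// the result each iteration, so the engine can't dead-code the
// extraction away.
const objs = [];
for (let i = 0; i < 100000; i++) objs.push({ a: i });

function viaDestructure(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    const { a } = arr[i];   // ES6 destructuring
    sum += a;
  }
  return sum;
}

function viaProperty(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    const a = arr[i].a;     // ES5-style field access
    sum += a;
  }
  return sum;
}

console.log(viaDestructure(objs) === viaProperty(objs)); // same answer; time each to compare
```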

If 'let' prevented inlining, I would want to know, but I'd have to look very closely at the six-speed benchmarks to figure out whether it's detecting that. And the range of subtle reasons for deoptimization is vast, so despite working on a JS engine myself, I doubt I'd be able to tell whether a given microbenchmark is meaningful or not.

(Note that the Firefox devtools do have a "Show JIT Optimizations" view that can tell you why things aren't getting optimized, but it's incredibly cryptic, undocumented, and scaremongering.)


I'd assume the author would be receptive to pull requests for things like the `let` deoptimization.


`let` doesn't deoptimize anymore - actually V8 has made great strides here, and a lot of new syntax will go through the optimizer. More generally though, I wouldn't think the stuff I'm worrying about would show up in microbenchmarks.


https://docs.google.com/document/d/1EA9EbfnydAmmU_lM8R_uEMQ-...

That's the planning doc for V8's optimization of ES2015+ features, and it's an interesting read.


This is an important distinction, and as far as I know, none of the new features are very optimized. If you really want to write performant JS without a build step, you basically have to write it ES3-5 style: use `for` loops instead of `map` and `forEach`, and so on.
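To make that concrete, here's a sketch of the same transformation written both ways (whether the ES5 version is actually faster depends on the engine, so measure before committing to it):

```javascript
// The same pipeline written two ways. Historically the ES5-style
// for loop was the safer bet for hot code, since the modern form
// allocates closures and an intermediate array.
const input = [1, 2, 3, 4];

// Modern style: concise, chainable.
const doubledModern = input.map(x => x * 2).filter(x => x > 2);

// ES3-5 style: a single pass, no intermediate allocations.
var doubledOld = [];
for (var i = 0; i < input.length; i++) {
  var v = input[i] * 2;
  if (v > 2) doubledOld.push(v);
}

console.log(doubledModern); // [4, 6, 8]
console.log(doubledOld);    // [4, 6, 8]
```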


> none of the new features are very optimized

I think for-of is getting pretty good. It's a bit of a pain to optimize because the iteration protocol in ES6 is designed in such a way that you have to do heroics (scalar replacement) to have any hope of optimizing it well. But engines are getting there.
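To illustrate why, here's roughly what `for (const x of arr)` desugars to under the ES6 iteration protocol (a sketch, not any engine's actual lowering):

```javascript
// Every step of the protocol conceptually allocates a fresh
// { value, done } result object -- that's the allocation an engine
// must prove away (scalar replacement) for for-of to match the
// speed of a plain index loop.
const arr = [10, 20, 30];
let sum = 0;

const iterator = arr[Symbol.iterator]();
let step = iterator.next();          // e.g. { value: 10, done: false }
while (!step.done) {
  sum += step.value;
  step = iterator.next();
}
console.log(sum); // 60
```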



