"Every few years, the proposal pops up that engines should offer a way to precompile scripts so we don’t waste time parsing or compiling code. The idea is that if, instead, a build-time or server-side tool could just generate bytecode, we’d see a large win on start-up time. My opinion is that shipping bytecode can increase your load time (it’s larger), and you would likely need to sign the code and process it for security. V8’s position for now is that we think exploring ways to avoid reparsing internally will yield a decent enough boost that precompilation may not offer too much more, but we are always open to discussing ideas that can lead to faster startup times."
Surprised there was no mention of WebAssembly, which does exactly this.
I feel like there may be enough internal pressure from groups within Apple, Google, and Microsoft, who want DOM integration available so they can experiment with compiling the likes of Swift, Dart, and C# to it, that it will become a priority once the MVP has shipped.
PoC mentioned in the wasm FAQ.
How do you know when to release those handles so the GC can clean them up?
For something more general-purpose I'd probably just use a destructor (whether C++ or Rust) with the same guarantee of single ownership of the underlying ID, so the DOM node is automatically freed once the native handle goes out of scope.
Edit: Of course, you'd have to prohibit copies on the native side.
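For a sense of what the JS-facing side of such a handle might look like, here's a minimal sketch; the `native` object and its `retainNode`/`releaseNode` functions are hypothetical stand-ins for an engine-provided binding, stubbed out so the snippet runs on its own:

    // Hypothetical native binding that hands out integer IDs for DOM nodes.
    const native = {
      nextId: 1,
      retainNode(selector) { return this.nextId++; },     // stub
      releaseNode(id) { console.log('freed node', id); }  // stub
    };

    class NodeHandle {
      constructor(selector) { this.id = native.retainNode(selector); }
      free() {
        if (this.id !== null) {
          native.releaseNode(this.id); // hand the ID back so GC can collect the node
          this.id = null;              // prevent double-free
        }
      }
    }

    // Single ownership with explicit, deterministic release: the JS
    // analogue of a destructor firing at scope exit.
    const handle = new NodeHandle('#app');
    try {
      // ... use handle.id ...
    } finally {
      handle.free();
    }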
"bytecode can increase your load-time (it’s larger)"
how come bytecode is larger than plain text?
I don't know the specifics of the V8 bytecode, but for the AVM2, which loads pre-compiled ActionScript as bytecode, the load time is faster; faster than the JVM's, for example.
One thing I've heard recently is "My app is big, so it has a lot of code, that's not going to change so make the parser faster or let me precompile".
The problem with this is thinking that an app is a monolith. An app is really a collection of features, of different sizes, with different dependencies, activated at different times. Usually features are activated via URLs or user input. Don't load them until needed, and now you don't worry about the size of your app, but the size of the features.
This thinking might stem directly from misuse of imports. It seems like many devs think an import means something along the lines of "I'll need to use this code at some point and need a reference to it". But what an import really means is "I need this other module for the importing module to even _initialize_". You shouldn't statically import a module unless you need it _now_. Otherwise, dynamically import a module that defines a feature, when that feature is needed. Each feature/screen should only statically import what it needs to initialize the critical parts of the feature, and everything else should be dynamic.
With ES modules this is quite easy. Reduce the number of these:

    import * as foo from '../foo.js';

and use more of these:

    const foo = await import('../foo.js');
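For example, a feature can be pulled in the first time the user actually activates it; the element IDs, module path, and initEditor function here are made up for illustration:

    // Load the "editor" feature on demand instead of statically
    // importing it at startup.
    document.querySelector('#open-editor').addEventListener('click', async () => {
      const { initEditor } = await import('./features/editor.js');
      initEditor(document.querySelector('#editor-root'));
    });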
The approach you advocate can have adverse side effects on web performance. It would help with the initial load time due to reduced initial JS, but if you end up loading a handful, or even dozens, of additional JS modules asynchronously, you're talking about a lot of extra HTTP requests. Over HTTP/1 that's a big problem, and even over HTTP/2 each additional asset sent over a multiplexed connection has overhead (~1ms or more).
Of course, you could always do server push, but hey...that's pretty close to what a single bundled file is :)
Please do. The fastest code to parse is the code that doesn't exist.
Most likely, compressed opcode.
It's been a bit, and I'm going from memory here, so forgive me if I'm wrong, but...
V8 is introducing a new "interpreter" mode to help here, so that the page can start being interpreted ASAP and no JIT overhead forces the system to wait for a first compilation pass. And in the long run they want to pull two of the JITs out of the engine to simplify the pipeline and speed up first execution (along with reducing memory usage and simplifying the codebase to allow for faster and easier additions and optimizations).
It's a great move, but it means that things are going to get slightly worse before they get better.
The "old" V8 had 3 compilers, "Fullcodegen", "crankshaft", and "turbofan" . The current V8 has those 3 + Ignition , so it's just adding more on now. But over time they will be removing crankshaft and fullcodegen and it will leave them with a really clean and fast engine .
If anyone is interested,  is a fantastic talk on this and other plans they have for V8, and it's very accessible for those who don't know a thing about JS engines.
(sorry about the links to google sheets here, it's the only place I can seem to find the infographics)
edit: Removed comment about Edge, it was more assumption than anything.
I am not aware that Edge is "stupidly fast" on startup. Safari though, is indeed currently leading the field.
As you correctly outlined, V8 is indeed transitioning to a world with an interpreter+optimizing compiler only. If you are using Chrome Canary, there is a chance that you are already using the new pipeline :-).
Full disclosure: I work on the V8 team.
ECL and CLISP both settled on this model. CLISP started out as a bytecode interpreter and added GNU Lightning for native code generation. ECL started out as a source-to-source compiler to C and added a bytecode interpreter; both follow C calling convention as much as possible, at first for easy interop with C code and then for easy interop for the bytecode interpreter, which avoids the C++ interop problems mentioned in the talk.
How much of what's remaining is performance versus stability/correctness?
As for the Ignition+TurboFan setup, are you really that far along already?
Last I heard a few months ago it still sounded like it was gonna be a while before TurboFan was fast enough in most cases to be able to handle it.
If so that's awesome!
Granted, this isn't a strict apples-to-apples comparison, but the differences are so drastic and always in Apple's favour. That, combined with the actual, physical differences in speed of the processors on the phone itself, indicates that it's not just Safari.
Also, note that even in the bitcode model, code is compiled at the store instead of frying eggs on the device.
Doing real-time audio applications on Android is a pain, from the presentations I have watched, even with the NDK.
 They are always too busy saying, "Just use React".
The author made a statement. They chose to post on Medium; they could have posted the content anywhere. They chose to give us the above lesson while ignoring it. Do as I say, not as I do.
Who cares? Why not simply stop using such shitty services?
I didn't, please check your facts. Once again someone tells someone not to do something they're not doing. Well done.
Time is a scarce thing that technically apt people often have in limited supply. He can spend days rolling his own blog app on his own super-custom optimized framework, and then do all the additional work of getting that content indexed on Google and sprinkling SEO black magic all over it, or he can just put up a blog post on a service where somebody else does all of that for him.
Unless you're really into that sort of thing, or are not fully employed, the time-saving option is the most sensible one if it's "good enough." Which Medium is, as evidenced by the fact that we're all talking about it.
As well as several blog-like posts on Google+: https://plus.google.com/+AddyOsmani
There are numerous official Google blogs, like the Chromium blog: https://blog.chromium.org/
As to why he chose Medium over his existing blogs, only he can tell you. My guess would be that he is using Medium to reach a bigger audience.
The confusing thing is that when he says "we" and "us" (talking about his team), it's unclear, since it's on Medium. This really should be up on the Google dev blogs if it's official.
If we're talking about having to conserve time, that seems like a clear waste, either way.
It was, of course, a rhetorical question. We all know the answer.
People publish on Medium because we messed up. RSS died/was killed and we turned to these centralized solutions, instead. Even its name "Medium" is telling.
When your blog isn't your main job (or anything close to it), I really don't see the problem with using a service like Medium. We're all busy.
When it operates in exactly the manner the author tells us it shouldn't, don't use it.
The blog post repeatedly makes the point that you should measure everything you do and that you shouldn't apply blanket rules to your coding. I think the same logic applies here. The post being on Medium just isn't that relevant.
Posting on a site which insists on breaking the author's first rule just looks silly.
Even if one agrees with that attitude, which I don't, I'd argue it is the author's job to care.
Contributing to a bloated centralized service is not healthy for the Web. Making sure the open Web remains a relevant platform is in his best interest, even from a purely egoistic point of view.
No need to implement yet another one.
Or use one of the existing Google portals they use for sharing content about their products, research, technical findings...
> Staff Engineer at Google working with the Chrome team
Obviously they have the resources to self host in a sane manner.
"Half a megabyte of packed JS" still sounds way overkill for delayed image loading.. =)
I scroll through the page and no more data is transferred.
I guess what I'm curious about is how you see images at all since I don't get them unless JS is turned on.
Well I guess Medium isn't far from the median.
edit: Chrome actually does that, as mentioned in the article - but what about Chrome Mobile and Firefox/Safari/IE?
"Chrome 42 introduced code caching — a way to store a local copy of compiled code so that when users returned to the page, steps like script fetching, parsing and compilation could all be skipped. At the time we noted that this change allowed Chrome to avoid about 40% of compilation time on future visits, but I want to provide a little more insight into this feature:
1. Code caching is triggered for scripts that are executed twice in 72 hours.
2. For scripts of Service Workers: code caching is triggered for scripts that are executed twice in 72 hours.
3. For scripts stored in Cache Storage via a Service Worker: code caching is triggered on the first execution.
So, yes. If our code is subject to caching, V8 will skip parsing and compiling on the third load."
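That third point is why precaching scripts through a Service Worker pays off: anything you put in Cache Storage is eligible for eager code caching. A minimal sketch of such a worker (the cache name and file paths are made up):

    // sw.js: precache the app's scripts in Cache Storage at install time.
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open('app-v1').then((cache) =>
          cache.addAll(['/js/app.js', '/js/vendor.js'])
        )
      );
    });

    // Serve from the cache first, falling back to the network.
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });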
I'd understand if nobody could make JS go fast, but clearly Apple and MS are proving in real-world-ready code that JS can be quickly parsed and executed.
However, Edge doesn't appear to have an advantage: the graph shows Chrome on a ThinkPad T430 beating Edge on the same hardware.
Also, it shows the Nexus 5X (Qualcomm 808 processor) beating the Pixel XL (Qualcomm 821 processor).
Is this accurate? Seems fishy, but I haven't run any benchmarks.
Not that it discounts Apple's massive perf advantage.
Basically the state of the art of Android is where the iPhone 5S was. Is that phone even on sale now?
The problem with Android for the past few years has been that it hasn't really followed that rule: single core performance on current Android phones is not significantly better than it was on the phones of 3 - 4 years ago. This stands in stark contrast with the iPhone.
Now clearly this isn't just about raw processor performance: it's about the software running on those processors and here Safari clearly wins over Chrome.
You'd think my iPhone 5S, which is still my everyday phone after more than 3 years of heavy use, should by now be about the equivalent of a low-end Android device. The problem is that, in terms of performance at any rate, it's not: it's streets ahead.
Now we all know that sooner or later Apple will run into a single-core performance wall and will have to scale outwards and, hopefully, when that happens they'll invest in a way that gets a better experience than developers and users have had with Android to this point.
(Also, hopefully things will significantly improve on Android - the article suggests so - because enough people have certainly been bellyaching about it, including me.)
Sort of gives added point to the thesis, by providing a marvelous example of something you should never, ever do.
Looking at how HTTPS adoption has grown with Google giving HTTPS sites an SEO boost and Chrome giving scary warnings, I wonder if the solution isn't to do the same with JS. Throw in some SEO incentives for pages with minimal JS and see what the market does.
A decent first step would be for Node.js to deprecate CommonJS and use only ES6 modules, forcing libraries to update or be deprecated. So let's have this discussion 10 years from now.
How do ES6 modules solve this? You do realise they are mostly sugar?
This means that if your application only uses ES6 modules, newer bundlers such as Webpack 2 and Rollup can perform "tree-shaking": statically analysing your codebase to determine which code is actually used, and then pruning dead code from the final bundle.
So if your code imports something like Lodash with hundreds of functions but you only call one of them, then only that one function will be in your final bundle.
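For instance, assuming the lodash-es package (the ES-module build of Lodash):

    // With "import _ from 'lodash'" you'd pull in the whole library
    // object, which bundlers can't prune. Importing one function from
    // the ES-module build lets the rest be shaken out of the bundle.
    import debounce from 'lodash-es/debounce.js';

    // A handler we want to rate-limit (stand-in for real work).
    const update = () => console.log('resized');

    window.addEventListener('resize', debounce(update, 100));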
Bundling everything together is best, except in the cases where it isn't and you need some kind of file to be shared globally. Not necessarily in a global web context; the site's own context works too. You want assets to be cacheable.
Regardless, big libraries on CDNs don't make as much sense nowadays as they did maybe 5 years ago. It's not like everybody is still using jQuery. There are too many different mainstream libraries, with too many versions.
This is not mentioned or discussed despite all the thoughts on how to speed things up. Anyone know what Safari is doing?
The gap in mobile web perf between all recent Apple phones and even the best of the best that Google and their partners offer is staggering.