
JavaScript Start-up Performance - chriswwweb
https://medium.com/@addyosmani/javascript-start-up-performance-69200f43b201
======
hacker_9
_" Precompiling JavaScript?

Every few years, it’s proposed engines offer a way to precompile scripts so we
don’t waste time parsing or compiling code pops up. The idea is if instead, a
build-time or server-side tool can just generate bytecode, we’d see a large
win on start-up time. My opinion is shipping bytecode can increase your load-
time (it’s larger) and you would likely need to sign the code and process it
for security. V8’s position is for now we think exploring avoiding reparsing
internally will help see a decent enough boost that precompilation may not
offer too much more, but are always open to discussing ideas that can lead to
faster startup times."_

Surprised there was no mention of webassembly, which does exactly this.

~~~
tomdale
WebAssembly doesn't give you access to DOM APIs, so it's not like you could
rewrite Angular in wasm, for example. Most load time/parse time discussions
are in the context of web frameworks, but they're typically not able to
benefit from the performance improvements of WebAssembly, particularly given
the overhead of moving data into and out of the wasm context.

~~~
stupidcar
Access to DOM APIs isn't part of the MVP, but it is listed as a high-level
goal[1] and is specced as a future feature[2].

[1]
[https://github.com/WebAssembly/design/blob/master/HighLevelG...](https://github.com/WebAssembly/design/blob/master/HighLevelGoals.md)
[2]
[https://github.com/WebAssembly/design/blob/master/GC.md](https://github.com/WebAssembly/design/blob/master/GC.md)

~~~
tomdale
I'm excited for this and eagerly look forward to adopting it in Ember as soon
as it's mature, but I'm guessing this is still at least a few years out.

~~~
Bahamut
Even when it's out and mature, there is some complexity around gracefully
degrading when it is not available - it is going to increase frontend
complexity dramatically I suspect, at least until browser support dictates
that graceful fallback isn't necessary for the feature.

~~~
callahad
Most current WebAssembly compilation paths produce asm.js as an intermediate
artifact. In the near term, backwards compatibility should be relatively
simple: save both and load the asm.js code as a fallback.
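A minimal sketch of that fallback, feature-detecting WebAssembly at runtime; the `app.wasm`/`app.asm.js` file names are hypothetical placeholders:

```javascript
// Feature-detect WebAssembly; fall back to the asm.js build if absent.
// The eight bytes below are the smallest valid wasm module (magic number
// + version), handy for confirming the runtime can actually decode wasm.
function wasmSupported() {
  if (typeof WebAssembly !== 'object') return false;
  const emptyModule = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
  return WebAssembly.validate(emptyModule);
}

// Pick which artifact to load; app.wasm / app.asm.js are placeholder names.
function pickArtifact() {
  return wasmSupported() ? 'app.wasm' : 'app.asm.js';
}
```

In a page you'd then inject a `<script>` tag (or instantiate the wasm module) for whichever artifact was picked.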

~~~
mrec
Or you can do it on the client side by hooking into the module loader and
decompiling the wasm back into asm.js. See for example the
[polyfill-prototype-1](https://github.com/lukewagner/polyfill-prototype-1)
PoC mentioned in the wasm FAQ.

------
spankalee
I'm always shocked at how reluctant sites are to actually ship less code, and
I think some of this comes down to a needed shift in thinking about what an
application is, what modules are, and how to use imports.

One thing I've heard recently is "My app is big, so it has a lot of code,
that's not going to change so make the parser faster or let me precompile".

The problem with this is thinking that an app is a monolith. An app is really
a collection of features, of different sizes, with different dependencies,
activated at different times. Usually features are activated via URLs or user
input. Don't load them until needed, and then you no longer worry about the
size of your app, but the size of the features.

This thinking might stem directly from misuse of imports. It seems like many
devs think an import means something along the lines of "I'll need to use this
code at some point and need a reference to it". But what an import really
means is "I need this other module for the importing module to even
_initialize_". You shouldn't statically import a module unless you need it
_now_. Otherwise, dynamically import a module that defines a feature, when
that feature is needed. Each feature/screen should only statically import what
it needs to initialize the critical parts of the feature, and everything else
should be dynamic.

With ES modules (and the dynamic `import()` proposal) this is quite easy.
Reduce the number of these:

    import * as foo from '../foo.js';

and use these as much as possible:

    const foo = await import('../foo.js');

Then use a bundler that's dynamic import aware and doesn't bundle them
unnecessarily. Boom, less JS on startup.
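A sketch of that feature-on-demand pattern. The route table and feature names are hypothetical; in a real app each loader would be `() => import('./settings.js')` and so on, simulated here with plain async functions so the sketch is self-contained:

```javascript
// Map each route to a lazy loader. A dynamic-import-aware bundler
// splits these into separate chunks fetched only when the route is hit.
const routes = {
  '/settings': async () => ({ render: () => 'settings screen' }),
  '/editor':   async () => ({ render: () => 'editor screen' }),
};

async function activate(path) {
  const load = routes[path];
  if (!load) throw new Error(`unknown route: ${path}`);
  const feature = await load(); // module loads here, not at startup
  return feature.render();
}
```

Only the feature for the current URL gets parsed and compiled; the rest of the app never touches the main thread until needed.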

~~~
calvin
You still need good solutions for bundling assets because network connections
aren't free.

The approach you advocate can have adverse side effects on web performance. It
would help with the initial load time due to reduced initial JS, but if you
end up loading a handful or dozens of additional JS modules asynchronously
you're talking about a lot of extra HTTP requests. Over HTTP/1 that's a big
problem, and even over HTTP/2 each additional asset sent over a multiplexed
connection has overhead (~1ms or more).

~~~
ehsanu1
Where's this ~1ms overhead for an additional asset on an HTTP/2 connection
coming from? Do you have a reference to a benchmark or something that
demonstrates it?

~~~
paulddraper
IDK what calvin had in mind, but the client pull model you suggest can require
a lot of round trips. Request a, parse a, execute a, request b, parse b,
execute b, request c, parse c, execute c.

Of course, you could always do server push, but hey...that's pretty close to
what a single bundled file is :)
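That waterfall can be sketched like this; `fetchModule` is a stand-in for a network round trip with some latency:

```javascript
// Stand-in for a module fetch with simulated network latency.
const fetchModule = (name) =>
  new Promise((resolve) => setTimeout(() => resolve(name), 20));

// Waterfall: each request waits for the previous one (3 round trips),
// which is what naive chained dynamic imports can degrade into.
async function loadSequential() {
  const a = await fetchModule('a');
  const b = await fetchModule('b');
  const c = await fetchModule('c');
  return [a, b, c];
}

// If the dependency graph is known up front, fetch in parallel (1 round
// trip of latency), which is roughly what a bundle or server push buys you.
async function loadParallel() {
  return Promise.all(['a', 'b', 'c'].map(fetchModule));
}
```

Both return the same modules; the difference is that the sequential version pays the latency three times over.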

------
SCdF
> Ship less JavaScript.

Please do. The fastest code to parse is the code that doesn't exist.

~~~
amelius
Except it doesn't make sense if the JS code expands to a number of machine
instructions that would take longer to transfer over the network than the
transfer and parsing of the JS code combined.

~~~
diogofranco
Although it is hard to believe that JS code would be exactly the best
compression for those machine instructions.

~~~
dualogy
How about minified JS? What _would_ be the best? Consider the massive amount
of machine (or even IR) code that each identifier, built-in function, loop,
switch and all other "syntax sugar" aka high-level-language-construct
represents..

~~~
marcosdumay
> What would be the best?

Most likely, compressed opcode.

~~~
dualogy
Could well be, depending on the app, dunno.. it's still the case that on the
whole practically every lexeme in a high-level language expands into a giant
ball of opcodes.. ;)

~~~
dualogy
I mean you _can_ compress those identifiers further into "op-codes-of-sorts"
by renaming each to a 1-char but then that's what any minifier does

~~~
flamedoge
why stop there? just compress the entire dang thing and add decompressor in
the execution.

~~~
dualogy
"We're" (many) already doing that _too_ via gzipped responses (faster to
decompress than to compress == neat for web uses). That's my point, a higher
language _is_ already "compressing" machine representation (all abstractions
kinda do), minification turns its lengthy identifiers into minimal "codes",
then the gzip.

~~~
flamedoge
hm.. I assume gzip is doing a lot of entropy reduction already.. curious if
minification has reduced or nil effect after gzip.

------
geocar
I am surprised at just how much faster the iPhone is than the next-nearest
Google device (a laptop). I haven't used an Android device for a long time,
but I hear "Apple is overpriced" so often that I assume someone has been
checking this out.

~~~
M4v3R
Actually this is probably thanks to Safari's JS engine. From the graph [1] it
seems that Safari is roughly 3x faster than Chrome at parsing and compiling JS
code on the same MacBook Pro.

[1] [https://cdn-
images-1.medium.com/max/2000/1*dnhO1M_zlmAhvtQY_...](https://cdn-
images-1.medium.com/max/2000/1*dnhO1M_zlmAhvtQY_7tZmA.jpeg)

~~~
Klathmon
Yeah, Safari has stupidly fast startup times compared to Chrome. It's one of
the big things that they are working on in their engine.

It's been a bit, and I'm going from memory here, so forgive me if i'm wrong,
but...

V8 is introducing a new "interpreter" mode to help here, so that the page can
start being interpreted ASAP without JIT overhead forcing the system to wait
for a first compilation pass. And in the long run they want to pull out 2 of
the JITs in their engine to simplify the process and speed up the first
execution (along with reducing memory usage, and simplifying the codebase to
allow for faster and easier additions and optimizations).

It's a great move, but it means that things are going to get slightly worse
before they get better.

The "old" V8 had 3 compilers, "Fullcodegen", "crankshaft", and "turbofan" [0].
The current V8 has those 3 + Ignition [1], so it's just adding more on now.
But over time they will be removing crankshaft and fullcodegen and it will
leave them with a really clean and fast engine [2].

If anyone is interested, [3] is a fantastic talk on this and other plans they
have for V8, and it's very accessible for those who don't know a thing about
JS engines.

(sorry about the links to google sheets here, it's the only place I can seem
to find the infographics)

[0]
[https://docs.google.com/presentation/d/1OqjVqRhtwlKeKfvMdX6H...](https://docs.google.com/presentation/d/1OqjVqRhtwlKeKfvMdX6HaCIu9wpZsrzqpIVIwQSuiXQ/edit#slide=id.g1453eb7f19_0_359)

[1]
[https://docs.google.com/presentation/d/1OqjVqRhtwlKeKfvMdX6H...](https://docs.google.com/presentation/d/1OqjVqRhtwlKeKfvMdX6HaCIu9wpZsrzqpIVIwQSuiXQ/edit#slide=id.g1453eb7f19_5_97)

[2]
[https://docs.google.com/presentation/d/1OqjVqRhtwlKeKfvMdX6H...](https://docs.google.com/presentation/d/1OqjVqRhtwlKeKfvMdX6HaCIu9wpZsrzqpIVIwQSuiXQ/edit#slide=id.g1453eb7f19_0_391)

[3] [https://youtu.be/r5OWCtuKiAk](https://youtu.be/r5OWCtuKiAk)

edit: Removed comment about edge, it was more assumption than anything.

~~~
natorion
Great summary with a few nits!

I am not aware that Edge is "stupidly fast" on startup. Safari though, is
indeed currently leading the field.

As you correctly outlined, V8 is indeed transitioning to a world with an
interpreter+optimizing compiler only. If you are using Chrome Canary, there is
a chance that you are already using the new pipeline :-).

Full disclosure: I work on the V8 team.

~~~
Klathmon
I gotta be honest, I've only heard that in comments and "not so reliable"
sources, so I should probably remove it while I can.

As for the Ignition+TurboFan setup, are you really that far along already?

Last I heard a few months ago it still sounded like it was gonna be a while
before TurboFan was fast enough in most cases to be able to handle it.

If so that's awesome!

~~~
natorion
It is progressing quite nicely. We still don't know when it will be ready to
graduate from Canary/Dev though. Lots of stuff to do!

------
yunolisten
> Ship less JavaScript

posted on a site shipping 456.1KB of packed JavaScript

~~~
M4v3R
Good point, but to be fair, the author probably doesn't have any control over
this, as this is the default for all Medium blogs. It's another question why
Medium decided it is a good idea to ship almost 80k lines of JavaScript code
for a page whose only purpose is to display a blog post.

~~~
tambourine_man
And yet another question is why a technically apt person, one obviously
concerned with page optimization, would choose to give up control of how
their content is served.

~~~
thehardsphere
Really?

Time is this really scarce thing that technically apt people often have a
limited supply of. He can spend days rolling his own blog app from his own
super custom optimized framework and then do all the additional work of
getting that content indexed on Google and sprinkle SEO black magic all over
it, or he can just put up a blog post on a service where somebody else does
all of that for him.

Unless you're really into that sort of thing or are not fully employed, the
time-saving option is the most sensible one if it's "good enough." Which
Medium is, as evidenced by the fact that we're all talking about it.

~~~
dalore
You know he actually does have his own blog:
[https://addyosmani.com/blog/](https://addyosmani.com/blog/)

As well as several blog like posts at google plus
[https://plus.google.com/+AddyOsmani](https://plus.google.com/+AddyOsmani)

There are numerous official google blogs like the chromium blog
[https://blog.chromium.org/](https://blog.chromium.org/)

As to why he chose medium over his existing blogs only he can tell you. My
guess would be that he is using Medium to reach a bigger audience.

The confusing thing is that he says "we" and "us" (talking about his team)
even though it's on Medium. This really should be up on the Google dev
blogs if it's official.

~~~
yunolisten
> This really should be up on the Google dev blogs if it's official.

Exactly.

~~~
shurcooL
Whoa, I didn't realize this was an official blog post.

------
mschuster91
Hmm. Usually browsers cache the raw CSS and JS assets - could that be improved
so that browsers cache the compiled CSS/JS? That wouldn't help for the first
load, of course - but quite a lot for sites like newspapers which are not SPAs
but bundle a metric ton of JS cr.p for each page load.

edit: Chrome actually does that, as mentioned in the article - but what about
Chrome Mobile and Firefox/Safari/IE?

~~~
hacker_9
From the article:

 _" Chrome 42 introduced code caching — a way to store a local copy of
compiled code so that when users returned to the page, steps like script
fetching, parsing and compilation could all be skipped. At the time we noted
that this change allowed Chrome to avoid about 40% of compilation time on
future visits, but I want to provide a little more insight into this feature:

1. Code caching triggers for scripts that are executed twice in 72 hours.

2. For scripts of Service Worker: Code caching triggers for scripts that are
executed twice in 72 hours.

3. For scripts stored in Cache Storage via Service Worker: Code caching
triggers for scripts in the first execution.

So, yes. If our code is subject to caching V8 will skip parsing and compiling
on the third load."_

~~~
underwater
Something I've wanted for a long while is the ability to warm the codegen
cache. It should be possible to instruct the browser to load and parse
JavaScript so that on subsequent page loads execution can begin immediately.
This would work really well with the model that Service Workers are moving
towards.
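Per the code-caching rules quoted above (scripts stored in Cache Storage via a Service Worker get code caching on first execution), one way to approximate that warming today is precaching scripts from a service worker. A sketch, with a hypothetical asset list and cache name:

```javascript
// service-worker.js (sketch): precache scripts into Cache Storage on install.
// Per the article, scripts served from Cache Storage get V8 code caching
// from their first execution instead of waiting for the third load.
const ASSETS = ['/js/app.js', '/js/vendor.js']; // hypothetical paths

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('precache-v1').then((cache) => cache.addAll(ASSETS))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```

This only runs in a service worker context, so it's a fragment rather than a runnable script.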

------
atonse
> Ship less JavaScript.

Not at all the message I got from this. Because judging by Safari on mobile
and desktop, clearly the issue isn't JavaScript; it's that, for whatever
reason, with all the insane resources behind V8 and Android, they're simply
unable to get their interpreters to reach the speed of Safari or Edge.

I'd understand if nobody could make JS go fast, but clearly Apple and MS are
proving in real-world-ready code that JS can be quickly parsed and executed.

~~~
kbutler
Yes, Safari seems to beat everybody, and Apple's mobile processors excel at
single-thread performance.

However, Edge doesn't appear to have an advantage - the graph shows chrome on
a Thinkpad T430 beating Edge on the same hardware.

Also, it shows the Nexus 5X (Qualcomm 808 processor) beating the Pixel XL
(Qualcomm 821 processor).

Is this accurate? Seems fishy, but I haven't run any benchmarks.

[https://cdn-images-1.medium.com/max/800/1*dnhO1M_zlmAhvtQY_7...](https://cdn-
images-1.medium.com/max/800/1*dnhO1M_zlmAhvtQY_7tZmA.jpeg)

~~~
rtpg
given Android's execution model (closer to a desktop OS, with many things
running in userspace, constant context switching) compared to iOS's "one thing
running at a time" model (closer to a game console OS), my guess is that
Android benchmarks are less reliable.

Not that it discounts the massive perf advantage Apple has.

------
kebolio
Would have liked to have seen Firefox's particular handling of this, but it
seems like the author's definition of "browsers" (plural) == Chromium.

------
throwanem
I love that this article, which is quite good, appears on a site whose every
page crashes and boot-loops multiple times in iOS Safari and has to be
repeatedly reloaded by the browser, presumably running less of its code each
time, in order to correctly render some text with images in it.

Sort of gives added point to the thesis, by providing a marvelous example of
something you should never, ever do.

~~~
HalfwayToDice
The website crashes on this MacBook Air running Mavericks too. Absurd.

~~~
throwanem
I mean, if I were on something older than an SE, I'd shrug and figure it was
fair enough, since iOS devices do seem to age badly in my experience. But
seriously...

------
r1ch
I don't think things are going to get better until browser makers (or Google)
start forcing things. It's fine to say "Use less JS" but without an incentive,
things aren't really going to change.

Looking at how HTTPS adoption has grown with Google giving HTTPS sites an SEO
boost and Chrome giving scary warnings, I wonder if the solution isn't to do
the same with JS. Throw in some SEO incentives for pages with minimal JS and
see what the market does.

~~~
moxious
The market is already incentivized; if the performance of your site is lousy,
people won't like it. I think fiddling with SEO incentives is too big of a
hammer to swing at what is basically a very narrow technical problem, overall
JS parse volume.

~~~
r1ch
The sad thing is people are so used to this modern web crap that I don't think
they will stop using it. The Facebook "we crashed the app and users never
stopped coming back" experiment comes to mind. Sure, technical users like
anyone on HN will probably know what it could be like, but a lot of users
likely have no idea how fast the web can be. A site that takes 2 seconds to
load is "fast". 5+ seconds to load an article on mobile while the page is
jumping all over the place is "normal".

------
czbond
My take on JS performance: 1) wait for mobile CPUs to catch up with the
current JS flow [not acceptable, but the easiest path], 2) WebAssembly to
create a native experience, or 3) something like Elm as a JS replacement
(front-end compilation). We know interpreted langs are slow; however, JS is
slow now where people see it most [the front end]… they don't see the slowness
of the "back end". For example, imagine if a user had to wait for "npm
install" when they used the server for the first time. The JS community has a
great habit of adding "all the things" for significant bloat [a module for
leftpad, and lots of go-arounds due to the language being bolted on for its
current use rather than designed from the ground up]. It's 6am in Cali… so
take it with a grain of salt.

------
saosebastiao
> Ship less JavaScript.

Except that it is nearly impossible without a dedicated team optimizing your
code, because JavaScript is so hard to optimize and dead code is so hard to
eliminate. With ES6 modules that could potentially get better... but good luck
getting the rest of your libraries (and the 5 bajillion level-2+ dependencies
that they require) on board with that.

A decent first step would be for Node.js to deprecate CommonJS and only use
ES6 modules, forcing libraries to update or be deprecated. So let's have this
discussion 10 years from now.

~~~
rounce
> A decent first step would be for nodejs to deprecate commonjs and only use
> es6 modules...

How do ES6 modules solve this? You do realise they are mostly sugar?

~~~
spiralx
No, ES6 modules aren't just syntactic sugar over CommonJS modules, they are a
different type of module entirely. One big difference is that ES6 modules have
a static structure so that you can determine imports and exports from the
source code alone, unlike AMD or CommonJS modules which determine their
imports and exports dynamically at runtime. Import and export statements can
only occur at the top-level of a module and don't accept expressions of any
kind, so they can't be conditional or accept parameters.

This means if your application is only using ES6 modules then newer bundlers
such as Webpack 2 and Rollup can perform "tree-shaking": statically analysing
your codebase to determine what code paths are used, and then pruning dead
code from the final bundle.

So if your code imports something like Lodash with hundreds of functions but
you only call one of them, then only that one function will be in your final
bundle.

------
afghanPower
I think more people should look into ClojureScript with its advanced
optimizations, which I believe could help achieve the goal of "shipping less
JavaScript".

------
txprog
I'm confused.

One of the ideas behind CDNs for JavaScript/CSS is to leverage caching by
reusing the same resources across websites. But then optimization tools said
we should bundle everything into one JavaScript file, which delays the load
time and defeats the initial purpose.

I wonder if the caching could be more intelligent, by recognizing libraries
bundled into the "big" JavaScript that a website delivers, and parsing only
the new content.

~~~
whatever_dude
It's not black-and-white. More of a balance kind of thing.

Bundling everything together is best, except in cases where it isn't and you
need some kind of file to be shared globally. Not necessarily in a global web
context; just the site's own context works too. You want assets to be
cacheable.

Regardless, big libraries on CDNs don't make as much sense nowadays as they
did maybe 5 years ago. It's not like everybody is still using jQuery. There
are too many different mainstream libraries, with too many versions.

------
isoos
Slightly related: pre-parsing code and loading already-initialized application
state has been available in the Dart VM for a long time now, and the technique
yields faster startup times:

[https://www.dartlang.org/articles/dart-
vm/snapshots](https://www.dartlang.org/articles/dart-vm/snapshots)

~~~
dchest
and in V8 [https://v8project.blogspot.ch/2015/09/custom-startup-
snapsho...](https://v8project.blogspot.ch/2015/09/custom-startup-
snapshots.html)

------
gleb
I am surprised iPhone 7 looks to be 4x faster than 6s. Is this running the
same iOS version?

------
hawski
I'm wondering how desktop and Android Firefox compares to provided performance
figures.

------
abalone
Worth noting: Safari 10 is 3X faster at parsing than Chrome 55, according to
their chart. (See the MacBook Pro lines.)

This is not mentioned or discussed despite all the thoughts on how to speed
things up. Anyone know what Safari is doing?

------
steveadoo
I'm not sure the Angular 2 AOT bit belongs here. Angular 2's AOT just parses
your HTML into the backing angular component classes. It's still all
Javascript that has to be loaded and parsed.

------
tbrock
Wow, I definitely feel this. I just switched from an iPhone to an Android
device and it does seem like a second class phone experience.

The gap between all recent Apple phones and even the best of the best Google
and their partners offer with respect to mobile web perf is staggering.

------
jijji
Why not have the JS engine, i.e. V8, parse and compile one time, and then
refer back to this object code each time to eliminate the startup delay...
either via an HTTP Cache-Control header flag or something similar.

~~~
Klathmon
TFA says they are doing something like that already.

------
efxzsh
What about [https://svelte.technology/](https://svelte.technology/) ?

------
jerianasmith
Right. Generally browsers cache the raw CSS and JS assets; could that be
improved so that browsers cache the compiled CSS/JS? That wouldn't help for
the first load.

~~~
cramforce
The article talks about that in great detail.

