
A Guide to Fast Page Loads - nateberkopec
http://www.nateberkopec.com/2015/10/07/frontend-performance-chrome-timeline.html
======
bzbarsky
There's a lot of good advice here, but also some misinformation.

First of all, a script tag will block DOM construction, but in any sane modern
browser will not block loading of subresources like stylesheets, because
browsers speculatively parse the HTML and kick off those loads even while
they're waiting for the script. So the advice to put CSS before JS is not
necessarily good advice. In fact, if your script is not async it's actively
_bad_ advice because while the CSS load will start even before the JS has
loaded, the JS will NOT run until the CSS has loaded. So if you put your CSS
link before your JS link and the JS is not async, the running of the JS will
be blocked on the load of the CSS. If you reverse the order, then the JS will
run as soon as it's loaded, and the CSS will be loading in parallel with all
of that anyway.
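A minimal sketch of the two orderings under discussion (filenames are placeholders):

```html
<!-- CSS first: the browser fetches both in parallel, but app.js will
     not execute until styles.css has finished loading. -->
<link rel="stylesheet" href="styles.css">
<script src="app.js"></script>

<!-- JS first: app.js executes as soon as it arrives; the speculative
     parser has already kicked off the styles.css fetch in parallel. -->
<script src="app.js"></script>
<link rel="stylesheet" href="styles.css">
```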

Second, making your script async will help with some things (like
DOMContentLoaded firing earlier and perhaps getting something up in front of
the user), but can hurt with other things (time to load event firing and
getting the things the user will actually see up), because it can cause the
browser to lay out the page once, and then lay it out _again_ when the script
runs and messes with the DOM. So whether it makes sense to make a script async
really depends on what the script does. If it's just loading a bunch of not-
really-used library code, that's one thing, but if it modifies the page
content that's a very different situation.

Third, the last bullet point about using DomContentLoaded instead of
$(document).ready() makes no sense, at least for jQuery. jQuery fires
$(document).ready() stuff off the DomContentLoaded event.

The key thing for making pages faster from my point of view is somewhat hidden
in the article, but it's this:

> This isn't even that much JavaScript in web terms - 37kb gzipped.

Just send less script. A lot less. The less script you're sending, the less
likely it is that your script is doing something dumb to make things slow.

[Disclaimer: I'm a browser engine developer, not a frontend or full-stack web
developer.]

~~~
nateberkopec
> In fact, if your script is not async it's actively _bad_ advice because
> while the CSS load will start even before the JS has loaded, the JS will NOT
> run until the CSS has loaded. If you reverse the order, then the JS will run
> as soon as it's loaded, and the CSS will be loading in parallel with all of
> that anyway.

Scripts without an async attribute will cause the browser to stop where it's
at in the DOM and wait for the external script to download. The browser will
not start downloading any stylesheets that come after the script tag until
that's done. This is all straight from Google and can be verified in Chrome
Timeline.
[https://developers.google.com/speed/docs/insights/BlockingJS](https://developers.google.com/speed/docs/insights/BlockingJS)

> So if you put your CSS link before your JS link and the JS is not async, the
> running of the JS will be blocked on the load of the CSS.

Downloading CSS isn't required to fire DomContentLoaded, so most browsers will
download it in a non-blocking fashion. Chrome does not behave in the way in
which you describe, which is shown plainly in the article. Again, this can be
verified in Chrome Timeline.

All that said, I'm curious which browser engine you work on where what you
said is the case - I don't doubt it, but you're not describing the way Chrome
(and, I suspect, WebKit) works.

Right-o on the bit about DomContentLoaded, will fix.

~~~
evmar
There are two kinds of blocking that are easy to confuse. One is network:
whether the browser will wait for a resource to _fetch_ before fetching
another. The other is execution: whether the browser will wait for a resource
to execute (which itself depends on fetching) before executing the next.

Script async controls the latter. WebKit (and consequently Chrome) has a
"preload scanner" (you can Google for those words to find posts about it) that
attempts to parallelize the former in all cases. That is to say, a <script>
followed by a stylesheet should always fetch both in parallel. The "async"
attribute controls whether the browser waits for the script to finish loading
and executing before rendering the rest of the page.

I think this is a relevant snippet of Chrome code:

[https://code.google.com/p/chromium/codesearch#chromium/src/t...](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/Source/core/html/parser/HTMLPreloadScanner.cpp&q=preloadscanner&sq=package:chromium&l=364)

That's from the preload scanner, where it's deciding whether to fetch a URL
referenced on the page. As far as I understand it this runs in parallel with
any script execution blocking etc.

(Disclaimer: I worked on Chrome but never on this part.)

~~~
paulirish
(Mostly agreeing, just adding some more)... I suppose there's 3 kinds of
blocking really:

1. Network blocking. Requests must wait until the previous one finished.
Browsers in 2005 might have done this but not anymore. Even in terrible
document.write() scenarios, browsers will still try to do additional
work/requests.

2. JS Execution blocking. The difference between [async] and [defer].
Browsers default to executing JS in order, which sucks if an early script
takes forever to download. And layout/rendering is typically awaiting all this
script to execute anyway.

3. Render blocking. (Or technically, layout blocking). Can the browser try to
display pixels before this script has finished downloading or executing? By
default, it cannot, but an [async] attribute at least allows the browser to.

#2 and #3 definitely matter, with render blocking behavior usually being the
most important. [async] and decreasing script request count are very good.
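A rough sketch of #2 and #3 (script names are placeholders):

```html
<!-- Parser-blocking: fetched and executed before parsing continues. -->
<script src="a.js"></script>

<!-- [defer]: fetched in parallel, executed in document order just
     before DOMContentLoaded; doesn't block parsing. -->
<script defer src="b.js"></script>

<!-- [async]: fetched in parallel, executed whenever it arrives, in no
     guaranteed order; doesn't block parsing or initial rendering. -->
<script async src="c.js"></script>
```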

~~~
gsnedders
For #2 it's also worthwhile to remember that stylesheets block scripts, so
it's not just scripts blocking other scripts.

------
some1else
This guide is a comprehensive explanation of Chrome's Network timeline, but
the optimisation recommendations are quite skewed towards the front-end.
There's a missing piece on server configuration, no mention of CDNs or
asset/domain sharding for connection concurrency, server-side or client-side
caching. It also doesn't take into account HTTP/1 vs. SPDY & HTTP/2. For
example, loading JavaScript modules as individual files can improve
performance for SPDY & HTTP/2, because changes in individual files don't
expire the entire concatenated bundle. Here's a slide deck called "Yesterday's
best practices are HTTP/2 anti-patterns", that re-examines some of Nate's
advice:
[https://docs.google.com/presentation/d/1r7QXGYOLCh4fcUq0jDdD...](https://docs.google.com/presentation/d/1r7QXGYOLCh4fcUq0jDdDwKJWNqWK1o4xMtYpKZCJYjM/present?slide=id.p19)

~~~
nateberkopec
I wanted a guide that focused on the one thing all web developers have in
common - the page construction process in the browser. We all use different
servers, languages, etc.

HTTP2 will absolutely change a lot of how this works, you're right. I chose
not to cover it as it won't be a real option for most developers until nginx
supports it (still in alpha). The skills I'm teaching in the article re: the
Timeline can still be applied when HTTP2 gains mass adoption, so you can test
for yourself whether or not the recommendations in the article still make
sense.

~~~
some1else
Okay. I'm pointing these out because it says Full-stack in the title.

Nginx servers can run SPDY until HTTP/2 is stable, or use an HTTP/2 capable
proxy.

~~~
d0ugie
For those interested, the unstable (alpha) HTTP/2 element of NGINX is this
patch for the 1.9.x mainline versions:
[http://nginx.org/patches/http2/README.txt](http://nginx.org/patches/http2/README.txt)

------
ohitsdom
Great technical details in this post. When speeding up page loads, I usually
struggle with:

> You should have only one remote JS file and one remote CSS file.

I get this in theory, but it's difficult in practice. For example, this post
has 7 CSS files and 13 JavaScript files. Also, combining all resources
includes assets that aren't needed (CSS rules used on other pages), and also
reduces the utility of public CDNs and browser caching.

~~~
nateberkopec
Ha, I should totally fix that. The site uses Jekyll, not Rails, which I'm used
to getting the asset pipeline for free with!

Combining all the resources will include assets that aren't needed. Part of
the reason I made the post about profiling with Timeline was so that you could
test tradeoffs like this for yourself.

I don't know what you mean by reducing the utility of CDNs and caching though
- surely having only one file actually increases the use of browser caching,
since the browser will only make 1 request for CSS and serve the cached copy
for every other page?

~~~
ohitsdom
Agreed, it's all about tradeoffs.

On public CDNs- I mean public CDNs for common libraries (jQuery, Angular, Font
Awesome) can improve page load time if a user already has the file. The more
sites use the same public CDN, the more users benefit. But if a user doesn't
have the file already, it's a loss. So, tradeoffs.

On caching- if you make one JS file and one CSS file for each page, then there
is no caching between pages but only necessary info is downloaded for each
page. You could make one JS/CSS file for your whole site, but then there are
unused assets on each page, and it becomes more difficult to simplify CSS and
could increase layout thrashing. Again, tradeoffs.

Really enjoying this discussion, thanks for taking the time to write this up.
I have this debate in my head constantly.

~~~
nateberkopec
> On public CDNs- I mean public CDNs for common libraries (jQuery, Angular,
> Font Awesome) can improve page load time if a user already has the file. The
> more sites use the same public CDN, the more users benefit. But if a user
> doesn't have the file already, it's a loss. So, tradeoffs.

Even this has a tradeoff - if, for example, the other JS on your page requires
jQuery, you can't make the CDN copy of jQuery use an "async" tag, because you
can't be certain of the order the JS will execute. And if you're not using the
async tag, the first load will suffer.

------
zeveb
Pure HTML loads ludicrously fast these days—as in, well-nigh instantaneously.
With a single CSS file, you can make it quite attractive. Eschew JavaScript
unless you really, truly need it.

~~~
dfar1
What website nowadays does not truly need JS?

~~~
zeveb
If you're displaying text, then you don't need JavaScript. If you're
displaying images, then you don't need JavaScript. If you're displaying short
animations, then in theory you don't need JavaScript (I don't know how widely
supported APNG is though).

If you're displaying movies, maybe you need JavaScript? I don't know if HTML5
video can work without JavaScript (it _ought_ to be able to, but lots of
things which _ought_ to be _aren't_).

If you're accepting user comments, you don't need JavaScript. Forms work just
fine without it.

If you're accepting user votes, then you don't technically need JavaScript
(but the experience will probably be better with it, unless you're smart).

Really, it's hard to see what one truly needs JavaScript _for_. Slowing down
pages, sure. Destroying your readers' privacy, certainly. Getting root on your
readers' computers, no doubt.

~~~
elithrar
> Getting root on your readers' computers, no doubt.

Since when? The only JS privilege escalation is this Firefox vuln from 2008:
[https://www.mozilla.org/en-US/security/advisories/mfsa2008-14/](https://www.mozilla.org/en-US/security/advisories/mfsa2008-14/)

~~~
gsnedders
Any RCE exploit can likely be combined with any other OS vulnerability to get
privilege escalation.

------
daleharvey
This is a useful guide, however there is one thing missing that will have an
order of magnitude improvement over anything that is mentioned.

Use appcache (or service workers in newer browsers). Yes, appcache is a
douchebag, but it's far simpler than going through all of these and will have
a far bigger improvement.

~~~
moron4hire
In my experience, appcache only improves _second_ loading time. You need all
of these other techniques to improve _first_ loading time. 2nd time is much,
much less important to optimize, because returning users have already gotten
through the annoyance filter, know what they are in for, and have at least
subconsciously accepted the necessity of the load time they experienced the
first time. It's nice they won't have to experience it a second time, but if
first-time loading is very long, your second-time users are going to be much
fewer.

I had written quite a bit here on the sorts of things that I do to make very
short time-to-first-view sites, but basically the techniques all fall in place
if you set up Google Analytics and use their Page Speed Suggestions. You should
be able to get to 95% on mobile, 98% on desktop (I lose points on some 3rd
party scripts I've included for order processing, so I can't get around it).
It will be a bit difficult your first time, but after that you will know what
all the tricks are and future sites you will just make "right" the first time.

~~~
JoeAltmaier
Confused; won't the second _user_ experience a faster page load with appcache?

~~~
nostrademons
AppCache is client-side; it caches a certain set of URLs in your browser so
that they are available with no HTTP request. It doesn't work until your
browser has hit the page once, downloaded the cache manifest, and saved the
files.

------
FLGMwt
Udacity has some really awesome free courses from Google devs about this:
Website Performance Optimization[1] and Browser Rendering Optimization[2]

[1]: [https://www.udacity.com/course/website-performance-optimization--ud884](https://www.udacity.com/course/website-performance-optimization--ud884)

[2]: [https://www.udacity.com/course/browser-rendering-optimization--ud860](https://www.udacity.com/course/browser-rendering-optimization--ud860)

------
fauria
I really recommend "High Performance Browser Networking" book by Ilya
Grigorik, it digs deep into the topic of browser performance:
[http://chimera.labs.oreilly.com/books/1230000000545/index.ht...](http://chimera.labs.oreilly.com/books/1230000000545/index.html)

------
jakub_g
One thing that might be non-obvious is that an async script, while not
blocking `DOMContentLoaded`, blocks the `load` event.

It means that if you have listeners on the `load` event that are doing stuff,
you may want to have the `load` event fire as fast as possible.

Also, until the load event is raised, browsers display a spinner instead of
the page's favicon.

Hence for non-critical third-party scripts, you may prefer to actually inject
them _in JS in onload handler_ instead of putting them directly in HTML.
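A minimal sketch of that injection pattern (the script URL and function name are made up; `win`/`doc` are parameters only so the snippet can run outside a browser):

```javascript
// Inject a non-critical third-party script only after the load event,
// so it delays neither DOMContentLoaded nor load itself.
function loadAfterOnload(src, win = window, doc = document) {
  win.addEventListener('load', function () {
    var s = doc.createElement('script');
    s.src = src;
    s.async = true;
    s.onerror = function () {}; // e.g. analytics blocked by adblock: fail silently
    doc.head.appendChild(s);
  });
}

// In the browser: loadAfterOnload('https://example.com/analytics.js');
```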

A semi-related issue is handling failures of external non-critical scripts
(analytics blocked by adblock etc)

I wrote a draft of a blog article on the topic last week:

[https://gist.github.com/jakub-g/5286483ff5f29e8fdd9f](https://gist.github.com/jakub-g/5286483ff5f29e8fdd9f)

Context: We've faced an _insane_ page load time (70s+) due to external
analytics script being slow to load (yeah, we should have been loading the app
on DOMContentLoaded instead of onload).

~~~
paulirish
Since `load` waits for all iframes and images, you typically don't want JS
initialization to be dependent on it.

For non-critical third-party scripts, you might actually want to do something
like document.addEventListener('DOMContentLoaded', e =>
setTimeout(() => requestIdleCallback(init3rdparties), 2000));

~~~
jakub_g
Hi Paul, thanks for sharing the `requestIdleCallback`, I didn't know it,
pretty interesting! Though since it's only in Chrome 47+, it will take a while
for it to gain market adoption.

------
cheriot
Spending the last 6 weeks in East Africa has completely changed my perspective
on web performance. And it's not just performance, it's reliability. Every
request a page can't work without is another chance to fail.

React/Flux and the node ecosystem are more verbose than I'd like, but they
might be onto something by rendering the initial content server-side.

~~~
paulirish
For sure. Rendering JS apps server-side is pretty much mandatory from a
performance perspective.

------
resca79
Another small piece of advice, though less generic: don't include all the CSS
and JS libs of Bootstrap, which is modular. I'm mentioning Bootstrap because
it is the de-facto standard of many web apps.

Just spend 5 minutes selecting only the packages that you really use inside
your webpage, and you can drastically reduce CSS and JS file size.

~~~
k__
Doesn't this force you to build it yourself from the less files?

~~~
resca79
Yes, but the compilation of Bootstrap from Sass or Less is very fast.

------
krat0sprakhar
This is damn helpful! Thanks for sharing. If you're interested in getting to
know more about how other sites perform and how to use chrome devtools to
address frontend performance issues, Google developer - Paul Lewis recently
started a series on Youtube called Supercharged. Here's the first episode -
[https://www.youtube.com/watch?v=obtCN3Goaw4](https://www.youtube.com/watch?v=obtCN3Goaw4)

~~~
nateberkopec
I am heavily indebted to Paul Lewis and Ilya Grigorik's writing, which
provided much of the source material in this article.

Anyone interested in further reading on the topic should check out anything by
those guys, they're the foremost in the field.

------
tedunangst
On a static HTML site with no scripts or external resources, I see 100ms of
loading/painting in the beginning, then 3000ms of "idle" time at the end,
which turns the flame graph into a pixel wide column. What is the point of
that?

~~~
nateberkopec
Something is delaying the load event. You'll have to dig deeper to see what's
holding it up - probably a network request for an external resource.

~~~
tedunangst
No resources. Happens on example.com, too, fwiw. Weird. Oh, well.

~~~
Semiapies
Double-check that you're disabling extensions.

------
radicalbyte
I've had great success in the past doing one very simple thing: on first load
send the client the exact html/css that must be loaded on their screen.

Once the page is loaded, use javascript to take over updates (using framework
of choice).

It worked great in 2008, hopefully the modern javascript developers can now
reinvent the wheel. It'll be a lot easier nowadays what with Node/V8 meaning
you can use the same code...

------
dasil003
This is a really nicely put together article, and I'll even admit the animated
gifs are funny, but damn if they don't make it impossible to focus on reading
the text.

------
philbo
Surely the first bit of advice in any post about analysing website performance
should be: __USE WEBPAGETEST__

It gives you access to everything Chrome dev tools do, plus so much more:

    
    
  * speed index as a metric for the visual perception of performance
  * easy comparison of cached vs uncached results
  * operating on median metrics to ignore outliers
  * side-by-side video playback of compared results
  * different user agents
  * traffic-shaping
  * SPOF analysis
  * a scriptable API
  * custom metrics
    

I could go on. There's a lot of excellent performance tooling out there but
WebPageTest is easily the most useful from my experience.

~~~
_mtr
> It gives you access to everything Chrome dev tools do, plus so much more:

Can WebPageTest (I've never heard of this tool) reach pages that require
authentication?

[http://www.webpagetest.org/](http://www.webpagetest.org/)

~~~
philbo
It can, using either basic auth or by scripting submission of login forms.

Probably obvious, but you should avoid doing it from one of the public
instances. Building your own private instance is as easy as spinning up a
prepared EC2 image, or if you have a couple of hours to fiddle about you can
do it from scratch on any Windows machine. Details here:

[https://sites.google.com/a/webpagetest.org/docs/private-instances](https://sites.google.com/a/webpagetest.org/docs/private-instances)

------
thekonqueror
I have been using AppTelemetry plugin for rough numbers on each phase of
request. This is much better for performance tuning.

Do you have any tips for optimizing PHP, where server response times are poor
to begin with? I've been trying to optimize a blog as proof-of-concept[1] but
it has plateaued at 1.5s load time.

[1] [http://wpdemo.nestifyapp.com/](http://wpdemo.nestifyapp.com/)

~~~
Gigablah
You are probably not going to get decent response times with PHP, especially
if you're using a framework. If you've already taken care of the basics
(opcache, PHP7 or HHVM, WP-specific caching plugins, etc) you would have to
settle for techniques such as nginx microcaching [1], which can drop your
responses to around 10ms.

[1]: [https://thelastcicada.com/microcaching-with-nginx-for-wordpress](https://thelastcicada.com/microcaching-with-nginx-for-wordpress)

~~~
thekonqueror
Wow. Microcaching reduced response time from 25 ms to 3 ms. Thanks!

------
arohner
Gratuitous plug, my startup, [https://rasterize.io](https://rasterize.io),
will give you most of the information in the chrome timeline, for every single
visitor to your site. It also analyzes the page and detects a lot of these
warnings, and alert when you introduce things that slow the page down.

It's in beta, but contact me if you're interested.

------
timbowhite
Any tips on how to handle site layouts that depend on the js that is
asynchronously loaded via the async attribute? Seems like this can cause a
flash of unstyled/unmodified html while that js is loaded and executed.

~~~
bshimmin
Use CSS to hide the unstyled content, and then JavaScript to remove that
styling and display it. This will, of course, invoke the ire of those who
browse with JavaScript disabled.

~~~
paukiatwee
How I did it: begin with a .no-js class and use Modernizr to remove the
class, so I can style elements with .no-js .my-class -> visibility: visible
and .my-class -> visibility: hidden, then finally use JS to remove the
visibility CSS.

So if the .no-js class is present (when the browser has JS disabled), the
.my-class element is still visible.

If .no-js is absent (when the browser has JS enabled), the .my-class element
will be hidden, then made visible by your JS.
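A sketch of that .no-js pattern (class names are illustrative):

```html
<html class="no-js">
<head>
  <script>
    // Runs immediately, so the class is swapped before first paint
    // when JS is enabled (this is one of the things Modernizr does).
    document.documentElement.className =
      document.documentElement.className.replace('no-js', 'js');
  </script>
  <style>
    .my-class        { visibility: hidden; }  /* JS on: hide until ready */
    .no-js .my-class { visibility: visible; } /* JS off: just show it */
  </style>
</head>
</html>
```

Your JS then sets the element back to visible once it has done its work.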

See [http://stackoverflow.com/questions/6724515/what-is-the-purpose-of-the-html-no-js-class](http://stackoverflow.com/questions/6724515/what-is-the-purpose-of-the-html-no-js-class).

------
Kluny
Could anyone explain like I'm 5 what "layout thrashing" is? As far as I
understand, it's when the size of an element is set in the CSS, like

    
    
        div {
            width:100px;
        }
    

Then later in the CSS it's changed to

    
    
        div .biggerdiv {
            width: 200px;
        }
    

Or maybe it's javascript that changes it:

    
    
        $('.biggerdiv').css('width', '200px');
    

but either way it's when an element has some size near the beginning of
rendering, then as more information becomes available, it has to change size a
few times.

Am I getting it?

~~~
troels
Layout thrashing is when the browser layout engine does some work to render
the page, but then something changes and it has to do it over again. A typical
case is if you have a loop in javascript wherein you both read and write to
the dom. For example, if you change the css style of an element and then
afterwards try to read its position. Because changing the style might have
affected the position, the browser has to lay out part or the whole of the
page again in order to give you that information. It's a very easy mistake to
make, and it's not obvious what you have done if you don't look for it.

And in the context of page load, it would be if the dom is changed in many
different places (So yes, your example would probably cause layout thrashing).
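A sketch of the read/write loop described above, plus the batched alternative (element list and sizes are made up; in a real browser, each `offsetHeight` read after a style write forces a synchronous layout):

```javascript
// Interleaved: write, read, write, read... each read can force layout.
function resizeThrashing(elements) {
  return elements.map(function (el) {
    el.style.width = '200px';  // write (invalidates layout)
    return el.offsetHeight;    // read (forces layout in a real browser)
  });
}

// Batched: all reads first, then all writes; at most one extra layout.
function resizeBatched(elements) {
  var heights = elements.map(function (el) { return el.offsetHeight; });
  elements.forEach(function (el) { el.style.width = '200px'; });
  return heights;
}
```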

------
RegW
Potentially dumb question from a backend dev:

Is there a way to get stats about page loading from Timeline, that could be
used to automatically ensure that the load times are not creeping up, and
breaking NFRs?

~~~
nateberkopec
Sure, check out the Navigation Timing API: [https://developer.mozilla.org/en-US/docs/Web/API/Navigation_timing_API](https://developer.mozilla.org/en-US/docs/Web/API/Navigation_timing_API)
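A small sketch of the kind of numbers you can pull from it (the helper function is illustrative; in a browser you'd pass `performance.timing`):

```javascript
// Derive load milestones, in ms relative to navigation start, from a
// Navigation Timing record such as performance.timing.
function loadMilestones(t) {
  return {
    domContentLoaded: t.domContentLoadedEventStart - t.navigationStart,
    load: t.loadEventStart - t.navigationStart
  };
}

// In the browser: loadMilestones(performance.timing)
```

Feeding these numbers to your monitoring on every page view lets you alert when load times creep up.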

------
eatonphil
Could someone explain this paragraph? I feel like it is making a lot of
assumptions or generalizations about the use of $(document).ready();. I do not
follow what he is trying to say:

> Web developers (especially non-JavaScripters, like Rails devs) have an awful
> habit of placing tons of code into $(document).ready(); or otherwise tying
> Javascript to page load. This ends up causing heaps of unnecessary
> Javascript to be executed on every page, further delaying page loads.

~~~
eatonphil
Nevermind, he explained this again more clearly in the TL;DR.

> Every time you're adding something to the document's being ready, you're
> adding script execution that delays the completion of page loads. Look at
> the Chrome Timeline's flamegraph when your load event fires - if it's long
> and deep, you need to investigate how you can tie fewer events to the
> document being ready.

------
chain18
Can someone explain his point about $(document).ready()? I don't understand
how it is different from DomContentLoaded?

jQuery source for the ready function:
[https://github.com/jquery/jquery/blob/c9cf250daafe806818da1d...](https://github.com/jquery/jquery/blob/c9cf250daafe806818da1dd207a88a8e94a4ad16/src/core/ready.js)

------
snomad
Question about the 1 CSS/JS file rule.

If you have a 'My Account' section w/several unique rules (say 10k), which is
best? A) One website CSS (main.css) and your users download the My Account
rules even though they may never use them B) 2 CSS files are used for My
Account (main.css and myaccount.css) C) 1 file under My Account that
incorporates the main and section rules (main-plus-myaccount.css)?

~~~
temo4ka
In the case of 10 Kb files those approaches won’t make any noticeable
difference. But when you have large code bases consider how often changes
would be made to them: in most cases it is sensible to split your files into
two bundles — one for libraries or files that are not going to be updated any
time soon, and the other for project-specific code, that is very likely to be
tweaked in the course of the time following the release of the project. Thus
you leverage caching and don't force the users to download the whole CSS or JS
codebase every time you make some small adjustment.

------
gcb0
"JS and CSS assets must be concatenated"

That is fine if you have a tiny little site. If you are a big company, each
micro-site will use a piece of the larger set of files. If one uses a.js,
b.js and c.js, when you concatenate you just lose 100% of the cache when the
user clicks a link to a part of the site that only uses b.js and c.js.

Likewise, try to load common libs, un-concatenated, from widely used free
CDNs.

------
outworlder
I like this guide, with the caveat that, if you are doing a single page web
application, some of it gets turned upside down.

For instance, the "javascript will be loaded on every page load" part no
longer applies. It will be loaded only once, and will fetch whatever it needs
from then on.

------
amelius
Regarding layout-thrashing: if only there were a way to hint the dimensions of
each element.

------
CosmicBagel
Gifs in the sidebar, 10/10

~~~
pstuart
Only the first cycle. After that they're 0/10.

------
dfar1
The best explanation of the timeline tool I've ever found. Thank you!

------
camperman
Very helpful stuff but I did go to The Verge and look under the timeline.
Scripting was a fraction of 1% of the load time. Have they disabled it because
of your article or am I missing something?

~~~
nateberkopec
Hmm, you must be. Here's my timeline for theverge.com:
[http://imgur.com/SOpXFZc](http://imgur.com/SOpXFZc)

Steps: go to theverge.com, open Timeline. Hit CMD-SHIFT-R for hard refresh,
which will automatically trigger timeline. Wait. When the load event fires,
Timeline will stop recording.

~~~
camperman
Ouch - OK, now I got it. Yeah that's terrible. Scripting is double loading,
rendering and painting combined.

------
humbleMouse
LOLZ @the layout thrashing gif! Nice post though, very informative.

------
blowski
Thanks - a well-written article with some really helpful pointers.

------
cbsmith
...and if you do HTTP/2, you'll get even faster loads if you pretty much break
all those rules (some notable exceptions).

~~~
nateberkopec
True, but that's a post for another day. We've still got another year or so
until HTTP/2 gets mass adoption.

~~~
MichaelGG
Except SPDY has been out for a while so why would anyone have been or continue
to wait?

------
jmartens
>While I use New Relic's real user monitoring (RUM) to get a general idea of
how my end-users are experiencing page load times, Chrome Timeline gives you a
millisecond-by-millisecond breakdown of exactly what happens during any given
web interaction.

New Relic does, too! It's a Pro feature called Session Traces.

------
yAnonymous
>47 requests, 7.474,76 KB, 4,68 s

That's on a 100Mbit connection.

------
_ZeD_
the page it's too slow to load... i waited solid 5 minutes... and... still...
nothing... to... see...

------
mahouse
"bare-metal Node.js"

------
dates
That was awesome- thanks!

------
binthere
Had to disable the font of the website, it's terrible for reading.

~~~
nateberkopec
I wish I could find a font everyone liked that isn't Helvetica :(

~~~
binthere
That's not the problem. When I view on a Apple device it looks good. I use
Windows 10 at home and I just couldn't read it.

------
pranaya_gh
Google just came out with the AMP project. It looks like google is encouraging
publishers to join its initiative the same way it herded everyone to
get "responsive". At the end of the day, good news for mobile web -
[https://techbullets.com/](https://techbullets.com/)

