
The Fastest Blog in the World - eugenparaschiv
http://jacquesmattheij.com/the-fastest-blog-in-the-world
======
onion2k
This is absolutely brilliant _if the only metric your audience cares about is
page load time_. For most sites that's only part of the story.

Take the first paragraph, about how Google's homepage of just a textbox should
really be a few hundred bytes instead of a megabyte. Google's homepage does a
lot more than just enabling you to enter a search - there's the autosuggest
feature, there's analytics, the apps tray, G+ integration with live updates,
etc. Google's homepage _looks_ basic but under the hood there's a lot going
on. Which is the real crux of the matter - Google have designed something that
doesn't get in the way of searching but is really a powerful portal to
Google's suite of services _because that's what gets them the data that makes
them money_. The fact they might be able to shave a few milliseconds off the
domContentLoaded time (which is only 394ms on my work PC) wouldn't make anyone
happier but it would damage their bottom line because they'd know less about
us.

If Jacques' blog gets significantly more traffic by loading super fast
then that's a definite, measurable success. If the _only_ change is that it
loads faster, and he doesn't grow his audience, then he hasn't really achieved
anything.

There's a good lesson in this that's analogous to startups that spend huge
amounts of time and money doing things that get them no additional customers.
Optimising things that don't affect the metrics you use to measure how
successful you're being is a waste of effort. Put your time into things that
actually matter.

~~~
vog
While I agree with your comment about Google, I disagree with this one:

 _> If the only change is that it loads faster, and he doesn't grow his
audience, then he hasn't really achieved anything._

If this makes his readers happier, it _is a success_ nonetheless.

Maybe this is a startup/HN thing that everything must grow and grow. But that
isn't the only successful strategy, even from a purely economic point of view.

If you have a niche, and serve that niche very well, you can beat your
competitors by quality rather than quantity. Not all niches are large. If you
manage to cover 100% of your niche, it is a huge success - even if 100% means
just a few thousand people.

~~~
wahsd
Load time is not a success factor in and of itself. Fast load times for bad
content will not lead to success, any more than really great content with
horrible load times will. That being said, I am a proponent of significantly
prioritizing load times over everything else. Load time should be a constant
multiplier in decisions. If a "great feature" significantly taxes load time it
needs to either die or be optimized.

------
losvedir
> _I’m sure I can do better still, for instance the CSS block is still quite
> large (too many rules, not minified yet)_

CSS doesn't minify particularly well since the class names, tag names, and
attributes all have to stay in their full form. Basically it just amounts to
removing extraneous whitespace.
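
For the whitespace part, a few lines get you most of the way. A rough sketch
in Python (regex-based, so it will mangle CSS that contains braces inside
strings):

    import re

    def minify_css(css):
        css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # strip comments
        css = re.sub(r"\s+", " ", css)                   # collapse whitespace runs
        css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)     # tighten around punctuation
        return css.replace(";}", "}").strip()            # drop trailing semicolons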

However, ever since reading James Hague's post on "Extreme Formatting"[0],
I've rather liked CSS without all the extra whitespace. For example, [1]. You
can see all the rules at a glance, and while it's a bit weird at first, I
think you can get used to it pretty quickly. But then again, I've always sort
of thought APL/J/K are beautiful in their own way.

[0] [http://prog21.dadgum.com/200.html](http://prog21.dadgum.com/200.html) [1]
[http://prog21.dadgum.com/p21.css](http://prog21.dadgum.com/p21.css)

~~~
rheide
But if everything that could possibly reference a class or tag or attribute is
inlined into the same file then you know exactly what the scope of your CSS is
for that particular request. So it should be possible to minify even the
names.
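
Something along these lines could work as a rough sketch (naive regexes, not
a real CSS parser - it will also pick up things like file extensions inside
url() values; the names are illustrative):

    import re

    def shorten_class_names(html, css):
        # Map every class the stylesheet mentions to a short token.
        names = sorted(set(re.findall(r"\.([A-Za-z_][\w-]*)", css)))
        mapping = {n: "c%d" % i for i, n in enumerate(names)}
        # Rewrite the selectors; the lookahead avoids partial matches.
        for old, new in mapping.items():
            css = re.sub(r"\.%s(?![\w-])" % re.escape(old), "." + new, css)
        # Rewrite class attributes in the markup with the same mapping.
        def swap(m):
            return 'class="%s"' % " ".join(
                mapping.get(c, c) for c in m.group(1).split())
        return re.sub(r'class="([^"]*)"', swap, html), css

This only stays safe because the page and its styles travel as one document,
exactly as described above.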

~~~
jacquesm
Hm. A minimizing post-processor for a webpage _and_ all the resources that it
loads. That's an interesting idea which means it has probably already been
done.

~~~
thenomad
Doesn't mean it's been done well, done to the point where it's usable to
anyone other than the originator, or done and subsequently published in a way
that's available to the general public, though :)

------
Animats
Here's a page of mine that's almost 20 years old.[1] Look at the page source.
That's HTML in its pure form. No styling at all.

Looking at the page source for the "fastest blog", there's still far more
formatting info than content. Much of that bloat isn't doing anything.

[1]
[http://www.animats.com/papers/leggedrun/leggedrun.html](http://www.animats.com/papers/leggedrun/leggedrun.html)

~~~
bhaak
HTML in its pure form doesn't render well on current mobile devices.

It's idiotic that you are punished for using HTML the way it was supposed to
be used, but that's unfortunately how mobile browsers currently work.

~~~
e12e
Doesn't render nicely in desktop browsers either. Once upon a time, at least
that page would actually be a little easier on the eyes (and a bit uglier)
thanks to Netscape's gray background. But the real kicker is that user
stylesheets have gone the way of the dodo. There's no reason that page (as is)
couldn't look better than most PDFs generated from LaTeX or whatnot. Sad, but
true.

~~~
bhaak
Firefox still supports user stylesheets, but they have always been cumbersome
to use.

There's the extension "Stylish" for Chrome and Firefox, which has quite a good
UI for giving the power of styling websites back to the user.

------
lucianp
> _Take the Google homepage. It’s a one liner text field and two buttons. It
> weighs in at a whopping 1170 kilobytes! That’s more than a megabyte for what
> technically should not take more than a few hundred bytes._

The Google homepage does a lot more than that. Since it is one of the most
important pages for Google, I think that the engineers over there know what
they are doing. I don't think it is a good example of _bloat_.

~~~
jacquesm
Coldtea pointed that out as well; I've changed the article to reflect this.
Thank you!

------
coldtea
> _I positively hate bloat in all its forms. Take the Google homepage. It’s a
> one liner text field and two buttons. It weighs in at a whopping 1170
> kilobytes! That’s more than a megabyte for what technically should not take
> more than a few hundred bytes._

That's because you didn't consider the business reasons behind it being
1170KB.

If it were your "few hundred bytes" design it would have sunk the company (or
it would only have worked in the early days, when VC money took care of the
lack of income sources).

~~~
pjc50
The Google home page from 1998 really is a one line text field and two
buttons.
[https://web.archive.org/web/19981111183552/http://google.sta...](https://web.archive.org/web/19981111183552/http://google.stanford.edu/)

It only really starts getting js-heavy after about 2010. Evidently the
relative cost of bandwidth and value of analytics have crossed over.

~~~
keithpeter
I remember first seeing Google and it seemed so amazingly clean and fast
compared to AltaVista. We were using a 64k leased line into the small college
I worked in at that time, and I had a 28k8 modem at home.

As we move from 'all you can eat' to pricing per GB of download, I think that
page payload may become more important.

------
rtpg
Doesn't inlining the CSS make the page require more data in the long run?
Especially if you're not changing it often.

Browser caches solve a lot of things for us. Though you're still parsing a
bunch of JS, Google's front page is "only" fetching 56KB.

Granted, this blog post is 13.5KB, but it doesn't have an entire search app
embedded in it (with the whole search-results-automatically-appearing thing, I
imagine that inside that 56KB is pretty much all the code required for all of
Google's little widgets in the search results).

Anyways, always fun to see somebody go in and rip out as much cruft as
possible.

~~~
jacquesm
There is a small penalty, but it does not seem to outweigh the advantage of
having all the data arrive in one shot, even on the first page view of the
site. Total overhead of the CSS 'in flight' is about 6.6K, versus doing
another request, which would require another round-trip to the server.

Inlining the CSS was considerably faster in all my tests.

~~~
nevinera
How much of that win was only on the first time a page from your site is
requested?

I'd expect in-lining the css to be a _slight_ net loss once the data is cached
- my strategy would be a `style-<md5sum>.css` with a long max-age.

I wish browsers could cache page fragments!
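
That strategy is only a few lines at build time. A sketch (filenames
hypothetical):

    import hashlib
    from pathlib import Path

    css = Path("style.css").read_bytes()
    name = "style-%s.css" % hashlib.md5(css).hexdigest()[:12]
    Path(name).write_bytes(css)
    # Reference the hashed name in the templates and serve it with
    # "Cache-Control: max-age=31536000"; any change to the file changes
    # its URL, so the long max-age is safe.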

~~~
jacquesm
> I'd expect in-lining the css to be a slight net loss once the data is cached

I expected that too but it didn't work out that way. I'm not sure why,
possibly a cache lookup is still slower than reading the style info out of the
same page. I don't know enough about the guts of a modern browser to make the
call but the numbers aren't there.

~~~
nevinera
In Chrome, I'm getting extremely similar times for the two approaches _without
caching_ (I am in-network, so my round-trips are ~1ms). With the style.css
fully cached, I'm getting ~15ms faster with the sideband approach than with
inline styles.

Can you share your methodology, or are you using a common tool for testing
these things?

~~~
jacquesm
Using the FF development tools: Shift-Ctrl-Q, then reload.
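
For anyone who wants a scripted alternative to eyeballing devtools, a minimal
harness (Python standard library; the URLs are placeholders, and this only
measures wire time, not rendering or browser cache behaviour):

    import time
    from urllib.request import urlopen

    def best_of(url, runs=10):
        best = float("inf")
        for _ in range(runs):
            t0 = time.perf_counter()
            urlopen(url).read()
            best = min(best, (time.perf_counter() - t0) * 1000)
        return best  # milliseconds

    print("inlined:", best_of("http://example.com/inlined.html"))
    print("sideband:", best_of("http://example.com/plain.html")
          + best_of("http://example.com/style.css"))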

~~~
nevinera
It looks like inline performs better on Firefox and not on Chrome - I just
made a bunch of attempts both ways, and I'm seeing cached sideband as
significantly better on Chrome, and only roughly equal on Firefox.

I was not expecting to see such a difference between the two browsers, but I'm
not familiar enough with Firefox to guess what's causing it :-\

~~~
jacquesm
Ah! Thank you for that datapoint, I don't use Chrome but I really should have
tested that too.

------
aorth
Very zippy indeed! Have you thought about switching to nginx so you can get
SPDY (and soon HTTP/2)? Also, Hugo is cool. I've evaluated it myself, but I
have 8 years of WordPress archives -- ~500 posts -- and I worry about things
like images, comments, etc.

~~~
jacquesm
The benefits from SPDY are probably not going to help much if you only do one
request :)

~~~
chrismorgan
Things like resource inlining are an antipattern in an HTTP/2 world. The
stylesheet, for example, _should not be inlined_; the server should instead
push the stylesheet out to the user agent proactively (typically three
frames: a PUSH_PROMISE telling the client of the request that is being
simulated, then a HEADERS and a DATA frame with the response).

The first time, the browser will receive it approximately as fast as the
inlined version; subsequent times, there is no overhead. Moreover, browsers
can load the stylesheet only once rather than needing to load it for every
page.
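
For the curious, those three frames map almost directly onto, for example, the
Python hyper-h2 API. A sketch (assumes a server that already holds an open
connection and has just received the page request on `stream_id`; hostname and
path are illustrative):

    from h2.connection import H2Connection

    def push_stylesheet(conn: H2Connection, stream_id: int, css: bytes):
        promised = conn.get_next_available_stream_id()
        # 1. PUSH_PROMISE: the request we simulate on the client's behalf
        conn.push_stream(stream_id, promised, [
            (":method", "GET"), (":scheme", "https"),
            (":authority", "example.com"), (":path", "/style.css"),
        ])
        # 2. HEADERS and 3. DATA: the pushed response itself
        conn.send_headers(promised, [(":status", "200"),
                                     ("content-type", "text/css")])
        conn.send_data(promised, css, end_stream=True)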

------
binarymax
Mine is very fast [http://max.io](http://max.io)

I use wintersmith to generate it, and it is hosted by nginx on an EC2
t1.micro.
~~~
jacquesm
Indeed it is! 330 ms on my connection, 5 requests, 18K of data and 13K in
flight.

~~~
binarymax
The funny thing is I didn't even try! It just feels like common sense. Maybe
since I've been doing web stuff since '95 it's in my blood to keep things tiny
and simple.

~~~
jacquesm
I think one of the major differences is in how the content is generated. The
more software that is used to make the content the larger the pages will be.
If you hand-craft your page it will automatically be small and fast.

------
hendry
Mine is faster :)
[https://github.com/kaihendry/natalian](https://github.com/kaihendry/natalian)

    
    
      X1C3:~$ httping http://natalian.org
      PING natalian.org:80 (/):
      connected to 54.192.159.125:80 (147 bytes), seq=0 time= 91.36 ms
      connected to 54.192.159.123:80 (147 bytes), seq=1 time= 28.63 ms
      ^CGot signal 2
      --- http://natalian.org/ ping statistics ---
      2 connects, 2 ok, 0.00% failed, time 1394ms
      round-trip min/avg/max = 28.6/60.0/91.4 ms
      X1C3:~$ httping http://jacquesmattheij.com/the-fastest-blog-in-the-world
      PING jacquesmattheij.com:80 (/the-fastest-blog-in-the-world):
      connected to 62.129.133.242:80 (329 bytes), seq=0 time=550.05 ms
      connected to 62.129.133.242:80 (329 bytes), seq=1 time=550.08 ms
      ^CGot signal 2
      --- http://jacquesmattheij.com/the-fastest-blog-in-the-world ping statistics ---
      2 connects, 2 ok, 0.00% failed, time 2760ms

~~~
jacquesm
I'm pretty far away from you; you have to discount for the network.

Pretty good though, I think if you inline the CSS you'll win hands down :)

~~~
hendry
I don't like the idea of duplicating the CSS in every page. style.css should
be optimal. :)

~~~
jacquesm
It should be, but measurements bear out that it isn't...

~~~
hendry
Speed isn't everything to me. I love the fact that for example
view-source:[http://natalian.org/2015/07/06/Bank_secrecy_in_Singapore/](http://natalian.org/2015/07/06/Bank_secrecy_in_Singapore/)
is more readable than most blogs.

~~~
eitally
It's mind-boggling to me, as someone who was first introduced to web
development in the days of gopher, that most fresh developers not only don't
adequately understand simplicity, but also believe it's a-ok to glue a bunch
of JS & CSS libraries together and call it "good enough".

The first Java version of an internal app we had converted from Progress 4GL
used Java Web Start to dynamically load & launch, and because of programmer
laziness (and the 93 third-party Java components they'd included) it literally
took 3 minutes to launch. That was the point where the manager -- who was a
better programmer than anyone on the team, but who had previously been
hands-off -- stepped in and created some rules and instituted code reviews.
Still, though, totally insane behavior by so many young web programmers.

------
jgrahamc
You could make it even faster by putting it on CloudFlare and setting a cache
rule so that we keep it in cache all around the world. Then you'll get the
advantage of fighting the speed of light as well.

~~~
jacquesm
That's a good one, I should try that.

------
codecurve
You can use the Chrome dev tools to run an audit. One of the performance
metrics it tracks is the number of unused CSS rules.
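
The idea behind that audit is simple enough to sketch (naive string matching
against a single page, so treat the output as hints rather than truth):

    import re

    def unused_rules(css, html):
        unused = []
        for selector, _body in re.findall(r"([^{}]+)\{([^}]*)\}", css):
            first = re.split(r"[\s>:+~\[]", selector.strip())[0]
            needle = first.lstrip(".#")  # strip class/id sigils
            if needle and needle not in html:
                unused.append(selector.strip())
        return unused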

~~~
jacquesm
Thank you, that's a very useful tool.

------
PuffinBlue
I really enjoy trying to make things as bloat-free as possible while also
retaining some styling, unlike motherfuckingwebsite.com.

Many of my posts use images and I use Google Analytics, so I made a text-only
page and disabled GA to see how it compares. With your trick of the inline
stylesheet I got a massive speed improvement[0], down from about 100ms to
60-70ms.

I too use Hugo. I host the site on a $10 VPS from DigitalOcean located in
London, but I also use Cloudflare to speed up image delivery.

I'm not sure where your site is physically hosted, but testing from Amsterdam
we're pretty much the same, given the 1KB page size difference.

Very cool post and thanks for the tips!

[0]
[http://tools.pingdom.com/fpt/#!/bvkhPp/http://josharcher.uk/...](http://tools.pingdom.com/fpt/#!/bvkhPp/http://josharcher.uk/code/speedtest/)

~~~
jacquesm
I'm hosting at virtual access in Uitgeest. Any difference you see is most
likely because the server this is on is a massive beast.

------
jsnell
It's really hard to argue with the basic idea. For my blog[1] it'd generally
be about 6kB for the HTML + 1.5kB of CSS that'd be cached for return visitors
(typically 15% of pageloads) + Google Analytics, almost always from cache.

The "Recent Tweets" part of this site feels somehow at odds with the
principles though. It's got nothing to do with the actual page, and has a
really bad ratio of markup to content. Those 20 tweets are still 11kB
uncompressed! It's also a very heavy visual element.

[1] [http://www.snellman.net/blog/](http://www.snellman.net/blog/)

~~~
jacquesm
The goal was not 'minimum size per se' but 'minimum size while retaining all
the elements of the old blog'. So yes, the 'recent tweets' (and the older
posts) sections are at odds with a totally minimalist style. But the whole
idea was _not_ to simply strip, but to strip while maintaining all the old
functionality.

------
pauly
I did some testing using [http://yellowlab.tools/](http://yellowlab.tools/) to
find the slow parts of
[http://www.clarkeology.com/blog/](http://www.clarkeology.com/blog/) and I'm
pleased with the results so far. Ditching jquery was quite a big win. Having
no content to speak of makes it fast too...

------
e12e
20kb is still over a minute at 2400 baud ;-)

------
CognitiveLens
Surely you're throwing away some of the speed benefits of parallel requests by
massively inlining everything?

I'd be interested in seeing some waterfall charts comparing this setup with
something that keeps the number of total requests small (say 3-5 requests) and
evenly balanced. Particularly as we move toward HTTP/2, having lots of small
parallel requests will be a more effective way of getting raw page load
performance.

The speed gains from eliminating all but the most necessary components are
definitely the biggest win here, though - cool to see what you can do when you
decide to get focused about what _needs_ to be on the page.

~~~
jacquesm
I expected that to be the case, but repeated measurements tell me that it is
in fact the opposite. The version with everything inlined was about 50%
faster. Counter-intuitive for sure!

For instance, just taking the CSS out and loading that separately doubled the
page rendering time (because another resource had to be loaded _after_ the
first one).

Now it is just like a 'declare before use' program in a regular programming
language: by the time the browser reaches a tag that needs definitions from
the CSS, the CSS is already there, right there in the page, no need to wait
until reading that separate resource is done. And that round-trip to the
server is actually more expensive than the entire embedded CSS. Looking at it
after going through the whole exercise it makes sense, but that was definitely
not what I expected. Even more counter-intuitive: this holds even when loading
multiple pages on the same site that share the same (small) CSS file.

So in the end inlining the CSS was a good thing to test. Presumably there is
some crossover point where, if the CSS file gets very large, there is a
benefit for follow-up pages on the same site to be able to re-use it.
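
A back-of-the-envelope model of that crossover (all numbers are illustrative
assumptions, not measurements):

    RTT_MS = 50      # assumed cost of the extra request for style.css
    CSS_KB = 6.6     # CSS carried inside every inlined page
    KB_PER_MS = 1.0  # assumed effective transfer rate

    def inline_cost(pageviews):
        return pageviews * CSS_KB / KB_PER_MS  # re-sent with every page

    def external_cost(pageviews):
        return RTT_MS + CSS_KB / KB_PER_MS     # fetched once, then cached

    for views in (1, 2, 5, 10, 20):
        print(views, inline_cost(views), external_cost(views))

With these made-up numbers the external file only starts winning after eight
or nine pageviews; with a larger stylesheet the break-even comes much sooner.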

~~~
igravious
A couple of Hugo questions, Jacques, and please don't tell me to RTFM!

Is there a way to instruct Hugo to unbundle the CSS once a crossover point is
reached? That'd be sweet.

Is there a way to concatenate multiple Markdown files into a single post?

The reason I ask is that I've recently started building a site that has weird
hand-rolled static Markdown pages served up dynamically by a Rails/Bootstrap
combo. The kicker: some of the pages are very long, so I've split them up into
multiple files to make them cognitively easier to edit. I'd be interested if I
could drop the Rails part :)

Thx in advance

ps: I'm now too afraid to measure the page load times of my WordPress blog :(

~~~
jacquesm
> Is there a way to instruct Hugo that once a crossover point is reached to
> unbundle the CSS?

Not that I'm aware of. Hugo is pretty much the 'sausage grinder' variety of
blog post generator: not much in the way of decision making during the
processing, from what I've seen so far. I also ran into a pretty serious bug
while doing this and there are likely more. Still, as fresh as it is, it
performs amazingly well and the authors are super helpful and worked hard to
track down and fix that bug.

> Is there a way to concatenate multiple Markdown files into a single post?

Again, not from within Hugo, but that one should be fixable with some
pre-processing. I use a makefile that does some pre- and post-processing.
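
The concatenation step can be as small as this (paths hypothetical):

    from pathlib import Path

    parts = sorted(Path("content/long-post").glob("part-*.md"))
    merged = "\n\n".join(p.read_text() for p in parts)
    Path("content/long-post.md").write_text(merged)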

> I'm now too afraid to measure the page load times of my Wordpress blog now
> :(

Do it anyway, that will give you a nice before-and-after benchmark.

------
kijin
> _inlined the stylesheet (there is a cache penalty here so you have to trim
> it down as much as possible but the page starts rendering immediately which
> is a huge gain at the cost of a little bit of extra data transferred)_

I guess you're optimizing for people who only visit one page and bounce away?

Well, come to think of it, the majority of visitors who followed this HN link
are never going to navigate past the single page that was linked. So it
probably makes sense to optimize for them, at the expense of repeat visitors
who will have to download the stylesheet over and over again.

~~~
jacquesm
By my measurements (and pagespeed and a couple of other tools like that) it's
actually faster for repeat visits and multiple pages on the same site as well
(which surprised me but I'll yield when the data is that conclusive).

------
design-of-homes
How does my blog compare? I submitted this link to Hacker News yesterday. It's
quite an image-heavy page, but I think (I hope!) it loads quite fast for most
users. It obviously won't be as fast as a text-only page. There are no fancy
optimisation tricks. Just plain HTML and CSS and a tiny bit of Javascript (for
older browsers). Not responsive either, but readable on mobile.

[http://www.designofhomes.co.uk/042-tour-bois-le-pretre.html](http://www.designofhomes.co.uk/042-tour-bois-le-pretre.html)

~~~
jacquesm
That's pretty good actually. You're very heavy on visuals, but that's to be
expected given the nature of the page. The only things you could do to make it
faster are to lose the font and inline the CSS, but the gains in your case
would likely be minimal, so I'd just leave it as it is. Good job!

------
Grue3
I've seen this blog, jacquesmattheij.com, several times on HN and every time I
try to access it I get 403 Forbidden. What gives? It's not very fast either...

~~~
jacquesm
Interesting! What's your IP? Maybe you ran afoul of my pretty trigger-happy
security measures. That machine also runs a pretty high-traffic website.

------
pixelbeat
It's great to see focus on performance, especially after all the recent
stories on web bloat.

I've noted a few techniques I've used on my blog to get pages served to users
in a single request, including using SSIs and avoiding Cloudflare's "Rocket
Loader":

[http://www.pixelbeat.org/docs/web/about/#performance](http://www.pixelbeat.org/docs/web/about/#performance)

------
alfredxing
1. Thumbs up for making loading times a priority!

2. Using Hugo instead of Octopress won't make your site any faster to users
¯\_(ツ)_/¯

3. You're almost completely forgetting about server-side performance. I'd
recommend looking into GitHub Pages for a free and fast (CDN-backed) way to
host a static site.

---

Unrelated, but:

4. I'm not a fan at all of the self-upvoting link at the bottom of your post.
Not cool.

------
fatso83
> I’m sure I can do better still, for instance the CSS block is still quite
> large (too many rules, not minified yet)

You could try out Addy Osmani's tool, Critical, for extracting and inlining
critical-path CSS. That leaves in only the CSS actually in use. If you want
all the CSS in use (below the fold as well), just specify a bigger window
size.

------
chmike
@jacquesm are you sending compressed files? I can't check that myself with my
iPad 1. I'm at a campsite with bad WiFi. I can confirm that I saw and felt the
difference! Congratulations.

~~~
jacquesm
Yes, the gzip module is on in the web server.
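
That's easy to verify from anywhere, e.g. with Python's requests package
(requests transparently decompresses, but the response header still shows
what the server sent):

    import requests

    r = requests.get("http://jacquesmattheij.com/",
                     headers={"Accept-Encoding": "gzip"})
    print(r.headers.get("Content-Encoding"))  # 'gzip' if compression is on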

~~~
chmike
Are you using a front-end web server to handle caches, TLS, etc. in front of
the Hugo server, or is the Hugo server on the front line?

~~~
jacquesm
Plain old Apache, but tuned very well on a massive server (32 cores, 96G of
RAM). The server is actually the production server for a high-traffic website
and this blog is just piggy-backed onto it as a 'ServerName' entry in the
Apache config. If I shut down the other site the blog will be quite a bit
faster still, but that seems counterproductive ;)

------
witty_username
> (nice idea for a browser plug in, take the css loaded by the page and remove
> all unused rules)

I believe Chrome's devtools can do that (see the Page Audit functionality).

------
firloop
Related:
[https://news.ycombinator.com/item?id=9990630](https://news.ycombinator.com/item?id=9990630)

------
xerophyte12932
I compared it with my svbtle blog and I am pretty pleased with the results. No
bloat at all. Just one request, with inline CSS.

~~~
jacquesm
Link?

~~~
xerophyte12932
[http://syedmusaali.svbtle.com/](http://syedmusaali.svbtle.com/)

------
dimitar
Is there a better way to have fast-loading comments than outsourcing them to
Twitter/HN or Disqus?

~~~
rtpg
    <form method="post">
      <input type="text" />
      <button type="submit">Post</button>
    </form>

That will probably load really fast. You can server-side cache the comments
section fragment (or the render of the entire page, really) to get nice
results.
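
The fragment cache is only a few lines in any framework. A framework-agnostic
sketch (`render_comments` is a hypothetical renderer):

    _cache = {}

    def comments_html(post_id):
        if post_id not in _cache:
            _cache[post_id] = render_comments(post_id)  # hypothetical renderer
        return _cache[post_id]

    def on_new_comment(post_id):
        _cache.pop(post_id, None)  # invalidate so the next view re-renders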

------
noir_lord
You beat mine by 10ms; this affront will not stand!

The webfont kills me; without that I think it'd be under.

~~~
jacquesm
Hehe. Challenge accepted ;) Let me know when you beat mine, I'll update the
post.

------
acjduncan
You could also uglify your CSS and HTML by removing redundant whitespace and
newlines.

~~~
jacquesm
I'll have a look to see what the differences are; gzip is already enabled, so
it might be less than on a site that sends all the data in plaintext over the
wire.
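
A quick way to check whether it's worth it is to compare gzipped sizes with
and without the whitespace (file name hypothetical, whitespace collapse
deliberately crude):

    import gzip, re

    html = open("index.html", "rb").read()
    stripped = re.sub(rb"\s+", b" ", html)
    print("gzipped as-is:   ", len(gzip.compress(html)))
    print("gzipped stripped:", len(gzip.compress(stripped)))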

~~~
TheLoneWolfling
Also, you could cut down the size slightly by shrinking div names. Sure, with
gzip "aside.sidebar" isn't going to take much extra space, but it will take
some.

(That being said, please don't do this. Minified JS is bad enough.)

~~~
jacquesm
I think I'm pretty close to the 'sweet spot' where further optimizations are
both a waste of time and make the result much less useful in the longer term.
But just for the sake of research I'm more than willing to play around to see
what happens.

~~~
TheLoneWolfling
In that case... I wonder if reordering things (e.g. CSS rules) could improve
the compression ratio.
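
That's a one-minute experiment (stylesheet name hypothetical; note that
sorting rules is only safe when their order doesn't affect the cascade):

    import gzip

    rules = open("style.css").read().split("}")
    original = "}".join(rules).encode()
    reordered = "}".join(sorted(rules)).encode()
    print(len(gzip.compress(original)), len(gzip.compress(reordered)))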

~~~
jacquesm
I'm still at the stage of dropping rules that aren't used. CSS tends to get
pretty inefficient once enough people have hacked on the file, and the one on
my site is no exception. It'll still need some work. But even that is already
in the realm of 'diminishing returns on investment'.

