
The Fastest Blog in the World (2015) - nodivbyzero
http://jacquesmattheij.com/the-fastest-blog-in-the-world
======
djhworld
See also [http://motherfuckingwebsite.com/](http://motherfuckingwebsite.com/)
and
[http://bettermotherfuckingwebsite.com/](http://bettermotherfuckingwebsite.com/)

I really like the minimalism of these. I guess the interesting part of the
linked article was that the guy wanted to make his blog look exactly the same
as it did before he made the optimisations.

~~~
discreditable
[https://bestmotherfucking.website](https://bestmotherfucking.website) is
another one along those lines.

~~~
happy-go-lucky
Inspired by the link, I created:

[https://sknsri.github.io/advice-by-kurosawa/](https://sknsri.github.io/advice-by-kurosawa/)

------
theandrewbailey
> This is a nice example of ‘premature optimization’ but I do hope that the
> users of the blog like the end result.

Is it really premature? You already had a blog that worked, saw a need (or
challenged yourself?), and acted. Sure, this might have been a bit overboard,
but I think that it was the right time for this optimization (not premature).

> Optimizing a thing like this is likely a bad investment in time but it is
> hard to stop doing a thing like this if you’re enjoying it and I really
> liked the feeling of seeing the numbers improve and the wait time go down.

You did a good job. And I like seeing those numbers go down, too!

------
hanikesn
I guess no one here reads [https://blog.fefe.de/](https://blog.fefe.de/)

It's the only page you can read on a GPRS connection (http) and you can
literally see every packet as it's transmitted, because the page is rendered
bit by bit.
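
For anyone curious how that effect works mechanically: a server can flush
HTML in pieces (chunked transfer encoding) so the browser renders each piece
as it arrives. A minimal sketch in Node, purely illustrative -- nothing here
describes Fefe's actual setup:

    // Stream a page chunk by chunk; with no Content-Length header, Node
    // falls back to Transfer-Encoding: chunked, and the browser can render
    // each flushed chunk as it arrives.
    const http = require("node:http");

    http.createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
      res.write("<!DOCTYPE html><title>stream</title><h1>Posts</h1>");
      const posts = ["first", "second", "third"];
      let i = 0;
      const timer = setInterval(() => {
        if (i < posts.length) {
          res.write(`<p>${posts[i++]}</p>`); // one visible chunk per write
        } else {
          clearInterval(timer);
          res.end();
        }
      }, 500); // exaggerated delay so the effect shows on fast connections
    }).listen(8080);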

~~~
tristor
Is there a way to slow it down on fast connections, and/or a video of this? I
have an academic interest and am curious how they did this, but I don't have
an easy way to demonstrate it to myself, as I don't currently have a
connection slow enough that it doesn't render instantaneously.

~~~
morsch
Both Firefox[1] and Chrome[2] (and, presumably, Safari and Edge?) have a
network throttle in their dev tools. Apart from that, you can use OS-level
throttling, e.g. Network Link Conditioner[3] for OS X. I couldn't see any
incremental rendering on Fefe's Blog, though.

[1] [https://blog.nightly.mozilla.org/2016/11/07/simulate-slow-co...](https://blog.nightly.mozilla.org/2016/11/07/simulate-slow-connections-with-the-network-throttling-tool/)

[2] [https://developers.google.com/web/tools/chrome-devtools/netw...](https://developers.google.com/web/tools/chrome-devtools/network-performance/network-conditions)

[3] [http://nshipster.com/network-link-conditioner/](http://nshipster.com/network-link-conditioner/) (We use this to throttle WebSocket transmissions, which, last I checked, the browser dev-tools throttles don't apply to.)
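
For scripted tests, the same throttling that DevTools uses is also exposed
through the Chrome DevTools Protocol. A rough sketch via Puppeteer; the
latency and throughput numbers are made-up, GPRS-ish values, not a calibrated
profile:

    // Throttle a page to GPRS-like speeds via the DevTools Protocol.
    const puppeteer = require("puppeteer");

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      const cdp = await page.target().createCDPSession();
      await cdp.send("Network.emulateNetworkConditions", {
        offline: false,
        latency: 500,                  // added round-trip delay, in ms
        downloadThroughput: 6 * 1024,  // ~6 KB/s down
        uploadThroughput: 3 * 1024,    // ~3 KB/s up
      });
      await page.goto("https://blog.fefe.de/", { waitUntil: "load" });
      await browser.close();
    })();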

------
geerlingguy
This is a worthy goal for any site you build, not just a blog... that said,
it's not even that hard to get crazily bloated CMSes like Drupal, WordPress,
etc. running as fast (or faster), as long as you set up basic caching (e.g.
Nginx, Varnish, or CloudFlare).

I have a basic-ish theme, and I use Drupal's AdvAgg module to aggregate and
minify JS and CSS, as well as a few other tricks to get page loads smaller.
Finally, I use Nginx's dead-simple proxy cache to make most page loads take <
600 ms (and faster, if you're near NYC, where my DO Droplet is located).

See, for example: [https://www.jeffgeerling.com/blog/2017/tips-managing-drupal-...](https://www.jeffgeerling.com/blog/2017/tips-managing-drupal-8-projects-composer) (~600 ms from STL, MO, USA).

Obviously, the more images and other elements (e.g. social embeds, analytics
and other junk), the more time spent downloading the page.

But, IMO, there's no excuse for your personal blog to take more than 1s to
render and fully deliver a page, with an exception if you embed videos/audio
(e.g. podcasters).
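
For reference, a minimal sketch of the kind of Nginx proxy cache mentioned
above; the zone name, sizes, and upstream address are placeholders, not the
actual config behind jeffgeerling.com:

    # Cache successful backend responses for a few minutes.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=blog:10m
                     max_size=100m inactive=60m;

    server {
        listen 80;
        location / {
            proxy_cache blog;
            proxy_cache_valid 200 10m;            # cache 200s for 10 minutes
            proxy_cache_use_stale error timeout;  # serve stale if backend dies
            add_header X-Cache-Status $upstream_cache_status;
            proxy_pass http://127.0.0.1:8080;     # the CMS backend
        }
    }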

~~~
flukus
> Finally, I use Nginx's dead-simple proxy cache to make most page loads take
> < 600 ms

600 ms is appalling. Most sites should be able to get under 50 ms easily
(ignoring internet latency). A blog should take less than 25-30 ms, even if it
has to hit the database.

~~~
matthewking
I think you're confusing total rendering time, as being discussed here, with
server response time. I'm getting 86ms response time from the mentioned blog
(from London), and 900ms total rendering time.

~~~
flukus
I did mention latency, which you can't do much about. For me it's around 100ms
from response received to rendering, most of which seems to be Google
Analytics. I don't know where the server is, but I'm in Australia, so it's
probably crossing the Pacific. See my reply to the other person for why I'm
blaming analytics.

------
wkoszek
Read Maciej's
[http://idlewords.com/talks/website_obesity.htm](http://idlewords.com/talks/website_obesity.htm)

BTW: What causes this 11 points + 4 comments thing to hit the front page of
HN? Just wondering, as I believed it was the number of comments plus
popularity that pushes a link to the front.

~~~
amk_
[https://github.com/wting/hackernews/blob/master/news.arc#L26...](https://github.com/wting/hackernews/blob/master/news.arc#L262-L278)

~~~
wkoszek
Oh, that's nice. I didn't know the whole of HN was open source.

~~~
icebraining
Everything except the mechanisms to detect voting rings and such.

~~~
grzm
Actually, I think the code that's on GitHub is quite old. The repo hasn't
been updated in 2 years, and I know they've made changes to functionality
beyond just the admin/detection features you describe above. I don't know
_how_ different the code is now, but I'm pretty sure what's publicly
available is not up to date.

------
AndrewStephens
I completely agree with the basic sentiment of this article. Far too many
sites lead with massive images and huge JavaScript frameworks just to serve
what could be a few kilobytes of text.

That said, I did not go as far as this author when designing my personal
blog[1]; I considered the following trade-offs worth the slight cost:

* I didn't inline images or CSS. I can see the appeal, but I don't believe it is really worth it unless you have relatively small amounts of CSS. In theory HTTP/2 is supposed to help here as well, and on really slow connections inlining can actually slow things down, since the browser is forced to download the inlined content instead of progressively displaying the page as it arrives.

* I ended up deciding that the custom font I wanted to use was worth the cost. I thought hard about it though and would perhaps decide against it if I was designing the site again.

* You can drive yourself insane trying to minimize traffic for images. Should you try to serve 2x images for retina displays? Small images for mobile devices? In the end I just serve the same images for everybody and minimize the use of images overall. It works for me because I don't have a lot of need for splashy pictures.

* I avoided any type of social media button or plugin, they tend to make additional requests back to the mothership. Very few people actually liked or +1ed anything on my old blog anyway, but people with better blogs might find the trade-off worth it.

[1] [https://sheep.horse](https://sheep.horse)

~~~
spc476
I don't inline CSS or images either, but I did give them a long expiry time (a
year). The first hit to my blog [1] might not be that fast, but subsequent
hits should be (it's mostly text anyway). The CSS file has a unique name, and
when I change it (which doesn't happen often; the last time was May 2015) it
gets a new filename. Also, I serve no JavaScript.

I do have one external bit---a block pointing to my Amazon affiliate account
(which might have JavaScript; I don't know, it probably does). It's disabled
for mobile devices (via CSS---I use CSS to change the layout to make it more
mobile-friendly).

[1] [http://boston.conman.org/](http://boston.conman.org/)
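
A hedged sketch of that unique-filename trick: derive the CSS filename from a
content hash at build time, then serve it with a one-year lifetime. The file
names and header values are illustrative, not the actual setup behind
boston.conman.org:

    // Build step: write the stylesheet under a content-derived name, so a
    // changed file automatically gets a new URL and busts the old cache.
    const crypto = require("node:crypto");
    const fs = require("node:fs");

    const css = fs.readFileSync("style.css");
    const hash = crypto.createHash("sha256").update(css).digest("hex").slice(0, 8);
    fs.writeFileSync(`style.${hash}.css`, css); // e.g. style.3fa9c21b.css

    // When serving that file, send a far-future, immutable cache header:
    //   Cache-Control: public, max-age=31536000, immutable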

------
helipad
The joy of making a site like this isn't just the speed, it's that you can
continually remove and simplify things without making the experience worse. I
suppose that's the definition of minimalism?

It means that when you _do_ add an image, or some JavaScript, you're doing it
because it demonstrably adds something of considered value, not just because
it's easy.

------
currysausage
As wonderful as it is that we have complex tools available for complex use
cases, let's keep simple things simple. For example, for something as simple
as a mobile dictionary web app, you don't even need a framework. Just look at
this web app: it loads faster than HN, practically instantly, and it still
looks sleek: [http://m.dict.cc/](http://m.dict.cc/)

Take a look at the JavaScript; it is so beautifully anti-best-practices! No
framework, global namespace pollution, whatever: it just works!

~~~
nashashmi
The one major problem that contributes to web page bloat: Efficient Developer
Systems.

These are solutions that do not belong in the front end of web development.
Something else must work instead.

------
mhd
Speed and readability are among the main reasons why I'm still sticking to RSS
when it comes to reading blogs. It avoids most bloat issues, and I don't
really care about anyone's favorite colors and web fonts.

------
tantalor
The "fastest blog in the world" mentioned:
[http://prog21.dadgum.com/](http://prog21.dadgum.com/)

Both sites score 100 / 100 on desktop & mobile on Google PageSpeed Insights.

------
FanaHOVA
A blog is really fast if you don't put anything but text in it basically.
Lesson learned!

~~~
interfacesketch
> _A blog is really fast if you don't put anything but text in it basically_

Here's a dummy test page I made a while ago to see if I could create a fairly
lengthy, fast-loading text page for slow mobile connections. It's hosted on a
cheap shared hosting plan, so it may well fall over (or not!).

Version A (no font loading): [http://interfacesketch.com/test/energy-book-synopsis-a.html](http://interfacesketch.com/test/energy-book-synopsis-a.html)

Version B (loads custom fonts - an extra 40kb approx):
[http://interfacesketch.com/test/energy-book-synopsis-b.html](http://interfacesketch.com/test/energy-book-synopsis-b.html)

The image at the top of the page hasn't been optimized (about 40kb); however,
I do think aesthetics are important in page design, and I'm against reverting
to a plain HTML look with no CSS styling. The test pages above are plain
looking but, I hope, reasonably pleasant to look at. (The custom-font version
looks nicer in my view than the no-font version, but of course it adds a bit
of extra page weight.)

------
dzhiurgis
He should revise his Apache configuration, because there's definitely
something wrong there. The first request takes twice as long as the second:

    
    
        bayesian-goat:CreditScoreIcons heyoo$ httping http://jacquesmattheij.com/the-fastest-blog-in-the-world
        PING jacquesmattheij.com:80 (/the-fastest-blog-in-the-world):
        connected to 62.129.133.242:80 (329 bytes), seq=0 time=1023.28 ms 
        connected to 62.129.133.242:80 (329 bytes), seq=1 time=554.06 ms 
        connected to 62.129.133.242:80 (329 bytes), seq=2 time=555.50 ms 
        ^CGot signal 2
        --- http://jacquesmattheij.com/the-fastest-blog-in-the-world ping statistics ---
        3 connects, 3 ok, 0.00% failed, time 5052ms
        round-trip min/avg/max = 554.1/710.9/1023.3 ms
        bayesian-goat:CreditScoreIcons heyoo$ httpstat http://jacquesmattheij.com/the-fastest-blog-in-the-world
        Connected to 62.129.133.242:80 from 192.168.1.1:53437
        
          DNS Lookup   TCP Connection   Server Processing   Content Transfer
        [    521ms   |      344ms     |       278ms       |       559ms      ]
                     |                |                   |                  |
            namelookup:521ms          |                   |                  |
                                connect:865ms             |                  |
                                              starttransfer:1143ms           |
                                                                         total:1702ms
    
    

Edit: I noticed interesting things about bettermotherfuckingwebsite.com
(Amazon S3, Content-Length: 1943) and motherfuckingwebsite.com (nginx/1.10.3,
Content-Length: 5108) - the Content Transfer part for those two takes only
1 ms! Meanwhile, dadgum.com has Content-Length: 9344 and its transfer takes
162 ms. Anyone got ideas why the massive difference?

------
yellowapple
One thing worth considering here is caching. For elements common to a whole
site (like web fonts and stylesheets), the initial download might be big, but
subsequent downloads won't need to happen, since the browser already has a
copy. It still ain't an excuse to load dozens of WOFFs, but it's enough to
make the hit a lot less severe for those who've already visited your site.

GZIP helps considerably here, too.
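
A quick way to see what gzip buys on text-heavy pages; the input file here is
a placeholder, and actual ratios vary with content:

    // Compare raw vs. gzipped size of an HTML file.
    const zlib = require("node:zlib");
    const fs = require("node:fs");

    const html = fs.readFileSync("index.html");
    const gz = zlib.gzipSync(html);
    console.log(`raw: ${html.length} bytes, gzipped: ${gz.length} bytes`);
    // HTML and CSS typically compress to roughly 20-30% of original size.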

------
JustSomeNobody
> Bloat to me exemplifies the wastefulness of our nature, consuming more than
> we should of the resources that are available to us.

That's the money quote.

------
stillsut
Hacker: How dare you make me enable JS to view your website

Hacker: It's simple: clone the repo, install gcc, then the dependencies, open
a command prompt, compile, and now you can do the same thing as Microsoft
Word. Well, kind of.

[This comment is 99% joke; still might be useful to look at the community
aesthetic from an outside perspective]

------
PuffinBlue
There's a trick to make it even faster - serve from a CDN closer to the
testing server :-)

[https://tools.pingdom.com/#!/sNNVG/https://josharcher.uk/cod...](https://tools.pingdom.com/#!/sNNVG/https://josharcher.uk/code/speedtest/)

vs

[https://tools.pingdom.com/#!/dSNHcL/http://jacquesmattheij.c...](https://tools.pingdom.com/#!/dSNHcL/http://jacquesmattheij.com/the-fastest-blog-in-the-world)

I enjoyed this post the last time it came up[0] and learnt a few tips from it.
Particularly interesting was the difference that inlining CSS made, even for
reasonably large amounts of CSS.

[0]
[https://news.ycombinator.com/item?id=9995529](https://news.ycombinator.com/item?id=9995529)

------
Nadya
Well, you include the "Follow" Twitter button twice: once in the sidebar and
once at the end of the blog. You can also PNGCrush your favicon.png to save 58
bytes. I'm sure I could spot a few more minor savings if I looked, not
including minification and cleaning up the CSS, since those were already
mentioned.

I highly recommend the advice of Heydon Pickering [0]. The best optimizations
can be made by _not writing code_.

With the custom font I'm using (87.7 KB) disabled, the home page of my "blog"
[1] comes in at ~1463 bytes, 802 bytes of which is the CSS, leaving under 1 KB
of HTML per post once the CSS hits the cache. It would have an average load
time of ~45 ms.

[0] [https://vimeo.com/190834530](https://vimeo.com/190834530)

[1] nadyanay.me

------
sandGorgon
My personal favorite is Jekyll Amplify -
[https://github.com/ageitgey/amplify](https://github.com/ageitgey/amplify)

A Jekyll HTML theme that mimics the style of Medium.com and uses Google AMP.

------
jen729w
> Imagine an envelope for a letter that weighed a couple of pounds for a 1
> gram letter!

Sounds like the licenses we receive from Cisco. They are literally an A5 sheet
of (thin) paper packaged, 3 boxes deep, in something easily the size of a shoe
box.

------
thinkloop
Removing images and live tweet feeds makes it a different site without
improving speed as they are async. Removing embedded fonts, analytics and non-
rendering js for extra functionality also does not improve load time if done
asynchronously as it should. The same fastest'ness could have been achieved
without any of the sacrifices. Bloat is a different issue being conflated with
speed.

Personally I prefer loading the core stuff instantly while still allowing for
a rich site that progressively loads micro-libs and media.

~~~
nathancahill
Yes, agreed. Unfortunately, there was pushback against the Flash of Unstyled
Content, so now many websites delay loading any content at all(!) until the
correct fonts are loaded. smh

~~~
thinkloop
You're right. I think the tide may be turning, though (as evidenced by the
recent flurry of posts like this one). I've personally opted for a fast load
plus a little jitter; I think it's a better overall experience:
[http://www.thinkloop.com/article/state-driven-routing-react-...](http://www.thinkloop.com/article/state-driven-routing-react-redux-selectors/)

------
soulchild37
You don't even need CSS; this blog loads fine even over GPRS:
[http://danluu.com/](http://danluu.com/)

~~~
Jaruzel
...and it gets millions of hits a month, and is a regular on HN.

Proving that all that bloat and styling (on other sites) is a complete waste
of time if there's no value in your content.

Dan focuses on just the content, and it's worked, very very well.

------
rado
20 KB of HTML+CSS, 130 KB page size including one photo. No special hard-core
optimisation, apart from using my natUIve WordPress theme on top of HTML5
Blank, with lots of features. [http://rado.bg/2017/01/my-kore-eda-list/](http://rado.bg/2017/01/my-kore-eda-list/)
It's possible, and we don't need the bloat. Happy to see optimisation around.

------
CJefferson
While inlining images and CSS might speed up the first page people hit on
your blog, won't later pages be able to use those assets from the cache?

~~~
coldtea
Not necessarily. We visit tons of sites, and a small blog has slim chances of
getting its assets into the cache in the first place (if browsers cached every
asset we browsed, the cache could easily eat 1 GB per day or so...).

Besides, most blog visitors arrive via some random successful post linked from
a popular site, or via one-off search traffic.

So they won't stick around long enough for the cache to matter anyway --
better to give them a nicer first experience, on the off chance that they do
stick around because of it.

------
thedrake
It also looks, from this
[https://www.webpagetest.org/result/170213_3A_1BNS/1/details/...](https://www.webpagetest.org/result/170213_3A_1BNS/1/details/#waterfall_view_step1)
like you are actually sending the favicon file as well (which I like, but it
makes two requests rather than one).

------
vram22
Interesting post. I tend to agree with Jacques. The value-add of many sites
these days is disproportionately low compared to their size.

My site is pretty lite :), though not in the ballpark of the OP's. It used to
be even lighter; I need to trim it down again some.

[https://vasudevram.github.io/](https://vasudevram.github.io/)

------
michaelmcmillan
Mine is faster: [http://michaelmcmillan.net](http://michaelmcmillan.net)

------
thedrake
It will be faster if you serve the request from somewhere closer to the user.
You could push the site closer using something like the edge network from
Google or other providers;
[https://peering.google.com](https://peering.google.com) shows how Google
does it.

------
Jeaye
[https://upload.jeaye.com/tmp/blog-performance.png](https://upload.jeaye.com/tmp/blog-performance.png)

He loads 4KB in 75ms, which is about 53KB/s. I load 47.82KB in 210ms, which is
227KB/s. Technically, I'm loading 4 times faster.

~~~
jacquesm
Apples and oranges. If you reduce your file size to 4KB you'll be comparing
apples with apples. The setup time of the connection counts disproportionately
for small transfers, and the goal wasn't a high transfer rate but a _short
time to load_. For a high transfer rate you should make your pages as large as
you can; they will take a long time to transfer, but the rate will be close to
the rate of the slowest link in the chain.

This is also why you can't test available bandwidth reliably with a short
transfer.
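
A worked version of that point; the setup cost and link speed below are
assumed round numbers, not measurements of either site:

    // A fixed connection-setup cost dominates small transfers, so the
    // apparent throughput of a tiny page says little about the link.
    const setupMs = 150;         // assumed DNS + TCP handshake cost
    const bytesPerMs = 1000;     // assumed ~1 MB/s link

    function apparentRate(bytes) {
      const totalMs = setupMs + bytes / bytesPerMs;
      return (bytes / totalMs).toFixed(1); // bytes per millisecond
    }

    console.log(apparentRate(4 * 1024));  // ~26 B/ms: setup dominates
    console.log(apparentRate(48 * 1024)); // ~247 B/ms: setup amortized
    // Same link, roughly a 9x difference in "measured" rate.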

------
LeanderK
I don't understand this holy crusade against bloat. Yes, many if not most of
the websites I visit are bloated, but I think reasonable steps are the right
answer (like an automatic, idiot-proof system that serves 4K pictures to
4K displays and HD pictures to smartphones). Shaving nanoseconds by manually
inlining everything is not scalable. Sometimes I think about starting a blog,
but it should be a small hobby and as painless as possible. Ideally I would
like an automated system that handles:

    
    
      - a custom font (i like fonts)
      - being fast on wifi
      - being fast on LTE
      - being reasonable fast on slower networks (maybe don't load the font etc.)
    

Edit: trying not to be too provocative :)

~~~
pixelbeat__
That's a bit presumptuous. Not everyone is on fast internet. Even supposedly
"fast" internet benefits; I've been very surprised by connection latency
issues in the US.

Also it's more secure to have a static site.

Also it's simpler to manage in the long term.

win win win

~~~
oogali
Depends on your goals.

If you're a company like Facebook or Google, you're looking to eke growth out
of every corner of the Earth, including those with really slow Internet.

So after you've conquered the 1st and 2nd world countries, you start
optimizing for additional tiers. (And optionally launch balloons that shower
Internet upon untapped markets)

However, if you're not one of those two behemoths, then your target audience
is probably located within ~3,500 miles of you/your servers (which translates
roughly into an RTT latency figure of ~100 milliseconds; add 40 ms for folks
on low-grade ADSL or cellular connections).

Your barebones site is competing with other fully-featured sites that are
taking advantage of the high bandwidth delay product that's available to them.

Unless competing for eyeballs is not one of your goals.

But regardless, optimizing for reach beyond that mileage range by slicing bits
here and there, rather than bolting on a CDN, is probably a premature
optimization.

To throw out a new presumption, I'd say that if you measured users' rage,
they'd be more angry with packet loss than with consistent latency.

One is predictable and can be planned for ("open 5 tabs, go do some chores,
come back in 15 minutes when they're loaded").

The other is absolutely infuriating ("open 1 tab, get teased by some amount of
partially loaded objects, spend the next 5 minutes refreshing due to socket
timeouts, cross fingers that not too many refreshes evicts items out of the
local cache, give up")

------
rcarmo
I decided to optimize mine for maintenance and mobile, and gave up on doing
HTTP and format optimization after moving to CloudFlare - except for adding
just enough JS to do Medium-like lazy image loading to save bandwidth for
visitors.

But this is impressively fast nonetheless.

~~~
wtbob
> except for adding just enough JS to do Medium-like lazy image loading to
> save bandwidth for visitors.

Please, _please_ don't do that: it means that visitors without JavaScript
enabled simply cannot view your page.

If a visitor wishes to configure his browser not to download images until he
scrolls near them, that's certainly within his power. But if you break your
page and only unbreak it for those with JavaScript, then your visitors have no
choice.

~~~
rcarmo
Well, they do get a placeholder image (a blurry one, around 1-4 KB in size)
by default.

The percentage of folks with JS disabled seems to be around 1% of visitors to
generic websites (no hard figures here, only search hits on Quora and a few
analytics sites), and is likely to be pretty much zero for mobile users, so...
I'm OK with the trade-off, since I'd much rather improve the experience for
those who pay through the nose for mobile bandwidth.
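
For the record, a minimal sketch of this placeholder-plus-lazy-load pattern,
with a noscript fallback for the no-JS case raised above; the attribute names
are illustrative, not necessarily the exact implementation described here:

    // Markup assumed per image:
    //   <img src="placeholder-blurry.jpg" data-src="full.jpg" alt="...">
    //   <noscript><img src="full.jpg" alt="..."></noscript>
    // Swap in the full image as it approaches the viewport.
    const io = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const img = entry.target;
          img.src = img.dataset.src; // trigger the full-size download
          io.unobserve(img);
        }
      }
    }, { rootMargin: "200px" }); // start loading slightly before visible

    document.querySelectorAll("img[data-src]").forEach((img) => io.observe(img));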

~~~
wtbob
> I'm OK with the trade off, since I'd much rather improve the experience for
> those who pay through their nose for mobile bandwidth.

I know that your heart is in the right place, but I believe that you're making
the wrong decision. I suppose it's one thing if each image is also a link to
the high-res source (although I wonder why suitably-scaled images can't be
served to everyone), and if the images are irrelevant to your text. But
_particularly_ if there is no way for clients without JavaScript to see
images, then I think you're breaking the web.

I think that JavaScript disabling will become more and more common as
advertising and tracking becomes more and more intrusive. A client who enables
JavaScript disables security, disables privacy and disables performance.

------
macandcheese
Really liking Middleman (similar to Hugo, which the OP mentions) for this
kind of stuff recently.

Powerful enough for asset caching / on-build minification / direct deploys to
S3 / dynamic page generation from JSON, light enough to load in the blink of
an eye.

------
Scirra_Tom
Minifying the CSS/HTML would be one extra improvement, along with removing
HTML comments.

~~~
pmlnr
No need for that, gzip will do that for you.

~~~
placeybordeaux
Does gzip remove comments?

~~~
theandrewbailey
No. But if there aren't many, they aren't long, and are all human readable
(like in this article), the penalty after gzip is negligible.

~~~
Scirra_Tom
I thought the goal was fastest blog in the world ;)

------
nso
I was musing about how to optimize it a bit further for multiple page
requests.

Would it not be possible to do something like this (pseudocode)?

    // on each page request:
    if (!userHasVisitedBefore()) {
      inlineCssIntoHtml();       // first visit: inline for a fast first paint
      lazyLoadCssFileWithJs();   // and warm the browser cache in the background
    } else {
      insertLinkToCssFile();     // repeat visit: the file is already cached
    }
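
A hedged sketch of how that could look server-side, assuming Node with
Express and cookie-parser; the cookie name and file paths are made up:

    // First visit: inline the CSS for a fast first paint, and lazy-load the
    // external file so it lands in the cache. Repeat visits: just link it.
    const express = require("express");
    const fs = require("node:fs");
    const cookieParser = require("cookie-parser");

    const app = express();
    app.use(cookieParser());
    const css = fs.readFileSync("style.css", "utf8");

    app.get("/", (req, res) => {
      const head = req.cookies.seen
        ? '<link rel="stylesheet" href="/style.css">'
        : `<style>${css}</style>` +
          // print-media trick: fetches the file without blocking rendering
          '<link rel="stylesheet" href="/style.css" media="print" ' +
          'onload="this.media=\'all\'">';
      res.cookie("seen", "1", { maxAge: 365 * 24 * 3600 * 1000 });
      res.send(`<!DOCTYPE html>${head}<h1>Hello</h1>`);
    });

    app.get("/style.css", (req, res) => res.type("text/css").send(css));
    app.listen(8080);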

------
nvus
Really, the only optimizations applied here are cutting bytes, and this is the
fastest...?

In the meantime in Romania:
[http://imgur.com/a/UfhoD](http://imgur.com/a/UfhoD)

------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=9994276](https://news.ycombinator.com/item?id=9994276).

------
kordless
Compare this to the crap LinkedIn pushed out about a month ago. Placeholder
images linger long enough for you to ponder their existence and note their
janky departure.

~~~
pmlnr
Could you please link it? I have no idea what you're talking about.

------
joshu
I should probably install a web server on my ROS box so the autonomous gokart
can serve web pages.

------
nkkollaw
This is so fast. Amazing.

