
I am a fast webpage - capocannoniere
https://varvy.com/pagespeed/wicked-fast.html
======
jlmorton
I hate to be negative, but what really is the point of this? That a simple
webpage without any content can be fast? Of course it can.

Is it desirable to inline your CSS, "like a boss?" Maybe if you have one
single web page. What if you have dynamic content and your users intend to
browse more than one page? With externalized CSS, that is all cached.

Same with images. If I'm building a web application, I certainly do not want
inlined images. I want those on a CDN, cached, and I want the page to load
before the images.

Not only is this not particularly useful advice, it's bad advice.

~~~
simbalion
the guy says he's not an idiot, then brags about spending $30 per month on a
VPS (idiot) for a single-page static HTML website with all inline code
(idiot+1).

It's not being negative to point out the glaring flaws in a person's
statements. My assumption is the entire thing is an advertisement for that
hosting service.

~~~
mikepurvis
Also, whether the VPS has an SSD or not is totally irrelevant— if you really
were serving a single page it would be cached in the memory of your webserver.

(Or better yet, serve the thing off S3 and let Amazon be your CDN.)

~~~
meowface
Hell, I'd just use my Dropbox account.

~~~
fakename
Dropbox is disabling html hosting on October 3rd

~~~
meowface
Ah, shame. I imagine it's for security reasons, though, so I can't blame them.

------
zackbloom
Just to point out, there's no particular reason to host a page like this on a
VPS at all. You could just throw it on S3. Even better, you could put it
behind a CDN like Cloudfront and the total cost would be a dollar or two a
month, not $25+ and it would be significantly faster.

~~~
toomuchtodo
> You could just throw it on S3. Even better, you could put it behind a CDN
> like Cloudfront and the total cost would be a dollar or two a month, not
> $25+ and it would be significantly faster.

I apologize for quibbling (really, I do! But I'm an infrastructure guy! This
is my bag!). Yes, host it on S3, but ALWAYS put a CDN in front of S3 with long
cache times (even just CloudFront works). S3 can sporadically take hundreds of
milliseconds to complete a request, and, you know, AWS bandwidth is expensive
(and CDN invalidation is damn near free). And you can usually use your own SSL
cert at the CDN instead of relying on AWS's "s3.amazonaws.com" SSL cert
(although you will still rely on that S3 SSL cert for CDN->S3 origin
connections; c'est la vie).

EDIT: It also appears Cloudfront supports HTTP/2 as of today. Hurray!

[https://aws.amazon.com/about-aws/whats-new/2016/09/amazon-cl...](https://aws.amazon.com/about-aws/whats-new/2016/09/amazon-cloudfront-now-supports-http2/)

~~~
cm3
Cloudfront requires JavaScript, which isn't always acceptable.

Edit: Nevermind, confused Cloudfront with Cloudflare. Thanks for the
correction, toomuchtodo.

~~~
toomuchtodo
Cloudflare does, Cloudfront (AWS' CDN) does not.

EDIT: cm3: I didn't mean to call you out, just wanted my reply in here for
historical context. It's _very_ easy to confuse the two.

~~~
cm3
No, thanks for correcting me. The names are similar enough and the purpose is
too, so it's easy to confuse. I mean, if you hadn't commented, I would have
wondered why it got downvoted.

------
neoCrimeLabs
> "I am not on a shared host, I am hosted on a VPS"

Hate to break it to you, but your virtual private server (VPS) is likely
sharing a bare-metal server with other VPSes. ;-)

Also, you can look into content delivery networks (CDNs), which will most
likely deliver this page to clients faster than your VPS, especially when you
consider that your VPS is in Dallas and CDNs have nodes located around the world.

~~~
rospaya
> Hate to break it to you, but your virtual private server (VPS) is likely
> sharing a bare-metal server with other VPS. ;-)

Likely? Isn't that the point of a VPS?

~~~
neoCrimeLabs
You can reserve entire machines, even using VPS. Heck, you can even luck out
and be the first VPS on a newly provisioned host.

The chances of either are slim. Still, I try not to assume when I don't have the data.

~~~
paulddraper
Sure, you might be sharing, but the point is that you don't generally have
to care.

It's all virtualized and cloudy.

~~~
wingless
VPS performance varies wildly from provider to provider, though.

------
bobfunk
Not that wickedly fast unless you're really near Dallas where the server is:

[https://performance.sucuri.net/domain/varvy.com](https://performance.sucuri.net/domain/varvy.com)

Hosting on a single VPS is never gonna be very fast globally, no matter what
you pay your host. In fact, our free plan on Netlify would make this a whole
lot faster...

~~~
cperciva
Off topic, but is that service showing crazy slow numbers for "USA, Atlanta"
for anyone else?

~~~
apocalyptic0n3
It is. Linode's Atlanta data center has been getting DDoS'd on and off since
Sunday. This site isn't hosted on Linode, but could there perhaps be
congestion in Atlanta from that attack causing general slowness?

------
begriffs
OP has certainly nailed Hacker News psychology. My old coworker called the
technique "inferiority porn." Titles like "the secretly terrible developer" or
the closing statement of this particular article: "Go away from me, I am too
far beyond your ability to comprehend."

As many people have pointed out there are faster methods of static hosting
through a CDN, and many of the techniques of this site are inapplicable for
larger sites. But A+ on the marketing.

~~~
trymas
I hate this trend.

IMHO there is mainly one way to get attention - it's to get (great and
instant) emotion from the user. You can give good emotions or bad emotions.

Personally, I think that creating a good emotion takes much more effort than
creating a bad one. A website/product can say how great it is, but that will
not 'click' as instantly as someone telling me I am a dumb baby and I suck
[0][1], or that I am a mere mortal baboon and not superior [2], at which most
people will fly into instant rage and start flame wars in whatever comment
section, because there "is no such thing as bad PR".

The most popular writers/bloggers in my country have created these arrogant
dipshit characters (I tend to believe that they are "normal" people, but they
clearly know what sells) who always say that they are richer, smarter, and
better than you. They create stories about a "cheap restaurant breakfast for
60€" and so on. The most interesting thing is that people buy their shit and
then rage on whatever websites about how the writer dared call them a dumbass
homeless bum.

[0]
[https://www.youtube.com/watch?v=0nbkaYsR94c](https://www.youtube.com/watch?v=0nbkaYsR94c)

[1]
[https://news.ycombinator.com/item?id=12448545](https://news.ycombinator.com/item?id=12448545)

[2] [https://varvy.com/pagespeed/wicked-fast.html](https://varvy.com/pagespeed/wicked-fast.html)

------
nine_k
I've been making my personal pages fast this way since the last century.
Probably a huge number of people have done the same. It's pretty obvious.

When you need fancy graphics (a static photo album), things become less easy:
you e.g. may want to preload prev / next images in your album to make
navigation feel fast.

Things become really tricky when you want interactivity, and in many cases
users just expect interactivity from a certain page. But client-side JS is a
whole another kettle of fish.

Things become ugly when you want to extract some money from a page's
popularity. You need to add trackers for statistics and ad networks' code to
display the ads, and complicate the layout to make room for the ads, placing
them somehow unobtrusively but prominently. This is going to be resource-hungry
at best, slow at worst.

(Corollary from the above: subscription is more battery-friendly than an ad-
infested freebie.)

------
userbinator
A good sequel to
[http://motherfuckingwebsite.com/](http://motherfuckingwebsite.com/) , which
is probably too understyled for most people.

~~~
wingless
I prefer
[http://bettermotherfuckingwebsite.com/](http://bettermotherfuckingwebsite.com/)

~~~
dredmorbius
For some reason, I'm a fan of this one:
[http://codepen.io/dredmorbius/full/KpMqqB/](http://codepen.io/dredmorbius/full/KpMqqB/)

------
ksubedi
Took me almost 30 seconds to load, maybe because the server is being hammered
by HN traffic right now? Also like others here were saying, using a CDN would
definitely help with the initial latency.

~~~
AOsborn
I think this is the ironic lesson: for many sites, optimizing for consistent
performance (i.e. CDN, geographic caching) is a more important objective than
prematurely optimizing for a subset of users.

Example:

Business A - average render time 0.3s, but under load 5-10s

Business B - average render time 0.8s, but under load 1-2s.

Subjectively, around ~10s response time is the point I would close the tab and
look for another business if I was trying to do shopping online, anything
involving a credit card etc.

------
paulpauper
looks like this whole thing is a scheme to promote his webhosting affiliate
link:
[http://www.knownhost.com/affiliate/idevaffiliate.php?id=1136...](http://www.knownhost.com/affiliate/idevaffiliate.php?id=1136_0_3_1)

The fastest and most reliable hosting, by far, in my own experience, is
Amazon's EC2 and S3 services.

~~~
Mao_Zedang
I picture him coding this in vi with a maniacal evil laugh, thinking of all
the money his scheme will make

~~~
Annatar
Nothing wrong with that; smarts should be rewarded.

------
quinndupont
Is this image inlining thing something new? Am I reading it correctly that the
images are encoded in base64 and delivered as html? Surely this is a bad
idea... no?

~~~
creshal
> Is this image inlining thing something new?

No, it's been around since forever. Just not used terribly often.

> Am I reading it correctly that the images are encoded in base64 and
> delivered as html? Surely this is a bad idea... no?

It depends. Making a new request to fetch the image always has overhead.
Whether that overhead is bigger or smaller than the overhead of
base64-encoding the image depends on:

• file size (naturally)

• file compressibility: The difference isn't _as_ pronounced after gzipping
everything, especially if the source data is somewhat compressible

• protocol: HTTP/2 allows a correctly configured server to push attached data
with the original request, so no second request is needed. Even without server
push, HTTP/2's multiplexing reduces the overhead drastically compared to
plain HTTP/1.1 or, worst case, HTTPS/1.1 to a different domain. The latter
requires a full TLS handshake, and that's what, >30 KB of data exchanged if
you have more than one CA certificate in the chain? That's a lot of image data.
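The tradeoff above is easy to measure. Here is a minimal sketch using only the Python standard library, with random bytes standing in for already-compressed image data (which, like PNG or JPEG payloads, doesn't gzip well):

```python
import base64
import gzip
import os

# Stand-in for an already-compressed image: random bytes are incompressible,
# much like real PNG/JPEG payloads.
raw = os.urandom(3000)
b64 = base64.b64encode(raw)

gz_raw = gzip.compress(raw)
gz_b64 = gzip.compress(b64)

print(f"raw: {len(raw)}  base64: {len(b64)}")          # base64 adds ~33%
print(f"gzipped raw: {len(gz_raw)}  gzipped base64: {len(gz_b64)}")
```

After gzip, the base64 copy shrinks back toward the raw size (the 64-symbol alphabet carries at most 6 bits per byte), which is why the penalty is much less pronounced once everything is compressed on the wire.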

~~~
sillysaurus3
Completely offtopic, but how do you type •? I like it.

~~~
maxerickson
It's system dependent.

On Windows you can do Alt+Numpad 2022.

On whatever is handling input for this XFCE system, Ctrl+Shift+U 2022 + Enter
types it.

~~~
creshal
And since I'm lazy, I put it on AltGr+, with xmodmap.

------
leesalminen
Ehh, I just got 10.91s load time in Chrome 53 from Colorado, USA.

Image of Chrome Dev Tools:
[https://reportcards.scdn3.secure.raxcdn.com/assets/uploads/f...](https://reportcards.scdn3.secure.raxcdn.com/assets/uploads/files/company/452e0fdee5904b10046de11c0f7bfccb.png)

As an aside, does HTTP/2 provide any benefit for a single HTML file with no
external assets?

~~~
mrb
HTTP/2 header compression is one benefit that helps even if you have just one
request.

~~~
niftich
I want to benchmark this, because intuitively I disagree.

The HPACK spec is a pretty easy read [1]. There is a static, hardcoded table
that contains most of the HTTP header names, and even some common predefined
KV pairs. You save some bytes on the wire if your header's name or value is
one of these entries; the header name will essentially always be in the
static table.

But for names and values that aren't in the static table, you have to put them
into the dynamic table and encode them using either the integer packing or the
Huffman code. The client has to decompress these, of course.

On future requests, you have some leftover state in your dynamic table, so
future 'duplicate' headers are packed and take up very little space. But for
the first (ever) HTTP request-response pair, you have to transmit ALL the
headers in full, so the true benefits of the dynamic table don't kick in.

[1]
[https://http2.github.io/http2-spec/compression.html#header.e...](https://http2.github.io/http2-spec/compression.html#header.encoding)
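The integer packing mentioned above is small enough to sketch. Here is a minimal Python implementation of the prefix-integer encoding from RFC 7541 §5.1 (the value 1337 with a 5-bit prefix is the spec's own test vector); this covers only the integer half, not the Huffman coding or table management:

```python
def hpack_encode_int(value: int, prefix_bits: int) -> bytes:
    """Encode an integer with HPACK's prefix scheme (RFC 7541, section 5.1).

    Values that fit in the N-bit prefix are encoded directly; larger values
    fill the prefix with ones, then continue in 7-bit groups with a
    continuation flag in the high bit.
    """
    max_prefix = (1 << prefix_bits) - 1
    if value < max_prefix:
        return bytes([value])
    out = [max_prefix]
    value -= max_prefix
    while value >= 128:
        out.append((value % 128) | 128)  # 7 payload bits + continuation flag
        value //= 128
    out.append(value)
    return bytes(out)
```

For example, `hpack_encode_int(1337, 5)` yields the three bytes `31, 154, 10` from the spec's appendix C.1.2.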

------
pilif
_> my hard drives are SSD_

Of course that's entirely irrelevant, as the page completely fits into the RAM
of the server (or even the CPU's cache, for that matter).

------
vonseel
Cool... Unfortunately, in practice it's easy to find a list of best practices,
much harder to implement them in a scalable and durable manner on any project
of sufficient size, especially if you're working with a legacy codebase.

------
usaphp
> "My images are inlined into the HTML using the base64 image tool, so there
> is no need for the browser to go looking for some image linked to as an
> external file."

This does not work in most cases when you use big images. From a
StackOverflow answer [1]: "It's only useful for very tiny images. Base64
encoded files are larger than the original. The advantage lies in not having
to open another connection and make a HTTP request to the server for the
image. This benefit is lost very quickly so there's only an advantage for
large numbers of very tiny individual images."

[1] - [http://stackoverflow.com/questions/11736159/advantages-and-d...](http://stackoverflow.com/questions/11736159/advantages-and-disadvantages-base64-image-encode)
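For anyone who wants to try the technique, inlining an image is just base64 plus a MIME prefix. A minimal sketch (the helper name `to_data_uri` is made up for illustration):

```python
import base64
import mimetypes
import pathlib


def to_data_uri(path: str) -> str:
    """Return a data: URI for a file, suitable for inlining into HTML or CSS.

    The MIME type is guessed from the file extension; unknown extensions
    fall back to application/octet-stream.
    """
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    payload = base64.b64encode(pathlib.Path(path).read_bytes()).decode("ascii")
    return f"data:{mime};base64,{payload}"


# Usage: <img src="data:image/png;base64,iVBORw0..."> once substituted in.
```

The result drops straight into an `<img src="...">` attribute or a CSS `url(...)`, at the cost of the ~33% base64 size overhead discussed above.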

~~~
combatentropy

> This benefit is lost very quickly so there's only an advantage
> for large numbers of very tiny individual images.

In which case, maybe it would be better to use sprites?

~~~
usaphp
I don't know; I hate dealing with sprites. It's just not worth it in my
opinion, given the time you spend on every edit...

~~~
brazzledazzle
If you're using Photoshop, you can create a PSD that sources other PSDs and,
if I remember right, create an action that generates the exported image, so
you could automate things quite a bit, if not entirely.

------
zodvik
Dlang forum (with dynamic content) is insanely fast!
[https://forum.dlang.org/group/general](https://forum.dlang.org/group/general)

~~~
jbb555
Yeah, this is the site that made me realize just how awful so many other sites
are.

It's very fast indeed, and has no useless graphics or javascript effects, but
is 100% functional and looks great.

------
INTPenis
A VPS is shared hosting to me, it's just an instance on a shared system.
Shared hosting used to mean a folder on a shared web server but I consider
sharing resources in a hypervisor equally shared. ;)

If they truly wanted speed through control of resources they would have used
bare metal.

But yeah, the website is easy to optimize when it's simple, the hard part,
often outside of your control, is DNS and actual connection handling. Many
have already mentioned CDN so there's that.

But you also don't know what kind of firewalls are being used, or switches, or
whatever else may impact your site. Why not just do what others have suggested
and put it all in the cloud, so that Amazon can worry about balancing your
load?

------
josephjrobison
Pretty good at 97/100 on Google's PageSpeed Insights -
[https://developers.google.com/speed/pagespeed/insights/?url=...](https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fvarvy.com%2Fpagespeed%2Fwicked-fast.html&tab=desktop)

~~~
vincentriemer
I don't really think the PageSpeed score accurately reflects page loading
speed (maybe initial page loading speed). It seems not to care about
lazy-loaded resources, as a JS-heavy webapp I made (around 200KB) actually
scores higher than this one:
[https://developers.google.com/speed/pagespeed/insights/?url=...](https://developers.google.com/speed/pagespeed/insights/?url=io808.com&tab=desktop).
Funnily enough, the screenshot on the test only shows the loading spinner.

------
cyberferret
Interesting exercise, in an age where web pages are now bigger than most
business applications I used to use in the early days of DOS/Windows.

Note: Just checked, and even a simple Medium blog post page won't fit on one
of those old 3.5" floppy disks..

EDIT: To stay on topic - the OP's page loaded instantly for me here in outback
Australia...

------
smoyer
"Look amazing on any device" ... The right edge of your text is coiled on my
phone (not so amazing).

~~~
Frye
Same here.

------
pacnw
OK, I'll bite, as this is near and dear to my heart. Instead of showing me a
fast webpage with minimal content, tell me how to make my tons of CSS and JS
load fast! That's a real problem. I deliver web apps, and interactivity is a
must.

IMO, the real problem with the web is the horrendous design choices and
delivery of very popular news and daily reading sites (ahem cnn) where
subsequent loads of ads and videos start shifting the page up and down even
when you have started reading something. Let's address that problem first!

~~~
hasenj
> tell me how to make my tons of css and js load fast

I went to the doctor and he told me to lose weight. What a fatphobe!

He should have told me how to eat everything I desire without any bad side
effects!

/s

------
KerryJones
"Faster than 100% of tested sites"

[https://tools.pingdom.com/#!/ehdlhb/https://varvy.com/pagesp...](https://tools.pingdom.com/#!/ehdlhb/https://varvy.com/pagespeed/wicked-fast.html)

------
ivanhoe
For speed optimization it's really important to always fine-tune for your
particular use case and apply some common sense. For instance, inlining
everything as suggested here is faster only if you expect visitors to open
just that one page and bounce away, so browser caching is not helpful.
Consequently, it's a very good tip for e.g. landing pages, but it makes no
sense at all to serve pages that way to your logged-in users.

------
silverwind
A few more possible optimizations:

- Brotli instead of gzip. Likely saves around 10% in size.

- Minify everything, including HTML. Could save around 3% in size on that page.
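The minification half can be sketched in a few lines. This is a deliberately naive whitespace collapse for illustration only; a real minifier (e.g. html-minifier) must respect `<pre>`, `<textarea>`, and inline scripts:

```python
import gzip
import re

html = "<html>  <body>\n    <h1>I am a fast webpage</h1>\n  </body>  </html>\n"

# Naive minifier: collapse whitespace runs that sit between tags.
# Illustration only -- this would mangle <pre> blocks and inline JS.
minified = re.sub(r">\s+<", "><", html).strip()

print(len(html), len(minified))
print(len(gzip.compress(html.encode())), len(gzip.compress(minified.encode())))
```

Comparing the gzipped sizes of both versions shows why the savings on an already-small page are in the low single-digit percent range: gzip absorbs most of the redundant whitespace anyway.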

~~~
legion050
Another optimization:

- Zopfli for PNG, via AdvanceCOMP

------
nathancahill
Submit to 10k Apart: [https://a-k-apart.com/](https://a-k-apart.com/)

------
clessg
How much does HTTP/2 mitigate the need for such techniques, if at all?

~~~
creshal
HTTP/2 with server push will eliminate the inlining hacks, and it compresses
headers automatically.

But the other points remain: No Javascript is still the fastest Javascript
framework, and while you can do lots of crazy hacks with CSS, _maybe_ you
shouldn't.

------
HugoDaniel
Inline all your CSS and you are forcing a full reload on all your users
whenever you need to change or add something.

This can be tricky if your page grows in complexity/size and you need to
change something.

Please, when it's more appropriate, don't inline your CSS; prefer to take
advantage of caching.

~~~
mbrock
Well, the entire page is 7 KB.

------
matt_wulfeck
I think it _feels_ fast because it loads at once, but I'm actually not getting
very impressive results programmatically if you measure how long the entire
TCP transaction takes (which is what I consider page loading):

    
    
        # Both DNS records are cached before request
        >>> print(requests.get('https://varvy.com/pagespeed/wicked-fast.html').elapsed.microseconds)
        226515
        >>> print(requests.get('http://www.google.com').elapsed.microseconds)
        92027
    

Even google.com (92 ms) is about 2.5x as fast as the OP (226 ms) at
establishing a connection, reading all of the data, and closing.

~~~
paulddraper
But what's 130ms?

Okay, okay, it "matters". But it's nothing compared with the 3s to load all
the JS and CSS and the subsequent sluggishness as 20 analytics scripts are
loaded and processed.

------
brainless
Honestly? I am surprised to see this page voted so highly on the front page.
If you really wanted a fast "static" page, you would put it on a CDN. All you
wanted to do is put a marketing link in your last paragraph.

------
halayli
Your page can be very fast, use minimal resources, and be hosted in a good
place, but you always gotta watch out for proximity to the user, time to first
byte, and DNS resolution time. Perceived speed is highly affected by those.

It took 2 seconds to load the page on a fresh ec2 box:

    
    
        time_namelookup:  0.061
           time_connect:  0.100
        time_appconnect:  0.223
       time_pretransfer:  0.223
          time_redirect:  0.000
     time_starttransfer:  1.935
                        ----------
             time_total:  2.066

------
jschwartzi
Yes you are. You're so fast I don't even see you refresh.

------
bennettfeely
Probably wouldn't make much of a difference, but there is still room for
performance improvement by minifying the HTML page.

------
exabrial
"No Javascript"

Amen.

~~~
faragon
It reminds me of the "no synthesizers" label used in the '80s by some musicians :-)

~~~
exabrial
I know, right? I can't wait for the super-cool bloatware-fashion-craze JS
frameworks to die.

Here's an idea: WebAssembly, but using existing opcodes from the JVM.

~~~
boubiyeah
JS bloat is not gonna go away

------
mashedcode
You can do much better! What about html-muncher for CSS class minification?

Those PNGs are not fully optimized, and an SVG would probably be smaller too;
even if it isn't in the case of the orange one, it could have been compressed
much better.

Making use of data: URLs might look good on first visit, but honestly, with
HTTP/2, just push the resources and externalize them.

Because seriously, a cache of 300 seconds? And how about offline support,
anyway? It's 2016.

Furthermore, where's my beloved Brotli support?

By the way, what about WebP support? OK, TBH, if the PNGs were properly
optimized WebP would actually not beat the file size, but hey: "they aren't."

So even though it's only this tiny static page, there's still so much wrong
with it. Please improve! By the way, what about QUIC?

------
DigitalSea
Easy to make a website fast when it has nothing on it. In the real world a
site isn't this light. It has images, analytic scripts, stylesheets, fonts,
Javascript (jQuery at the least). Using a combination of a CDN and realistic
caching, I can make a fast website as well.

~~~
mbrock
Real world sites don't need all that stuff.

~~~
mxuribe
Many real-world sites have strategic/marketing partners who ask to add
analytics scripts so that they can capture metrics for their partnerships with
you/your site. And if your site has been optimized but their servers (which
serve up these third-party analytics scripts) aren't as optimized, guess where
the slowdown comes from? Senior leaders don't always require these partners to
conform to internal performance standards. So yes, unfortunately, real-world
sites _do_ have - or at least are forced to have - stuff like that.

------
sigi45
Simple text, a few links to "tips", a little bit of base64 images without any
deeper knowledge. For example, there was a website that showed the impact of
base64 images just a few weeks ago (if I remember correctly).

But it has a referral link.

That's probably the point of this page.

------
heavymark
This is odd. Clearly anyone can make a lightning-fast page by making a single
page, since then you can have CSS inlined versus needing to link to CSS
stylesheets across multiple pages. And of course not having JavaScript makes
it faster, but that's a requirement for most typical sites these days, and
loading images that way is nice for hackers but not for real people and
clients using CMSes. Also, paying $25-35 for hosting is not very bright, since
you can get a $5 DigitalOcean server on SSD, not shared, that would load this
particular page just as fast if not faster.

------
jayess
His affiliate link for VPS service has its cheapest option priced at $25 a
month. You can get a nice little VPS for static hosting on SSD from
DigitalOcean for $5 a month, or $6 a month with backups.

~~~
r3bl
You can go even cheaper.

Time4VPS offers you 2 cores (compared to one on DO), 80 GB of SSD (compared to
20 on DO), 2 TB of bandwidth (compared to one on DO), and 2 GB of RAM
(compared to 512 MB on DO) for 3 euros (3.36 dollars). One additional euro for
daily and weekly backups.

Started renting one just two days ago, so I can't really guarantee that it's
reliable, but it was recommended to me by a friend who's been renting one for
over 100 days now without any downtime.

[1] [https://www.time4vps.eu/pricing/](https://www.time4vps.eu/pricing/) (or,
if it sounds good and you want to use my referral link:
[https://billing.time4vps.eu/?affid=992](https://billing.time4vps.eu/?affid=992))

~~~
rocky1138
CloudAtCost works perfectly for me. It's a one-time fee for VPSes, and they're
always having sales.

Use my coupon code to get 50% off any CloudPRO hosting: e6a8yWuhA4

[https://cloudatcost.com](https://cloudatcost.com)

------
spion
I can't wait for half of this advice to become obsolete with HTTP/2.

------
fsiefken
What arrogance; the page is done with me? I'm not done with the page yet. I
could make the same page much faster by putting the PNGs in an inline SVG,
stripping the source of unnecessary whitespace and newlines, and serving
Brotli (or SDCH-compressed) pages to Firefox, Chrome, and Opera
dynamically... or even just doing the decompression inline with JavaScript.
Might save another 20%:
[https://github.com/cscott/compressjs](https://github.com/cscott/compressjs)

~~~
mbrock
... With what are you going to compress the compression library?

~~~
fsiefken
Brotli or gzip on the server. But you are right; in my enthusiasm I overlooked
those bits!

------
baristaGeek
I can see in the source code that you're expressing all dimensions in terms of
ems and %s. A technology such as Bootstrap will always be the way to go;
however, could you tell us a little bit more about how you did this? How did
you ensure that it looks good not only on your screen but on any screen?

I know people are saying it has some errors on certain mobile devices, but
that's still a pretty good job of manipulating CSS properties.

~~~
Scarbutt
_A technology such as Bootstrap will always be the way to go_

What? Why?

~~~
baristaGeek
Bootstrap was just an example. It could be Bootstrap, Materialize, W3.CSS, etc.

The point is that it's much more convenient to reuse code from a framework,
because it's better to sacrifice file size for fast iteration and
functionality.

~~~
mikekchar
There is always a tipping point. We tend to use Bootstrap a lot at work for
the reasons you mention, but you can pretty quickly get to the point where
your CSS is complex enough that you would have been better off doing it from
scratch. All frameworks are like that -- you trade off initial convenience for
design constraints.

When I'm doing my own projects I always write my CSS by hand because it ends
up less complex in the end. I don't need to see pretty things up front like my
corporate customers do.

------
calebgilbert
The whole hosting issue seems to open a can of worms, at least if this comment
stream is any indication. I think it probably would have been better if they
had said something more along the lines of: 'Choose (and likely expect to pay
for) some sort of superior hosting solution which will prioritize allocating
resources to your site(s).'

The general point could be made without leaving so much room for everyone to
argue over specifics.

------
kelvin0
Examples of fast pages:
[http://www.3riversstadium.com/index2.html](http://www.3riversstadium.com/index2.html)

[http://www.pmichaud.com/toast/](http://www.pmichaud.com/toast/)

[http://home.mcom.com/home/welcome.html](http://home.mcom.com/home/welcome.html)

~~~
tonybaroneee
Lol, thanks for the Three Rivers throwback! (Pittsburgh native here)

------
jrmacmillan
How dare HN spread this kind of speed-shaming hate slander! ;)

We need to see the bloaty-positive alternative, not all websites have to be
Google models.

------
natmaster
A lot of this stuff is outdated now:
[https://news.ycombinator.com/item?id=12448539](https://news.ycombinator.com/item?id=12448539)

For instance, delivering one giant JS/CSS file is now bad practice because it
is harder to cache; since HTTP/2 removes the overhead of multiple requests,
there is no downside to many files.

------
traviswingo
This took almost 10 seconds to load for me...

~~~
AOsborn
Around the same for me, running fibre in New Zealand. Long delay before
content even began loading - as mentioned in other comments, would likely have
been a non-issue if a decent CDN was used.

~~~
andai
I'm curious, how much benefit is there to having a fibre connection in NZ
except for NZ websites? What's the max speed?

------
kazinator
The best "Shift+Reload" refresh I've managed to get out of this page from
where I'm sitting, in Firefox 48.0.x, according to its Network Console, is
around 360 ms. It doesn't beat this HN discussion page by a whole lot, and
this has actual content, which is dynamic.

------
disruptalot
Interestingly, Google has been going after this with AMP (accelerated mobile
pages): [https://www.ampproject.org/](https://www.ampproject.org/)

It enforces a set of rules to accelerate web pages. These rules can be used to
validate your pages.

------
outworlder
Well, many of these points make sense.

If I'm doing a single page application, surely I'll have infrastructure in
place already to compile, minify and do whatever I need to. So I could just
serve the monolithic page and be done with it. Much like desktop applications
used to do.

------
bobabobabob
A couple of problems rendering on iPhone 6s

[http://i.imgur.com/EpoC9lG.jpg](http://i.imgur.com/EpoC9lG.jpg)

[http://i.imgur.com/qHS5v2H.jpg](http://i.imgur.com/qHS5v2H.jpg)

------
cm3
If it's really all static, you can bundle it into a static Mirage unikernel
image with [https://github.com/mirage/mirage-seal](https://github.com/mirage/mirage-seal)

------
gravypod
I've always wanted to play with putting /var/www onto a ramdisk for PHP/HTML
stuff. It would be much faster loading, since it's all just text at the end of
the day, and it completely cuts out the bottleneck of the SSD/HDD.

~~~
riboflava
If you own the hardware I imagine much of your PHP/html stuff will be served
from the file system cache much of the time so you probably wouldn't see much
benefit...

~~~
gravypod
The idea would be to do it in an ultra-minimalist setting on a VPS (something
under 256MB of ram).

~~~
detaro
I don't think it would do much there either: If it fits in your left-over RAM,
then it's probably in the disk cache. If it doesn't, then you can't create a
RAM disk large enough.

It might help with latency for the long-tail of data that isn't used very
often and thus maybe replaced in the cache by other data, but on the other
hand the OS probably had a reason to replace it and forcing it to stay in RAM
might slow everything down.

------
adrianpike
2.43s TTFB for me - nice and fast once that happened, but that TTFB is a
killer.

------
codygman
Maybe a lot of people are hitting it, but this webpage loaded slowly for me.

------
philip1209
I'm curious - would this page see any speed improvement with HTTP2? I ask
because the new protocol seems optimized for the exact opposite of this - many
asynchronous fetches.

~~~
guidedlight
It's already using HTTP/2.

------
idlewords
I was a fast webpage.

~~~
dredmorbius
Did you play tag as a kid?

------
pikzel
Loaded in about 15-20 seconds for me. Even if you think medium.com is slow,
they can handle the sudden extra load that your site couldn't.

------
padmabushan
By the site's own admission, this page's visible content is not prioritized. I
would have knelt before it if not for that flaw!!

------
xiaoma
It took me several seconds to load (compared to about 1-1.5 for HN)... this
page needs better hosting for Asian users.

------
mxuribe
Ego aside, I feel this kind of site (and the associated commentary on the
suggested tactics) is helpful.

------
edpichler
Does having everything in the HTML and using less external CSS improve speed?
By how much? Is it worthwhile?

------
jordache
This simple webpage was barely faster than hacker news' list view...

------
boubiyeah
Well, not including any javascript was one massive shortcut :)

------
debacle
In an ad-free Internet, many more pages would be this fast.

Alas.

------
patmcguire
Took about 15 seconds to load for me...

------
kovrik
Really cool!

Almost instant even here in New Zealand!

------
caub
what is that `.unit{display:inline-block; *display:inline; *zoom:1}`? (the
stars..)

------
jlebrech
Now we need a framework that targets that standard, as a very fast dumb client.

------
stretchwithme
Some pages have big egos.

------
GrumpyNl
It's back to 1985.

------
honkhonkpants
The bit about being hosted on SSDs is silly. I could host that site in unused
registers of my CPU.

~~~
CorvusCrypto
I was thinking exactly this: keep it loaded in memory for the duration of the
server's lifetime. I'm not too familiar with HTTP/2, but could you cache the
compressed packet and reuse it, with minor modifications to the headers when
needed, to speed up the communication?

~~~
honkhonkpants
Preformatted payload can be a big win for page speed, especially if your
payload cannot vary based on request headers, or has only a few variants.

A special case of preformatted response used to be baked into Microsoft IIS.
If you connected to an address that could only redirect to another address,
IIS wouldn't even wait for the request; it would just send the 302 response
and hang up. This, it turns out, was not really compatible with Mozilla at the
time, and may have violated some RFCs, but I kind of liked it as a hack.
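The preformatted-payload idea can be sketched in a few lines: build the full HTTP response bytes once and replay them verbatim for every client. The helper names below are made up for illustration, and a real server would of course parse the request rather than ignore it:

```python
import socket

# The full HTTP response is assembled once, byte-for-byte, and reused
# verbatim for every connection -- zero per-request formatting work.
BODY = b"<html><body>I am a fast webpage</body></html>"
RESPONSE = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: " + str(len(BODY)).encode() + b"\r\n"
    b"Connection: close\r\n"
    b"\r\n" + BODY
)


def make_server() -> socket.socket:
    """Bind a listening socket on an ephemeral localhost port."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    return srv


def serve_once(srv: socket.socket) -> None:
    """Accept one client, ignore its request, and replay the canned bytes."""
    conn, _ = srv.accept()
    conn.recv(4096)         # request headers, deliberately unparsed
    conn.sendall(RESPONSE)  # precomputed response, no per-request work
    conn.close()
    srv.close()
```

This only works because the payload cannot vary with the request headers; the moment you need content negotiation (gzip vs. Brotli, language, cookies), you need one precomputed variant per combination you intend to serve.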

------
smegel
> I make no external calls, everything needed to load this page is contained
> in the HTML.

Won't that make your webpage load slower?

~~~
bdcravens
That one file, yes, but presumably it's loading assets that would be the same
total size if they were split. Loading 100k from one source is faster than the
same aggregate size from multiple connections.

~~~
smegel
Except if you only need the first 50k to render the page and can wait for the
50k of JavaScript to come later, your page is going to display a lot faster.
Standard technique.

> Loading 100k from one source is faster than the same aggregate size from
> multiple connections.

Usually doing things in parallel is faster than doing them serially. That's
why HTTP/2 can load slower than HTTP/1.1: you are sucking everything through a
single TCP pipe, even though it is multiplexed within.

------
Cozumel
Inline CSS _shudder_

