Is it desirable to inline your CSS, "like a boss"? Maybe if you have one single web page. What if you have dynamic content and your users intend to browse more than one page? With externalized CSS, all of that is cached.
Same with images. If I'm building a web application, I certainly do not want inlined images. I want those on a CDN, cached, and I want the page to load before the images.
Not only is this not particularly useful advice, it's bad advice.
You say this website is only fast because it's "without any content". If there's no content then tell me how it communicated its point so clearly. If it's inherently fast then tell me why the same thing posted to Medium is so slow.
A hallmark of a great solution is that people who see it decide the problem must not have been very hard.
Search works well because they have relatively few features to support by default (for things like the calculator I bet they ship that in with the response).
I think Google's CSS embedding is terrible advice for the meaningful web, but logical advice for AdWords landing pages or sites with content so bad or sparse you won't be navigating them.
Not all websites can do without all of that - imagine a photography site, or an e-commerce site, etc. without pictures?
I agree though that this is great advice for landing pages; load times are probably among the main reasons for bounces.
No, but I can imagine a photography site, or an e-commerce site, without useless non-content pictures and shit-loads of CSS and JS.
> Not all websites can do without all of that - imagine a photography site, or an e-commerce site, etc. without pictures?
How does HN's CSS enforce a ban on img elements in pages pointing to a photo's canonical location? Or preclude putting your standard frames into it?
Imagine a photography site where no photo link is shared across any pages (but base64-encoded data URLs 90 pages long are repeated at random), or an e-commerce site where a product is shown in a strange new light at every step of the checkout process using a mishmash of entirely different CSS. Google's advice is approving the most idiotic behavior on sites that are barely keeping their heads above water in terms of technical understanding, letting them hold onto strange ideas because they are "fast."
HTTP/2 allows CSS to be specified in the HTTP header. However, Google AMP doesn't support it yet.
To shame people who build slow, bloated websites.
Saying that no bloat equates to no content is part of the problem.
There is no silver bullet solution to all problems when building webpages, and if you think this guy's advice is catch-all for everyone and that "shaming" people who don't follow those rules is good, then you're a bad developer.
Someone else said it elsewhere in this thread, but different contexts need different solutions based on determined use cases and needs.
Being diplomatic about it and not assuming everyone who deployed a Joomla installation with a ton of bad plugins because it HAS TO WORK NOW is an idiot is OK. But wrapping bad engineering practices in "different contexts need different solutions based on determined use cases and needs" is a very different story.
Obviously, different contexts and use cases need different solutions. That's no excuse to pick the bad ones, nor to pretend they're not really, really bad.
To make this single-page experience as good as possible, I pack the CSS directly into the HTML. At first I also inlined the image of the molecule, but that was bad for SEO, because quite a few of my visitors come from searches for drawings of molecules. If Google were smart enough to index inlined images, I would switch back to using them.
All these optimisation techniques are only good if the context is the right context. That is, you need to pay attention to the assumptions used to make these rules.
There are lots of people self-managing small sites that don't have a clue about any of this stuff - it's a decent resource for such people - nothing more.
An affiliate link in the last paragraph.
No, that most "content" making webpages slow is useless BS bloat.
It's not being negative to point out the glaring flaws in a person's statements. My assumption is the entire thing is an advertisement for that hosting service.
(Or better yet, serve the thing off S3 and let Amazon be your CDN.)
I'm not in agreement with many of the commenters regarding CDNs. I don't believe in a free lunch. Free software is one thing, but CDNs require infrastructure, which incur costs. Somewhere, the people offering those services expect to make those costs back. You'll either pay for it directly, or you'll pay for leeching off someone else's bill in karma. For a very tiny, low-traffic, low-bandwidth website, I think not using a CDN is perfectly reasonable.
Just because Godaddy sucks doesn't mean that all shared hosting sucks. NearlyFreeSpeech.net is fine for most people, or Amazon S3 if you're into AWS / webgui stuff.
If you're actually using a decent amount of bandwidth (ie: image hosting), something like Hawkhost.com would be good. Just gotta stay away from EIG (https://en.wikipedia.org/wiki/Endurance_International_Group), which is a conglomerate that's buying up all the shared hosts and making them crappy.
Bam. Now you have economies of scale AND far cheaper hosting than any VPS.
If your website is that unimportant, then you could probably get by with any free hosting service where your page would be yourpage.serviceprovider.tld
You can always go with cheap, fully virtualized GNU/Linux server, or you can go with a virtual, true UNIX server running at the speed of bare metal.
Your choice, but quality, correctness of operation, data integrity, and performance still cost something. If you don't care about any of those things, fork out $5 for the alternative and call it a day.
My (limited) experience with Vultr has also been fairly satisfactory.
Correctness of operation does not only refer to end-to-end data integrity, but also to adequate capability to diagnose and inspect system state, in addition to being able to deliver correct output in the face of severe software or hardware failures. Linux is out on all of those counts.
In other words, if you want JustWorks(SM), rock solid substrate for (web)applications, anybody not running on FreeBSD, OpenBSD, or some illumos derivative like SmartOS is out, at least for me. Perhaps your deployment is different, but I don't want to have to wake up in the middle of the night. I want for the OS to continue working correctly even if hardware underneath is busted, so I can deal with it on my own schedule, and not that of managers'.
From my perspective (as a freelance web tech/dev professional who routinely manages close to two dozen hosting accounts for clients), what you're saying above comes very very close to driving a nail with a sledgehammer.
Hosting clientele is notoriously high maintenance; the more technology ignorant, the higher the maintenance in terms of support, and the more fallout one has to deal with when there is downtime.
My time is expensive. My free time is exorbitantly expensive. Therefore, when I pick a solution and decide to deploy on it, it has to be as much "fire-and-forget" as is possible. Picking a bulletproof substrate to offer my services on also increases the time available to provide higher quality service to my clients: since my time dealing with basic infrastructure is reduced as much as possible, I have more of it to spend on providing better service and adding value, thereby increasing the client retention rate. Because of the economy involved in this, and especially considering how razor thin hosting margins are, I feel that the nail with a sledgehammer metaphor is inapplicable to this scenario.
Edit: that's obviously a non-issue in this particular case since everything is static. But as a best practice this needs to be considered and inline CSS doesn't make you a boss.
CSP will do absolutely nothing for a single static html page containing all assets it needs as inline.
I think there are better pieces of security advice around than that...
The vulnerability means they can inject arbitrary markup including <link> or <script> that load offsite sources.
You can use CSP to whitelist allowed offsite domains. But if you're not careful, "you never know" and "you might as well" reasoning is more likely to waste your time chasing low-value things.
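For reference, a whitelist like that is just one response header; a minimal sketch (the domain is hypothetical, and `'unsafe-inline'` is exactly the opt-in that the inline-CSS convenience mentioned below requires):

```
Content-Security-Policy: default-src 'self';
                         script-src 'self' https://cdn.example.com;
                         style-src 'self' 'unsafe-inline'
```

Anything not matching a directive (say, an injected `<script src="https://evil.example">`) is refused by the browser.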
For instance, inline CSS is valuable as an intermittent developer convenience, and disabling it takes that away while protecting you from an unlikely event.
Also, you generally should be escaping-by-default and not sanitizing. A templating system should escape by default and make it obvious when you opt out.
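As a sketch of that escape-by-default principle (the helper names are made up, not from any particular templating library): everything is escaped unless the caller explicitly marks a value as safe, so opting out is visible at the call site.

```python
import html

class Safe(str):
    """Marker type: the caller explicitly vouches this string is safe HTML."""

def render(template, **values):
    # Escape by default; only Safe-wrapped values skip escaping.
    escaped = {
        k: v if isinstance(v, Safe) else html.escape(str(v))
        for k, v in values.items()
    }
    return template.format(**escaped)

# An injected tag arrives as inert text, not markup:
print(render("<p>{comment}</p>", comment="<script>alert(1)</script>"))
```

Real template engines (e.g. Jinja2 with autoescaping on) work the same way: the dangerous path requires an explicit `Safe`-style wrapper.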
In the worst case, where the server doesn't have all of the file's blocks cached in RAM, it can still fetch the single inlined file from local storage, with fewer I/O operations, far faster than one's web browser can fetch multiple files over the network. Latency goes down, and delivery speeds up.
And then their site broke as soon as I clicked a link
I apologize for quibbling (really, I do! But I'm an infrastructure guy! This is my bag!). Yes, host it on S3, but ALWAYS put a CDN in front of S3 with long cache times (even just CloudFront works). S3 can sporadically take hundreds of milliseconds to complete a request, and, you know, AWS bandwidth is expensive (and CDN invalidation is damn near free). And you can usually use your own SSL cert at the CDN instead of relying on AWS's "s3.amazonaws.com" SSL cert (although you will still rely on that S3 SSL cert for CDN->S3 origin connections; c'est la vie).
EDIT: It also appears Cloudfront supports HTTP/2 as of today. Hurray!
GAE doesn't charge for their object store nor their CDN service?
On my home account I've only used it for demo projects that are scarcely used, but it's fine for that.
I don't know how up to date this page is, but here's how it used to work:
Edit: Nevermind, confused Cloudfront with Cloudflare. Thanks for the correction, toomuchtodo.
EDIT: cm3: I didn't mean to call you out, just wanted my reply in here for historical context. Its very easy to confuse the two.
I personally went the other way. I still use CloudFront as a CDN but made it cache items for short periods of time. Invalidation was too much of a hassle, and it took too long. Admittedly, I should use hashes or something of the sort to keep my items versioned, but laziness always gets in the way.
Correct. Did I insinuate that? I apologize if I did. They are two distinct issues, both of which a CDN prevents.
1. S3 outbound bandwidth is expensive. Use it only as an object store of last resort. Your CDN bandwidth is orders of magnitude cheaper (don't believe me, go compare the pricing).
2. S3 response times can vary wildly at times. Use a CDN to avoid this.
And of course, feel free to use a cache key instead of invalidating via an API if ~15 minutes is too long to wait for fresh content to appear at the edges.
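A cache-key scheme like that can be as simple as embedding a content hash in the asset's filename: a changed file gets a new URL, so edge caches never need explicit invalidation (the filenames here are hypothetical).

```python
import hashlib

def versioned_name(filename, content):
    # Short content hash in the name: a new URL whenever the bytes change,
    # so the old cached copy simply stops being referenced.
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

print(versioned_name("app.css", b"body { color: #222; }"))
```

Build tools call this "fingerprinting" or "cache busting"; the HTML then references the hashed name, and the asset itself can be cached forever.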
PS: Don't apologize for laziness. When directed appropriately, it's a most productive force.
Agreed on all other counts.
From what I see, S3 is $.09/GB to the Internet, and Cloudfront is $.085/GB to NA/Europe...and $.14/GB for Asia. How is this cheaper?
The SSD is really meaningless in this context. The website is so small that it will be loaded almost 100% from the filesystem cache. As long as it has more than 512 MB of ram...
If I wanted my website to load incredibly fast, I would absolutely not put it on an obscure VPS. Not that there's anything inherently wrong with it, but it's generally not going to make your site faster.
Putting bare text on the web is always going to be fast. So what. If he presented a real full-featured website with the bells and whistles that people expect today, and made it operate that fast, he'd have something to show. Instead he presents polished garbage.
I guess everyone's needs are different, but for me, hardly anyone reads anything I write. If I have a sudden surge in interest in something I wrote, last thing I want is to cut off access to it. Would rather keep paying the infinitesimal amount per page view to keep people reading it.
Instead of the affiliate link to some host no one has heard of, he should have affiliate-linked to AWS (if possible) and a CDN. Then he could have added that as a strong feature that helps make the page so fast.
I sometimes fear that if something like this happens, the bandwidth bill will be too much to handle for small personal projects. It's also a pain that AWS doesn't let you set hard limits on cloud spending. Yes, they let you set up billing alarms, but no hard limits: there's no guarantee that, no matter what, the month's hosting bill will not exceed $10 for this project.
For small personal projects a tiny VPS seems to be safer from this angle. At max a DDoS will cripple the VPS but the hosting bill will stay the same.
If you have been through this, did you get any discounts from AWS for resources used during the DDoS attack, or did you have to pay the full amount?
- A very happy customer.
Hate to break it to you, but your virtual private server (VPS) is likely sharing a bare-metal server with other VPS. ;-)
Also, you can look into content delivery networks (aka CDN), which will most likely deliver this page faster to clients than your VPS especially when you consider your VPS is in Dallas and CDN's have nodes located around the world.
Likely? Isn't that the point of a VPS?
Chances of either, slim. Still I try not to assume when I don't have the data.
It's all virtualized and cloudy.
But it's also the point of shared hosting, which the site hates on.
A good shared host (ex: HawkHost's semi-dedicated) will run circles around a Lowendbox VPS.
Kernel Same-page Merging lets you de-duplicate common memory pages (such as the kernel's) across virtual hosts, for example.
I think the pedantry is unnecessary here. "Shared hosting" colloquially refers to multiple websites sharing a single web server, database, and PHP process. Everything is set up for you by the provider, you simply supply the files. What "shared hosting" does NOT usually refer to are containers, virtual machines, bare-metal, IaaS deployment environments, or anything like that.
Hosting on a single VPS is never gonna be very fast globally no matter what you pay your hosting. In fact our free plan on netlify would make this a whole lot faster...
stop the presses for the entire company to have a meeting on how to shave off 300 milliseconds for the poor residents of Japan!
As many people have pointed out there are faster methods of static hosting through a CDN, and many of the techniques of this site are inapplicable for larger sites. But A+ on the marketing.
IMHO there is mainly one way to get attention - it's to get (great and instant) emotion from the user. You can give good emotions or bad emotions.
Personally, I think that creating a good emotion takes much more effort than creating a bad one. A website/product can say how great it is, but it will not 'click' as instantly as someone telling me I am a dumb baby and I suck, or that I am a mere mortal baboon and not superior. Most people will get instant rage from that and start flame wars in whatever comment section, because there "is no such thing as bad PR".
The most popular writers/bloggers in my country have created these arrogant dipshit characters (I tend to believe that they are "normal" people, but they clearly know what sells) who always say that they are richer, smarter, and better than you. They create stories about a "cheap restaurant breakfast for 60€" and so on. The most interesting thing is that people buy their shit and then rage on whatever websites about how the writer dared to call them a dumb homeless bum.
When you need fancy graphics (a static photo album), things become less easy: you e.g. may want to preload prev / next images in your album to make navigation feel fast.
Things become really tricky when you want interactivity, and in many cases users just expect interactivity from a certain page. But client-side JS is a whole other kettle of fish.
Things become ugly when you want to extract some money from a page's popularity. You need to add trackers for statistics and ad networks' code to display the ads, and complicate the layout to make room for the ads, placing them somehow unobtrusively yet prominently. This is going to be resource-hungry at best, slow at worst.
(Corollary from the above: subscription is more battery-friendly than an ad-infested freebie.)
There's a certain length that a line can be without becoming confusing or annoying to read. The reader mode in most browsers understands this, but for some weird reason reader mode isn't available for http://motherfuckingwebsite.com/, at least in Firefox.
 Example: https://cr.yp.to/highspeed.html
Business A - average render time 0.3s, but under load 5-10s
Business B - average render time 0.8s, but under load 1-2s.
Subjectively, around ~10s response time is the point I would close the tab and look for another business if I was trying to do shopping online, anything involving a credit card etc.
The fastest and most reliable hosting, by far, based on my own experience, is Amazon's EC2 and S3 bucket services.
No, it's been around since forever. Just not used terribly often.
> Am I reading it correctly that the images are encoded in base64 and delivered as html? Surely this is a bad idea... no?
It depends. Making a new request to fetch the image always has overhead. Whether that overhead is bigger or smaller than the overhead of base64-encoding the image depends on:
• file size (naturally)
• file compressibility: The difference isn't as pronounced after gzipping everything, especially if the source data is somewhat compressible
• protocol: HTTP/2 allows a correctly configured server to push attached data with the original request, so no second request is needed. Even without server push, HTTP/2's multiplexing reduces the overhead drastically compared to plain HTTP/1.1 or, in the worst case, HTTPS/1.1 to a different domain. The latter requires a full TLS handshake, and that's what, >30 kB exchanged if you have more than one CA certificate in the chain? That's a lot of image data.
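To put rough numbers on the size part of that trade-off, here's a quick stdlib sketch comparing raw, base64, and gzipped sizes (the sample payload is deliberately compressible, a best case for inlining):

```python
import base64
import gzip

# A repetitive, compressible payload standing in for a small inlined asset.
payload = b"<svg>" + b"<rect width='10' height='10'/>" * 100 + b"</svg>"
b64 = base64.b64encode(payload)

# Base64 inflates the raw bytes by ~33%...
print(len(payload), len(b64))
# ...but after gzip, the absolute gap between the two shrinks considerably.
print(len(gzip.compress(payload)), len(gzip.compress(b64)))
```

The remaining question is whether that (post-gzip) size penalty is smaller than the cost of an extra request, which is exactly the protocol point above.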
On Windows you can do Alt+numpad 2022.
On whatever is handling input for this XFCE system, control+shift+U 2022+Enter types it.
Characters like →, —, €, £, ©, ™, µ, ①, ②, °, “ and ”, … and ‽ are easily available, as well as most European-ish letter accents: àáâäąȧåảāãæ.
(I live in Denmark, but rarely type Danish. The compose key is more than adequate for typing København, Østerbro and the Æ in my street's name.)
However, if the image is very large, it will make the initial request large as well. I would only use this for images that are small and above the fold.
You'd think HTTP/2 server push would cover this, but I can imagine inlining is still a bit faster.
We know life doesn't work that way.
If it's small, the overhead from base64'ing it (if the page is gzipped) is lower than the overhead of opening a new HTTP connection just to retrieve that one image.
These probably aren't particularly important for most sites, but it's something I do on my personal site ( chriswarbo.net ) since I care more about ease of maintenance than load times.
For that image I would prefer to use inline SVG...
Not all browsers support SVG, and not all support all properties, but those that do give some pretty good results.
Image of Chrome Dev Tools: https://reportcards.scdn3.secure.raxcdn.com/assets/uploads/f...
As an aside, does HTTP/2 provide any benefit for a single HTML file with no external assets?
If you're done in one HTTP round-trip, every header is brand-new to your HPACK state, you gain nothing from server push (if it's even enabled), there's nothing to pipeline so you don't benefit from multiplexing, and head-of-line blocking is a non-issue.
The HPACK spec is a pretty easy read. There is a static, hardcoded table that contains most of the common HTTP header names, and even some common predefined key-value pairs. You save some bytes on the wire if your header's name or value is one of these entries; the header name will essentially always be in the static table.
But for names and values that aren't in the static table, you have to put them into the dynamic table and encode them using either the integer packing or the huffman code. The client has to decompress these, of course.
On future requests, you have some leftover state in your dynamic table, so future 'duplicate' headers are packed and take up very little space. But for the first-ever HTTP request-response pair, you have to transmit ALL the headers in "full". So the true benefits of the dynamic table don't kick in.
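A sketch of the static-table part (a hand-copied subset of RFC 7541 Appendix A; the real table has 61 entries, this is just the first few):

```python
# First entries of the HPACK static table (RFC 7541, Appendix A).
STATIC_TABLE = {
    (":authority", ""): 1,
    (":method", "GET"): 2,
    (":method", "POST"): 3,
    (":path", "/"): 4,
    (":path", "/index.html"): 5,
    (":scheme", "http"): 6,
    (":scheme", "https"): 7,
    (":status", "200"): 8,
}

def static_index(name, value):
    # A full name+value match can be encoded as a single indexed field;
    # otherwise the value (and possibly the name) must go on the wire
    # literally, and gets added to the dynamic table for next time.
    return STATIC_TABLE.get((name, value))

print(static_index(":method", "GET"))  # full match: tiny indexed encoding
print(static_index(":path", "/blog"))  # no match: literal value needed
```

This is why the very first request pays full price: everything not in the static table is a literal, and the dynamic table only pays off on subsequent requests.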
Of course, that's entirely irrelevant here, as the page completely fits into the server's RAM (or even the CPU's cache, for that matter).
This does not work in most cases when you use big images.
From a StackOverflow answer: "It's only useful for very tiny images. Base64 encoded files are larger than the original. The advantage lies in not having to open another connection and make a HTTP request to the server for the image. This benefit is lost very quickly so there's only an advantage for large numbers of very tiny individual images."
 - http://stackoverflow.com/questions/11736159/advantages-and-d...
> This benefit is lost very quickly so there's only an advantage
> for large numbers of very tiny individual images.
I tried to create an SVG version to see how an SVGZ would compare, but evidently I'm too crap at Inkscape and kept screwing it up.
This is one of the reasons to discourage using large attachments on emails (which then stick around forever).
That's a very fast page, that actually does something.
If they truly wanted speed through control of resources they would have used bare metal.
But yeah, the website is easy to optimize when it's simple, the hard part, often outside of your control, is DNS and actual connection handling. Many have already mentioned CDN so there's that.
But you also don't know what kind of firewalls are being used, or switches, or whatever else may impact your site. Why not just do what others have suggested and put it all in the cloud so that Amazon can worry about balancing your load.
Note: Just checked, and even a simple Medium blog post page won't fit on one of those old 3.5" floppy disks..
EDIT: To stay on topic - the OP's page loaded instantly for me here in outback Australia...
IMO, the real problem with the web is the horrendous design choices and delivery of very popular news and daily reading sites (ahem cnn) where subsequent loads of ads and videos start shifting the page up and down even when you have started reading something. Let's address that problem first!
I went to the doctor and he told me to lose weight. What a fatphobe!
He should have told me how to eat everything I desire without any bad side effects!
- Brotli instead of Gzip. Likely saves around 10% size.
- Minify everything, including the HTML. Could save around 3% of the size on that page.
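A quick stdlib way to sanity-check the minification claim on your own page (the sample HTML and the naive whitespace-stripping regex are just for illustration; real minifiers do much more):

```python
import gzip
import re

html_doc = """
<!doctype html>
<html>
    <head>
        <title>Wicked fast</title>
    </head>
    <body>
        <p>Hello, web performance.</p>
    </body>
</html>
""" * 20  # repeated only to get a realistically sized input

# Naive minification: collapse the whitespace runs between tags.
minified = re.sub(r">\s+<", "><", html_doc).strip()

print(len(gzip.compress(html_doc.encode())))   # size as served with gzip
print(len(gzip.compress(minified.encode())))   # minified, then gzipped
```

Since gzip already compresses repeated whitespace well, the post-gzip saving from minification is modest, which matches the "around 3%" estimate above.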
This can be tricky if your page grows in complexity/size and you need to change something.
Please, when it's more appropriate, don't inline your CSS; prefer to take advantage of caching.
# Both DNS records are cached before the request
>>> print(requests.get('https://varvy.com/pagespeed/wicked-fast.html').elapsed.microseconds)
>>> print(requests.get('http://www.google.com').elapsed.microseconds)
Okay, okay, it "matters". But it's nothing compared with the 3s to load all the JS and CSS and the subsequent sluggishness as 20 analytics scripts are loaded and processed.
It took 2 seconds to load the page on a fresh ec2 box:
Here's an idea: WebAssembly, but use existing Opcodes from the JVM.
Those PNGs are not fully optimized, and an SVG would probably be even smaller; even if it isn't in the case of the orange one, the PNG could have been compressed much better.
Making use of data: URLs might look good on a first visit, but honestly, with HTTP/2, just push the resources and externalize them.
Because seriously, a cache lifetime of 300 seconds? And what about offline support, anyway? It's 2016.
Furthermore where's my beloved Brotli support?
And what about WebP support? OK, to be honest, if the PNG were properly optimized, WebP would actually not beat its file size, but hey: "It isn't"
So even though it's only this tiny static page there's still so much wrong with it. Please improve!
By the way, what about QUIC?
But it has a referral link.
That's probably the point of this page.
Time4VPS offers you 2 cores (compared to one on DO), 80 GB SSD (compared to 20 on DO), 2 TB bandwidth (compared to one on DO), and 2 GB of RAM (compared to 512 MB on DO) for 3 euros (3.36 dollars). 1 additional euro for daily and weekly backups.
Started renting one just two days ago, so I can't really guarantee that it's reliable, but it was recommended to me by a friend who's renting it for over 100 days now without any downtime.
 https://www.time4vps.eu/pricing/ (or, if it sounds good and you want to use my referral link: https://billing.time4vps.eu/?affid=992)
Use my coupon code to get 50% off any CloudPRO hosting: e6a8yWuhA4
I know people are saying it has some errors on certain mobile devices, but that's still a pretty good job of manipulating CSS properties.
The point is that it's much more convenient to reuse code from a framework, because it's better to sacrifice file size for fast iteration and functionality.
When I'm doing my own projects I always write my CSS by hand because it ends up less complex in the end. I don't need to see pretty things up front like my corporate customers do.
The general point could be made without leaving so much room for everyone to argue over specifics.
We need to see the bloaty-positive alternative, not all websites have to be Google models.
For instance, delivering one giant JS/CSS file is now bad because it is harder to cache; since HTTP/2 removes the overhead of multiple requests, there is no downside to serving many files.
It enforces a set of rules to accelerate web pages. These rules can be used to validate your pages.
If I'm doing a single page application, surely I'll have infrastructure in place already to compile, minify and do whatever I need to. So I could just serve the monolithic page and be done with it. Much like desktop applications used to do.
It might help with latency for the long tail of data that isn't used very often and thus may be replaced in the cache by other data; but on the other hand, the OS probably had a reason to replace it, and forcing it to stay in RAM might slow everything down.
I could be wrong.