Hacker News
Delivering WordPress in 7kb (css-tricks.com)
176 points by ashitlerferad 9 days ago | 54 comments

This is a great reminder of what can be done by just “rethinking everything” and asking why you actually NEED that plugin, library, script, or font. One thing not immediately obvious to me was that the responsive menu goes to a new page, https://sustywp.com/menu/, and because the site is so fast it actually doesn’t feel like a page load. I hadn’t really thought about this before, but hamburger menus get so complex these days that it might just be easier to load a minimal page.

Something about this menu approach feels so clever but wrong at the same time.

The menu on this site is essentially what a website's index page used to be, back in the early days of the WWW. It was, generally, a listing of a website's contents, modeled after the default directory listing that a web server produces.

So this menu felt very normal to me, but it may be off-putting to newer users of the web who are used to contemporary conventions such as slide-out hamburgers (not to be confused with slider hamburgers :-) ).

I just opened and closed it a few times to try it out. Then I pressed the back button several times to get back here, cycling the menu open and closed again.

I'm gonna go with "it's a bad idea".

I don't think opening and closing the menu is a realistic test.

Generally when a user opens the menu they're trying to go to a different page.

That's a bug, not a design flaw. You could fix it with a tiny bit more JS that checks the history entries for repetition.

If one really wanted to, couldn't they eliminate menu access from the user's history?

Yes. You can use history.replaceState() instead of history.pushState().
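A toy model of what replace-vs-push buys you. This simulates the history stack in plain JS for illustration; the real browser API is history.pushState / history.replaceState, and the URLs here are hypothetical:

```javascript
// Minimal simulation of a browser history stack.
class HistoryStack {
  constructor(start) { this.entries = [start]; }
  push(url) { this.entries.push(url); }                         // like history.pushState
  replace(url) { this.entries[this.entries.length - 1] = url; } // like history.replaceState
}

// Opening and closing the menu twice with push() pollutes history:
const h = new HistoryStack('/article');
h.push('/menu'); h.push('/article'); h.push('/menu'); h.push('/article');
console.log(h.entries.length); // 5 entries, so four back-presses to leave the site

// With replace(), the menu never accumulates in history:
const h2 = new HistoryStack('/article');
h2.replace('/menu'); h2.replace('/article');
console.log(h2.entries.length); // still 1 entry
```

The trade-off, as noted below, is that replacing entries changes what a bookmark or shared URL captures.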

But there are other issues like bookmarking or sharing the page with the menu open. People won't expect the result.

It's really not difficult to create an on-page menu so this seems more trouble than it's worth.

But would people really try to share / bookmark a page that at that very moment doesn't show the content they want to share?

I think it is a beautiful, simple solution that just works (and also did work perfectly in the past when we didn't have JS)

Yes. If you have ever worked on a site with lots of users, you will find that if there is a way to break something, someone will absolutely do that thing. It might not be super often, but it will definitely happen.

Interesting work. I wrote a bit about optimizing WP here: https://hackernoon.com/dont-brake-for-fonts-3-web-performanc...

The most interesting thing I learned is that the number of HTTP requests (not just the total file size) really affected page load times. So for example, WordPress has profile pictures called 'Gravatars' that can be shown alongside comments. By default, your browser will make HTTP requests to fetch all the Gravatars before showing the headline of your post! So we lazy-loaded them instead.
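A back-of-the-envelope sketch of why request count, not just total bytes, dominates on a high-latency link. The RTT, bandwidth, and Gravatar-size numbers here are assumptions for illustration, not measurements:

```javascript
// Worst-case serial loading: each request costs one roundtrip plus transfer time.
function loadTimeMs(requests, bytesPerRequest, rttMs, bytesPerMs) {
  return requests * rttMs + (requests * bytesPerRequest) / bytesPerMs;
}

// 30 Gravatar requests of ~2 KB each on a 200 ms RTT mobile link:
const eager = loadTimeMs(30, 2048, 200, 100);
// The same bytes fetched in a single request:
const bundled = loadTimeMs(1, 30 * 2048, 200, 100);
console.log(eager, bundled); // the latency term dwarfs the transfer term
```

In practice browsers parallelize requests, so the real gap is smaller, but the latency term still dominates the byte-transfer term for many small files.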

And we used the same plugin shown in this article (Autoptimize) to combine multiple CSS files. You can take this to an extreme by just putting the CSS inline too (but too much markup might make the page slower on mobile).

(And the most powerful tool we used was Cloudflare 'Rocket Loader', which defers all JavaScript execution until after the page first renders. But lately we're finding that it conflicts with the events or listeners in some plugin scripts, causing their JS to not execute at all.)

If you're interested in trying the same approach that Cloudflare took with Rocket Loader as a WordPress plugin, check out PhastPress. https://wordpress.org/plugins/phastpress/

This plugin also defers all scripts and tries to simulate all the events that occur during a normal pageload so that the scripts still run as they normally do.

I think this is the wrong way to go about site minimalism. My personal site[0] is pretty minimalist--the home page is one request for 3.3 KB. But it doesn't sacrifice in the same ways this site does: it keeps readable source code, it still uses BEM classes for maintainability, and it has a working menu on the same page (rather than requiring visitors to load a separate menu page).

So, with less than half the kilobytes, I have a more readable, usable, and maintainable page. How? By jettisoning WordPress and SEO entirely. I use a static site generator (in my case, Gutenberg[1]) to generate the page--no WordPress backend required. And, by respecting my users' privacy and not trying to play the SEO game, I don't need any of the code many sites devote to tracking/SEO.

I believe that static sites and respect for privacy are the better path forward. Instead of tweaking WordPress, we should be looking to build the next WordPress--and build one that is static, and minimalist, by default.

[0] https://www.codesections.com/

[1] https://www.getgutenberg.io/

I’m curious as to what you are referring to when you say ignoring SEO. In my opinion on page SEO is about structuring your code and content appropriately. What SEO tracking are you referring to?

Justifying SEO is the classic motte-and-bailey argument of this industry. Of course SEO is, ostensibly, about "structuring your code and content" in a search-engine-friendly way. But in reality, it's mostly about trying to game the search engine algorithms and spamming the living shit out of the entire web.

I meant the Yoast stuff, which they claim to spend a whole KB on. They say

"but the meta stuff added by Yoast bumped it up almost a whole KB! However, given I'm trying to spread a message [it's worth it]"

"The Nobel-prize-winning physicist Richard Feynman wrote about his early childhood experience repairing transistor radios in the 1930s."

They would not be transistor radios in the 1930s. Battery radios had valves/tubes in them and two kinds of battery. The picture is a very late 1950s/early 60s transistor radio.

Nice site, interesting content, have you ever come across the British judge Lord Denning? He was a mathematician to begin with then turned to law having decided that teaching was boring.

There are lots of WordPress plugins that generate and cache static pages. As for keeping code readable: if you are using a generator to create a static page, why wouldn't you minify upon generation?

You of course keep your code readable internally, but what your users get should be minified and as small as possible, unless your users are there to peek at the source code.

I'm fully on board for creating leaner websites and writing less code, but if we're measuring energy usage, then doesn't streaming video pretty much dwarf all other internet traffic? With perhaps blockchain syncs in a distant second place?

if we're measuring energy usage, then doesn't streaming video pretty much dwarf all other internet traffic?

Probably. But just because plastic bags dwarf plastic straws as an environmental problem doesn't mean we can't address both.

This is great. For those of us putting document structure first in the name of accessibility, I think this going-green idea is a natural fit for the 'why bother' question. Previously my goal had been to score highly on Lighthouse metrics and get an increased conversion rate, particularly when the reader has 'lie-fi' or a feeble 2G/3G connection. I genuinely had not thought that going green was also a good reason for the things I have been trying to do. Accessibility is something that is hard to explain to people; they think it means high contrast and MASSIVE fonts. It is not that, and those that do such things have given the cause a bad name. Going green is a much easier starting point for that conversation.

I think there is far more that can be shaved off. The SVG logo can be placed in the CSS as a data URL so it only gets downloaded once. Currently it is in the file and weighs in at 1.19 KB. If it were drawn with more elegant SVG than what Inkscape churns out, it could be easier to edit and a lot smaller. There is no need to use three decimal places for the points, and since there are no fonts in the SVG there is no need to bloat it out with any styles.
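The data-URL idea can be sketched like this (Node.js, using a hypothetical logo; the Buffer API does the base64 encoding):

```javascript
// Inline a small SVG into a CSS rule as a data URL, so it ships inside the
// stylesheet instead of costing an extra HTTP request. SVG content is made up.
const svg = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 10 10"><circle cx="5" cy="5" r="5"/></svg>';

const dataUrl = 'data:image/svg+xml;base64,' + Buffer.from(svg).toString('base64');
const cssRule = `.logo { background-image: url("${dataUrl}"); }`;

console.log(cssRule);
```

Base64 inflates the bytes by about a third, so this only wins for small images where the saved request outweighs the size overhead.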

As well as serving via NGINX, there is also mod_pagespeed to remove whitespace and comments and create srcset attributes for any blog images. In this way a client project based on this theme need not lose all its gains as soon as an image is uploaded.

I also think the classless methodology has to be considered as the way to go with a pure document and all decorative chintz done in CSS with pseudo elements and more advanced selectors.

CSS Grid also reduces CSS size significantly and would help here. Block based layouts are fine for people that have been hacking away with margins and floats and padding things since IE6 but are absurd for newcomers to learn. CSS Grid is more maintainable as well as lighter on the download.

Not so sure about the menu being a separate page, a web page should be part of a 'book' and navigation with proper HTML5 markup for it is important for accessibility.

The motivation for this is superb and it is a lot more easy to explain to clients than accessibility. Page speed and conversion rate can be hard enough to get buy in with as it is, going green might be an easier sell.

Valiant effort. Slightly arbitrary in the finer details, but the underlying message is good.

Removing the main menu (which is very small) while also having twitter meta info in the head and other unnecessary tags is an odd choice.

One slightly amusing thing is that in the web 1.0 era we had menus as separate pages: our pages were built out of 'frames', and the menu frame simply targeted the main frame. So now we've come full circle. The more screen real estate I have, the less information designers seem to put in it, I've noticed.

Minimizing requests, using system fonts and using vector images are what makes this great because it makes the page loads so fast. Does 7kb really make much of a difference?

7kb specifically is not of major importance. If you're loading all content from one origin, fitting the content needed to load the page into the initial TCP congestion window does matter. 7kb of content will probably fit depending on how much non-content is in the packets.

This is not noticeable on a low-latency connection, but on a bad mobile connection it makes a massive difference.

(Just got back from Googling.) So basically, if the content can be sent within the window then you save a roundtrip?

It depends on if you're loading from multiple origins (multiple TCP connections), HTTP 1.1 vs 2 (multiplexing, push, HOL blocking), but generally speaking: yes. If you can fit the content needed to render the page in the initial TCP congestion window, the page will be noticeably faster on high-latency connections.
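Rough arithmetic for that claim, assuming the common IW10 initial congestion window (RFC 6928) and a typical ~1460-byte MSS; real paths vary:

```javascript
// Does a 7 KB page fit in the initial TCP congestion window?
const initialWindowSegments = 10;  // IW10, the common modern default (RFC 6928)
const mss = 1460;                  // typical MSS on an Ethernet path
const initialWindowBytes = initialWindowSegments * mss; // 14600 bytes

const pageBytes = 7 * 1024; // 7168 bytes

console.log(pageBytes <= initialWindowBytes); // true: the whole page can go out
                                              // before the first ACK comes back
```

Headers, TLS record overhead, and older servers with IW4 eat into that budget, which is why "7 KB will probably fit" rather than "will fit".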

This unfortunately means, for most websites, some amount of CSS in a style tag in the head to get a basic structure of the page together to start rendering. Then you also include a full stylesheet with the high-fidelity version.

I'm leaving out a lot of details here :)

Thanks for the explanation!

I have done things on my site like inlining CSS and minimizing requests because it feels fast, without knowing exactly why. I didn't know that a single request can actually have multiple round trips!

My WordPress homepage is 2.4kb gzipped and just a single request :) so it is fast, but it's, uhh, a little boring and only 158 words.

> I didn't know that a single request can actually have multiple round trips!

If you load something from a third-party origin (e.g. 23789dz89asd789s.cloudfront.something), then it can actually be a lot worse than that. DNS needs resolving, which can take quite a while, depending on if and where things are cached. Then you get a full TCP handshake (since TFO doesn't work for the first connection), a full TLS handshake and then you get to roundtrip your request(s).
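A schematic roundtrip budget for that cold third-party connection. The counts are illustrative; DNS caching, CNAME chains, TLS 1.3, and TFO all change them:

```javascript
// Roundtrips before the first byte of a response from a cold HTTPS origin.
const roundtrips = {
  dns: 1,     // uncached lookup (often more with CNAME chains)
  tcp: 1,     // SYN / SYN-ACK handshake
  tls: 2,     // TLS 1.2 full handshake (TLS 1.3 cuts this to 1)
  request: 1, // the actual HTTP request/response
};

const total = Object.values(roundtrips).reduce((a, b) => a + b, 0);

// On a 200 ms mobile RTT, that is a full second before the first byte arrives:
console.log(total * 200); // 1000 (ms)
```

Which is the core argument for serving everything from one origin: you pay that setup cost once instead of once per third-party host.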

I'm trying to make a static website myself. Can you tell me if it's possible to optimize the network side of things?


1) Drop the web font if at all possible. If not, serve it from your domain to avoid DNS query and new TCP connection. For extra points, subset it to your content's character set.

2) Consolidate your CSS and JS files into one per type and minify it. For extra points, inline the critical CSS needed to show the important content of the page directly into the head.

3) Move your script tags to the end of the body to avoid blocking rendering, or inline it if it is absolutely needed to be there before the page loads and is not too big.

4) That web font loader script is not necessary at all.

5) You might get a smaller size out of those buttons as SVG.
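Tip 2's minification step can be sketched naively like this (a toy illustration only; real minifiers such as cssnano or csso handle many more cases safely):

```javascript
// Naive CSS minifier: strip comments, collapse whitespace, trim around punctuation.
// Not safe for all real-world CSS (e.g. whitespace inside strings or calc()).
function minifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // drop /* comments */
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // trim spaces around punctuation
    .trim();
}

const input = `
/* header styles */
h1 {
  color: #333;
  margin: 0 0 1em;
}
`;

console.log(minifyCss(input)); // h1{color:#333;margin:0 0 1em;}
```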

You do five requests for resources that are in total 2 KB. Just inline them (including those tiny images).

This is great and is a good lesson for anyone developing websites, regardless of the platform.

Reducing the number of requests and roundtrips required to render the page, and the latency on those requests, is key.

For example, you use Google Fonts. For that you need to load CSS. So that's three roundtrips for rendering your page: 1) your page, 2) the Google Fonts CSS, 3) the font files.

We've built a WordPress plugin called PhastPress ("fast press") that helps you reduce the request count and those roundtrips without building your own theme. https://wordpress.org/plugins/phastpress/

Among other things, it inlines the critical CSS needed for rendering the page, defers all script loads, and optimizes images. It cuts page size in half on the stock theme, and reduces the number of additional roundtrips to zero.

> three roundtrips

Well, before we get to the Google Fonts CSS, we first have to open another connection there, do a TLS handshake, and maybe we even need to look that host up first before any of that.

True. I got the roundtrip terminology from Google's PageSpeed Insights tool, which AFAIK does not count the actual network roundtrips.

The one "logical" roundtrip (an additional request) would indeed count for many if you consider DNS, TCP and TLS handshakes.

Even more of a reason to get rid of them.

Most of the modern web is like that Pascal letter:

"I would have written a shorter letter, but I did not have the time."

It takes time to make lean sites look good.

I don't want to take away from the work done; it's indeed a lot of work to get websites to load fast nowadays, when images are so hi-res and there are so many JS libraries and requests, and then you have to take tablets and phones into account... it gets big fast. But the 7kb title is clickbait at best. I too can make a website load fast by extracting pretty much everything from it. When working in the real world with real clients, this wouldn't fly. It would be much more impressive to take a website such as css-tricks.com and show me how you have optimized it.

While I mostly agree with the author's diagnosis, the cure proposed is a bit strange. First, it's not WordPress anymore. You could use any other solution then, including popular leaner alternatives. Moreover, the front-end is just a part of the equation. And if you're really focused on creating a realistic proposal, there are many other factors to consider, not just slimming down your blog and making it incompatible with the extensions and themes you're using.

How is “I took a WP theme designed as a minimalist starting point for building your own theme and cut out a lot of stuff I considered to be cruft” not WordPress? It’s still WP behind the scenes.

(Maybe you think that Underscores is another CMS? It’s not, it’s a WP theme. https://underscores.me)

What's the big deal? WordPress can be lean in the code it delivers? It certainly can't be lean on the backend. There are also helper functions for adding tons of classes to elements everywhere so you can "patch" CSS for certain pages / categories only, clearly not aimed at an audience that values minimalism.

This resonates well with minimalism. My site (halfcoded.com) is inspired by https://zenhabits.net/ and uses a minimal WordPress theme. Happy to learn that more people are conscious of the direct environmental impact of the internet.

To put things into perspective, nweb is a web server written in C which can serve static web pages, and its smallest executable is just 12 KB in size.


Why use wordpress at all? Just create a static webpage at this point.

Technically you could create a static webpage from the WordPress one.

Not only can you, but in many (if not most) cases you should, because of the performance and security benefits. The drawback is that your site will no longer use the built-in WP search functions, but there are ways around that.

Couldn't you put WP behind a reverse proxy and cache the hell out of it?

We cache the hell out of our WP marketing site. Cloudfront > Varnish > Apache > OPCache > MySQL Query Cache. It feels fast to visitors. People in the admin area still suffer.

WordPress themes are notoriously bloated and 1/4 of the web is powered by WP.

As a developer I'm not too happy about the trend I've seen in the past couple years of 'monolithic' WordPress themes with tons of JS and other code, because making even simple changes to them can get extremely complicated. I think they caught on for two reasons (1) they often have interactive 'builders' that come with widgets you can drag and drop (contact forms, slideshows, etc.) and (2) The whole design trend of things that change as you scroll--'parallax' elements and dynamically-resizing headers, etc.

I've always made my own themes, and even those can be fairly slow without the standard caching plugins. Optimizing Craft CMS sites was a breeze by comparison.

You could also get it down to 0kb if WordPress just serves a blank response. Still uses a ton of resources on the server.

Should say "7kB".

7 KiB.

> It devastates most benchmarks, with an average time to first byte (TTFB) of about 0.15s, and fully rendered within 0.5s.

That really isn't that fast, not for your / landing page.

Given the goals, this is a project that only works if you don't care about having anything but a barebones website, so it's not practical for most people. It's also irrelevant that WordPress was used; any CMS could apply the same approach.

I guess I was hoping for something more insightful.
