Web Performance 101 (3perf.com)
315 points by iamakulov 6 months ago | 53 comments



IMHO instead of going for compressing/minifying/whatever else, it is far better to just remove the useless cruft in the first place --- and then you can still apply such techniques to whatever is left to squeeze out a bit more improvement. The best way to make your site ultra-responsive is to cut out all the bloat.

Related item: https://news.ycombinator.com/item?id=10820445


> The best way to make your site ultra-responsive is to cut out all the bloat.

And if the site doesn't do it, you can often decrease page load time by turning on your browser's built-in tracking protection:

https://blog.mozilla.org/firefox/tracking-protection-always-...

It's a bit sad that blocking trackers can cut page load time in half but that is unfortunately the web we have.


This is mostly about tuning performance for SEO. It's amazing that in 2018 sites are still delivered with unoptimized images; it's not rocket science to use Photoshop or run all the images through a CLI-based tool.


Agencies are only concerned with the speed of putting content online and don't have time for that. Mom-and-pop shops, 80% of the internet, are only concerned with getting content online and haven't a clue what anyone is talking about, because they leave that up to their high-school-age kid.

Designers have become "coders" but aren't versed in the science of computers and networking. Thus we are given WordPress, Wix, and Squarespace, where you, too, can become an internet website developer!


This part is almost always missing from such presentations. The first point should always be: remove as much as possible.


You're totally right, but unfortunately, I think that's because the people writing these types of presentations don't have control over the content or design of the website, so removing stuff is often impossible or far more work than these tips.

The core blocker behind slow websites is usually political, not technical or a lack of skill. These presentations usually just try to make the best of those team-dynamics issues.


It reminds me of "reduce, reuse, recycle, in that order". It's better to reduce resource consumption than to reduce its impact.


Exactly. The weird thing is that you don't have to lose those JS advantages if you pull out an old-school technique called progressive enhancement. If you mix that with a strict no-JS-first design, you will achieve performance that no ultra-optimised "dynamic" style can compete with.
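
To make that concrete, here's a minimal sketch of the pattern (the form id and endpoint are hypothetical): the plain HTML form submits on its own, and the script only upgrades it to an in-page submit if it happens to load.

  // Progressive enhancement sketch: <form id="subscribe-form" action="/subscribe"
  // method="post"> works with JS disabled; this only upgrades the experience.
  const form = document.querySelector('#subscribe-form');
  if (form && 'fetch' in window) {
    form.addEventListener('submit', async (event) => {
      event.preventDefault();
      const response = await fetch(form.action, {
        method: form.method,
        body: new FormData(form),
      });
      form.insertAdjacentText('afterend', response.ok ? 'Thanks!' : 'Please try again.');
    });
  }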


I have noticed a lot more landing pages using gradients/plain backgrounds instead of a big hero image.


Truth be told, I love big hero images and especially videos. However, they're the largest performance problem on my otherwise text-based website, and they're purely aesthetic.


"Removing useless cruft" doesn't seem very practical, did it often work for you? In my experience, erring on the side of making existing stuff work faster is better than removing non-critical features.


> and then you can still apply such techniques to whatever is left to squeeze out a bit more improvement

Yes, like "compressing/minifying/whatever else", which is the subject of this presentation.

What's next? Really, before minifying, you have to get a server to serve anything in the first place. And register a domain name. And, hmmm, well first you need to buy a computer and plug it in...


So, in summer, I gave an introductory talk on web performance. This is its textual version :)

The talk includes 94 slides with text about:

  — why web performance matters
  — how to optimize:
    — JS (async/defer, code splitting)
    — CSS (the critical CSS approach & tools for it)
    — HTTP/connection stuff (Gzip/Brotli, preloading, CDN)
    — Images (compressing images, webp, progressive vs baseline images)
    — Fonts (the `font-display` trick)
  — and what tools help to understand your app’s performance
Would love to hear your feedback :)
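
As a quick illustration of the code-splitting bullet above (a generic sketch with a hypothetical chart module, not code from the talk): bundlers like webpack emit the dynamically imported file as a separate chunk that is only fetched on demand.

  // './chart.js' becomes its own chunk and is downloaded only when needed.
  document.querySelector('#show-chart').addEventListener('click', async () => {
    const { renderChart } = await import('./chart.js'); // hypothetical module
    renderChart(document.querySelector('#chart-container'));
  });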


I found this super interesting and there were a few areas I wasn't familiar with where I learnt something new. Kudos.

Meta note: I find this presentation style to be great. You can scroll up and down at speed, you can text-search the whole page, there's a nice mixture of imagery and text, it's clean and accessible, and there's even a table of contents. How did you author this?


Some more ideas for your next talk:

HTTP2 is a doddle to implement. Do it.

PWA makes it possible to work fully offline.

With CSS, it's best to chuck it all out, including those reset files someone wrote a decade ago. Instead, rewrite the whole lot using CSS Grid and custom variables.

Inline the SVG into the CSS as custom variables.

Get rid of the bulk of the JS by only targeting evergreen browsers. No polyfills, no jQueries, just minimal JavaScript that does a lot of things in the PWA and manipulates CSS custom variables rather than the DOM.
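
A tiny sketch of the "manipulate CSS custom variables rather than the DOM" idea (the property name is made up): flip one variable and let the stylesheet do the rest.

  // Instead of toggling classes or inline styles on many elements,
  // update a single custom property that the CSS already references,
  // e.g.  a { color: var(--accent-color); }
  document.documentElement.style.setProperty('--accent-color', '#0b6');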

Use HTML5 properly, with no lip service. Get rid of JS for forms and rely on built-in HTML5 validation instead.

Use PageSpeed to sort out the images and turn them into source sets.

The goal of a lot of the above is to strip out convoluted build tools and have actual neat HTML that can be maintained. No more 'add only' CSS to hand on to the next guy; instead, have something with comments in the code and sensible names that target HTML5 elements like 'main', 'aside', or 'nav' rather than made-up class names.

A final thought is that the starting point can be to build a green website, i.e. one that doesn't cause too much cruft to be downloaded. This is the same thing as 'minimizing/cutting out bloat', but I find that setting out to build a website that sets the example of being green is a better mindset than 'must do those hacky things to make the website faster'.


Good article! The main thing I didn't quite agree with was the blanket statement that GIFs should never be used. I fully agree that they shouldn't be used for video clips, and I understand this was the main context, but for small animations (especially pixel-art types) they're still a good format.

I also found them useful recently for small (28x28px) thumbnail images on my personal website. On average, saved as a PNG the thumbnails were 20kb, as a JPEG 9kb, and as an optimized GIF about 1-2kb. With about 100 thumbnails on one of the pages, the savings are pretty significant. (At least, this seemed to be the best approach; if anyone with more knowledge of image compression has a better suggestion, please let me know).


I'm saddened not to see Closure listed there for JS minification. It might be worth mentioning that there are more advanced minification tools.

Also, there doesn't seem to be anything on JSON minification; JSON makes up a sizable portion of responses. There are techniques to transpose JSON objects to make them easier to gzip-compress.
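
One such technique, sketched below with made-up field names: transposing an array of objects into parallel arrays removes the repeated keys and groups similar values together, which tends to gzip better.

  // Turn rows into columns so keys appear once and similar values sit together.
  function transpose(rows) {
    const columns = {};
    for (const row of rows) {
      for (const [key, value] of Object.entries(row)) {
        (columns[key] = columns[key] || []).push(value);
      }
    }
    return columns;
  }

  // => { id: [1, 2], name: ['a', 'b'], price: [9.99, 4.5] }
  transpose([
    { id: 1, name: 'a', price: 9.99 },
    { id: 2, name: 'b', price: 4.5 },
  ]);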


Google Closure is still best in class at DCE and cross-module code motion, but it's never caught on with the larger web community, partly because it applied certain constraints to your JS code that weren't always met. This has changed a bit with modern Closure being better able to consume npm modules, but AFAIK, the only heavy non-Google user is still ClojureScript.


Wouldn't the JSON object have to be pretty big before it affects performance?


A few slides after you say that gifs should never be used, you use a gif ;)


I definitely learned something, thanks


> Compress your JPG images with the compression level of 70‑80.

In practice some images can get noticeable artifacts even at around 90. Most JPEG compressors always apply chroma subsampling, which is often destructive on its own [1]. On the other hand, many hidpi images can be compressed at around 50.

> Use Progressive JPEG… Thanks to this, a visitor can roughly see what’s in the image way earlier.

That's not the point of using progressive JPEGs nowadays. You accept a 10-200% decompression slowdown in exchange for a 5-15% size reduction.

> Use Interlaced PNG.

Don't. Interlaced PNGs can easily be 1/3 bigger. There are better ways to show loading images, and the website already uses one.

> webpack has image-webpack-loader which runs on every build and does pretty much every optimization from above. Its default settings are OK

> If you need to optimize an image once and forever, there are apps like ImageOptim and sites like TinyPNG.

These tools are no good for automatic lossy image compression [1]. The default is mostly JPEG 4:2:0 at quality 75, PNG quantized with pngquant at settings as low as conscience allows, missing out on many PNG reductions and optimal deflate, with no separation between lossy and lossless WebP (if WebP is handled at all), etc.

As a result, the images on the website can still be optimized losslessly by about 13-24%.

[1] https://getoptimage.com/benchmark
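
If you do stick with image-webpack-loader, at least override the defaults per encoder. A rough sketch of the relevant webpack config (option names follow the loader's README; double-check them against the current docs):

  // webpack.config.js excerpt
  module.exports = {
    module: {
      rules: [
        {
          test: /\.(gif|png|jpe?g|svg)$/i,
          use: [
            'file-loader',
            {
              loader: 'image-webpack-loader',
              options: {
                mozjpeg: { progressive: true, quality: 80 }, // tune per project
                optipng: { enabled: false },
                pngquant: { quality: [0.7, 0.9], speed: 4 },
                gifsicle: { interlaced: false },
              },
            },
          ],
        },
      ],
    },
  };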


> These tools are no good for automatic lossy image compression

The self-promotion didn't bother me until this claim, because you posted some great advice along with it.

ImageOptim is great. If you choose "lossy minification" it does automatic lossy image compression, preserving perceptual image quality while making huge reductions to file sizes. Users can even adjust how aggressive it is.

I'll take your word for it that I could get 13-24% smaller file sizes with your Optimage product on top of the 80% (or whatever) that I can get with ImageOptim. But I'd prefer that you didn't claim that other choices are "no good".


I specifically meant automatic lossy compression with predictable visual quality. If ImageOptim could actually achieve it (automatically), that would save me and others an awful lot of time. But as it turns out it is not that easy.

Some very smart people at Google went to the trouble of creating projects like Guetzli. I personally have spent months on this, and it gets me every time someone claims "just use that one tool" without any evidence. I presented mine, and it's reproducible.

ImageOptim is a great tool otherwise.


> [1] https://getoptimage.com/benchmark

A score of 24/55 for TinyPNG and then 55/55 for their own service makes it look as if this article is an advertisement. Especially since TinyPNG gets better or very similar file sizes while staying visually lossless up to a point (images that are nothing but a bunch of rainbow gradients are its weakness).

Remember that TinyPNG is optimized for web use, where artifacts are tolerated. It was configured with that in mind. The benchmark tests for images that are visually identical, and you won't get that from any image optimizer made for web usage.

Users only spend a few seconds looking at images on web pages, and the artifacts from optimizers are very minor. See: https://3perf.com/talks/web-perf-101/#images-compress-jpg-si...


Based on my first scan, I've bookmarked this for the next time I have a perf issue on a site.

One thing I didn't see covered is that one of the biggest speedups is removing junk from the pages. That could be too many JS trackers, user-hostile videos, or whatever. IMHO it's an underrated skill for web developers to be able to push back with cogent arguments when asked to ruin the performance of the sites they work on.


Why? Most trackers load async and don't block the page from loading or rendering. I get that it's popular to hate on them, but technically they load without blocking the page from rendering. A good read is https://sites.google.com/a/webpagetest.org/docs/using-webpag...
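
For reference, this is roughly the loader pattern most tracker snippets use (a generic sketch, not any particular vendor's code): the injected script downloads without blocking HTML parsing or rendering.

  (function () {
    var s = document.createElement('script');
    s.src = 'https://tracker.example.com/analytics.js'; // hypothetical URL
    s.async = true; // injected scripts are async by default; this makes it explicit
    document.head.appendChild(s);
  })();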


Even just one poorly designed library can cause serious memory issues and trigger a ton of events. Usually it's the marketing and ad people who throw around this "but they are async loaded" line. That's true, but it doesn't change the fact that 40 trackers and the dependencies that come with them are slowing things down and infringing on users' privacy. Let's be real.


Can you share an example site with 40 trackers? I believe you, but it just doesn't seem commonplace.


There has been a lot written about this, but here's a test I just ran on a random page on Huffington Post:

https://www.webpagetest.org/result/181031_TT_443f9d1e666d08f...

The article is probably <200 words, but the page is 3.2 MB and makes >200 requests. There are 55 JavaScript requests in there.

Right, HuffPo is egregious. How about a site HN readers might frequent, something lightweight like Reddit?

https://www.webpagetest.org/result/181031_H6_eaa2e64c9969515...

Random Reddit page, content: a medium-sized image. 164 requests, ~12 seconds to render the page on the test rig. 60 JavaScript requests. 1.5 MB of JS downloaded to display the post, a 46kb image. That's a 30:1 ratio of JS to post content (what the user wanted to see). And there's other bloat beyond the JS.

I just picked these two sites at random. You can do this all day with random websites. I would guess that most sites behind a .com (or .co.uk, etc.) will look similar.

I don't mean to pick on these sites, just wanted to point out that these practices are widespread and even reputable developers engage in them, which will make it more difficult to undo the rot.


I see there is a lot of JavaScript, but are you certain it's JavaScript whose sole purpose is tracking, rather than application code? I'm not gonna say these sites could not be implemented better, but the claim was that these are all tracking pixels. Is that true?


You're parsing this too narrowly. I'm not making a claim about the purpose of each request. I'm looking at the totality and saying that maybe making 200 requests to display 200 words of content is overkill.

The larger claim was that the pages are loaded with junk that affects the performance of the pages. The signal:noise ratio on modern sites is broken, and optimizing the junk can only accomplish so much. Developers need to advise stakeholders of the downside costs, performance among them, of loading sites with bloat.

Here's an article I read a while back on the impact of specifically Javascript:

https://medium.com/@addyosmani/the-cost-of-javascript-in-201...


I agree, but the parent response claimed 40+ tracking scripts, and I just don't see 40 tracking scripts. I see a website using probably too many scripts to implement its functionality, but I can't claim to know whether they could do it with fewer scripts; I can only assume, and that is what the originally posted document talks about how to optimize... hence my comment about async trackers not blocking was correct.


They may load async, but JS parse + compile + execute takes up client-device CPU cycles and can block the main thread from responding to user actions immediately, even more so on mobile devices. This, and the fact that a typical page has multiple such trackers, can cause notable overhead.
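
If you want to see this on a real page, the Long Tasks API reports main-thread work over 50 ms, which is roughly when input handling starts to feel delayed (a sketch; browser support for long tasks varies):

  // Log main-thread tasks longer than 50 ms and what the browser attributes them to.
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(`Long task: ${Math.round(entry.duration)} ms`, entry.attribution);
    }
  });
  observer.observe({ entryTypes: ['longtask'] });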


Can you share some example sites where the CPU cycles on mobile have a major impact? I suspect maybe news websites? I just don't encounter many on a daily basis where it has an impact, and for most sites I look into for performance, the issue definitely isn't trackers. It's almost always oversized CSS files and blocking JS files. When you dig into the site you usually find that it will be very challenging to re-architect it to avoid the blocking script or shrink the CSS... but again, I would love to see some "typical" examples.


They may not be blocking, but they're an extra burden on my limited bandwidth (they slow other connections down while downloading) and they balloon the memory usage of sites that are otherwise delivering static content.


Usually the browser assigns a priority to each request, and when you load a script async its priority is set to low. Have a read here: https://developers.google.com/web/fundamentals/performance/r...


Love knowing this kind of stuff, thanks


Good document! I wanted to mention that something I've found to be significant in practical testing, but often not considered by web developers, is reducing the total number of HTTP requests. It seems like fetching 5 CSS files of 5kb each will significantly slow things down compared to combining them into one 25kb CSS file.


It slows things down especially because the browser will likely have to re-layout/re-render the page every time a new chunk of CSS arrives.

Generally, avoid splitting CSS. Even if you don't use all your CSS on every page, a cached 100kb CSS file will outperform a bunch of unique-per-page 25kb CSS files every time (especially since it's 0 requests for the second page). Except on dial-up, probably.

It may be beneficial to split CSS files somewhere above the 300kb mark, but I wouldn't know. My one-page-app is only about 500kb over the wire, including CSS, JS, Fonts and HTML. ~30KB of that is CSS.

I've been optimizing that for years though.


HTTP/2 makes this less important. Perhaps still worth it for CSS, but higher-effort things like image spriting might not be as attractive now.


Yes -- in fact some webperf techniques are HTTP/1.1-specific, and are actually _anti-patterns_ in HTTP/2. Spriting is one such example.


Domain sharding is another.


AFAIK SVG sprites (or inline SVG) are still required to style SVG images with external CSS.


Back in the day, reducing total HTTP requests meant less time waiting through network lag (round-trip times). With HTTP/2, that's less important because it allows multiple simultaneous requests on the same connection (fewer network connections made = less lag). It's a nice-to-have feature with measurable benefits, but I don't rely on it. There is a non-negligible cost for the browser to parse yet another CSS/JS file, but I'm not sure exactly what those costs are.
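
For what it's worth, serving HTTP/2 from Node is only a few lines; a rough sketch (the certificate paths are placeholders, and browsers only speak HTTP/2 over TLS, so a real certificate is needed):

  const http2 = require('http2');
  const fs = require('fs');

  const server = http2.createSecureServer({
    key: fs.readFileSync('server-key.pem'),   // placeholder paths
    cert: fs.readFileSync('server-cert.pem'),
  });

  server.on('stream', (stream, headers) => {
    // Every request on the page shares this one connection, which is why
    // bundling/spriting matters less than it did with HTTP/1.1.
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<h1>Hello over HTTP/2</h1>');
  });

  server.listen(8443);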


I know I'm kinda self-promoting here, but since you mention webpack loaders like responsive-loader, and you recommend image-webpack-loader multiple times, I figured I could mention my plugin, imagemin-webpack-plugin [0].

The problem with image-webpack-loader is that it only works on images which are `require`d or `import`ed. responsive-loader adds those images to webpack in a way that the loader cannot compress them.

Plus there are a bunch of other fancy features that many helpful users have added, like caching (no need to re-compress every image every time you run it!), the ability to minify images not in the webpack pipeline, and more.

[0] https://github.com/Klathmon/imagemin-webpack-plugin
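
Usage is roughly like this (a minimal sketch; see the README for the full option list, and the pngquant setting is just an example):

  // webpack.config.js excerpt
  const ImageminPlugin = require('imagemin-webpack-plugin').default;

  module.exports = {
    plugins: [
      new ImageminPlugin({
        disable: process.env.NODE_ENV !== 'production', // skip in dev builds
        test: /\.(jpe?g|png|gif|svg)$/i,
        pngquant: { quality: '95-100' },
      }),
    ],
  };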


I've found the server PageSpeed module (in my case for Nginx) does a lot of good things "on the fly".


This is the correct way to do a lot of it. For instance, images: yes, you could optimise them in Photoshop, but actually you want your artworkers to be doing the images right, so they look good and tell the story. Optimising the images is something that should be abstracted out, so on your dev version of the site everything is in 100% glorious, maybe even phone-res, megapixels.

Then, with PageSpeed handling that abstraction, you can deliver WebP when you need to, and also the source-set images, so every device gets images of the right size that update themselves automagically if people zoom in.

Same with minification, why have complex build tools when you can just have PageSpeed do it properly?

You can also have beautiful HTML for view source by putting on the right PageSpeed filters.

The list goes on. Apart from the results, it also does the abstraction bit, so artworkers can do their Photoshop stuff unencumbered; same for frontend and backend devs.

The thing about it, though, is that you need to understand a mix of different things that are nowadays split up into different job roles of ever-increasing specialisation. A Photoshop person isn't going to go all command-line on the server for the perfect PageSpeed Nginx setup, and neither is a CSS person, a UX expert, a JS expert or a backend expert. Not even the person who keeps the site online is typically going to step up to using PageSpeed for the benefit of the team. PageSpeed just doesn't fit into any of these niche jobs, so it is more likely to be found on smaller one-person efforts where there aren't organisational hurdles in the way.


No, you need your artworkers to do it right in the flipping first place and not place excessive load on your live environment - or make the live environment more complex than it needs to be.


Well, in reality, all of these build tools for performance are designed to deploy to a "static server" which is out of the control of the developer. That is why they process the images at build time rather than "on the fly".

Not saying that PageSpeed is wrong, but it is a niche tool that depends on the server-side implementation. Some developers prefer to abstract "the server" from their architecture...


Careful with Gzip and SSL, though - your payloads become vulnerable to the BREACH attack.


I've seen that. My understanding is that you can compress CSS, images, pretty much anything that does not require a secret. So if you're sending a secret to get an image, I think you're doing something wrong. If you keep the secret in a cookie, or transfer it in a header rather than the body, you should be clear to use Gzip and HTTP compression.


I would have liked to have seen more on HTTP/2



