1MB Club (1mb.club)
965 points by bradley_taunt on Nov 19, 2020 | 375 comments



I love it!

I feel like the 1MB limit is excessively generous, especially for text-only pages. But maybe that's what makes it so damning when pages fail to adhere to it. I know at least one website I maintain fails it spectacularly (though in my defense it's entirely because of that website being chock-full of photos, and full-res ones at that; pages without those are well under that 1MB mark), while other sites I've built consist entirely of pages within a fraction of that limit.

It'd be interesting to impose a stricter limitation to the 1MB Club: one where all pages on a given site are within that limit. This would disqualify Craigslist, for example (the listing search pages blow that limit out of the water, and the listings themselves sometimes do, too).

I also wonder how many sites 1mb.club would have to show on one page before it, too, ends up disqualifying itself. Might be worthwhile to start thinking about site categories sooner rather than later if everyone and their mothers starts spamming that GitHub issues page with sites (like I'm doing right now).


Don't forget that 1MB of JavaScript is also much, much heavier on the client than a 1MB image.


Indeed, but you also get more bang for your downloaded buck.

My toy project https://k8.fingswotidun.com/static/ide/?gist=ad96329670965dc...

Gives you an emulator, an 8-bit AVR Assembler, and an IDE for just over 500k transferred. Almost all of it JavaScript.

Using math.js is by far the heaviest part but at least your Asm already knows things like solarMass and planckConstant :-). CodeMirror comes in second heaviest but for user-experience-per-byte you're always going to be streets ahead of a gif.


True, but clients can opt to not load (or lazy-load) images without too much adverse effect. JS-heavy sites often completely break without JS.


Yours is an exception to the rule - in 99% of cases, for 1MB of JS you get 1MB of ads/tracking code.


> Indeed, but you also get more bang for your downloaded buck.

Yet this is most times used to load React to make a button clickable.


At the risk of over-complicating things, perhaps there could be limits per resource type. 10MB of images might be reasonable (e.g. for a photojournal), but only 128KB of JS, and 128KB for everything else. Something along those lines.


Yeah I was surprised they included pictures in the limit at all -- I mean, sometimes, you need those pictures, and for them to load slower is less important so long as you don't need them to navigate the page.


Well if you need them you can't be a part of this 1MB club.

It's not a bad thing, it's just a thing.


If you were able to reserve the space in the document flow for those images, I'm fine with the lazy loading. I hate it when the page text stays rendered long enough that I start reading, but then lazily loaded items cause the flow to rearrange the text so that I lose my place.


I imagine it should be pretty easy with javascript to set up dummy images, and replace them... probably also doable in pure css, just make them a block element or something, with a set size...


You don't have to imagine anything. Normally the lazy-loaded image is seeded with a transparent SVG placeholder that has the same dimensions. It's a solved problem.
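For what it's worth, plain width/height attributes get the same result with no placeholder file at all. A minimal sketch (the file name is made up; modern browsers reserve an aspect-ratio box from the attributes before the image arrives):

  <!-- The width/height attributes let the browser reserve the correct box
       up front, so the surrounding text never reflows when the file lands. -->
  <img src="photo-2400w.jpg" width="2400" height="1600" loading="lazy" alt="Holiday photo">
  <style>
    /* Keep the image responsive without losing the reserved aspect ratio. */
    img { max-width: 100%; height: auto; }
  </style>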


I miss lowsrc. It had a point.


Wasn't this what progressive jpeg was invented for?


I’m not surprised. The whole point of it is that you can reasonably load it over 3g.

If you have a few megs of images that never show up because they take too long to load, there is no point.


You don't really need full size/resolution images on a web page.


You people are why some sites have ugly blurry logos/images on my 4k screen


more the fault of the site devs for not using css media queries, or svg ;p


Does getting in the club assume you have a low resolution screen?

SVG only works in a few places.


You do for full width retina images on 1440p+ monitors.


Yeah true but even with small images, you can hit that cap quickly.


You could use SVG images.


Check out this app: https://webide.se/?disable=fonts,discoveryBar

It's over one hundred thousand lines of JavaScript code, minified and gzipped into a 300KB bundle, which should fully load in about 300ms on a decent computer.


That took 4000ms on a phone.


Which is still rather fast. Starting an IDE like Visual Studio on my desktop takes longer.


4+ seconds is hardly "rather fast", and being faster to start than Visual Studio is hardly glowing praise.


What hardware/tools did you use to measure?


It was about the same for me, and I used a simple stopwatch. Stopped the watch as soon as I saw anything on the page. I am on a fairly fast network, too


For me it wasn't even a second on an Honor 10.


> I feel like the 1MB limit is excessively generous, especially for text-only pages.

No kidding. I just checked and the average text-only page on my blog is well under 100kB. Even the image-heavy front page is under 1MB...


I was pleasantly surprised to find the same!

Even the page where I used images came in at juuuust about the limit -- to the point where you'd have to make a ruling on whether the gzipped size counts: over the wire it was technically <1MB, but just above after decompression. The site does specifically say "downloaded", haha


Personally, I stuck to the 40K best-practice recommendation for a basic web page as a target, which was in place when I started, later relaxed to 140K including webfonts. Notably, this is without images.

(This is a flexible target, depending on the complexity of a page. E.g. for a "bloated" page, a fully styled video display for competition winners showing 200+ entries and 280+ individual videos in categorized views is about 250K, including a few images, two and a half font families and the Vimeo Player SDK, but excluding the load of any external video streams. However, with compression we still manage the 140K mark.)

Then reality hits: the client insists on a full-width photographic hero image as if it's still 2014, there are the usual controversies about a full-size intro video (autoplay, of course), we must have this highly intrusive chat asset installed, etc., etc… – and we easily blow the 1MB limit.


Something about all pages (collectively) being under the limit, instead of every page being under the limit, changes the exact meaning to something I cannot agree with. That was the meaning I replied to below, and then I wrote this after realizing you might have meant "every" when you wrote "all".

Let's say you write a daily blog. A single A4 page of text contains on average 3,000 characters; your posts average slightly above that at 4,000 characters. How long until the text content alone is above the 1MB limit?

https://dictionary.cambridge.org/grammar/british-grammar/all...


I doubt you can find a single text blog on a specific topic that wouldn't be improved by limiting its total text to 1MB.

Being more verbose is generally just poor writing. Now, using a separate website per topic seems like a silly limitation, but the more topics being discussed the less relevant the discussion.


Yeah, I did mean "every" rather than "all", as you put it. Though both would be interesting.


I think an onload limit is much more useful than file size.

A 700kB JavaScript page can take up to 10 sec. to render on older mobile devices. And a 500kB image can contain megapixels, which will slow down browsers without GPU acceleration.

Personally I always go for a max 2 sec. limit on all devices.


Agreed. 1MB seems more reasonable as a per-site threshold. Or, alternatively, somewhere between 10kB and 100kB as a per-page threshold.

For context, 1MB is the same order of magnitude as the original Doom which was about 2.4MB in size. [1]

[1]: https://www.wired.com/2016/04/average-webpage-now-size-origi...


I think it should be relative to content. As in versus actual text on the screen. This metric can also be applied "per-page" and "per-site", with less ambiguity for SPAs; every new load brings in more bytes, but also more text, thereby contributing to the ratio.

Even better - tie fixed site-wide assets (i.e. CSS, JS) to features, so that loading an entire framework only to use a small % of its features is penalised.


Even 500kB is crazy imho, unless you need to display media formats.


Next level: The 1MB Quine Club.


TiddlyWiki


Do they check the 1 MB limit?


Looks like they do yes.


From a "Why, who cares" perspective website (and app) speed are highly correlated with conversion and engagement.

Google's headline research on the subject says "Improving your load time by 0.1s can boost conversion rates by 8%."

Some additional data, sourced from the Nielsen Norman Group, below:

- Google found that increasing the load time of its SERPs by half a second resulted in a 20% higher bounce rate.

- Google found 53% of mobile visits ended if a page took longer than 3 seconds to load.

- Akamai aggregated data from 17 retailers (7 billion pageviews) and found that conversion rates were highest for pages that loaded in less than 2 seconds; longer load times correlated with 50% drops in conversion rates and increased bounce rates, especially for mobile visitors.

- BBC found that for every extra second of page load time, 10% of users will leave.

Want to sell / fix this?

Here are the three best simple resources illustrating objective third-party results from increasing site speed:

- https://blog.hubspot.com/marketing/page-load-time-conversion...

- https://wpostats.com/

- https://www.cloudflare.com/learning/performance/more/website...

Here is a more compelling deeper look from a leader in the UX space:

- https://www.nngroup.com/articles/the-need-for-speed/

Here's a really well written article about how to PROVE to the powers that be that site speed is worth testing:

https://web.dev/site-speed-and-business-metrics/


I used to work for a major e-commerce company and will back this up. The importance of performance, of course, depends on the use case.

In e-commerce, the biggest factors (that come to mind) to compete on are—SEO and brand aside—price, offerings, and convenience. Convenience has two dimensions: UX and performance.

If a user has a very clear idea of what they want from your site, they'll probably be patient. If a user is channel surfing (and the vast majority are when it comes to shopping, comparing options, etc.), then every millisecond matters. Every millisecond spent loading is a millisecond not spent selling/convincing.


I totally believe that the vast majority of page views are from channel surfing (or bots), but have a hard time believing that has any correlation to actual purchases.

If I'm spending an hour's pay on something I really want, of course I'll wait 10 seconds for the page to load, or if I can't get it to load at all, I'll make a note to try again later from different browser or internet connection. I'll manually enter my address details if they didn't auto-fill properly. I'll respond to emails on what I want to declare for customs, and various other efforts.

I, and people I know, feel like we buy very little on a whim. Is that unique? Are there whales who buy everything you can put in front of their face, or a different demographic who searches for something to buy and then changes their mind in precisely 850 milliseconds?

I would accept that, like candy in the checkout lane, the profit is small but worth more than the extra effort it takes to put the offering there, even though the revenue is small compared to the actual stuff people need to buy. But the analytics that suggest hundreds of people click a link to your page and most close it faster than you can read the headline just seem unbelievable.


Maybe it's people who know kind of what they want, but not exactly. Let's say you want a vacuum cleaner: you go to a few different websites and check about 20 models on each site. On one site, every time you check a model you have to wait 10 seconds. You might give up early on that site.


> Are there whales who buy everything you can put in front of their face, or a different demographic who searches for something to buy and then changes their mind in precisely 850 milliseconds?

Maybe, maybe not, but site load time has a very well documented correlation with bounce rate. And if a user bounces, they're obviously not going to end up spending money.

I believe it's more about keeping the user in a sort of sales/marketing 'funnel'. If I'm just browsing and come across a service I might be vaguely interested in, perhaps I'll click the link to view their page. Now I'm at the top of the funnel. With each link I have to click to follow, there is an opportunity for me to get bored, get distracted by something else, or so on. Maybe I'm not buying anything then, but maybe somewhere along the funnel is a free trial of the service, and if I'm in a pleasant mood, maybe I'll do the trial. But if I bounced because of a 5 sec load time, chances are good the mildly impulsive mood I was in is not going to persist enough to encourage a subsequent visit.

So it might not be an immediate thing, but better engagement does lead to more sales. I like to consider myself a mindful shopper, but there are definitely a handful of $5/month services I am subbed to because I was in a vaguely impulsive mood and stumbled into something I thought might be useful -- and their signup process was silky smooth.

And if you are instead shopping for an item and browsing several different sites, if one of them sticks out because of its slow load times, that's a tab you are going to close.


Probably lots of people are like the stories about women shopping: they go around looking at lots of different things they don't need, to see if they find something, but mostly don't buy. The more items they browse, the more likely they are to find something they want and buy it.


>If I'm spending an hour's pay on something I really want, of course I'll wait 10 seconds for the page to load

When I go to ebay I end up on the landing page. Then I have to enter my product name into the search. Then I will go into search settings and filter out auctions (if I want to order immediately). Then I have to open 3-4 different product listings. That's around 6 page loads. If each page takes 10 seconds then I would have to spend 1 minute doing nothing but waiting for the site to load.

> or if I can't get it to load at all, I'll make a note to try again later from different browser or internet connection.

Isn't it far more likely you are going to buy whatever you want from somewhere else?

>I, and people I know, feel like we buy very little on a whim. Is that unique? Are there whales who buy everything you can put in front of their face, or a different demographic who searches for something to buy and then changes their mind in precisely 850 milliseconds?

It's not about buying. It's about which store you visit when buying something. If your site performs poorly then users might not come to your online store. They may go to a competitor.

The thing about those averages is that they are quite unintuitive. Remember they are averaged over thousands or millions of users. There will always be some users with latency spikes because their device is much slower than usual or simply because servers are running at capacity. They are in the minority and have small effects on average metrics but the latency they see could be very high, maybe even up to 5 times the average. If you can decrease average latency by 1 second then the site might be 5 seconds faster for a minority of users.

>I would accept that like candy in the checkout lane, the profit is small but worth more than the extra effort it takes to put the offering there, but the revenue is small compared to the actual stuff people need to buy

You have to consider that each product listing on an online store is basically an ad. You are not necessarily buying all the products you see, but each product listing informs you about what's available in the marketplace. If pages load faster you end up seeing more "ads", which increases the likelihood that you eventually do buy something.


So what you're saying is, the fact that I feel icky and decide against spending any money on HomeDepot.com lately, is hardly a surprise.


Those studies all come from companies trying to sell you something.

They seemed to come from when mobile phones were much slower and there was a valid use for AMP pages. A/B test data is very easy to cherry-pick from, and you can see a 10% increase in conversion based on random chance. I can see my marketing team misinterpret data like that and use it for promotion. I am skeptical.

I improved the initial loading speed of a website so it was 30% faster on an old mobile phone, and it made no difference in conversion according to the analytics.

Most people used faster phones with faster internet, so it really didn't matter. After the first page load, most of the bloated assets were cached, so even those with slow connections were relatively fast navigating further.

There could be some types of websites where speed matters more, like if you promoted clickbait articles with high bounce rates. But if you have high-quality content, it isn't going to be that important unless you have really terrible bloat.


Some of these, not all, don't even seem to be A/B tests, just correlation being explained as causality.


"This more streamlined design that also happens to load faster led to higher conversion rates! It must be because it loads faster!"


If nobody cares, why did Google invest so much in AMP?


To take control of other people's content and, by extension, revenue streams? Performance was a selling point for the web dev - I, as a user of the web, was never even asked my opinion, and I can't (easily) disable AMP.


Alphabet cared. It cared about what corporations can care about: making money. It invested so much because there was a good return on investment, there was a lot of cash on hand waiting for investment, and AMP was big enough to soak up enough of that cash to be worth pursuing. With billions of cash on hand, fixed overhead makes small investments not worth the bother. For Alphabet, a hundred million dollars is less than a rounding error.


To compete with Facebook in centrally controlling the internet and the revenue that flows through it. It's their Plan B after their Plan A, Google+, failed.


Interesting that a site that champions for performance has an ungodly favicon [0] that is 10x larger (15KB) than the page itself.

[0] https://1mb.club/favicon.ico


I did a quick Google search and discovered that SVG favicons are valid in most browsers, so for fun I reduced it by 95% down to 886 bytes:

https://output.jsbin.com/yudonidujo


And it looks much nicer zoomed in (as in your link) ;)


svgo got it down to 876 bytes without visual difference but svgcleaner couldn't do anything.

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 48 48"><path fill="#662113" d="M27.3 5.7c4.9-3.1 11-3.9 16.8-4.1A162 162 0 0031 10l-3.7-4.5zm4.1 5.6c4.8-3.6 10.1-6.5 15.3-9.6a18 18 0 01-4.3 12.7l-11-3z"/><path fill="#c1694f" d="M44 1.6l3.3-.3-.6.4c-5.2 3.1-10.5 6-15.3 9.6 3.7 1.2 7.3 2.2 11 3l-6 6.2c-3.6-.8-7.4-2.8-11-2.4a70.5 70.5 0 00-17 18.3c2.7.2 5.5.4 8.2.4h.5c-3.3.2-6.6.4-9.9.9 3.6.6 7.2.5 10.8 1-3.7 2.2-8.2 1.7-12.4 1.7-1.6 2.4-2.7 5.3-5 7.1.7-2.8 2.6-5.2 3.8-7.8.1-2-.4-4.1-.5-6.2L6.7 36l.3-.3c5.2-7 10.7-13.7 17-19.7l-1.6-7.7L27 5.4l.3.2 3.6 4.5c4.3-3 8.7-5.9 13.2-8.5z"/><path fill="#d99e82" d="M8 21.2C11.6 16 17 12 22.3 8.3L24 16A178 178 0 007 35.7c-1-4.8-1.3-10 1-14.5zm.5 15.2c4.6-6.8 10-13.4 16.8-18.3 3.7-.4 7.5 1.6 11.1 2.4L33 24.1c-2.5-.2-4.9-.4-7.4-.4l5.4 2.4-3.2 3.5c-3.7-.3-7.4-1-11.2-1l9.7 3a10.7 10.7 0 01-9.5 5.2c-2.7 0-5.4-.2-8.2-.4z"/></svg>
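For anyone who wants to actually ship it, wiring up an SVG favicon (keeping the old .ico as a fallback for browsers that still lack SVG favicon support) is just a couple of link tags; the paths here are placeholders:

  <link rel="icon" href="/favicon.svg" type="image/svg+xml">
  <link rel="icon" href="/favicon.ico" sizes="any">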


Nicely done. Manual or automatic conversion?


[1] generates ico, png and svg. You may want to remove some favicons from the generated HTML snippet; in my brief testing, Firefox loads both png and svg, while Chrome loads ico.

[1]: https://realfavicongenerator.net/


Excellent. Have you shared this with the website's maintainer and asked them to use it?


I recently noticed something similar, while optimising my e-commerce website.

I went with:

  <link rel="icon" href="data:;base64,AAABAAEAEBAQAAEABAAoAQAAFgAAACgAAAAQAAAAIAAAAAEABAAAAAAAgAAAAA
  AAAAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
  AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
  AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
  AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA">

It renders a black square everywhere except Safari. The black square actually goes with the brand.


Doesn't it support RLE compression?


What kind of an internet connection are you on that 15kb is significant?

Performance is important, but if your site is clocking in at 17kb then you're probably doing okay.


But when you start this way, you quickly end up with the bloat that is modern software, because each extra bit of waste seems justified at every step along the way.


May seem pedantic but those little gains do add up. A missing favicon also has a surprising impact.


Spoken like somebody who's never been to Antarctica. Or on a slow train WiFi for that matter...


I don't mind highlighting and curating small sites for fun, that's neat.

But calling larger sites a "cancerous growth on the web" just feels immature to me. Everything is just cost/benefit. There's no need to bring a black-and-white fanatical exaggerated mindset to it.

You can celebrate something you like without having to make the alternative a disease, sheesh.


With all due respect: modern news websites are a disease, and I find it pretty hard to disagree with the moniker "cancerous growth", since the behavior is spreading and is normalized by these websites.

There is zero benefit to me in loading 20mb of garbage just so I can read 10kb of text.


I think the reason why blog posts are written in this hyperbolic format is because it catches people’s attention, or “click bait”, if you will. If it were toned down in the way you suggested (which I agree with in principle), it might lose more readers before they get to the substance. Just hypothesizing here btw.


Yes, news sites are the worst of the bunch, but then we get people complaining about full web apps using JS when there is no real alternative.


The problem is that sites that really had no business being "web apps" have turned into these monstrosities. The new Reddit and Youtube are great examples of this. The old versions worked just fine, but for some reason they were replaced with all Javascript versions that perform much worse.


It seems pretty clear what the motives for Reddit are. They want as much tracking/ads as possible, as well as to actively discourage use of the website in favor of the app. At every step the website tells you to use the app instead, or that this content is only available to app users.

JS is not the issue and reddit could have easily made a snappy react version of the site. Monetization, ad tech and engagement growth are the issues.


Exactly. I build enterprise web apps for a living and don’t have any of these problems because I don’t have to put ads in them. They are blazing fast and users love the UX through and through. There are myriad reasons why I would never build them with traditional SSR templating.

I could also build a CNN clone in pure React (no SSG or SSR) without the ads and the millions of autoplaying videos that would demolish its current performance.

As for Reddit, perhaps SSR would be a better fit indeed. But most of its problems come from the fact that the SPA implementation is dogshit, and not from the architecture itself... Some things plainly don’t work, for example a feature as basic as the comment tree is broken as hell. This mess would be equally possible with SSR though - I would personally screw up more easily a comment tree implementation with AJAX data fetching in SSR templating and vanilla JS than in React.

I agree that many sites, especially news sites, are disgusting bloated messes. And yes, JavaScript is often involved but it’s not the cause - it is merely an accessory to the fact. The causes are ads, tracking, data mining, even crypto mining (!), and piss-poor implementations (and no, JS frameworks are not conducive to the latter any more than vanilla JS). Each of these causes is in turn motivated by its own root cause, most often economic in nature.


I mean, I agree full web apps are annoying to some extent, but if you're loading a whole application in a few MB, that actually seems kind of reasonable.


That’s just it — no benefit to you, but a lot of benefit to them. They need analytics to sustain their business. They need ads to sustain their business. They (maybe) need fancy interactions to differentiate and thereby sustain their business. If you expect everyone else to only do things that benefit you instead of mediating between benefiting you and benefiting themselves, you’re gonna be pretty disappointed.


1) Many websites are not news sites. In fact, I'd guess the vast majority. So this point doesn't feel relevant.

2) Sometimes things might be useful to people other than you. The world doesn't exist to cater to your needs, and referring to things that aren't exactly what you want as cancerous is childish.


In response to your second point, please tell me who benefits from the 20mb of cruft loaded when accessing the page?

Aside from the marketing monkeys that put it all there in the first place?


The reader who gets to read the page for free in exchange for that marketing cruft being there.


By definition, the reader gets less value than the marketing monkeys.


Because you should be comparing the aggregate of all readers versus the content publisher.

Also, what’s with calling people monkeys? I’m sure many coders here produce stuff that is as questionable in value as what ad executives do.

People need to put food on the table, and it's abhorrent to think of yourself as better than another person because you build data pipelines in Scala that the universe doesn't give a shit about.


Both sides of a transaction value the thing they get more than the thing they give.


And the writers and editors and other staff members whose salaries are paid by that stuff.


And of those 10kb, one line is the actual content and the rest is ELI5. In their defense, one can say that news websites are predominantly compatible with noscript.


hyperbole much?


As much as it "feels" immature, it is still an understandable analogy, regardless of whether you're a fanatic. As other users pointed out, it has become the norm for software/webdev houses to convert what is mostly static content into weird, flashy, bloated SPAs that load a ridiculous amount of resources for little content. I see it in my peers too, and I saw it even when I was attending college. Frankly, I think it's immature to pretend this is not a problem, as some seem to suggest.


Most webpages that load megabytes of bullshit today would be much better for everyone, developers, operators, hosting, and end users, if they just didn’t do that.


Sure there is a place for nuance, but there is also a place for bold writing without all the pussyfooting. Exclusively using either style makes it boring for the reader. We need the sweet with the sour ;)

>You can celebrate something you like without having to make the alternative a disease, sheesh.

You can, but would you be equally successful at it?


How accurate is this list? I see it mentions that visiting https://danluu.com/ downloads only 9.1 kB but when I actually visit this website with Google Chrome, in the Developer Tools' Network tab, I see that a total of 21.7 kB is transferred.

  Name          Status  Type      Size
  ------------  ------  --------  --------
  danluu.com    200     document    2.5 kB
  analytics.js  200     script     18.9 kB
  collect       200     xhr          67 B
  favicon.ico   404     text/html   266 B
Shameless plug: I maintain a lightweight home page myself at https://susam.in/ and I try to keep it as lean as possible without compromising on some custom formatting I care about. So far I have got the transfer size down to less than 6 kB.


Nice page. I only miss Firefox reader mode on these simple pages (for my preferred font size, and dark mode). I wonder if it's possible to hint to the browser that it should be available, even if the markup is simple?

Ah, the late 90s and early 00s when we still had user.css (in a meaningful sense).


There's a way to force reader mode. Add "about:reader?url=" before your URL.


Thank you. But no easy way to force the button/option to appear for visitors, as the author of the page?


If you just want the button on all websites, get Firefox to provide that option for you. No need to push that onto websites.


There are vestiges of user.css like the Style sheet option in Safari. [1] I do wish they were better supported, and that user styles automatically had priority over webpage styles.

[1]: https://support.apple.com/guide/safari/advanced-ibrw1075/mac


6 kB fits in the initial congestion window (first round trip) for TCP. [1]

Does an analogous window exist for QUIC and HTTP/3?

[1]: https://hpbn.co/building-blocks-of-tcp/#cwnd-growth


> Initial congestion window: 10 segments (10 x 1460 bytes ~= 14kb)

Yes, my goal (very hard to do sometimes) is to fit all of HTML + Push CSS within 14kb
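Not necessarily what you're doing, but one common way to chase that 14kB budget is to inline the critical CSS and let the rest load without blocking; a rough sketch (the /rest.css path is made up):

  <head>
    <!-- A kB or two of above-the-fold rules inlined, so the first
         ~14kB round trip can already paint something useful. -->
    <style>body{margin:0;font:16px/1.5 sans-serif}main{max-width:40em;margin:auto}</style>
    <!-- Remaining styles arrive later without blocking the first render. -->
    <link rel="preload" href="/rest.css" as="style" onload="this.rel='stylesheet'">
    <noscript><link rel="stylesheet" href="/rest.css"></noscript>
  </head>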


Are you not using an ad blocker? danluu.com has google-analytics, which is a very large script.

Please use uBlock Origin.


I don't think an ad blocker should be assumed when calculating the size of a web page.


It would depend on precisely what you're measuring. If you're concerned about page loading speeds, then resources that don't block rendering or functionality (like, I assume, a proper GA integration) might not be worth considering. But if you're worried about, say, bandwidth usage, then of course you'd want to count everything.


This is true, although any resources loaded introduce CPU overhead, which can be non-negligible, especially on mobile.


I do use uBlock Origin in general. However, I disabled it while verifying if the download size claimed in this post is accurate.


Ad blockers already don't work. They're largely useless. If you use one these days, you already know why.


I don't know why. Can you explain?


Because of the large and rapidly-increasing number of sites that just refuse to work if you are running an ad blocker, unless you manually disable it first.


I think I can count on my fingers the number of instances of this that I've seen in my many years of using an ad blocker, and I don't even remember the last time it happened. To claim that some tiny percentage of websites doing this makes ad blockers "useless" is completely ridiculous.


Can you link any site that doesn't work? It is easy enough to try and get past anti-adblock with uBlock Origin.

Try uBlock Origin. Don't use stuff like 'Adblock Plus/Pro'.


they are LARGELY useful... they do what they say on the tin, at least unless you visit some super aggressive site which uses anti-adblock, but, then again, you can conf up an anti-anti-adblock ;)


5c hint: a little bit of padding and max-width on your content would make it far more readable!


Your page looks very nice, I almost want to duplicate my own website into a format that's just a list of links and titles.


My personal fav: 148kb, $530B company. https://www.berkshirehathaway.com


Just for your information, from their legal page: "...linking to this website without written permission is prohibited." Not sure what is meant by this, but I found it funny.


That same sentence starts (paraphrasing) "[Copying or giving out of any stuff gotten on this domain ... is banned]" so merely quoting the legal terms is also "illegal".


IANAL

What I think they mean by this is that you shouldn't link to resources on their website to make it seem like they endorse your (product, website, whatever).


Imagine if we all wrote?


HN hug of love, by snail mail. Buffet Bro would get so busy BRK share price might drop a few bucks due to this alone ;-)


I assume you went through the proper channels to get written permission to make that link.

From the legal disclaimer at the bottom:

> linking to this website without written permission is prohibited


How enforceable is this? And what's the likelihood anyone would try to enforce it?


Probably as enforceable as having "looking at me is prohibited" written on your shirt while walking down the street.


Want to tell them you like it? Better buy a stamp.

"If you have any comments about our WEB page, you can write us at the address shown above. However, due to the limited number of personnel in our corporate office, we are unable to provide a direct response."

I see they have a link to "Berkshire Activewear". Now that's a much much more heavyweight page.


Copyright © 1978-2020 Berkshire Hathaway Inc.

Can it be assumed that this same website has been in place since 1978? Obviously not exactly like it is now, but probably not far off.


HTML being from the early 90s or so, I wanna say "no."

I wonder what the logic for the 1978 date is. It's hard for me to believe they had any reasonably connected predecessor of this in 1978.


It’s the copyright for the entire website. There’s a letter to shareholders written March 14, 1978 - https://www.berkshirehathaway.com/letters/1977.html


I'd assume not, given that Tim Berners-Lee hadn't invented HTTP yet.

But, the website looks rather similar to how it did back in 2001 (with the recognizable two-column list of bullet points):

https://web.archive.org/web/20011129002047/https://www.berks...


And it has an ad on it. (Boy do I miss the days of Google's text only ads)


Seems impressive but considering most holding companies have no website maybe it’s bloated


and it has an ad for geico


Including an ad, no less.


That's not completely fair since they are an investment holding company.

How about you average their subsidiary web pages? Start with DQ.com (Dairy Queen)


I was once asked to debug an extremely slow-loading page. In the first 5 minutes it was evident that the client was doing things it wasn't supposed to do. It was downloading 300MB worth of resources to show the webpage. Incorrect implementations and inefficient use of libraries are the reason we're seeing bloated websites all over the web.


> It was downloading 300MB worth of resources to show the webpage. Incorrect implementations and inefficient use of libraries are the reason we're seeing bloated websites all over the web

In situations like that, it's right and proper to ask who built the site. Then shake your head with absolute contempt.


I've had to fix web apps like that, and no, that's not always right and proper. One example I can give is a tool which loaded 50MB of deeply nested JSON, because when it was written 5 years ago, the payload per item was 80% smaller and the company had 0.1% the number of items.

The correct response is to work out who was responsible for maintaining the site for the past five years.


and follow them around from job to job, giving yourself endless contracting work cleaning up their messes!


Here's my example of a non-trivial product landing page weighing in at 0.3MB transferred:

https://www.checkbot.io/

Includes analytics, chat client, big product screenshot, theming and payment integration, so you can still do a lot well within 1MB.


KUDOS - That's pretty darn impressive alright.


0.006MB on brave :)


Thanks for the example. Looks awesome! Great work.


It's really really good, congrats!


In 2005 I had to fill in a form to request a page size exception for our election results map on the BBC News site. The page weight was going to be just a little under 400kb. At the time the size considered acceptable for the whole page (images and all) was 75kb.


kb or kB?

Assuming kB and dialup, which was 83% of the US at the time (easiest stat to quickly find), that was a difference of about 23s vs 121s load time.

Edit: Seems like the UK would have been about 50/50 around that time according to this article: https://www.independent.co.uk/news/business/news/broadband-o...


Yes, sorry, kB. If I recall correctly our argument was that because after the initial load we only needed to poll occasionally for small data updates users would probably save time/ bandwidth overall (a familiar argument for the present day SPA advocates)


As far as my math goes, 75k (bits or bytes) on dialup (the usual speed for that time being 56.6k) is either 1.3s or 10.6s. Either was acceptable at the time, since browsers loaded text first and rendered images as they arrived.
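(Spelling out the arithmetic, taking "56.6k" as 56,600 bit/s and ignoring protocol overhead:)

  75 kilobits        =  75,000 bits  ->  75,000 / 56,600 ≈  1.3 s
  75 kilobytes (× 8) = 600,000 bits  -> 600,000 / 56,600 ≈ 10.6 s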


In theory yeah. In practice, 32 was about the best I ever managed and 24-26 was typical.


I believe this is placing emphasis on entirely the wrong thing.

Page size does not matter. Render time/time 'til functional does.

Stripe.com is 1.2MB fully loaded but is pretty much all non-blocking, so it feels as fast as a very small site. By the time you've scrolled to the bottom of the homepage it's loaded >2MB. They convert pretty well, and my experience is that Stripe knows what they are doing better than most. So what should we be optimising for? Total page size? Or a lightning-fast experience with non-blocking content which can be as big as we require?

IMO developers need to optimise for building fast sites, not light ones for the sake of being light.


1.2mb will only be fast on fast hardware and a fast network. You’re not accounting for p90 load times. Targeting small page size is a better indicator of audience wide performance


Maybe we should optimize for both? Small page size is irrelevant if the loaded js renders the page unuseable for seconds, but the same thing is true on slower connections if you have to load tons of data in the first place.


yar, maybe it should've been about js size, ideally closer to the 200kb-of-js club


I'd be down for that, but without the "k" :)


Ahh let the bikeshedding continue.

In these situations all I want to say is "Who cares".

Optimization matters when it actually solves a problem, before that it's just wasted effort.

I don't hear the general public yelling from the rooftops "The web is too slow! Developers are building too fast! I wish we could go backwards!"


If the general public even knew whom to complain to, they would surely have yelled loudly. For now I hear non-tech people say "Oh, our internet is slow even though we pay a lot of money".

Meanwhile, valley bros keep helpfully reminding me to upgrade my computer to something reasonably 2020-ish.


Alternatively, they say things like "I hate how slow ads are." It's not the ads that are slow. It's all the tracking and libraries built on top of it.


I wonder if the answer is better visualization tools? We have fairly detailed dashboards that we as developers can look at, but there is no “check engine light” version for people who just want things to “go”.


How would you make a check engine light actionable for the visitor of a website? They already notice that it's slow and their fans start spinning.


The general public likes to do things on the web other than reading minimalist hypertext. They like real-time chat and video and images and playing games in the browser. They would rather be able to do the things they like faster and more efficiently than abandon the last 20 years of technological progress.


I had real-time video chat and played browser games 15 years ago, on hardware a fraction of the power of that today.

It seems a lot of the newer generation just doesn't realise what was already possible before, and is only infatuated with inefficient reinventions.


Those things are all great and probably far better optimised than your average should-be-static site serving bloat.


The overwhelmingly vast majority of things on the World Wide Web are things that are perfectly feasible - and indeed pretty darn trivial - with minimalist hypertext.


The general public likes to do things other than reading minimal hypertext. They don't demand that we shoehorn everything into the web.


I simply disagree that this relates to bundle size. When people (I know) complain about slow internet it's typically in relation to high bandwidth streaming (Netflix, Youtube, etc...)

This is said by someone who is not a "valley bro" and often lives in rural Canada.


To be frank, that's probably because you're not listening.

YouTube redesign, Twitter redesign, and one other huge site I can't remember at the moment all had plenty of complaints about how much slower --- and less featured --- they were. The average user doesn't know a static page has been replaced with a bloated SPA, but can sure feel the difference.

The web has declined for sure, and it's precisely because of ignorant developers with attitudes like yours.


Twitter's web version is an abomination. Every time you click on a link to read a tweet, you get an error. Every time. You need to force a refresh to get the content. Its an absolute joke.

To any twitter "engineers" on HN, sort it or abort it.


What's even worse is that the much better static-page "mobile" version which I've been using (mobile.twitter.com), and you can only get to it if you set your UA to an older browser, now has this ominous "warning": "This is the legacy version of twitter.com. We will be shutting it down on 15 December 2020."

It's deeply disturbing that if they go full-SPA like YouTube did, it seems not a single person working there realises the insanity of somehow needing so much software (and hardware) just to read 140-character long microblogposts, something that would've been possible with hardware and software from 30 years ago. If you look at their minimum browser requirements, then look at the minimum OS requirements from the browser, and go back to hardware requirements from that, you'll probably end up with something that seems ridiculously overpowered for the task, yet with that "minimum supported" system the site will still be slow as molasses and worse than the static page. Seriously, what the fuck!?


Go to about:serviceworkers (for Firefox, other browsers might have it elsewhere) and remove anything related to twitter - or better yet disable service workers entirely. Not saying that this is acceptable, just providing a workaround.


> The web has declined for sure, and it's precisely because of ignorant developers with attitudes like yours.

Not only due to developers. Designers have a hand in this too. Example: on many sites (Twitter, Imgur, etc.), it is not possible to simply zoom in an image without jumping to a lot of hoops. You hover the mouse over the image, the pointer becomes a magnifying glass, you click and...

The image blows up to fill the screen until it hits an overly large and useless border that some daft designer thought looked cute. There is no way to zoom in any further; that is blocked. So zooming to see a part of the image in full detail is out of the question. What's worse, if you have a small screen, likely the reason you wanted to zoom in the first place, the zoomed image is actually smaller than the original. Great!

So, I've wasted my data plan to download a large high-resolution image and a bunch of javascript libraries that make sure I don't get to enjoy that resolution. I can't remember for sure, but I bet this already worked fine in Mosaic in 1994. Not anymore. And why? What benefit does this bring? Other than that it looked nice on the designer's computer, I mean.


I mean, I don't like a lot of these redesigns. However, I think you're mixing up correlation and causation.


> In these situations all I want to say is "Who cares".

This list has a second hidden benefit: Typically, I find that those who have very lightweight pages have more interesting things to say.

That being said, that's not a hard and fast rule, as I both have a lightweight page, and not much that's interesting to say.


This is also true for how the site "looks". The less flashy it is, the more interesting the content might be to read. They don't bother with SEO/clicks.


The best content is always on some boomer's custom-made forum software from 2009 that they use to post in-depth articles about obscure topics.


This feels like both an anecdote and a straw man.

I can point to two blogs on either side that have a lot to offer.

Large bundle: https://www.joshwcomeau.com/

Small Bundle: https://www.eugenewei.com/


> This feels like both an anecdote

Definitely. This is just something I've noticed.

> and a straw man.

Wait what? I was just bringing up something I've noticed. How is this a straw man? As I said before, this is by no means a firm rule, just something that I noticed is frequently the case.

PS. Thanks for linking those blogs, I've added both to my reading list.


Agreed. Not a general rule but as a rule of thumb it works very well.


Not everyone within earshot of your roof is "the general public". It matters to people on lower-bandwidth connections. It matters in terms of carbon footprints.

And just as someone who grew up horrified if individual pages got over 100kb (or whatever the company rule was), the idea of not caring at all about "page" weight is how my old company wound up passing 20+ megs of JSON around just because it was a little easier.


In my company I don’t care so much about load time since everyone will only load the app once per day, we know and control the network speeds, and anyway, they’re paid to sit there waiting for it to load.

The same thing is not true for the general public.


The public does yell "The web is too slow!"

Except they usually yell "My phone is too slow" when the problem is actually the web that got 3X bloated since they bought their phone.


"Optimization matters when it actually solves a problem" - agree.

With the major disclaimer that if your website serves ANY commercial utility for you personally or for your business, website speed is hugely important, with a clear cause/effect impact on engagement, conversion rate, bounce rate, etc.


I always think, the one feature everybody wants is speed. The same page but faster is always better.


No, not always. Users typically prefer waiting a few secs after entering their criteria rather than getting instant filtered and sorted search results, especially on price-sensitive services like price comparators, but basically everywhere the website is supposed to have the user's financial interest at heart.

Makes them feel like the site actually does some calculations, even if everything is cached in the background.

See travel agencies, airline comparators, train booking services, insurance benchmarks or credit simulations.


Well also, my site is built with Gatsby and comes in at <1MB (just) but I bet if you told people it was Gatsby you’d get the stink-eye. “Ugh, dirty framework, what’s wrong with no-JS?”

I mean never mind that my internal page links load in <50ms thanks to prefetching and all the other smart stuff that Gatsby does.


Prefetching (or not) should be left up to the browser.


I don't know about you but I see a lot of complaints on twitter about the new stories feature. That isn't a complaint about things being slow but it does seem to suggest users want to go backwards wrt new features that are unneeded.


I hear my users constantly yelling for more features. Not one has complained about the page load (5mb uncompressed, plus a delay for auth to initialize).

I'll keep listening to my users.


Because anyone who complains is given this treatment. We don't complain to everybody; we just bounce.


If you're offering a service that is so uncompelling or so undifferentiated that page load speed is a driving factor, then you should certainly focus on page load speed.

On the other hand, Gmail regularly takes 10s+ to load and yet most users are glued to it.

If you're a SaaS business, your marketing page is probably not your app. The people using your app care about usability, and page load speed is rarely the driving factor. Especially with SPAs.


I hear you saying that if my service is compelling enough, people will tolerate the horrendous page loads.


That is absolutely correct. Ask yourself: Will my customers be happier with additional features X, Y, and Z, or a faster page load?

Better yet, ask your actual customers.


hahaha, brilliant


Maybe the initial load is 10+s, but clicking into and out of emails is fast, and that is what matters. If that took 10s it would be unusable.


> We don't complain to everybody

Except on forums. We complain there :P


Interesting.

What I've found is that the users I have that complain are by far the minority of my users. They ask for every little whim, many of them conflicting. If I'd been listening to them my website would be a mess right now. Alexa top 5k. 7 million uniques / day. 550kb. Less than 2 years old.

Every time I implement a new feature, there is no significant change in my site's trajectory. Improving latency by adding an auto-scaler, however, has led to a massive uptick in usage in the past 90 days.


Perhaps Bradley Taunt is bikeshedding, but it doesn't mean he's not pulling in the right direction.

People obviously care because people obviously want their browsers to load content quickly. No one likes waiting for webpages to load.

How do you decide when a website/webapp is fast enough and you've "solved the problem"? On what range of devices and internet speeds do you test on?

As my family's personal IT assistant, I can't tell you how often I get the complaint that "my phone is too slow" or "my laptop is slow and heats up" when all they are doing is browsing popular websites. And we live in a developed nation with decent internet -- I can't imagine how awful it must be when you don't have those luxuries.

I think your comment is lacking severely in perspective, and the mindset you've demonstrated to be particularly bad for the web. Not to mention snide and dismissive.


Well, I agree and disagree :) as always, it depends. I think it's clear that, especially on slow mobile connections, people will close a site quickly if they don't see results, and this in turn yields lower sales, for example. Also, Google will factor speed into its ranking algorithm, so you will lose traffic if you don't optimize. At the same time, people at home with broadband couldn't care less.

That's actually a core reason for slow pages: people in offices with large monitors and huge bandwidth develop those sites.


I believe in this case it illustrates that lightweight is possible.

I think it might be like bicycle parts. It is ridiculous that parts are sold with a pricetag PLUS a weight in grams.

I mean, you could save the equivalent of $100 by wearing a short-sleeved shirt.

But those obsessive people end up making most bicycles lighter weight, and thus bike rides for normal people are more enjoyable.


The public continues to smoke 40-50 years since it’s been shown that it causes cancer. The public isn’t a good indicator.


What about the problem of CO2 / MB ?


I dunno. That time the user spends waiting for the sluggish website downloading gigabytes of html is time they maybe don't spend reaching for that apple that was air freighted across the world to reach their fridge.

Flippant but I doubt there's any clear correlation between reducing the size of a web page and reducing overall CO2 emissions.


I feel like it'd be more productive to reduce the CO2 per MB (e.g. using a 100% renewable-energy-powered hosting provider, pressuring ISPs to use renewables, pressuring power companies to use renewables, etc.) than to reduce the MB in this case.


Why not both?


That's because you're not listening to them.


I’ve always had a project in mind for a bespoke search engine that only indexes pages smaller than 65kb.


Or a news aggregator that only accepts links to pages with the same restriction.


There is a search engine that tries to do something like this... searches mainly for blogs and whatnot and I’m so sorry I can’t remember the name of it. :(

Ironically you can google about and find it probably.


Is it https://wiby.me? If not, I'd love to know more!


The surprise me feature has yielded some really good links!

"Doom as a tool for system administration" [1], "What can you do with a slide rule?" [2], and of course some vintage 90s personal pages... some of which have been actively maintained to this day (!) [3]

edit: that last page has some incredibly-detailed information about the various cars he's owned [4]

[1]: https://www.cs.unm.edu/~dlchao/flake/doom/

[2]: http://www.math.utah.edu/~pa/sliderules/

[3]: http://www.billswebspace.com/

[4]: http://www.billswebspace.com/124Abarth.html


[1] Is actually a really cool idea. I thought it would just be a joke, but the article makes a good case for it, at least as a POC for a UI.



That is a brilliant site, thank you.


1MB.

When I started writing Web sites (late '90s), we'd get spanked for 50K pages.

I'm not sure it's possible anymore, with a lot of the CSS and JS libraries, let alone the content.

But back then, we didn't have content delivery networks. CDNs make a huge difference.


I used what was probably the first 1MB page on the internet to discover why realloc() was not a smart choice for progressive parsing of content. I showed the problem to the lead dev and he cut the load time by a factor of ten in as many hours. NCSA had a web page where you could list your web server and a couple of sentences about it. Which was great until Netscape came out and everyone was setting up web servers.

This was back when Lycos was news to most people and Alta Vista was just launching. It was a poor man’s Yahoo. I don’t think it was a coincidence that the end of its usefulness came so close to the birth of better alternatives. The ungainliness of the status quo is usually what goads someone into trying something new.


We also had super grainy images, no fonts (besides the default set) and no videos. We can complain about bloat, but I don’t think comparisons to the 90’s are totally appropriate, except as a dunk.


If you note, it wasn't actually a complaint, or a dunk.

It was an observation, and recounting some personal experience.

What is bloat, as far as I'm concerned, is hitting a page with a 4K background video.

If you want a real "I had to walk 2 miles, barefoot" story, I can tell you about writing [C]WAP/WML sites (the old "m.<yourdomain>" sites). I actually considered using XSLT to extract WML from the regular site HTML.


Some of us are older than that


I was going to say that I remember coding towards a 35-50KB limit in the 90s. That'd get you a load time within 10 seconds on a typical modem of the era, if you were lucky! :-) Couple that with only being able to use hex colors at multiples of 00, 33, 66, 99, cc, and ff, as well as having to cater for 640x480 resolutions - great times.


the new internet sucks

priorities all over the place and major regression in the most crucial things


I wish sites like this could take a positive tone instead of a negative one. Referring to sites that don't conform to your pet preferences as cancerous is deeply off-putting to me.


Growing without bounds is kind of a defining trait of cancer. It seems pretty apt to me, although it's certainly also hyperbolic.


1MB in size feels like an attempt to define “performant” without considering... performance. A speed test from an “average” connection that also measures first paint etc. would be much more meaningful: not all megabytes are equal.

I love the idea, just needs a better measurement.


I think that's a little unfair tbh; there are plenty of places that already do that, but they don't take account of the memory footprint on the client's machine etc.

I do agree that 1MB is arbitrary, but it's the right kind of arbitrary IMHO.


Pretty easy to write a one line js loop that makes the page slow as shit.


100 KB of images is not the same as 100 KB of JavaScript. You could technically still create a website that is a 1 KB HTML document that loads almost 1 MB of JavaScript, which is definitely too much.


I think they are looking at whatever loads in that initial Page Load.


How about WebAssembly? What's your size limit for that?


Good question. I don’t know.


Should be 0kb for that


Two suggestions

1. Weight sites by some popularity count. Otherwise, does anyone care if some random small site generates a 0.1k response just to get to the top?

2. Do you plan to continuously verify that the download size is still accurate?


1. Maybe weight websites by Alexa rank? 2. Some Selenium GitHub Action would pretty much automate everything.


Guys, add a search box where people can enter their site & see "if they qualify". If they don't, and enter their site again (and are <1MB), maybe you can add them to a "Recent improvers" list. I think this will motivate even more websites to participate.


Just checked our newest Blazor server-side web administration site in chrome devtools.

12 requests total, 322kB transferred (914kB uncompressed, so either way), finish in 991ms. Request was made between Texas and AWS us-east-1. Server running on a T3a instance. This is .NET Core 3.1 on Windows Server 2019.

Of the 322kB, blazor.server.js is 39.2k and our JS interop shim is 3k. This represents all that is required to communicate between server and client. The rest of the static source is ace code editor, bootstrap, jQuery and open iconic. All remaining interactions happen as deltas over the websocket connection.

Every day that I work with Blazor, I am further convinced that it is the only way to build our web applications moving forward. I don't have a good answer for Blazor server side in mass-scale public-facing scenarios (a la Netflix), but anyone can come up with an edge case to poke holes. Our 'internal' dashboards are actually accessible from the public web and we have not experienced any DDoS or other performance incidents. If something does come up, I will just pop the VM over to the internal network & require VPN access. That said, I really prefer accessing Blazor server side apps as directly as possible due to the latency concern.


That's pretty interesting

So, instead of sending tons of js on the client side that does the job, let's connect via websocket to the server and let the server return HTML with new content dynamically, yea?


Correct. The way Blazor produces DOM updates is pretty interesting. There was an excellent NDC talk that does a deep dive on how all of the internals work:

https://www.youtube.com/watch?v=dCgqTDki-VM

The amount of javascript interop that I have to maintain in order to support a very complex Blazor application is somewhere around 200 lines at the moment.


> Blazor

Does that mean LiveView?

I guess we need a term for that since every framework is using a different name for it.


Here's a common example, from a FAANG company: 72mb and 367 requests to display around 2,000 characters of text. Maybe double that if you count the "hyperlinks."

Single page applications are horrible, and oh-so-common.

https://imgur.com/a/v2rud7T


They really don’t have to be. Unfortunately a lot of them are just built really badly, by people with fast hardware and gigabit networks.

With good code splitting, which is supported basically out of the box by Webpack and all the big front-end frameworks, I would argue a well-written SPA transmits less data in the long run than a fully server-rendered app. The initial payload is bigger by a few hundred KB sure, but after that:

- Like server rendered pages, you still only load pages on demand, thanks to code splitting

- Unlike server-rendered pages, you can cache everything except the data that changes, because it’s JavaScript not HTML.

Yes you’re still making a request for fresh data on every page load, but while the code is cached you need only download the raw data itself. You’d have to be really clumsy to send a JSON payload bigger than a full HTML document containing a rendered representation of the same data.

Of course, this only really applies to apps. Static or infrequently changed content, you’re better off either server rendering or just serving static files.

You can also make the argument that it would be more efficient to serve mostly static HTML, with the bare minimum JavaScript required to fetch new data, but ultimately everything is a trade off :)
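
A minimal sketch of that route-level code splitting - the module paths and the render() export are made up, but the dynamic import() is what Webpack/Rollup/Vite key off to emit a separate chunk per page:

  const routes = {
    '/':        () => import('./pages/home.js'),
    '/reports': () => import('./pages/reports.js'),
  };

  async function navigate(path) {
    const page = await routes[path]();            // chunk fetched on demand, then cached
    page.render(document.getElementById('app'));  // afterwards only fresh data crosses the wire
  }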


Apple marketing page?


72 millibits doesn't sound feasible for even the most minimalist of websites. ;)

Seriously though, this isn't just a "grammar" thing. If you don't know the difference between millibits and megabytes, you probably shouldn't post on the subject of page size (or anything else relating to bandwidth consumption / network performance).


I understand what you are trying to say - proper use of units is important, especially on a technical topic. The SI[0] is simple, everybody can take a look and understand it in a minute and never make a mistake again. But you really don't need to be so condescending.

I still hate it when people flag comments like yours because maybe it sounds "offensive" or something, but you can improve your tone too.

[0] - https://en.wikipedia.org/wiki/International_System_of_Units


Ironically, you seemed to have full (and complete) understanding of my intended meaning. Maybe you shouldn't comment on the use of language unless you have an understanding of linguistic deep structures. ;)


The fact that someone was able to infer a possible alternative meaning just because the literal meaning wasn't credible doesn't make it any more correct.

In this case though, it was the context (1MB Club) that gave the clue. So, no.


Nice! I've gotten so jaded with how big and slow most of the modern web is, that randomly stumbling upon a tiny site prompted me to write about the joy of it on my blog (a much less than 1MB site itself): https://m-chrzan.xyz/blog/small-big-sites.html


Keep in mind that this might discriminate against CJK pages. While it is trivial to design a beautiful webpage in 1MB for a Western audience including CSS design and western fonts, a single CJK compatible webfont (one style/weight) is 500kB-1MB.

My blog is pretty lightweight, but I use Japanese in the about page, and that font on that page blows everything else out on download size. I could subset it just for that line, but consider sites containing entirely CJK content...
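
For what it's worth, the subsetting can also be done declaratively - a hedged CSS sketch (font name, file path and code points are made up); the browser only downloads a face whose unicode-range matches characters actually used on the page:

  @font-face {
    font-family: "NotoSansJP";
    src: url("/fonts/notosansjp-subset.woff2") format("woff2");
    /* just the handful of kana/kanji used on the about page */
    unicode-range: U+3040-309F, U+65E5, U+672C, U+8A9E;
  }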


Why can't you use a system font?


> webfont

Don't.


> These things are a cancerous growth on the web.

I'm a huge fan of minimalism, using gopher:// frequently.

But blanket statements like this seem a little too much.

I'm personally aware of some educational tools that non-techy educators are able to use with just a username and password. A few years ago, in order to deliver tools like that, the process was a physical CD, some key, some installation wizard... and if you had a Mac? Tough luck.

Let's be conscious about bandwidth usage and bloat, but we are no longer in the 90s and, even if imperfect, progress has happened in tech.


Sure, many sites are bloated, but I can't help but feel the sentiment is somewhat dramatic. Cancerous growth? "Client-side queries!?"

Performance is just one piece of the UX puzzle. Users don't really care about bundle size... They care if your site provides value in a reasonable amount of time.


Stylistic nitpick: don't use the word cancerous for things that don't make too much of a difference for anything but your "ethos". Real cancer kills people (and animals too). Bloated websites might be annoying, but you will survive them even if you're the 1337est hacker around.


I can definitely imagine situations where being able to load a page quickly is indeed a matter of life or death. This is indeed part of the design rationale for the min css framework: https://mincss.com/index.html

> Min is used by over 65,000 people in 195+ countries, from North Korea to South Sudan to Mongolia to Somalia. Think your software is critical? Try a webapp keeping you alive in a warzone.

Now, whether or not this actually happens is a good question, but it does seem like a plausible possibility.


> I can definitely imagine situations where being able to load a page quickly is indeed a matter of life or death.

Can you share some such situations?


One example that comes to mind even here in the US would be getting emergency info out (evacuation orders, contact info, etc.) during natural disasters like wildfires and storms. A lot of rural areas are still stuck with dial-up and low-bandwidth cellular data at best, so every kilobyte matters. This also applies even in places with abundant cellular coverage; congestion can and often does cause issues with thousands or millions of people trying to get info or reach out to emergency contacts all at once.

Min's rationale further brings to mind things like disseminating info on hospitals, shelters, etc. in war zones, often to people who at most have an ancient (by first-world standards) phone with expensive and slow network connectivity.


The problem with bloated websites is that the cumulative time lost while waiting for them to load is (across many users) measured in many, many years. When pages like Google/Facebook/... shave a couple of hundredths of a second off load time, that means years of real people's time saved.


I know it sucks waiting for a website to load, but it's not like your life depends on it. You can still be an overall happy person while doing so, time is not "lost".

Performance metrics are better when put into perspective.


Had to upvote, since I'm currently building a quite complex web application (using Svelte) for which I'm trying to restrict myself to 1MB of uncompressed JS (including SVG paths).

To do this, there are almost no runtime dependencies - while I "lost" a few months building things I could have borrowed from bigger libraries (from parts of rxjs to much bigger component libraries/design systems), development speed is now excellent!

Currently sitting at 750kb and close to feature completion so I'm pretty sure that it will be achieved.


The submissions page for this project is 1.8 MB, which seems kind of against the spirit of the endeavor.


There is a problem I’ve wrestled with at pretty much every webapp job I’ve had.

People decide it's more economical to add more stuff to existing pages than it is to introduce a new page. As time progresses, the number of pages goes up logarithmically with the amount of functionality.

This hits on two fronts. One is pressure from the non technical people, who want the cost of new stuff to be O(1). “This is so simple. Why do you have to make it a big deal?” Makes sense the first time. Makes no sense at all the 58th time.

The other one is that we don’t have an easy way to carve up functionality and move it around efficiently. Webpack and friends try to solve this problem, but it leaves a lot to be desired. It’s easier logistically to have the same giant couple of JS bundles where almost every bit of logic is available everywhere. Which makes it more tempting to expose all functionality everywhere.

I discovered at one point that the quite effective strategy I used for maintaining mature wikis is essentially applying the B-Tree algorithm by hand, with depth weighted by recency (eventually all outdated documentation is relegated to the leaves, or updated in order to avoid relegation).

I have a hunch you could do much the same for websites, but I haven’t worked out how to do it in a spirit of refactoring (stability is the first word, but progress has the final say).

There’s a related issue with Conway’s law, where the links on the main page or in the footer are always proportional to the number of divisions in the organization, and the number of initiatives currently in progress or recently finished. This vastly increases the decision tree even when you avoid having a giant landing page. I’ve only seen one solution to this and that is to treat these links as what they are: ads. Ads are either short lived or get rotated frequently to avoid overwhelming the audience.


I guess the leaderboard is missing "packet.city", an entire website in a single TCP packet, weighing in at 1.2kb.

Site: http://packet.city/

Source: https://github.com/diracdeltas/FastestWebsiteEver


Loud sound warning would be nice.


Takes the same 1 second to load that every website takes.


Yeah agree, I only mentioned it facetiously to make fun of the idea of a list ranked by download size.

CDNs, network speed, server load, page render time all have a greater effect than the fractions of kilobytes that differentiate these sites...


Unpopular truth: Lowering page weight doesn't often help web performance.

https://www.speedshop.co/2015/11/05/page-weight-doesnt-matte...

> Bandwidth is not the problem, and the performance of the web will not improve as broadband access becomes more widespread.

> The problem is latency. Most of our networking protocols require a lot of round-trips. Each of those round trips imposes a latency penalty. Latency is governed, at the end of the day, by the speed of light. Which means that latency isn’t going anywhere.

Of course, this will get downvoted on an all-text forum. That won't make it less true.
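
Back-of-the-envelope to make that concrete (assuming a cold connection and TLS 1.2, no QUIC/0-RTT):

  DNS lookup         ~1 RTT
  TCP handshake       1 RTT
  TLS 1.2 handshake   2 RTT
  HTTP request        1 RTT
  -------------------------
  ~5 RTTs before the first byte: at 100 ms round-trip latency that is
  roughly 500 ms, whether the page is 10 kB or 1 MB.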


In some ways it's interesting to think that a modern website is larger than most operating systems used to be.

I can understand why (frameworks, rich assets, etc) but it puts us in an interesting place.

I wonder what the web will look like in 2030?


Indeed, I remember running QNX from a floppy disk (1.44MB) complete with windowing system and utilities.


Yeah, that sounds about right. I remember MS DOS used to fit on a 720k floppy -- back when they were literally floppy...


Reinventing QNX will be state of the art for decades to come.



Unpopular opinion: I agree that all else equal a smaller bundle is better, but I don’t think “cancerous growth” gets at the core of why bundle sizes are growing.

Some part of the growth is bloat(analytics frameworks, janky animations, needless SPAs), but another factor driving bundle size is the fact that the raw web is mostly unaesthetic and unnatural to use, HN excepted.

The DOM being what it is, you need a lot of code to make it look like a native app, and I think it’s a noble goal to get as close as possible to native even if bundle sizes grow.


1MB seems to be a bit much...

A few years ago I wrote a little blog engine which is based completely on JS functionality (including i18n and page transition animations) and doesn't work without it. So much so that no search engine ever finds the posts (my mistake; done right, that shouldn't be a problem nowadays).

And yet, loading a page is less than 1 MB. In fact, loading the HTML and JS only, the page would fit in 100kb. The Webfonts are responsible for most of the traffic.


Project Gemini was created for a similar reason.

https://gemini.circumlunar.space


This seems to be a common sentiment among developers today but honestly bloat doesn't bother me too much. I do want good performance from my websites, but with fast internet connections, who cares if it is 1KB or 1MB?

Also, if you can develop a website for half the cost with twice the size, isn't that the right thing to do? Developer time is the expensive resource, and bandwidth is cheap.


1. Not everyone in the world has a fast internet connection, or even a reliable one. Smaller pages have a better chance of making it through congestion or physical layer problems.

2. Even when speed is not an issue, many people are billed by the megabyte. My phone's data plan for example. I almost never use it for anything more than email and a handful of very trusted sites.


A fair assessment for an MVP in the US, but you fail to scale globally if your site is bloated as not all countries and territories have the access to data at low cost the US has. Definitely worth it to optimize and reduce bloat after establishing your product's place in the market (or develop from the get-go with reducing bloat in mind).


> have the access to data at low cost the US has

I don't know where you heard that; the US has some of the most expensive and slowest Internet access in the developed world.

https://www.numbeo.com/cost-of-living/country_price_rankings...

https://en.wikipedia.org/wiki/List_of_countries_by_Internet_...


I don't think this chart really tells much, because most services are hosted in the US, which some of these countries have little or no direct link to (e.g. Thailand is 6th fastest in APAC on average speed, but the whole country has a total of 30Gbps of direct links to the US and pretty much has to route everything via HK or SG), which sometimes results in horrendous latency (~500ms) or speed (< 1 Mbps). It's fast at the CDN edge where these surveys are taken, but page sizes still matter a lot.


This makes me happy. My latest project (unreleased) has its entire front page - including CSS, Javascript, and images - served in a single request with an unminified response payload of just over 4k.

Part of what made it possible was the Mu CSS Framework: https://bafs.github.io/mu/


Thanks, especially for:

https://john-doe.neocities.org/

I'm just looking at the modern front-end web again (we're using React and Ant) - and I've Stockholmed myself into considering React with Tailwind as a "lightweight" alternative... Nice to be reminded of truly simple HTML+CSS.


ah I like this one, thanks!


The first item on the list, sjmulder.nl/en/, takes up to 2 seconds to load for me, with TLS alone being over 500ms.

Is this just some sort of self-hosting?


Craigslist took me almost 8 seconds to load.


It loaded in under 200ms for me?


I agree. Too much bloat. Too much using a Technology X hammer when a screwdriver is what's needed.

That said, the metric here is too blunt, too broad.

1) We need to call out avoidable bloat. A simple example: too often I'll visit a site where, say, the hero image is full width and full height, and the same 300-500k+ image that's served to desktop is served to mobile. No media queries (if they're using background-image), no sizes and srcset if it's an image tag. (A sketch of the fix follows after this list.)

2) It's about expectations. Some sites are naturally image-heavy (e.g., photography). Within reason, that's acceptable. Some use of lazy load is better than paging, and such.

3) And do we throw the baby out with the bathwater? If a site is slightly bloated but accessible, I think there should be redeeming points for that. That is, you had X or Y amount of resources and you made #a11y a priority, at say the expense of trimming some bloat. I'm okay with that.
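
Hedged sketch of the fix from (1) - file names and class are made up; with srcset/sizes the browser picks the smallest candidate that fits the layout, and with the media query only the matching background asset is ever downloaded:

  <img src="hero-800.jpg"
       srcset="hero-480.jpg 480w, hero-800.jpg 800w, hero-1600.jpg 1600w"
       sizes="100vw" alt="Hero">

  <style>
    .hero { background-image: url(hero-480.jpg); }
    @media (min-width: 800px) { .hero { background-image: url(hero-1600.jpg); } }
  </style>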


This reminds me of people who buy powerful machines, then run the most threadbare Linux environment possible and freak out anytime the machine's RAM and CPU aren't completely idle.

I do miss the lightweight early/mid 90s web sometimes, but I have no interest in artificial constraints that treat smallness as a virtue in and of itself.


guilty as charged


I like to limit myself to around 100k for simple, interactive pages.


I hate how people have to use "cancerous" everywhere. Out of respect please use a different word.


I like the use of CSS variables for the gray bars:

  width: calc(var(--data-size)/1000 * 100%);
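
Presumably (an assumption on my part - element and numbers are made up) each row just sets the variable inline, so the bar width tracks the page weight:

  <li class="bar" style="--data-size: 212">example.com - 212 kB</li>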


How much of these bloat libraries / frameworks does the browser cache, vs how much is downloaded fresh each time (looking across multiple sites)?

Are there no tools to extract only the portions of js / css etc that your site actually uses, and just host that mini package yourself?


Nice to know of CNN lite, had never heard of it before.


Indeed. Also, much less of an attention sink, and lynx-friendly. Are there more news websites like this?


This site has a list with several others: https://greycoder.com/a-list-of-text-only-new-sites/


NPR has one on the list. I don't know about any others.


Wow yeah, this made my day! It's nice not waiting 10 seconds for a video I don't want to watch to auto start playing (just so I can stop it).


https://restofworld.org/ initially loads <1M. But when you scroll down, it balloons to 3M as it loads more images. Should that count?


beautiful webpage either way, and remarkably small for all the media it includes.


Using a large amount of memory is totally fine for an enterprise SaaS app that a user is going to spend multiple hours a day in. There isn’t a one-size-fits-all web. Not every app is a blog or serving text content.


Congrats on the launch! I've been wanting to do this forever, in fact still own 100kb.org which sits there unused in my cemetery of domains (working on other stuff!). As others say 1MB feels too generous.


It would be useful to tag or categorize those links. I can't imagine I'm the only one who won't click on some domain I don't recognize that doesn't suggest its purpose.


I have a blog generated using Pelican, using a slightly modified version of the pelican-bootstrap3 theme.

This makes me feel pretty guilty when I realize that my blog is loading 118.18 kB of minified (!!) css, and almost 200 kB of web fonts, even though the front page of my blog still doesn't exceed 1MB overall.

(And I actually don't like the current theme of my blog that much anyway. I have a hard time picking a clean, "brutalist" theme.)


I love the idea and I think it is a very reasonable, although generous limit. Excessively generous, as @yellowapple called it, maybe. But I am indeed surprised to find not only tiny blogs and personal homepages on this list, but also government (gov.scot, gov.uk) and other resource intensive pages.

Despite all that, I propose an additional, even more exclusive club, because the internet needs a break. I propose 250kb compressed first-load size.


On Bear Blog all the blog pages are roughly 4kb. https://bearblog.dev


Instead of being an absolute number it should be a ratio of useful information/size, or a signal to noise ratio. It's quite hard to calculate in general, but it could probably be done by rendering the page something like Firefox's reader mode and taking that as the signal and the rest as the noise.
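
A rough sketch of that heuristic, assuming Node with the jsdom and @mozilla/readability packages (it only considers the HTML itself, not scripts/images/fonts, so it is very crude):

  const { JSDOM } = require('jsdom');
  const { Readability } = require('@mozilla/readability');

  function signalToNoise(html, url) {
    const dom = new JSDOM(html, { url });
    const article = new Readability(dom.window.document).parse();
    const signal = article ? article.textContent.length : 0;
    return signal / html.length; // readable text vs. total markup, 0..1
  }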


Not much interested in arbitrary size limits. Far more interested in the content/size ratio. Especially if given the option to see more and weightier stuff if interested. My favorite example, that I use daily, is:

https://text.npr.org/


Hopefully on-topic:

Can anyone recommend a drop-in, minimal stylesheet that will make basic HTML (without classes/ids) format well on both web and mobile?

In particular, by default stuff is really small on mobile screens, you have to zoom around.

Basically bootstrap, but more lightweight and without the effort of learning all the classnames etc.


I use the Hugo theme Even - https://themes.gohugo.io/hugo-theme-even/ - which seems pretty decent in terms of performance. There's some heavy loading of fonts and JS, but it still comes in below 1MB.

Maybe you can derive some inspiration from it.


That's nice, thank you.


> In particular, by default stuff is really small on mobile screens, you have to zoom around.

That's just because mobile browsers are intentionally broken by default. All you need to get them back to sanity is the following element in your <head>:

<meta name="viewport" content="width=device-width, initial-scale=1">


It's a refreshing initiative. Personally, for a few years now I have been using DOOM as a unit of measure for bloated websites:

https://twitter.com/xbs/status/626781529054834688


I wonder if it is possible to write an extension that would let you allocate a user-defined "data volume budget" to a site when the browser starts to load it, hard-cut the process after this budget has been exhausted, and render the page using whatever it managed to download.


This is interesting, but maybe instead of a hard cut we could serve different website versions depending on the budget (e.g. the same way we do with srcset for images).

Client requests page with 500kb budget? Send text-version with tiny images. Client requests page without any budget? Send full rich version, with full-res images.
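
There's no real per-request budget mechanism today, but the Save-Data / Network Information hints get part of the way there - a sketch (navigator.connection is not available in every browser; the element id and file names are made up):

  const conn = navigator.connection || {};
  const lowBudget = conn.saveData || ['slow-2g', '2g'].includes(conn.effectiveType);
  document.querySelector('#hero').src = lowBudget ? 'hero-small.jpg' : 'hero-full.jpg';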


Shout out to https://1mb.co/


Is there an award for the opposite? Sites using more than 3 GB per visit? I'd win that one.


I previously suggested something a little bit along these lines. But what I thought would be better would be a distributed database and a browser extension that would measure load times (or maybe page size) and stability and automatically update it.


One truly useful service. I would love a search syntax feature for Google or DDG where one could specify the maximum size of the returned sites. This would definitely improve the results, as in many cases the most useful resources are text-only.


Would love to see a 1MB club for websites with a requirement that they make use of FA, CSS & JS all at once. Or perhaps a 1MB SPA club.

This is interesting though. Wonder what the average newly founded website sits at these days.


I checked out a Wordpress site I run, and it comes in at 374K, when I have put zero effort into optimizing it. I feel like 100K would be a far more meaningful bar to set for this kind of collection.


Would be awesome to see a breakdown of the tech on the sites. I just checked my "Svelte site"; data transferred is 1.1MB. The biggest hog seems to be Google Ads (I'm sorry).


Shameless plug: my page is a fully-featured message board with tagging and stats, private-key-based accounts, and optional JavaScript features; the homepage is much smaller :)


Don't forget to plug your page at the end so we can check it out :D


I thought that is what profile is for?


Would be better if ranked by size:content instead of size alone.


1MB is several novels.

That being said, one of the top sites is text.npr.org. It's cool NPR has that (from an accessibility standpoint), but it shouldn't get any awards for compact design.


The culture change around the importance of performance is frustrating; new developers all seem to love their massive JavaScript libraries. Giving a trivial function such as a search box on a webpage or some animation results in a convoluted 10MB JavaScript monster from hell. If they run into anything remotely challenging they'll simply add a library to do it for them, creating more bloat. It used to be the opposite. When asked "why not just some jQuery and DOM?" they'll reply it's too hard. React, webpack, node etc. are great tools, but they need to be used carefully.


React/vue/etc are not massive libraries. The problem usually comes from websites loading 5 versions of jquery and then 30 ad network scripts.

And yes, doing anything mildly responsive with jQuery is too hard. And not in the "I don't know how to do this" way. It's just a huge waste of time. What takes 100 lines of jQuery code can be done for free with JSX. Usually jQuery sites settle for suboptimal UX because it's too hard to do the things React sites do.


Yes, what I was alluding to was that new tooling tends to make it easier for client resources to explode out of control.

Don't get me wrong, JSX is great, but it should be used in the right place.


For most apps that I've used, jQuery and the DOM would be able to do it, just nowhere near as well as React or Vue. We're building web apps that are highly responsive with lots of UI effects. Even when it seems like there isn't much FE code going on, you'll normally find that the majority of it is now done with preloading and XHR calls to give a smoother feeling. As long as a site fully loads within 2 seconds, I'm OK with that.


> Giving a trivial function such as a search box on a webpage or some animation results

If you want great UI and UX, these are anything but trivial. This is why taking an off-the-shelf solution is preferable to reinventing the wheel in pursuit of marginal improvements in speed (not to mention 3rd-party solutions are often still faster than what you could whip up in a day or two).


My theory is that more stuff almost directly means a worse user experience. More stuff always requires more design knowledge, which is an extremely scarce resource.


Anyone remember the5k? https://the5k.org/

In my day, we'd tie an onion to our belt.


Would this page qualify? - https://quixical.com/welcome


What did they do to the scrolling!


War and Peace is 1.3MB. Ah well, a two-pager I guess.


This site is listing transfer size, so if you serve it with "Content-Encoding: gzip" (as you should) you should be all set.


Or we could zip it up and then use javascript to unzip it and display on the client side.


This is already built into the browser. Your browser sends "Accept-Encoding" listing what decoders it has (ex: "gzip, deflate, br"), and then the server may encode the body with any of them. If it does, it will send "Content-Encoding: gzip" or similar to indicate which it chose.
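
Roughly, the exchange looks like this (path made up):

  GET /war-and-peace.html HTTP/1.1
  Accept-Encoding: gzip, deflate, br

  HTTP/1.1 200 OK
  Content-Type: text/html; charset=utf-8
  Content-Encoding: gzip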


Back in my day we did our websites in 5k.

https://the5k.org


What's a good framework for building lightweight websites? Or am I better off just writing HTML?


Depends what sort of website really. If it’s best described as an app, Svelte seems good. Rather than having a big feature-filled runtime it does most of its work at compile time, so your bundle is almost always much smaller than something like React or Vue.

If it’s just a static or almost static content I’d lean more towards just plain HTML, maybe with a little server templating.


Where's the Nginx landing page?


I want to blacklist sites over a certain threshold. Is there an easy way to do that?


There might be an opportunity to write minimalist web clients for popular services.


Hopefully the creator has plans to automate the Github issue verification process.


I suspect this site's page will soon not qualify to be a part of this club :)


On a curious note, what's Hacker News' score? Seems à la carte minimal, no?


Chrome Devtools says a very respectable 62.6 kB


Curious, even with cache disabled I get somewhat less (13.2KB) but I assume that's what came over the wire and then expands after it's inflated(?)


What's an easy way to measure total resources downloaded for a page?


On Firefox you can open up the console (right click -> Inspect Element), head to the "Network" tab and check the very bottom of that panel. It should give you details on how much time it took and the size of your request. You can refresh your page and it refreshes the data at the bottom as well.

I believe it should be similar on Chrome and Safari as well, just locate the Network tab in the browser console.
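
If you'd rather script it than eyeball the Network panel, the Resource Timing API exposes the same numbers - a console sketch (transferSize is 0 for cached responses and for cross-origin resources without Timing-Allow-Origin):

  const entries = performance.getEntriesByType('navigation')
    .concat(performance.getEntriesByType('resource'));
  const bytes = entries.reduce((sum, e) => sum + (e.transferSize || 0), 0);
  console.log((bytes / 1024).toFixed(1) + ' kB transferred');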


Existing search engines should add a query term to filter by page size.


Please accept email suggestions; I'd like to avoid needing GitHub.


It should be something like 100kb compressed, not 1MB uncompressed


Any other Indian website in the list other than Zerodha.com?


Ironically this site is taking upwards of a minute to load


> These things are a cancerous growth on the web.

lmfao a little dramatic. JS heavy sites _can_ be great experiences if they are developed properly just like <1MB sites can be garbage if they are poorly developed.


To make the 'small impact' much bigger - can someone turn this list into a search engine?


1MB? no no, tres commas club please!


We are finally just starting to see a contender for web3.js to get the same functionality without a bloated package size


Can't they automate this?


Reminds me of a dollar store.


text.npr.org - wow!


lol Jira is 24 MB.


YES


I'm pretty happy with my bloated mess. Glad YouTube and Spotify aren't limited to 1meg.

Glad Google Maps lets me view the world in 3D and with street views and make custom maps, and read reviews with photos, etc. Don't care that it takes more than 1meg.

Love Apple's beautiful pages, like the MacBook Air page at 14meg.


I don't think anyone is going to deny that there are sites that provide huge utility in exchange for their size. No one is suggesting Google Maps should clock in at less than a MB. But the point is that there's a continuing trend of websites' sizes growing much faster than their functionality - often with the former to the detriment of the latter.


What if those experiences could be provided with a smaller footprint? Try to think bigger than yourself. It would no doubt be faster, but it would also consume less bandwidth, less storage, less compute = less power, which is good for the environment.


I tweeted about text.npr.org this morning. Excellent resource.


Can you guys stop spamming the guy's github issues page?


He asked to add pages there :)


They give folks on HN an opportunity for self promotion, what do you think will happen :)


Guilty as charged :P

They quite literally did ask for it though.


My question for these kinds of complaints is: how is it proposed we implement the same features and value that can be provided currently, with fewer downloads?

Could we have Facebook without a huge download? Is this saying the web should not have these features? Are we intended to download each new application separately on a PC like we do on mobile?

I see so many complaints about "the bloated web", but no solution that lets us keep the tremendous value the internet has given to the world. It just comes across as so short-sighted and reactionary.


In an age of 4k streaming, being frugal with 1MB websites for the sake of it is kind of missing the point. Yes, we should care how we build stuff, and we should not use resources like there is no tomorrow, but on HN there is a sentiment that basically says that everything on the web after ~2010 is terrible, hard to use and wasteful.


Here's the disconnect, which is why these "webpages need to be slim!" sites tend to make me think they're greybeard nostalgia for an internet that doesn't really exist anymore.

Your 1mb webpage is approximately 60ms worth of Disney+ streaming.

Your 1mb webpage is approximately 1.7s worth of Zoom chat.

Your 1mb webpage is approximately 1.9s worth of Tiktok video.

Unless you are specifically targeting low-bandwidth users, you need to worry about product market fit long before you worry about slimming your js bundles down.


> Unless you are specifically targeting low-bandwidth users

There aren't some tiny number of low bandwidth users with some esoteric internet problem, there is a significant divide due to technological and geographic reasons. Averages are very misleading when the majority of people in cities connected to various fiber end points keep getting crazier and crazier speeds while 50% of the US is stuck on ADSL, with a theoretical max of 20Mbit down - add geographical limitations and that's often far lower due to line noise. Where I live in the UK it's also just a matter of luck, I live in the middle of a major city and yet in the 5 places I've lived over the last 10 years the only option was ADSL, and never >6Mbit.

I haven't even started the argument about other countries with poorer internet infrastructure.

When you assume being able to download at 1MiB/s is basic, you make the internet suck for a huge chunk of the population; they are not the majority, but barely.


> Averages are very misleading when the majority of people in cities connected to various fiber end points keep getting crazier and crazier speeds while 50% of the US is stuck on ADSL, with a theoretical max of 20Mbit down - add geographical limitations and that's often far lower due to line noise.

Can you cite your source on "50% of the US is stuck on ADSL"? Furthermore, do you mean 50% of the US land area or 50% of the US population? The former is plausible, the latter is not.


Even if you are able to get palatable speeds, bandwidth caps are an issue for some people.

For example, I'm stuck with 40GB/month in rural Ontario (ie, 5 minutes outside of a city), which means I share roughly 1300mb per day with my household. I'm constantly watching the bandwidth meter tick up in my menu bar.


You are not only conflating megabytes (MB) with megabits (mb) but you are also conflating latency (a site loading) with throughput (a buffered video playing).



