Compare it to people who really care about performance — Pornhub, 1.4 MB
Porn was always actual web hi-tech with good engineering, not these joke-level “tech” giants. Can’t remember a single time they’d screw up basic ui/ux, content delivery or common sense.
I never really understood why SPAs became so popular on the web. It’s like we suddenly and collectively became afraid of the page reload on websites just because it’s unwanted behaviour in actual web applications.
I have worked with enterprise applications for two decades, and with some that were built before I was born. And I think React has been the absolute best frontend for these systems compared to everything that came before (you're free to insert Angular/Vue/whatever, by the way). But these are designed to replace all the various horrible client/server UIs that came before. For a web page that's hardly necessary unless you're Gmail, Facebook or similar, where you need interactive and live content updates because of how these products work. But for something like pornhub? Well PHP serves them just fine, and this is true for most web sites really. Just look at HN and how many people still vastly prefer the old.reddit.com site to their modern SPA. Hell, many people would probably still prefer an old.Facebook to the newer, much slower version.
> It’s like we suddenly and collectively became afraid of the page reload on websites
I used to work at a place where page reloads were constantly brought up as a negative. They couldn't be bothered to fix the slow page loads and instead avoided page changes.
I argued several times that we should improve performance instead of worrying about page reloads, but never got through to anyone (in fairness, it was probably mostly because of a senior dev there).
At some point a new feature was being developed, and instead of just adding it to our existing product, it was decided to use an iframe with the new feature as a separate product embedded.
I think there are a couple of legitimate uses of iframes, but in most cases, it’s not something you want to use.
If you want to take payments in your application using a vendor like Nodus (i.e., you have an app used by a CSR/salesperson who needs to take CC or eCheck data), showing the payment application in an iframe within the context of the app lets you keep that feature inside the application and, importantly, means you don't have to be PCI DSS compliant yourself.
Opening the payment flow in a separate window while displaying a message like "please complete the payment with your chosen payment provider" is almost as good from a usability standpoint and a lot better when considering security best practices.
I love SPAs. I love making them, and I love using them. The thing is, they have to be for applications. When I'm using an application, I am willing to eat a slower initial load time. Everything after that is faster, smoother, more dynamic, more responsive.
> Everything after that is faster, smoother, more dynamic, more responsive.
IF and only IF you have at least a mid- to high-end computer or smartphone. If you have low-end hardware, you first have to wait for that 20 MB to load AND then get a slow and choppy app afterwards. Worst of both worlds, but hey, it's built according to modern standards!
> But for something like pornhub? Well PHP serves them just fine,
Kind of fun to make this argument for Pornhub when visiting their website with JavaScript disabled just seems to render a blank page :)
> how many people still vastly prefer the old.reddit.com site to their modern SPA
Also a fun argument, the times I've seen analytics on it, old.reddit.com seems to hover around/below 10% of the visitors to subs. But I bet this varies a lot by the subreddit.
> visiting their website with JavaScript disabled just seems to render a blank page :)
Average people don't disable Javascript. So they predictably don't spend much time trying to make the site work for people who aren't their core audience.
Been possible to do without JavaScript before, and is still possible today :) Where there is a will, there is a way. I'd still argue Pornhub is a bad example for the point parent was making.
Why did SPAs become popular? Because they "feel" native on mobile. Now you have page transitions and prefetch, which really should kill this use case.
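For what it's worth, both are usable without an SPA today: cross-document view transitions are a one-line CSS opt-in (@view-transition { navigation: auto; }), and prefetching can be done declaratively with speculation rules. A minimal sketch, assuming a Chromium browser (the rule set here is illustrative):

    // Feature-detect the Speculation Rules API, then ask the browser to
    // prefetch same-origin links so MPA navigations feel near-instant.
    if (window.HTMLScriptElement &&
        HTMLScriptElement.supports &&
        HTMLScriptElement.supports('speculationrules')) {
      const rules = document.createElement('script');
      rules.type = 'speculationrules';
      rules.textContent = JSON.stringify({
        prefetch: [{ where: { href_matches: '/*' }, eagerness: 'moderate' }]
      });
      document.head.append(rules);
    }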
IMO the bloat he talks about in the post is not representative of 2024. Pretty much all frontend development of the last 2 years has been moving away from SPAs with smaller builds and faster loading times. Fair enough, it's still visible on a lot of sites. But I'd argue it's probably better now than a couple of years ago.
Personally, I don't see why one would hate the SPA concept. I like it, because it lets you lower traffic and make interactions... normal? And use local computing when possible, instead of adding 2 and 2 over HTTP.
When people hate SPAs, they actually hate terminally overweight abominations that no one (e.g. Google) keeps in check, and that's the reason they are like that. Why should on-page interaction be slow or cost you 20 MB of downloads? No reason. replaceChild() uses the same tech as <a href>. SPA is not a synonym for "impotent incompetence 11/10".
If humanity hadn't invented SPAs, all these companies would just make .aspx pages that take half a minute to load instead, because they can't manage anything else.
> Why SPAs became popular? Because they "feel" native on mobile.
They... Don't :) The vast majority of them are a shittier, slower, clunkier version of a native app.
> IMO the bloat he talks about in the post is not representative of 2024. Pretty much all frontend development of the last 2 years has been moving away from SPAs with smaller builds and faster loading times.
He literally lists Vercel there. One of the leaders in "oh look at our beautiful fast slim apps". Their front page loads 6.5 megabytes of javascript in 131 requests to those smaller bundles.
Well, to stay with OP's example porn website: because it is not an SPA, you can't really make a playlist play in full screen. The hard page reload requires you to interact to go fullscreen again on every new video. Not an issue in SPAs (see YouTube).
I feel like the term SPA has since ceased to have any meaning with the HN crowd.
I mean, I do generally agree with your sentiment that SPAs are way overused, but several of the examples in TFA aren't SPAs, which should already show you how misguided your opinion is.
Depending on the framework, SPAs can start at ~10 KB. Really, the SPA is not the thing that's causing the bloat.
You're right. People miss that SPAs aren't heavy by definition; they become heavy by accident and/or acquire bloat over time. You can totally have a super-useful, full-featured, good-looking and fast SPA in 150 KB gzipped, including libraries. Only, it takes knowledge and discipline to do that.
>Porn was always actual web hi-tech with good engineering, not these joke-level “tech” giants. Can’t remember a single time they’d screw up basic ui/ux, content delivery or common sense.
Well, I do remember the myriad of shady advertisement tactics that porn sites use(d), like popups, popunders, fake content leading to other similar aggregation sites, opening a partner website instead of the content, poisoning the SEO results as much as they can, and so on. Porn is not the tech driver people make it out to be; even the popular urban legend around Betamax vs VHS is untrue, and so is the claim that porn drives internet innovation. There is a handful of players who engineer a high quality product, but it's hardly representative of the industry as a whole. Many others create link farms, dummy content, clone websites, false advertisement, gaming of the search results, and so on. Porn is in high demand, it's a busy scene, and so many things happen related to it, and that's about it.
The current state of snappy top-level results is I think the result of competition. If one site's UX is shitty, I think the majority of the viewers would just leave for the next one, as there is a deluge of free porn on the internet. So, the sites actually have to optimize for retention.
These other websites have different incentives, so the optimized state is different too. The user is, of course, important, but if they also have shareholders, content providers, exclusive business deals, monopoly, then they don't have to optimize for user experience that much.
I generally agree and understand. The reasoning is fine. But comments like this make me somewhere between sad and contemptuous towards the field. This neutral explanation supports the baseline that no professional can vocalize anywhere and retain their face. I'm talking youtube focus & arrows issues here, not rocket science. Container alignment issues [1], scrolling issues [2], cosmic levels of bloat [$subj], you name it. Absolutely trivial things you can't screw up if you're at all hireable. It's not "unoptimized", it's distilled personal/group incompetence of those who ought to be the best. That I cannot respect.
Yeah, it's not my favorite experience either, and I found it really hard to be indifferent, especially in my earlier years. The goal is almost never a perfect product. And it's also a result of many people's involvement, who often have different values than I do, or what I expect of them.
I worked in that field. One of the main reasons adult entertainment is optimised so heavily is because lots of users are from countries with poor internet.
Countless hours were spent on optimising video delivery, live broadcasts (using Flash back in the day, and WebRTC today), web page sizes... the works.
Modern PHP is a pretty good language. Not my favorite, but it's not antiquated. And there are tons of websites built with it (granted, WordPress and Drupal make up the majority of them).
Any reason why we're looking at uncompressed data? Some of the listed negative examples easily beat GMaps 1.5mb when compressed.
Also, I'll give a pass to dynamic apps like Spotify and GMail [1] if (and only if) the navigation after loading the page is fast. I would rather have something like Discord which takes a few seconds to update on startup, than GitLab, which makes me wait up to two seconds for every. single. click.
The current prioritisation of cold starts and static rendering is leading to a worse experience on some sites IMO. As an experiment, go to GitHub and navigate through the file tree. On my machine, this feels significantly snappier than the rest of GitHub. Coincidentally, it's also one of the only parts that is not rendered statically. I click through hundreds of GitHub pages daily. Please, just serve me an unholy amount of JavaScript once, and then cache as much as possible, rather than making me download the entire footer every time I want to view a pipeline.
[1]: These are examples. I haven't used GMail and Spotify
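For what it's worth, the "serve it once, then cache" setup isn't exotic. A minimal sketch, assuming an Express server and content-hashed bundle filenames (both illustrative, not GitHub's actual stack):

    const express = require('express');
    const app = express();

    // Bundles carry a content hash in the name (e.g. app.3f9c2b.js), so they
    // can be cached "forever"; a new build produces a new URL.
    app.use('/assets', express.static('dist/assets', {
      immutable: true,   // adds the immutable directive to Cache-Control
      maxAge: '365d',
    }));

    // The small HTML shell is always revalidated so it can point at new hashes.
    app.use(express.static('dist', { maxAge: 0, etag: true }));

    app.listen(3000);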
Compression helps transfer, but your device still has to parse all of that code. This comes up in discussions about reach because there's an enormous gap between iOS and Android CPU performance, which gets worse when you look at the cheaper devices a lot of the public uses: new Android devices sold today that perform worse than a 2014 iPhone. If your developers are all using recent iPhones or flagship Android devices, it's easy to miss how much all of that code bloat affects the median user.
I happen to develop a JS app that also has to be optimised for an Android phone from 2017. I don't think the amount of JS is in any way related to performance. You can make 1 MB of JS perform just as poorly as 10 MB.
In our case, the biggest performance issues were:
- Rendering too many DOM nodes at once - virtual lists help.
- Using reactivity inefficiently.
- Random operations in libraries that were poorly optimised.
Finding those things was only possible by looking at the profiler. I don't think general statements like "less JS = better" help anyone. It helps to examine the size of webpages, but then you have to also put that information into context: how often does this page load new data? once the data is loaded, can you work without further loading? Is the data batched, or do waterfalls occur? Is this a page that users will only visit once, or do they come regularly? ...
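One small trick that helps with that kind of profiling: wrap suspect code paths in performance marks so they show up as labelled spans in the DevTools Performance panel. A sketch (renderList and items are placeholders for whatever you suspect is expensive):

    performance.mark('render-list:start');
    renderList(items); // placeholder for the suspected hot path
    performance.mark('render-list:end');
    performance.measure('render-list', 'render-list:start', 'render-list:end');

    // The same measurements are also readable from code:
    console.table(performance.getEntriesByType('measure'));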
I'm not a JS developer but I imagine that the amount of JavaScript code isn't the most relevant part if most of it isn't being called. I mean, if you have some particularly heavy code that only runs when you click a button, is that really parsed and causes overhead before the button is clicked?
If all 10mb is in a single JS file, and that file is included in a normal script tag in the page’s HTML, then parsing the 10mb will block UI interaction as the page loads.
Once the browser parses 10mb, it’ll evaluate the top level statements in the script, which are the ones that would set up the click event handler you’re referencing.
If the entire page is rendered by JavaScript in the browser, then even drawing the initial UI to the screen is blocked by parsing JS.
The solution to this for big apps is to split your build artifact up into many separate JS files, analogous to DLLs in a C program. That way your entry point can be very small and quick to parse, then load just the DLLs you need to draw the first screen and make it interactive. After that you can either eagerly or lazily initialize the remaining DLLs depending on performance tradeoff.
I work on Notion, 16 MB according to this measurement. We work hard to keep our entry point module small, and load a lot of DLLs to get to that 16 MB total. On a slow connection you'll see the main document load and become interactive first, leaving the sidebar blank, since it's a lower priority and so we initialize it after the document editor. We aren't necessarily using all 16 MB of that code right away - a bunch of that is pre-fetching the DLLs for features/menus so that we're ready to execute them as soon as you, say, click on the settings button, instead of having awkward lag while we download the settings DLL after you click on settings.
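For anyone unfamiliar with the pattern, a minimal sketch of that kind of splitting with plain dynamic import() (not our actual code; './settings.js' is a made-up module):

    // The entry point stays tiny; the settings "DLL" is fetched on demand.
    document.querySelector('#settings-button').addEventListener('click', async () => {
      const { openSettings } = await import('./settings.js'); // separate chunk
      openSettings();
    });

    // Optionally warm the cache during idle time so the first click feels instant.
    if ('requestIdleCallback' in window) {
      requestIdleCallback(() => import('./settings.js'));
    }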
That’s a broader claim than that page supports. They describe how they avoiding fully generating the internal representation and JITing it, but clearly even unused code is taking up memory and CPU time so you’d want to review your app’s usage to make sure that it’s acceptable levels of work even on low-end devices and also that your coding style doesn’t defeat some of those optimizations.
You can theoretically have a large JS app that's fast enough, but it's going to be an uphill battle.
You have to do regular bundle analysis, otherwise the cache won't work if you deploy too often, and package updates and new additions are likely to break the performance analysis you've just done.
Less JS = better performance is a simplified model but very accurate in practice in my opinion, especially on large teams.
"- Rendering too many DOM nodes at once - virtual lists help"
Yup, despite all the improvements, the DOM is still slow and it is easy to make it behave even slower. Only update what is necessary, and be aware of the forced-reflow performance bottleneck: every time you change something, then read clientWidth for example, and then change something else, you make the browser calculate (and possibly render) layout twice.
I found the Chrome dev tools really helpful for spotting those and other issues. But sure, if you update EVERYTHING on every click anyway, including waiting for the whole data transfer, when you just want some tiny parts refreshed, you have other problems.
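A small sketch of that forced-reflow trap (the selector is made up):

    const items = Array.from(document.querySelectorAll('.item'));

    // Bad: every iteration reads clientWidth after the previous write, forcing
    // the browser to recalculate layout on each pass.
    for (const el of items) {
      el.style.width = (el.parentNode.clientWidth / 2) + 'px';
    }

    // Better: batch all reads, then all writes, so layout is recalculated once.
    const widths = items.map(el => el.parentNode.clientWidth);
    items.forEach((el, i) => {
      el.style.width = (widths[i] / 2) + 'px';
    });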
> the DOM is still slow and it is easy to make it behave even slower
I think this should be more nuanced: the DOM itself has been fast for 10-15 years but things like layout are still a concern on large pages. The problem is that the DOM, like an ORM, can make it easy to miss when you’re requesting the browser do other work like recalculating layout, and also that as people started using heavier frameworks they started losing track of what triggers updates.
Lists are an interesting challenge because it’s surprisingly hard to beat a well-tuned browser implementation (overflow scrolling, etc.) but a lot of people still have the IE6 instincts and jump for implementing custom scrolling, only to find that there are a lot of native scrolling implementation features which are hard to match, and at some point they realize that what they really should have done was change the design to make layout easier to calculate (e.g. fixed or easily-calculated heights) or displayed fewer things at once.
> I think this should be more nuanced: the DOM itself has been fast for 10-15 years
It's faster than it was 10-15 years ago. It's still extremely slow.
> things like layout are still a concern on large pages.
> it easy to miss when you’re requesting the browser do other work like recalculating layout
You can't say things like "DOM is fast" and "oh, it's fast if you exclude literally everything that people want to be fast".
> and also that as people started using heavier frameworks they started losing track of what triggers updates.
I don't know if you realise, but on the very same devices where you're complaining about "large pages and oh my god layout" people are routinely rendering millions of objects with complex logic and animations in under 5 milliseconds?
I think you’re using DOM to refer to the entire browser, not just what’s standardized as the DOM. Things like creating or modifying elements will run at tens of millions per second on an old iPhone _but_ there are operations like the one you mentioned which force the browser to do other work like style calculation and layout, and if you inadvertently write code which is something like “DOM update, force recalc, DOM update” in a loop it’s very easy to mistakenly think that the DOM is the source of the performance problem rather than things like the standard web layout process having many ways for different elements to interact.
And, yes, I'm not unaware that different display models have different performance characteristics. Modern browsers can run into the millions-of-objects range, but fundamentally a web page is doing more work and there’s no way it’s going to match something which does less. This is why there have been various ways to turn off some of the expensive work (e.g. fixed table layout) and why APIs like canvas, WebGL, and WebGPU use different designs to allow people who need more control to avoid taking on costs their apps don’t need.
> I think you’re using DOM to refer to the entire browser, not just what’s standardized as the DOM.
No, I'm referring to DOM as Document Object Model.
> Things like creating or modifying elements will run at tens of millions per second on an old iPhone
Not in the DOM :)
> but fundamentally a web page is doing more work and there’s no way it’s going to match something which does less.
That's why I'm saying that the DOM is not fast. It's excruciatingly slow even for the most basic of things. It's, after all, designed to display a small static page with one or two images, and no amount of haphazard hacks that accumulated on top of it over the years will change it. It will, actually, make it much worse :)
There is a reason why the same device that can render a million of objects doing complex animations and logic in under 5ms cannot guarantee smooth animations in DOM. This is a good example: https://twitter.com/fabiospampinato/status/17495008973007301... (note: these are not complex animations and logic :) )
> No, I'm referring to DOM as Document Object Model.
This is what I was talking about: you’re talking about the DOM but describing things like layout and rendering. Yes, nobody is saying that abstractions which do less won’t be faster - that’s why things like WebGL exist! – but most of the performance issues are due to things which aren’t supported in WebGL.
If you aren’t using something slow like React you could do on the order of hundreds of thousands of element creations or updates per second in the mid-2010s – checking my logs, I was seeing 600k table rows added per second on Firefox in 2015 on a 2013 iMac (updates were much faster since they didn’t have as much memory allocation), and browsers and hardware have both improved since then.
To be clear, that’s never going to catch up with WebGL for the kinds of simple things you’re focused on - displaying a rectangle which doesn’t affect its peers other than transparency is a really tuned fast path with hardware acceleration – but that’s like complaining that a semi truck isn’t as fast as a Tesla. The web is centered on documents and text layout, so the better question is which one is fast enough for the kind of work you’re doing. If you need to move rectangles around, use WebGL - that’s why it exists! - but also recognize that you’re comparing unlike tools and being surprised that they’re different.
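For reference, a rough version of the kind of micro-benchmark behind numbers like that (results vary enormously by browser and hardware; it assumes a <table> already exists on the page, and attaching to a live, style-heavy document costs extra):

    const tbody = document.createElement('tbody');
    const rows = 100000;

    const t0 = performance.now();
    for (let i = 0; i < rows; i++) {
      const tr = document.createElement('tr');
      const td = tr.appendChild(document.createElement('td'));
      td.textContent = 'row ' + i;
      tbody.appendChild(tr);
    }
    const elapsed = performance.now() - t0;
    console.log(Math.round(rows / (elapsed / 1000)) + ' rows/sec created');

    // Attaching to the live document and the resulting layout are separate costs.
    document.querySelector('table').appendChild(tbody);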
> This is what I was talking about: you’re talking about the DOM but describing things like layout and rendering.
Ah yes, because layout and rendering are absolutely divorced from DOM and have nothing to do with it :)
> that’s never going to catch up with WebGL for the kinds of simple things you’re focused on - displaying a rectangle which doesn’t affect its peers other than transparency is a really tuned fast path with hardware acceleration
You will never catch up with anything. It's amazing how people keep missing the point on purpose even if they eventually almost literally repeat what I write word for word, and find no issues with that.
Here are your words: "The web is centered on documents and text layout". Yes, yes it is. And it's barely usable for that. But then we've added lots of haphazard hacks on top of it. The rest I wrote here: https://news.ycombinator.com/item?id=39485437
There's a range of options between "can render millions of objects with complex animation and logic" in a few milliseconds and "we will warn you when you have 800 dom nodes on a static page, you shouldn't do many updates or any useful animations"
Somehow people assume that what HTML does is so insanely hard that the problem of 2D layout should not just be a challenge for modern-day supercomputers, but be so slow as to be visible to the naked eye.
The 2D layout in HTML is slow because the DOM (with the millions of conflicting hacks on top of it) is slow, not the other way around.
We could do most of the frankly laughable HTML layouts at least in the early 2000s, perhaps earlier.
Well, definitely earlier, as the Xerox work that influenced the Mac is from the 1970s.
"The 2D layout in HTML is slow because the DOM (with the millions of conflicting hacks on top of it) is slow, not the other way around."
Well yeah, all those HTML hacks that we cannot get rid of because of backwards compatibility are probably the main reason the DOM is slow. HTML was made to view documents, after all, not to design UIs. And now it is what it is. But there are options now!
So yes, it is definitely possible to build snappy 2D layouts. I built one with HTML, using only a subset, and that worked out somewhat alright... but now I am switching to WebGL, and there is a world between them in terms of performance.
> It helps to examine the size of webpages, but then you have to also put that information into context: how often does this page load new data? once the data is loaded, can you work without further loading? Is the data batched, or do waterfalls occur? Is this a page that users will only visit once, or do they come regularly?
I totally agree - my point was simply that people sometimes focus on network bandwidth and forget that a huge JS file can be a problem even if it’s cached. What you’re talking about is the right way to do it - I try to get other developers to use older devices and network traffic shaping to get an idea for those subjective impressions, too, since it’s easy to be more forgiving when you’re focused on a dedicated testing session than, say, if you’re trying to use it while traveling and learning that the number of requests mattered more than the compressed size or that you need more robust error handling when one of the 97 requests fails.
Even decently powerful phones can have issues with some of these.
Substack is particularly infuriating: sometimes it lags so badly that it takes seconds to display scrolled text (and bottom-of-text references stop working). And that's on a 2016 flagship: a Samsung Galaxy S7! I shudder to think of the experience for slower phones...
(And Substack also manages to slow down to a glitchy crawl when there are a lot of (text only !) comments on my gaming desktop PC.)
> Any reason why we're looking at uncompressed data? Some of the listed negative examples easily beat GMaps 1.5mb when compressed.
Because for a single page load, decompressing and using the scripts takes time, RAM space, disk space (more scratch space used as more RAM gets used), and power (battery drain from continually executing scripts). Caching can prevent the power and time costs of downloading and decompressing, but not the costs of using. My personal rule of thumb is: the bigger the uncompressed Javascript load, the more code the CPU continually executes as I move my mouse, press any key, scroll, etc. I would be willing to give up a bit of time efficiency for a bit of power efficiency. I'm also willing to give up prettiness for staticness, except where CSS can stand in for JS. Or maybe I'm staring at a scapegoat when the actual/bigger problem is sites which download more files (latent bloat and horrendously bad for archival) when I perform actions other than clicking to different pages corresponding to different URLs. (Please don't have Javascript make different "pages" show up with the same URL in the address bar. That's really bad for archival as well.)
Tangent: Another rule of thumb I have: the bigger the uncompressed Javascript load, the less likely the archived version of the site will work properly.
While you are right that there is a cost, the real question is whether this cost is significant. 10 MB is still very small in many contexts. If that is the price to pay for a better dev ex and more products, then I don't see the issue.
> go to GitHub and navigate through the file tree. On my machine, this feels significantly snappier than the rest of GitHub. Coincidentally, it's also one of the only parts that is not rendered statically
And it's also the only part of it that doesn't work on slow connections.
I've had a slow internet connection for the past week, and the GitHub file tree literally doesn't work if you click on it on the website, because it tries to load it through some scripts and fails.
However, if, instead of clicking on a file, I copy its URL and paste it into the browser URL bar, it loads properly.
I have a connection that might be considered slow by most HN readers (1~2MB/s) and the new github file viewer has been a blessing. So snappy compared to everything else
Gmail is terrible. Idk if it's just me, but I have to wait 20 seconds after marking an email as read before closing the tab; otherwise it's not saved as read.
Spotify has huge issues with network connectivity; even if I download the album, it'll completely freak out as the network changes. A plain offline mode would be better than its attempt at staying online.
Gmail has this annoying preference you can set: mark email as read if viewed for x seconds. Mine was set to 3 seconds, which is I guess why I'd sometimes get a reply on a thread and have to refresh multiple times to get rid of the unread status.
GitHub's probably the worst example of "Pjax" or HTMX-style techniques out there at this point…I would definitely not look at that and paint a particular picture of that architecture overall. It's like pointing at a particularly poor example of a SPA and then saying that's why all SPAs suck.
Is there a good example of a reasonably big/complex application using the Pjax/HTMX style that sucks less? Because GitHub isn't making a good case for that technology.
Interesting that you mention the GitHub file tree. I recently encountered periodic freezing of that whole page. I profiled it for a bit and found out that every few seconds it spends like 5 seconds recomputing relative timestamps on the main thread.
From the article: “To be honest, after typing all these numbers, 10 MB doesn’t even feel that big or special. Seems like shipping 10 MB of code is normal now.
If we assume that the average code line is about 65 characters, that would mean we are shipping ~150,000 lines of code. With every website! Sometimes just to show static content!”
Any piece of software reflects the organization that built it.
The data transferred is going to be almost entirely analytics and miscellaneous 3rd party scripts, not the javascript actually used to make the page work (except for the "elephant" category which are lazy loading modules i.e. React). Much of that is driven by marketing teams who don't know or care about any of this.
All devs did was paste Google Tag Manager and/or some other script injection service to the page. In some cases the devs don't even do that and the page is modified by some proxy out in the production infrastructure.
Maybe the more meaningful concern to have is that marketing has more control over the result than the people actually doing the real work. In the case of the "elephant" pages, the bloat is with the organization itself. Not just a few idiots but idiots at scale.
> All devs did was paste Google Tag Manager and/or some other script injection service to the page. In some cases the devs don't even do that and the page is modified by some proxy out in the production infrastructure.
Google Tag Manager is the best tool for destroying your page performance. A previous job had Google Tag Manager in the hands of another, non-tech department. I had to CONSTANTLY monitor the crap being injected into the production pages. I tried very hard to get it removed.
I remember associating Google with lean and fast: Google the search engine (vs Yahoo) and Chrome (vs IE/FF; I'm talking about when Chrome was released)... Chrome itself barely had a UI, and that was a feature.
I recently came back from a road trip in New Zealand - a lot of their countryside has little to no cell coverage. Combine that with roaming (which seems to add an additional layer of slowness), and boy did it suck to try to use a lot of the web.
Also if any spotify PMs are here, please review the Offline UX. Offline is pretty much one of the most critical premium features but actually trying to use the app offline really sucks in so many ways
Offline is still miles and miles better than patchy Internet. If Spotify thinks you have Internet, it calls the server to ask for the contents of every context menu, waiting seconds for a response before sometimes giving up on showing a menu and sometimes falling back to what would have been instant in offline mode.
I really loathe their player.
Not only that, there are many apps with no online aspect to them that have the Facebook SDK or some other spyware making a blocking call on app startup, and the app won't start without it succeeding, unless you are completely offline.
Especially annoying when one is using dns based filtering.
> Also if any spotify PMs are here, please review the Offline UX. Offline is pretty much one of the most critical premium features but actually trying to use the app offline really sucks in so many ways
Also, Spotify (at least on iOS) seems to have fallen into the trap of thinking there is only "Online" and "Offline", so when you're in-between (really high latency, or really lossy connection), Spotify thinks it's online when it really should be thinking it's offline.
But to be fair, this is a really common issue and Spotify is in no way alone in failing on this, hard to come up with the right threshold I bet.
I've noticed BBC Sounds has the opposite problem. If you were offline and then get a connection it still thinks you're offline. Refreshing does nothing. You need to restart the app to get online.
I live in London which typically gets great signal everywhere. Except in the Underground network, where they're rolling out 5G but it's not there yet.
Please Spotify, why do I need to wait 30 seconds for the app to load anything when I don't have signal? All I want to do is keep listening to a podcast I downloaded.
Wait... iOS doesn't have an offline music app anymore either? Google replaced the "Play Music" app (which could also play offline music files) with "Youtube Music" a few years ago (not sure if that works with offline files, I switched to a third party app), but I thought iOS still had one (precisely because they used to sell the iPods, specifically the iPod touch which was more or less an iPhone lacking the phone part)?
To be honest, the offline support of Apple Music, even though it exists, is on par with Spotify's. It will bug you to “turn on wifi, you’re offline”, ask you to provide or update payment information, fail at synchronisation from time to time with music purchased on iTunes, and overall work unreliably when you just want to listen to some songs you’ve copied from your physical CDs. It's like the whole thing is trying hard to push you to use their subscription service.
I think they just need a more aggressive timeout value to fall back to offline mode. I wonder whether their engineering made it too complicated to weigh out these scenarios.
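A sketch of that idea with AbortSignal.timeout (the endpoint handling and readLocalCache are placeholders, not Spotify's actual code):

    // Race the network against a short deadline; on a flaky connection,
    // behave as if offline instead of hanging on every context menu.
    async function fetchOrFallback(url, timeoutMs = 2000) {
      try {
        const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
        if (!res.ok) throw new Error('HTTP ' + res.status);
        return await res.json();
      } catch {
        return readLocalCache(url); // placeholder for downloaded/offline data
      }
    }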
One thing completely ignored by this post, especially for actual web applications, is that it doesn't actually break the JS files down to see why they are so large. For example, Google Translate is not a one-interaction app once you start to look further; it somehow has dictionaries, alternative suggestions, transliterations, pronunciations, a lot of input methods and more. I still agree that 2.5 MB is too much even after accounting for that fact, and some optional features can and should be lazily loaded, but as it currently stands, the post is so lazy that it doesn't help any further discussion.
> For example, Google Translate is not a one-interaction app once you start to look further; it somehow has dictionaries, alternative suggestions, transliterations, pronunciations, a lot of input methods and more.
Almost none of those are loaded in the initial bundle, are they? All those come as data from the server.
How much JS do you need for `if data.transliteration show icon with audio embed`?
> Almost none of those are loaded in the initial bundle, are they?
In my testing, at least some input methods are indeed included in the initial requests (!). And that's why I'm stressing it is not a "one-interaction" app; it is interactive enough that some (but not all) upfront loading might be justifiable.
Don't want to hate on the author's post, but the screenshots being slow to load made me chuckle. Understandable, as images can be big and there were a lot of them, but I just found it a little ironic.
These days, slow-loading images usually mean that somebody hasn't bothered to use any of the automatic tooling various frameworks and platforms have for optimized viewport- and pixel density-based image sets, and just stuck in a maximum size 10+ MB image.
100% agree. Most of these apps could definitely use some optimization, but trivializing them to something like "wow, a few MBs of JavaScript just to show a text box" makes this comparison completely useless.
Was going to mention this, almost any company's brand site will have tracking and analytics libraries set in place. Usually to farm marketing and UX feedback.
What's worse is that some of them are fetched externally rather than bundled with the host code, increasing latency and potential security risks.
> What's worse is that some of them are fetched externally rather than bundled with the host code, increasing latency and potential security risks
Some vendor SDKs can be built and bundled from NPM, but most of them explicitly require you fetch their minified/obfuscated bundle from their CDN with a script tag. This is so they don't have to support older versions like most other software in the world, and so they can push updates without requiring customers to update their code.
Try to use vendors that distribute open-source SDKs, if you have to use vendors.
It is when (a) that data collection takes up a significant amount of bandwidth whenever I visit your website, and (b) I don't trust that that data collection really is as anonymous as the website says (or even thinks).
The major players here are explicitly not anonymous; they are designed to keep track of people over time so that they can collate habits and preferences across different sites to better target advertising. Yes, your AB test script isn't doing the same thing, but is it really adding any value to me as a consumer, or is it just optimising an extra 0.01% revenue for you?
That data can be gathered with self-hosted JS if the devs were allowed to implement it. The infatuation with third party analytics is just a more elaborate version of leftpad.
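A sketch of what that self-hosted JS could look like for basic page analytics; the /collect endpoint is a made-up first-party route:

    // A few lines of first-party code instead of a third-party bundle.
    function track(event, data = {}) {
      const payload = JSON.stringify({
        event,
        path: location.pathname,
        ts: Date.now(),
        ...data,
      });
      // sendBeacon is fire-and-forget and survives page unloads.
      navigator.sendBeacon('/collect', payload);
    }

    track('pageview');
    addEventListener('visibilitychange', () => {
      if (document.visibilityState === 'hidden') track('page_hidden');
    });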
Marketing snippets are rarely implemented by devs, they get dropped into a text box or added to Google Tags by marketing/SEO peeps.
If it is put in by a developer, the budget for that is like an hour to copy paste the code snippet in the right spot. Few are going to pay the hours required for an in house data collection layer that then has to integrate with the third party if that's even an option.
At least that is my experience through agency work. Maybe a product owner company could do it.
Not to be rude to the industry either, but I don't see why the assumption would be that an in-house dev has the chops to not make the same mistakes a third party does.
It's not about having the chops to do it well, it's about not importing every feature under the sun just because you want to run an intern's first marketing analytics campaign.
Earlier in the conversation someone talked about pasting a snippet. We're talking about the "chops" to not paste a snippet that is hundreds of thousands of lines long. A snippet so long it would crash many editors.
Typically the person including the JS tag that then fetches the massive third party payload has no idea that's what is happening.
It is very common to get multiple departments and contracted companies sticking their misc JS in, since every marketing SaaS tool they use has its own snippet. Your SEO guy wants 3 trackers, your marketing has another 5, and you sell on XYZ online market and they have affiliate trackers, etc.
No devs engaged at any point and the site performance isn't their responsibility. They can't do their job without their snippets so the incentives are very sticky, and the circus goes on.
It's kind of like an NPM dependency tree of martech SaaS vendors...
In a previous job I had to declare war against Google Tag Manager (a tool that lets marketers inject random crap into your web application without developer input). Burned some bridges and didn't win; performance is still crap.
After those things, it's the heavy libs that cause performance problems, like maps and charts; usually some clever lazy loading fixes that. Some things I personally ran into:
- A QR code scanning lib and a map lib being loaded at startup when they were actually just really small features of the application
- ALL internationalisation strings being loaded at startup as a waterfall request before any other JS ran. Never managed to get this one fixed...
- Zendesk just completely destroys your page performance, mandated by upper management; all I could do was add a delay to load it (see the sketch below)
After that, it comes down to badly designed code triggering too many DOM elements and/or rerenders and/or waterfall requests.
After that comes app-level code size; some lazy loading also fixes this, but it's usually not necessary until your application is massive.
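For reference, the "add a delay" workaround mentioned above usually looks something like this (the widget URL is illustrative, not Zendesk's actual loader):

    // Inject the heavy third-party script only when the browser is idle after
    // load, or as soon as the user interacts, whichever comes first.
    function loadWidget() {
      if (loadWidget.done) return;
      loadWidget.done = true;
      const s = document.createElement('script');
      s.src = 'https://widget.example.com/loader.js';
      s.async = true;
      document.head.appendChild(s);
    }

    window.addEventListener('load', () => {
      if ('requestIdleCallback' in window) {
        requestIdleCallback(loadWidget, { timeout: 3000 });
      } else {
        setTimeout(loadWidget, 3000);
      }
    });

    ['pointerdown', 'keydown', 'scroll'].forEach((evt) =>
      window.addEventListener(evt, loadWidget, { once: true, passive: true })
    );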
It's easy to test with adblock in place. For instance, the Gitlab landing page went from 13 megabytes to "just" 6 megabytes with tracking scripts blocked. The marketing department will always double the bloat of your software.
Not sure whether you looked at the requests in the screenshots, but the tracking script code alone for many of these websites takes up megabytes of memory.
Pornhub needs to be small. Jira will download once then be loaded locally until it gets updated, just like an offline app. Pornhub will be run in incognito mode, where caching won't help.
JIRA transfers like 20 MB of stuff every time you open the board, including things like a 5 MB JSON file with a list of all emojis with descriptions (at least the last time I profiled it).
It’s not the '80s anymore; nobody cares about your porn. I have bookmarks on the bookmarks bar right next to electronics/grocery stores and HN. And if you’re not logged in, how would PH and others know your preferences?
When I said I didn't watch porn on company laptops, only my personal one, the 5 Apple employees at a party all said they watched porn on company laptops. I was in the minority.
Pornhub has a team dedicated to mobile and consoles.
I worked in IT throughout high school and college. Trust me: with old married dudes it's a coin flip whether you're gonna find Pornhub or one of its MindGeek-owned neighbors in their bookmarks; if they didn't have it bookmarked, there's still a 50% chance that it's in the history. A surprising number of women had at least some porn in their history.
My friend doesn't want porn to show up in the address bar, that's why she uses a different profile for it. Incognito mode is not good, since it actually forgets history and bookmarks.
YouTube feels really snappy to me, but Figma is consistently the worst experience I have ever felt for web apps. Jira is horrible and slow also though.
YouTube does not feel snappy to me anymore. It's still one of the better experiences I have on the internet, but quite a bit worse than years before.
I just tested my connection to YouTube right now: just a tiny bit over 1.2 seconds after not using it for a few days. A fresh load, no cache, no cookies, and the entire page loaded in 2.8 seconds. A hot reload on either side varied between 0.8 and 1.4 seconds. All done with at most uBlock as an extension on desktop Chrome, with purported gigabit speeds from my ISP.
That speed is just OK, but definitely nothing like the 54 ms response time I got when hitting Google's server for the HTML document that bears all the dynamic content on YouTube.
Figma is very surprising to me; that bullshit somehow is PREFERRED by people. Getting links from designers to that dogshit app screeches my browser down to speeds I haven't seen in decades, and I don't think I'm exaggerating at all when I say that.
On desktop they load 2.5MB of JS and 12 MB of Javascript to show a grid of images. And it still takes them over 5 seconds to show video length in the previews.
And the code is not at all economical. It's 80% copy-paste with little deviations. There is no attempt to save by being clever either, it's all just good old vanilla JS. And no zipping, no space reduction. The code is perfectly readable when opened with the "View page source" button.
The trick is a zero-dependency policy. No third party, no internal. All the code you need, you get along with the HTML file. Paradoxically, in the long run, copy-paste is a bloat preventer, not a bloat cause.
You can do the same with dependencies and "modern" JS toolkits. Dependency itself is not a cause but a symptom; websites and companies are no longer incentivized to reduce bloat, so redundant dependencies are hardly pruned.
Remember The Website Obesity Crisis [1] article from 2015? Since then [2] things have only gotten worse, and it's been almost 10 years already; well, it will be next year.
Is it foolish to say that in 10 more years you won't be able to navigate the web on a circa 2015 PC? If nothing changes, it seems like it.
My old MacBook from 2013 with the latest Firefox already cannot handle loading the https://civitai.com web page with its 23.98 MB of JavaScript; it just hangs for half a minute while trying to render this disaster of a web frontend.
It is not just the web; mobile all-in-one apps got so large that a 2013 phone is likewise unable to load them. And guess what, half of them are written on top of the web tech stack. Why can't three-comma-budget companies afford to write native applications?
The state of the web is very sad.
Most people with a fiber connection don't even notice how slow it became.
But when you are still on a 2 Mbps connection, this is just plain horrible. I'm in that situation, and it's terribly painful. Because of this, I can't even consider not using an ad/tracker blocker.
Would love to see this test with uBlock Origin enabled.
Tracking is a bit heavy, but from what I've looked at, the app code is usually much worse. I've looked at what Instagram and JIRA ship during the initial load and it's kinda crazy.
What happens when you use modern apps on an iPhone 3G or the first Nexus phone? I don't understand; do people think that with better, faster computers and network speeds we should focus on smaller and smaller apps and websites?
Your iPhone CPU doesn't suddenly become an iPhone 3G CPU sometimes, but network availability does vary a lot.
You may also one day find yourself on a flaky 3G connection needing access to some web app that first loads twenty megabytes of junk before showing the 1 kB of data you need, and then it's clearer what the problem is here.
New Raspberry Pis compete with smartphones of the past, which in turn have compute comparable to servers of yesteryear. Moore's law has allowed developers to push more and act fast at the cost of being optimal. Many such cases.
Yeah, I'm saying the relevance of that statement is pretty low because most of us don't experience that, certainly not enough to tip the needle of JS culture.
I'm living in France, not off-grid and not very far from a big city. We still don't have a fiber connection available yet. What I'm saying is that there are a lot of us in this situation, even in developed countries.
Those who accept shipping a 10 MB bundle clearly forget that not everyone has the same connection they have in their office.
To the point where the web is mostly unusable if you don't disable ads. I'm not against ads, but the cost is just too high for my day-to-day use of the internet.
So true, we build large complex frameworks, abstractions over abstractions.
Try to make things easy to build and maintain.
But I think the problem is that many developers using these frameworks don't even know the JavaScript basics.
Of course there are smart people at these large companies.
But they try to make things easy instead of teaching people the basics.
We over-engineer web applications and create too many layers to hide the actual language.
20 years ago, every web developer could learn to build websites by just checking the source code.
Now you see the minified JavaScript after a build, and nobody understands how it works; even the developers who built the web application don't recognize the code after the build.
I love JavaScript, but only pure JavaScript, and yes, with all its quirks.
Frameworks don't protect you from the quirks; you have to know them so you don't create new ones, and with all the abstraction layers, you don't even know what you are really building.
Keep it simple, learn JavaScript itself instead of frameworks, and you'll downsize the JavaScript codebase a lot.
Pretty sure the situation wouldn't change if it wasn't minified.
Recently I had to add a couple of mechanics to sd-web-ui, but found out that the "lobe theme" I used was an insufferable pile of intermingled React nonsense. I returned to the sd-web-ui default look, which is written in absolutely straightforward JS, and patched it to my needs easily in half an hour.
This is a perfect example based on a medium-size medium-complexity app. Most sites in TFA are less complex. The delusions that frontend guys are preaching are on a different level than everything else in development.
That spike is only visible when Drupal/Magento/WordPress lenses are selected, and disappears with the top ~1M websites, so I assume it is a very long tailed behavior.
Even if you are not using an high resolution display like the sibling comment says, the images are reasonably sized (they are a few hundreds KB each). I have seen landing pages with 20-30 MB images for no good reason.
Some of my (non-programming) colleagues don't seem to be able to wrap their head around image size. And someone who taught courses to communication/marketing students told me, it took 2 hours to explain resolution and all that to them, and then half even didn't get it. Yeah, I can hear you: "something that easy? must have been a bad teacher," but the concepts are rather weird for non-techies. So those people become responsible for updating the website content, and upload whatever the graphical artists show them. And designers like to zoom in, a lot, so often that's a 20MB png, where a 200kB jpeg would suffice.
> I like the conversation about web performance, but you should make sure you practice what you preach
I'd say the author is practicing what he preaches; the JS is just 4.6 kB.
There are some optimizations [1][2][3] that can be done to the images, but I wouldn't fully disqualify the article because of that.
The websocket connection is kinda odd though, I tried reading the code but didn't fully catch the purpose, it just says something about pointers.
I work with newly-released prisoners and homeless people who are mostly on free Lifeline phones. They typically get 15 GB of data a month. This is used up in 2-3 days on average. After that they can sometimes get 2G data, but it is impossible to use Google Maps to get to an interview, or even to download their email or fill out a job application online. Because the phones are no longer usable, they often get lost or stolen.
I regularly come across web sites with >250MB home pages these days. It doesn't take many of those to kill your entire data allotment.
I only know that I know nothing, but lately libraries (not my own code) have been getting increasingly bloated... mostly true for big-ticket stuff such as Firebase or Meta's libraries.
Anything homemade performs faster than ever, though, so the engines are getting better; my code has stayed as simple as ever and keeps finding improved performance.
It's always bothered me that this dogma exists. Somehow web apps need to be super frugal with code size, while apps distributed on other (native) platforms never have such a problem. Somehow it's the bloated web that blocks access for children in Africa, but they can download bloated Android apps just fine?
Maybe, just maybe, the problem isn't the size of the JavaScript; it's how broken the entire web stack is (specifically caching & PWA) that makes a trivial thing like code size a problem.
If Atlassian didn't take a full minute to update a certain roadmap view on my MBP from January, I would be one step closer to agreeing with you, even if it was still 50 MB.
But I am old enough to remember that Gantt charts didn't use to take that long on old Pentium processors back in school, way before Git was invented.
Another thing is the sheer yuck of it:
If a typical web app were lots of business code, maybe. But when you look at the network tab and it feels like you are looking at the hairball from a sink drain, lots of intertwined trackers catching everything that passes, that is another story.
Yes, the problem with the narrative is that logic-heavy web apps are lumped together with content-based sites that cannot justify their code bloat.
> while apps distributed on other (native) platforms never have such a problem
Could you give an example? I was an android app dev over 5 years ago and there was a huge push for lower app size everywhere. Google even made Android App Bundles to fight this issue specifically.
However big it is, you're only required to download an app once. If next time there is an update that you cannot download, generally that won't block you from using the app at that point in time.
There's always a speed/storage tradeoff. Apps should be more economical too, but you download native apps once, and web apps almost every time you open them. So indeed, caching could help, but how large would your cache have to be? Big enough to hold a 50 MB download for every website you visit? That's an awfully large cache. So I'd say economy is more necessary for web apps than for native/store apps.
I just checked my web usage on my work computer. The last 120k opened URLs were on 8300 unique hosts. 8300 * 50 MB (over 400 GB) is not feasible.
Web sites should be frugal. If content websites, even more so if they even lack comment sections, are getting huge that's a pure "skill issue". It's not our place to speculate why what should have been a content site became a web app.
Are you deliberately not getting the point for some reason? How big is considered bloated in native apps; how big can a native app get before it hurts accessibility because people cannot download it? Is it a few MB?
Web apps are soon going to be matching native apps in terms of complexity, if they aren't already, yet we are still distracted from the real problem and quibbling about some arbitrary and frankly pathetic code size restriction. Fix the root problem with PWA or something.
> How big is considered bloated in native apps; how big can a native app get before it hurts accessibility because people cannot download it?
I'd say if it takes more than 50 megabytes to display a list of text, it's a problem :)
> yet we are still distracted from the real problem
Yes, I agree, "how broken the entire web stack" is the main problem. And the ungodly amounts of javascript you end up with for the simplest problems are the symptom. However, neither caching nor PWA are the specific main problems in the web's brokenness :)
> the ungodly amounts of javascript you end up with for the simplest problems
Replace JavaScript with any native programming language; this has always been a problem with UI programming, but it's not THAT big of a problem because on other platforms people download the app once instead of every time. There is a long list of problems in software engineering, but sorry to disagree with you, "the binary/script is too big" is nowhere near the top.
It sucks that we blew up CDNs for security reasons.
How nice it would be if sites using React could use the React already in cache from visiting another site!
I keep wanting to have some kind of technical answer possible here. Seems hard. And who cares, because massive bundles are what we do now anyways, in most cases. But it sucks that the web app architecture and web resource architecture are both massively less capable than they were 20 years ago.
React was one example of a library that might be used repeatedly. The premise isn't that React's core lib alone is reused (did you include react-dom too?), but that many different libs become cached over time & space.
While I agree with the article, and I am also obsessed with keeping the size as low as possible, I can't stop thinking that the author is somewhat mixing up web apps with websites.
Websites requiring that much JS for very simple static tasks is bloat, but the same bar should not be set for web apps. It should still be required and a high priority to keep the bundle size low, but web apps should be considered a different category. Websites can (and should) function without JS; web apps cannot.
Another thing is that the author only looked at the visible elements, for example on Google Translate and Outlook. What he did not consider is that there are a lot more apps accessible behind the menus.
Take a closer look at Outlook: at first glance, sure, it's just a simple app to display your emails, but in the sidebar you have access to a lot more features like calendar, contacts and the office suite.
The React site part of this is not real. The author ticked "Disable cache" which means the same code (which powers the interactive editable sandboxes they're scrolling by) gets counted over and over and over as if it was different code.
If you untick "Disable cache", it's loaded once and gets cached.
That's also not true? I'm navigating between pages, and it does get served from cache for all subsequent navigations.
The only case when this code gets loaded is literally the first cold load of the entire site — and it's only used for powering live editable interactive sandboxes (surely you'd expect an in-browser development environment to require some client-side code). It doesn't block the initial rendering of the page.
I think the issue isn't with the methodology (disabling cache), but rather the erroneous conclusion that the React.dev website continually requesting data is somehow problematic, when it's a side effect of disabling the browser cache.
Also, FWIW, OP is one of the authors of react.dev and a member of the react core team (not that it's relevant to the objection).
And here I am feeling bad that my WASM C64 emulator doesn't fit into 64 KBytes anymore ;) (it's grown to 116 KBytes total compressed download size over the years).
Jeepers, how does this even happen? I've been developing a fairly complex app with Nuxt, Apollo client, and PrimeVue, and paid no attention to size whatsoever. Yet the most complex page in the app with the most module dependencies loads only 3.8 megs, and that's not even a minified build. Same page from the Nuxt dev server throws 24.4M at me, but I'm pretty sure it's pre-loading everything. Do the big players just not do any code splitting at all?
On the other hand, node_modules weighs in at 601M. Sure I've got space to burn on the dev box, but that's reason #1 I'm not doing yarn zero-install and stuffing all that into the repo.
I'm at 329 kB of JS. I've been building on the same app for 10 years. That's a lot of cruft built up. And I'm still nowhere close to any of these numbers. I've got React, jQuery, lodash, and I don't know what else in there.
I lived in a trailer and had spotty wifi via an AT&T hotspot. I also have Starlink, but trees made it unreliable. You really don't understand how bad JS bloat is until you have a shitty internet connection.
I suspect ChatGPT is so bloated because they (inefficiently) include an entire Markdown parser and code highlighting libraries. Add to that various tracking libraries and you have a big bundle.
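I don't know what their bundle actually contains, but heavy formatting libraries at least don't have to sit in the main bundle; a rough sketch (assuming highlight.js and a bundler that splits dynamic imports):

    // Load the highlighter only when a rendered message actually contains code.
    async function highlightCodeBlocks(root) {
      const blocks = root.querySelectorAll('pre code');
      if (blocks.length === 0) return; // nothing to highlight, nothing to download
      const { default: hljs } = await import('highlight.js'); // separate chunk, cached after first use
      blocks.forEach((block) => hljs.highlightElement(block));
    }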
I also have to wonder what percentage of each of those web logic figures comes from code that's there to silently do telemetry rather than being part of, e.g., a UX framework.
You can see from the recording that it's downloading the same few files from CodeSandbox over and over again, as the iframes used for the examples are unloaded and reloaded on scroll, and because the author disabled caching.
The author could've scrolled forever and the number would've gone up indefinitely.
Exactly, the result would've been different if the author hadn't disabled caching.
In this case it's because the iframes are loaded/unloaded multiple times, but we also spawn web workers, where the same worker script is spawned multiple times (for transpiling code in multiple threads, for example). In all those cases we rely on caching so we don't have to download the same worker code more than once.
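Concretely, the pattern is something like this (a sketch, not the actual sandbox code; the worker path is made up):

    // Several workers are created from the same script URL. The script should be
    // fetched once and served from HTTP cache for the rest; with caching disabled,
    // every `new Worker` call becomes a fresh download.
    const THREADS = navigator.hardwareConcurrency || 4;
    const workers = Array.from({ length: THREADS }, () => new Worker('/transpiler-worker.js'));
    workers.forEach((worker, i) => worker.postMessage({ task: 'warm-up', thread: i }));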
1) In React, re-renders don't destroy the DOM. So nothing would happen if iframes were re-rendered.
2) Rather, we intentionally unload interactive editor preview iframes to improve memory usage when you scroll away from them. We do load them again when you scroll up — and normally that would be instant because the code for them would get cached. But the author has intentionally disabled cache, so as a result they get arbitrarily high numbers when scrolling up and down.
Looking at the screenshots in the article, the readings are wrong. You are reading the first number as if that's the amount of JS being loaded, but it's the second number (i.e. if it says 6 MB / 3MB, it's 3 MB of JavaScript, out of 6 MB total page size).
Coincidentally, I have written a browser extension to navigate Hacker News comments using Vi-style key bindings [1]. It has no compilation steps, no npm. It is mostly a 1kLoC file of vanilla JavaScript.
Modern frameworks are definitely needed for large applications, but there is no need for all that complexity when the scope is reduced.
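Not the extension's actual code, but the core of something like this really is just a handful of lines of vanilla JS (assuming HN's `athing comtr` comment rows):

    // Vi-style navigation: j/k move focus between comment rows.
    let index = -1;
    document.addEventListener('keydown', (event) => {
      if (event.target.closest('input, textarea')) return; // don't hijack typing
      const comments = document.querySelectorAll('tr.athing.comtr');
      if (comments.length === 0) return;
      if (event.key === 'j') index = Math.min(index + 1, comments.length - 1);
      else if (event.key === 'k') index = Math.max(index - 1, 0);
      else return;
      comments[index].scrollIntoView({ block: 'center', behavior: 'smooth' });
    });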
Just checked my SaaS platform. The entire application front end is 1.3 MB. But 300 KB of that is font files and 490 KB is a cryptographic library for blockchain authentication.
The thing is, I didn't use any bundling or minification. Also, it loads faster than most of the websites mentioned in this article, and that's with minimal optimizations and my server being located on the other side of the world.
In general, look at your dependencies. Any popular enough bundler should have a feature or plugin to show bundle statistics (e.g. webpack-bundle-analyzer or rollup-plugin-analyzer). Audit your dependencies to see which ones aren't actually requested or needed, and try to replace them with a finer-grained dependency, a leaner library, or a rewrite, in that order of preference. That alone is enough for most JS apps, because not many people even do that...
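For example, with webpack the analyzer is a couple of lines in the config (a sketch; options vary by setup):

    // webpack.config.js: emit an interactive treemap of what ends up in each bundle.
    const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

    module.exports = {
      // ...your existing entry/output/module config...
      plugins: [
        new BundleAnalyzerPlugin(), // opens a report showing per-dependency sizes
      ],
    };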
Turning on any kind of performance monitoring can help. They usually have ones for every platform. For the web, for example, I used YSlow in the past, but there are several alternatives: https://alternativeto.net/software/yahoo-yslow/
Also worth noting that some of these websites (such as Linear) pre-load other pages once the current page is done loading. The actual JavaScript on the page seems to be about 500 kB (as opposed to 3 MB).
This also happens with other software, and it's arguably even worse. Some mobile apps take up hundreds of megabytes because of all of the bloat that they feel the need to bundle.
I have a large and complex (ERP/MRP/inventory system for electronics) ClojureScript application and I was worried about my compiled code size being around 2.45MB.
This puts it into perspective. I heard complaints about ClojureScript applications being large, which I think is true if you write a "Hello, world!" program, but not true for complex applications.
Also, Google Closure compiler in advanced compilation mode is a great tool. Of course, since it is technically superior, it is not in fashion, and since it is hard to use correctly, people pretend it isn't there.
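For anyone who hasn't used it: the "hard to use correctly" part is mostly that advanced mode renames properties and strips anything it thinks is dead, so everything crossing the compiled/uncompiled boundary needs externs, quoted access, or explicit exports. A rough sketch of the usual gotchas (ClojureScript handles most of this for you):

    // Data coming from uncompiled code (e.g. parsed JSON) must be accessed with
    // bracket/string syntax, otherwise the renamed property won't match.
    function calculateTotal(items) {
      return items.reduce((sum, item) => sum + item['price'], 0);
    }

    // Without an explicit export, advanced mode would rename or remove this
    // function, and external callers would break.
    window['calculateTotal'] = calculateTotal;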
It would be useful if the author sorted the requests by size. Most of this junk is analytics, heatmaps, tracking and all that bullshit anyway. Of course you can easily make <1 MB sites with the most complex UI and functionality, but the business just doesn't demand or care about this while the pennies are flowing. Caching and compression are also very important, especially for virtualised sites like react.dev, which the author did not understand; they are essential features, built into and turned on in every browser, and in a "real-life" test I wouldn't disable them.
At this point, blog posts like these just look like "rage bait" for web developers. What's the point of it? What's the alternative?
The biggest reason this topic is a topic is that browser developer tools let anyone glance at these details easily. If this weren't a low-hanging-fruit blog post, it would also try to figure out whether this is isolated to web development or whether we can see it across the board (hint: look at how big games have become; is it only textures, though?).
I don't understand why you'd consider this "low hanging fruit". What could the author have done to make it a high quality submission in your eyes?
The alternative is to have more awareness of the number of dependencies you really need, of when you actually need a framework with a runtime, and so on. He mentioned a fair share of essentially static landing pages that really have no reason to ship so much crap. And even though it's not explicitly mentioned: this isn't just a potential issue for end users. This code likely makes life hard for the developers as well. With every dependency you get more potential for breaking changes. With every layer it gets harder to understand what's going on. The default shouldn't be to just add whatever you want and figure it out later; the default should be to ask yourself what really needs to be added. Both in terms of actual code, but also layers, technologies, frameworks, libraries.
It would be much more interesting to analyse one of the sites in detail and consider what could be done to reduce the code size, looking at where it's coming from and what it does. Or to find out why it might be that some landing pages are shipping a lot of JS (could be because they are landing pages for web apps?). Or consider performance more holistically (are pages shipping a lot of code, but lazy loading or otherwise optimising it so that pages still perform well?). Or maybe compare these web application sizes to mobile or desktop equivalents too (where it's surely easier to optimise amount of code shipped).
The article is just a lot of vague pointing at sites and insinuating (not even asserting) that they're too bloated or not. I don't get a sense of whether it's worse that Medium ships 3 MB of code or SoundCloud ships 12. There's a lot of bad faith "this is just a text box" for sites which clearly do much more than that too.
> It would be much more interesting to analyse one of the sites in detail and consider what could be done to reduce the code size, looking at where it's coming from and what it does.
No. No it wouldn't. It's not the job of strangers on the internet to do the job of incompetent developers.
> Or to find out why it might be that some landing pages are shipping a lot of JS (could be because they are landing pages for web apps?)
How does being a landing page for a web app excuse downloading 6-10 MB of javascript to show two pages of static text and images?
> Or consider performance more holistically (are pages shipping a lot of code, but lazy loading or otherwise optimising it so that pages still perform well?).
> There's a lot of bad faith "this is just a text box" for sites which clearly do much more than that too.
Not clearly. Not clearly at all.
---
Edit: note on the incompetence.
If you embed the YouTube player in your website, Lighthouse will scream at you for being inefficient and loading too many resources. Nearly all of those issues will come from YouTube.
Lighthouse will helpfully provide you with a help page [1] listing wrappers developed by other people to fix this. Chrome's "performance lead" even penned an article [2] on lazy loading iframes and linked to a third-party YouTube wrapper which promises a 224x speed-up over the official embed.
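The trick those wrappers use is simple enough to sketch by hand (rough version; the real libraries also handle posters, preconnects and accessibility):

    // Click-to-load facade: show a static thumbnail and only inject the heavy
    // YouTube iframe when the user actually clicks it.
    function youtubeFacade(container, videoId) {
      const thumb = document.createElement('img');
      thumb.src = `https://i.ytimg.com/vi/${videoId}/hqdefault.jpg`;
      thumb.style.cursor = 'pointer';
      thumb.addEventListener('click', () => {
        const iframe = document.createElement('iframe');
        iframe.src = `https://www.youtube.com/embed/${videoId}?autoplay=1`;
        iframe.allow = 'autoplay; encrypted-media';
        iframe.width = '560';
        iframe.height = '315';
        container.replaceChild(iframe, thumb);
      });
      container.appendChild(thumb);
    }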
They know. Either they are so incompetent that they cannot do the job themselves, or they don't care.
BTW, web.dev is created by web devs at Google to promote web development best practices. It takes ~3 seconds to display a list of articles, and the client-side-only navigation is broken: https://web.dev/articles
This is not a JavaScript issue. It's a software engineering/people issue. The question is: how do you get people to care about performance, security, reliability, etc.? How do you get organizations to care about these issues?
These are hard problems and people have been complaining about software size forever. Back in the early 90s, it was bloated C++ code.
You will also see that all software continues to use more RAM, more disk space, more network bandwidth, etc. This trend has been going on for decades.
For example, why do we use JSON as an interchange format? It's relatively slow (i.e. creating it and parsing it is slow), nor is it space-efficient. Back in the 1980s, the Unix community created RPC, and the RPC wire formats were much more efficient because they were binary formats. The reason we use JSON is that it makes the developer's life easier, and developers prefer ease of use to performance.
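A quick back-of-the-envelope in JS, just to illustrate the gap (field layout made up for the example):

    // The same two fields as JSON text vs. a fixed binary layout in the spirit of XDR.
    const record = { id: 12345, price: 19.99 };
    const asJson = JSON.stringify(record);   // '{"id":12345,"price":19.99}'
    console.log(asJson.length);              // 26 bytes of text, plus stringify/parse cost

    const buf = new DataView(new ArrayBuffer(12));
    buf.setUint32(0, record.id);             // 4-byte unsigned int
    buf.setFloat64(4, record.price);         // 8-byte float
    console.log(buf.byteLength);             // 12 bytes, no string parsing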
> The question is how do you get people to care about performance, security, reliability, etc...How do you get organizations to care about these issues?
It's very hard to, unless there is a risk to the bottom line.
Let me pose it differently: apparently Zed is a ridiculously fast code editor. Do I want to switch away from my VS Code investment? Or will I deal with the "bloat"?
Many websites listed on the page don't even need JS.[0]
Consider how just a few years ago there used to be an entire suite of alternative frontends to major "web apps"/"social media platforms", which generally worked without any JS and were created & run by volunteers. In general, they all provided superior UX by just not loading megabytes and megabytes of tracking code; this would be the alternative.[1]
Now they are slowly evaporating: not because of lack of interest, but because it was affecting the margins of these companies, and they actively blocked the frontends.[2]
So I think of it this way: these megabytes and megabytes of JS do not serve me, the user. It's just code designed to fill the pockets of giant corporations that is running on my computer. Which is, indeed, quite infuriating.
OK, maybe not even that; after all, you get what you pay for. It's just sad that despite the technological possibilities, this is the norm we have arrived at.
[0]: Of course, there is valid use of JS, it's a wonderful technology. I'm talking about cases where you could pretty much write the whole thing in pure HTML without losing core functionality.
[1]: Well, only if there existed a viable financing model besides "selling user data to the highest bidder" :( technologically at least, it's possible.
[2]: See cases of bibliogram, teddit, libreddit, nitter, ...
You're conflating using JavaScript for web development with "loading megabytes of tracking code". They are not mutually exclusive, nor do they have anything to do with the development side of it. Advertising and tracking are a business decision, not a technical one.
Well, you're entitled not to visit/use those websites if the megabytes of JS don't serve you. That would be the cost of those websites' decision to use JavaScript: people who share your opinion not using them. And cost is a subjective factor, dependent on multiple things and hidden from the user, so when you judge a website to be using JavaScript without need, you're basically saying that you know the cost to the developer better than the developer itself.
An absurd, idiotic situation that is endemic to the whole industry deserves every bit of scorn and ridicule. Developers need to be educated about this so they can make the right choices, instead of remaining complacent agents of a shameful situation. It also strengthens my own resolve to support (through my use) those websites that abide by the original principles of the web: they either deliver actual documents (instead of javascript apps), or offer public APIs to access the plaintext data. There is always an alternative out there.
I often have to close a tab because my 12-core Mac mini starts heating the room and the fan sounds like it's about to fly off its axle. This is not specific to JavaScript, but the ad code doing this is obviously JS.
Speaks more to the general enshittification of the web.
Your editor downloads a 32.6 MB ffmpeg WASM binary on every page load.
Throttling the network to "Slow 3G", it took over four minutes of a broken interface before ffmpeg finally loaded. (It doesn't cache, either.) A port of the Audacity audio editor to the web[1] with WASM takes 2.7 minutes on the same connection, so the binary size is totally reasonable, but I think claiming less than 2 MB is disingenuous.
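Even before shrinking the binary, explicitly caching it would remove the repeat hit; a sketch using the Cache Storage API (cache name and URL are placeholders):

    // Fetch a large .wasm once and serve repeat page loads from Cache Storage,
    // independent of whatever Cache-Control headers the host sends.
    async function loadWasmBytes(url) {
      const cache = await caches.open('wasm-cache-v1');
      let response = await cache.match(url);
      if (!response) {
        response = await fetch(url);
        await cache.put(url, response.clone()); // store a copy for next time
      }
      return response.arrayBuffer();
    }

    // e.g. const bytes = await loadWasmBytes('/ffmpeg-core.wasm');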
Sorry about that; we just focused on the JS bundle and didn't realize how big ffmpeg.wasm is. Thanks for the reminder; as a next step we will try to rebuild ffmpeg.wasm and make it smaller.
Serious question: what is the issue with these particular sizes? I know that the features/look these websites have are definitely achievable with less JS at a higher engineering cost, but what's the problem with it? 10 MB loads in two seconds on an okay-ish desktop connection (correct me if I'm wrong, but most people don't deploy Vercel apps from their phone on a mountain range with a 3G connection). The experience on the websites mentioned is as smooth as it can get; everything is super fast and nice. Every subsequent click is an instant action. That's how the web should look.
Is the problem here that they perform poorly on slower computers/connections? Is that even true? Is there an audience of developers who can't use Vercel or GitLab productively because of that? Any metrics to support that? IMHO, optimizing for bundle size / JS sent over the network is one of the worst performance metrics I could imagine.
But the experience is not as smooth as it can get if you're running slightly older hardware. Gmail and Slack are notorious for bad performance even when the content displayed is rather simple plain text. As a user I don't really get anything in return when developers decide to use complex JS solutions for simple use cases.
I think it serves as a generic metric for bloat, because nobody really optimizes for size, which makes it a good, untainted proxy. As the web gets bloated and slow, the size of websites grows as well, which also invites using size as a metric for bloat.
Smooth, fast, nice: these would be good to measure, but it's much harder. I like an interface response time metric, for example [0]. I always lament that interfaces are getting slow. I get that they are nicer, too, but god damn, why am I waiting a second for anything when my Pentium III with Win XP was near instant?