
In spite of an increase in Internet speed, webpage speeds have not improved - kaonwarb
https://www.nngroup.com/articles/the-need-for-speed/
======
bob1029
It doesn't have to be this way. I am not sure when there was a new rule passed
in software engineering that said that you shall never use server rendering
again and that the client is the only device permitted to render any final
views.

With server-side (or just static HTML if possible), there is so much potential
to amaze your users with performance. I would argue you could even do
something as big as Netflix with pure server-side if you were very careful and
methodical about it. Just throwing your hands up and proclaiming "but it won't
scale!" is how you wind up in a miasma of client rendering, distributed state,
et al., which is ultimately 10x worse than the original scaling problem you
were faced with.

There is a certain performance envelope you will never be able to enter if you
have made the unfortunate decision to lean on client resources for storage or
compute. Distributed anything is almost always a bad idea if you can avoid it,
especially when you involve your users in that picture.

~~~
cactus2093
This type of anti-big-js comment does great on Hacker News and sounds good,
but my personal experience has always been very different. Every large server-
rendered app I've worked on ends up devolving to a mess of quickly thrown
together js/jquery animations, validations, XHR requests, etc. that is a big
pain to work on. You're often doing things like adding the same functionality
twice, once on the server view and once for the manipulated resulting page in
js. Every bit of interactivity/reactivity that product wants to add to the
page feels like a weird hack that doesn't quite belong there, polluting the
simple, declarative model that your views started off as. None of your JS is
unit tested, sometimes not even linted properly because it's mixed into the
templates all over the place. The performance still isn't a given either; your
rendering times can still get out of hand and you end up having to do things
like caching partially rendered page fragments.

The more modern style of heavier client-side js apps lets you use software
development best practices to structure, reuse, and test your code in ways
that are more readable and intuitive. You're still of course free to mangle it
into confusing spaghetti code, but the basic structure often just feels like a
better fit for the domain if you have even a moderate amount of interactivity
on the page(s). As the team and codebase grows the structure starts to pay off
even more in the extensibility it gives you.

There can be more overhead as a trade-off, but for the majority of users these
pages can still be quite usable even if they are burning more cycles on the
users' CPUs, so the trade-offs are often deemed to be worth it. But over time
the overhead is also lessening as e.g. the default behavior of bundlers is
getting smarter and tooling is improving generally. You can even write your
app as js components and then server-side render it if needed, so there's no
need to go back to rails or php even if a blazing fast time to render the page
is a priority.

~~~
andrepd
>The more modern style of heavier client-side js apps lets you use software
development best practices to structure, reuse, and test your code in ways
that are more readable and intuitive.

Sadly, this is probably where the core of the problem lies. "It makes code
more readable and intuitive" is NOT the end goal. Making your job easier or
more convenient is not the end goal. Making a good product for the user is!
Software has got to be the only engineering discipline where people think it's
acceptable to compromise the user experience for the sake of _their_
convenience! I don't want to think too closely about data structures, I'll just
use a list for everything: the users will eat the slowdown, because it makes
my program easier to maintain. I want to program a server in a scripting
language, it's easier for me: the users will eat the slowdown and the company
budget will eat the inefficiency. And so on.

~~~
aryonoco
Code that is easier to read is easier to maintain and easier to debug. Code
that is easier to read and more intuitive will, more often than not, result in
a better product and better experience for the users.

~~~
bananaface
I disagree with the premise. "Readability" is an excuse people use for writing
slow code. It's not an inevitable tradeoff.

Like, most of these people are not saying, "we could do this thing which would
speed up the app by an order of magnitude, but we won't because it will
decrease readability." They have no _idea_ why their code is slow. Many don't
even realise it _is_ slow.

My favourite talking point is to remind people that GTA V can simulate &
render an entire game world 60 times _per second_, 144 times with the right
monitor. Is that a more complex render than Twitter?

Computers are _really_ fast, it doesn't take garbage code to exploit that.

~~~
Semiapies
How long does it take to start up GTA V on your computer?

~~~
bananaface
More than a website that preloads its structure into the cache and then
transfers blocks of 280 characters, a name & a small avatar, rather than
gigabytes worth of compressed textures.

Is the difference because GTA has more "readable" code?

I have other games that _do_ load up quicker than Twitter, which I _do_ think
is damning, but it's not really the point I'm trying to get across here.

~~~
Semiapies
Well, the "less readable code"—ie, the _goddamn mess_ that a lot of game code
is, slapped together barely under deadline by staffs working 80 or more hours
a week—is part of why AAA games like GTA have so many massive bugs requiring
patches immediately after release.

But then, _you_ brought up GTA and games, which aren't even apples and oranges
with a website. Websites—even the Twitter website—don't require GPUs or
dedicated memory, they don't have the advantage of pulling everything from the
local hard drive, and yet they actually work as designed, not merely in a low-
resolution, low-effects mode on computers more than a couple years old.

And while I wouldn't point out the Twitter home page as remotely fast for a
web site, have you actually even _looked_ at it recently? It shows a lot more
than just a few tweets and avatars. It's got images, embedded video, etc.

~~~
bananaface
This is a dumb argument. My point is that readable doesn't imply slow, and
"readability" is not actually the reason slow things are slow, most of the
time. I don't think you even disagree with me.

There's definitely another discussion to be had about why web tech is so
disastrously slow given what computers are capable of, but it's not worth
having here. We're never going to settle that one, and regardless, if you _are_
a web guy, you're stuck with JS.

>It's got images, embedded video, etc.

Bad excuse IMO. Lazy load them.

------
anonyfox
I just rewrote my personal website (
[https://anonyfox.com](https://anonyfox.com) ) to become statically generated
(Zola, runs via GitHub Actions) so the result is just plain and speedy HTML. I
even used a minimal classless "CSS framework" and on top I am hosting
everything via Cloudflare Workers Sites, so visitors should get served right
from CDN edge locations. No JS or tracking included.

As snappy as I could imagine, and I hope that this will make a perceived
difference for visitors.

While average internet speed might increase, I still saw plenty of people
browsing websites primarily on their phone, with bad cellular connections
indoors or via a shared WiFi spot, and it was painful to watch. Hence, my
rewrite (still ongoing).

Do fellow HNers also feel the "need for speed" nowadays?

~~~
user5994461
There is no support for comments in the blog and no pictures at all. No
images, no thumbnails, no banner, no logo, no favicon.

Also, no share button. No top/recommended articles. No view counter.

Once you start adding media it will be quite a bit slower. Once you start
implementing basic features expected by users (comments and related articles
for a blog) it's gonna be yet again slower.

I remember when my first article went viral out of the blue; I think I have to
thank the (useless) share buttons for that. Then it did 1TB of network traffic
over the next few days, largely due to a pair of GIFs. That's how bad pictures
can be.

~~~
FridgeSeal
> no banner, no logo, no favicon...Also, no share button. No top/recommended
> articles. No view counter.

All of which I can live without.

Still the best way of sharing content on the web is via a url, which is
handily provided, so most of these aren't even needed. As for recommended and
view counts, these don't inherently add a lot of value to users. If anything,
it's a nice change to have a page that doesn't try and infer my desires for
once.

~~~
user5994461
Should have said stats instead of counter. As the webmaster, you want to know
how many visitors there are on which pages.

A simple "last 5 articles" in the corner does add value. Users frequently read
more than one article.

~~~
FridgeSeal
You can get that from your logs though?

~~~
user5994461
Usually not, because the hosting doesn't provide access to request logs
(consider github pages, heroku, wordpress, LAMP providers).

------
partiallypro
The main culprit, imo, is javascript. People/clients want more and more complex
things, and javascript and its libraries are how those get delivered. Image
compression, minification... it helps, but if the page needs a lot of JS, it's
going to be slower.

Slightly off topic, but I have a site that fully loads in ~2 seconds but
Google's new "Page Speed Insights" (which is tied to webmaster tools now) gives
it a lower score than a page that takes literally 45 seconds to fully load.
Please someone at Google explain this to me. At least GTMetrix/Pingdom
actually makes sense.

~~~
austincheney
More specifically the culprit is generally unnecessary string parsing. Every
CSS selector, such as querySelectors and jQuery operations, requires parsing a
string. Doing away with that nonsense and learning the DOM API could make a
JavaScript application anywhere from 1200x-10000x faster (not an
exaggeration).

Most JavaScript developers will give up a lung before giving up accessing the
page via selector strings. Suggestions to that effect are generally taken as
personal insults and immediately met with dire hostility.
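For readers who want to try the comparison being argued about, here is a rough
sketch of the two approaches; the element id `item-42` and the timing loop are
invented for illustration, and any measured difference will vary by browser and
page.

```js
// Hypothetical micro-benchmark: selector-string lookup vs. direct DOM method.
// Run in a page that contains an element with id="item-42".
const ITERATIONS = 1_000_000;

console.time("querySelector (string parsing)");
for (let i = 0; i < ITERATIONS; i++) {
  document.querySelector("#item-42"); // selector string is parsed on each call
}
console.timeEnd("querySelector (string parsing)");

console.time("getElementById (direct DOM API)");
for (let i = 0; i < ITERATIONS; i++) {
  document.getElementById("item-42"); // no selector parsing involved
}
console.timeEnd("getElementById (direct DOM API)");
```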

~~~
chubot
[citation needed] There are a lot of other reasons why client side JS is slow,
including page reflows, bad use of the network, etc. I'm not a front end dev
but I have fixed many performance problems before, and I've never seen parsing
CSS selectors as a bottleneck.

I've seen some data driven work like this:
[https://v8.dev/blog/cost-of-javascript-2019](https://v8.dev/blog/cost-of-javascript-2019)

I don't think they mentioned parsing CSS selectors anywhere. Shipping too much
code is a problem, because megabytes of JS is expensive to parse, but IIUC
that is distinct from your claim.

~~~
austincheney
> I'm not a front end dev

But I am.

You are correct in that there are many other opportunities to further increase
performance. If performance were that important you would also shift your
attention to equally improve code execution elsewhere in your application
stack.

> and I've never seen parsing CSS selectors as a bottleneck.

It doesn't matter what our opinions are or what we have/haven't seen. The only
thing that matters is what the performance measurements say in numbers.

EDIT

To everybody asking for numbers I recommend conducting comparative benchmarks
using a perf tool. Here is a good one:

[http://jsbench.github.io/](http://jsbench.github.io/)

I posted a performance example to HN before and people twisted themselves into
knots to ignore numbers they could easily validate and reproduce themselves.

~~~
esperent
> > I'm not a front end dev

> But I am.

So am I, and I disagree with you.

Here's an actual benchmark that you can run[1] (why did you not share an
actual benchmark?). I get, on my old and slow Android, 500k ops/sec for
querySelector.

At 60fps, that allows you to do ~8000 selections per frame, assuming you're
not doing anything else. In reality, any app I've ever encountered probably
has a few hundred querySelector calls, in total, and if the app is well
written, the majority of these are cached, meaning they only get called once,
not once per frame.

[1]
[https://www.measurethat.net/Benchmarks/Show/2488/0/getelemen...](https://www.measurethat.net/Benchmarks/Show/2488/0/getelementbyid-vs-queryselector#latest_results_block)
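To make the caching point concrete, here is a minimal sketch of the pattern
being described; the `.toolbar` selector and the render functions are invented
for illustration.

```js
// Bad: re-querying the DOM on every frame means re-parsing the selector
// and re-walking the tree ~60 times per second.
function renderEveryFrame() {
  const toolbar = document.querySelector(".toolbar"); // runs each frame
  toolbar.classList.toggle("active");
  requestAnimationFrame(renderEveryFrame);
}

// Better: query once, keep the reference, reuse it on every frame.
const toolbar = document.querySelector(".toolbar"); // runs exactly once
function renderCached() {
  toolbar.classList.toggle("active");
  requestAnimationFrame(renderCached);
}
requestAnimationFrame(renderCached);
```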

~~~
austincheney
500k ops is ridiculously slow for accessing the DOM on any modern hardware. It
sounds fast when you aren't comparing it to anything. Compare that to a similar
approach that makes use of the standard DOM methods and no selectors. The only
thing querySelectors do faster is allow accessing elements by attribute.

The reason I refuse to post numbers myself is because:

1. I provided a tool where people can run any manner of discovery for their
own numbers and see performance differences in various approaches.

2. People, when presented with a valid comparison, will irrationally ignore
results that challenge their opinion.

EDIT:

I looked closer at the measurethat experiments and it seems there is some sort
of bias. If you run the same experiment using an element already present in
the page, the results are the same for querySelector but 50% greater for the
getElementById approach. Other perf tools I have tried did not display this
kind of bias, and they also reported substantially higher numbers for all user
agents, most especially for desktop Firefox.

~~~
daerogami
In response to 1, you're asking people to do their own research when you have
stated the claim. The burden is on you.

As for 2, if it's such a problem, just don't make the argument. It genuinely
comes off as wanting people to agree with you rather than having any real
interest in engaging in actual discussion.

~~~
austincheney
> The burden is on you.

Then just disagree with me.

------
superkuh
Over the last 5 years there has been a dramatic shift away from HTML web pages
to javascript web applications on sites that have absolutely no need to be an
application. They are the cause of increased load times. And of them, there's
a growing proportion of URLs that simply _never_ load at all unless you
execute javascript.

This makes them large in MB but that's not the true cause of the problem. The
true cause is all the external calls for loading JS from other sites and then
the time to attempt to execute that and build the actual webpage.

~~~
Softcadbury
Yes you're right, but keep in mind that with an SPA, only the first load is
slower; after that you don't need to reload again when changing pages.

~~~
superkuh
Try to open 10 actual html websites. Now try to open 10 SPAs. Tell me which
feels slower. SPA slowness is not exclusively from all the third party JS and
CSS loads. A lot of it is just innate to being an _application_ instead of a
document.

------
bdickason
The article doesn’t dig into the real meaty topic - why modern websites are
slow. My guess would be 3rd party advertising as the primary culprit. I worked
at an ad network for years and the number of js files embedded which then
loaded other js files was insane. Sometimes you’d get bounced between 10-15
systems before your ad was loaded. And even then, it usually wasn’t optimized
well and was a memory hog. I still notice that some mobile websites (e.g. cnn)
crash when loading 3p ads.

By contrast, sites like google/Facebook (and apps like Instagram or
Snapchat) are incredibly well optimized as they stay within their own 1st
party ad tech.

~~~
kevincox
Do you know why modern sites are slow? Because time isn't invested in making
them faster. It isn't a technical problem, people take shortcuts that affect
the speed. They will continue to do so unless the website operators decide
that it is unacceptable. If some news website decided that their page needed
to load in 1s on a 256Mbps+10ms connection they would achieve that, with
external ads and more. However they haven't decided that it is a priority, so
they keep adding more junk to achieve other goals.

It's simply Parkinson's Law.

~~~
euske
Exactly this, and it's happening everywhere in the software world.

Why bother writing binary search? Linear search is fast enough. Why bother
sharing pointers? Copy all the strings. Easy. Data compression? Forget it.
Deleting files is totally irrelevant, etc, etc.

~~~
ehsankia
Here's another way of looking at it. As long as you're below a certain
threshold, users would much rather you implement 5 new features than spend
that time squeezing 100ms on the load time.

Another related side-reason why it's slow is we use higher and higher levels
of abstraction, exactly to increase development velocity and be able to add
more features quicker. I could write a native app in pure assembly and have it
be blazing fast, or I can write a webapp on top of web frameworks running in
Electron, but in a fraction of the time. As long as my app is usable, I'll get
all the users while the other person is still trying to finish their app.

------
sarego
As someone who just recently worked on reducing page load times, these were
found to be the main issues:

1. Loading large images (below the fold/hidden) on first load

2. Marketing tags - innumerable and out of control

3. Executing non-critical JS before page load

4. Loading non-critical CSS before page load

Overall we managed to get page load times down by 50% on average by taking
care of these.

~~~
Wowfunhappy
Can someone who understands more about web tech than me please explain why
images aren't loaded in progressive order? Assets should be downloaded in the
order they appear on the page, so that an image at the bottom of the page
never causes an image at the top to load more slowly. I assume there's a
reason.

I understand the desire to parallelize resources, but if my download speed is
maxed out, it's clear what should get priority. I'm also aware that lazy
loading exists, but as a user I find this causes content to load too _late_. I
_do_ want the page to preload content, I just wish stuff in my viewport got
priority.

At minimum, it seems to me there ought to be a way for developers to specify
loading order.

~~~
dgb23
That is actually the case today!

But it is an opt-in feature, which is not supported in older browsers.

In modern frontend development we are heavily optimizing images now. Lazy
loading is one thing, the other is optimizing sizes (based on viewports) and
optimizing formats (if possible).

This often means you generate (and cache) images on the fly or at build-time,
including resizing, cropping, compressing and so on.
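As a concrete illustration of what that opt-in looks like in markup (the file
names and sizes here are invented; `loading="lazy"`, `srcset`/`sizes` and
`<picture>` are standard, though older browsers simply ignore them):

```html
<!-- Modern format with a fallback, multiple sizes, and lazy loading -->
<picture>
  <source type="image/webp"
          srcset="hero-480.webp 480w, hero-1024.webp 1024w"
          sizes="(max-width: 600px) 480px, 1024px">
  <img src="hero-1024.jpg"
       srcset="hero-480.jpg 480w, hero-1024.jpg 1024w"
       sizes="(max-width: 600px) 480px, 1024px"
       loading="lazy"
       width="1024" height="576"
       alt="Hero image">
</picture>
```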

~~~
Bnshsysjab
Is putting all assets into a single png/svg to reduce total requests a dead
practice now?

~~~
tbarbugli
I guess http/2 support on CDNs made this a useless (and tedious) optimization.

~~~
Polylactic_acid
There was also the issue that ancient versions of IE could only load a few
requests at the same time.

------
sgloutnikov
This post reminded me of this quote:

"The hope is that the progress in hardware will cure all software ills.
However, a critical observer may observe that software manages to outgrow
hardware in size and sluggishness. Other observers had noted this for some
time before, indeed the trend was becoming obvious as early as 1987." -
Niklaus Wirth

~~~
the-pigeon
Yeah this is the way it always will be.

Speed is determined by business requirements, not capabilities.

I have hundreds of opportunities for optimizations in my apps. I could make
them fly. But the business side says the current speed is good enough and to
focus on new functionality. So that's what I do.

~~~
Polylactic_acid
This is certainly how I feel. I see so many things that people complain about
in my work app, like not being able to submit with enter or breaking the back
button, and it makes me sad because this is not the fault of JS. We could have
had all of that working but just didn't have the time to make it work, because
new features are more important.

------
rayiner
It’s comical. I’ve got 2 gbps fiber on a 10 gbps internal network hooked up to
a Linux machine with a 5 GHz Core i7 10700k. Web browsing is just okay. It’s
not instant like my 256k SDSL was on a 300 MHz PII running NT4 or BeOS.
Really, there isn’t much point having over 100 mbps for browsing. Typical web
pages make so many small requests that don’t even keep a TCP connection open
long enough to use the full bandwidth (due to TCP’s automatic window sizing it
takes some time for the packet rate to ramp up).

~~~
thereisnospork
As someone else with gratuitously fast internet I almost wish I could
preemptively load/render all links off of whatever page I'm on in case I
decide to click on one. (I imagine this would be fairly wasteful).

~~~
MrStonedOne
I got a fair increase in responsiveness on my site by preloading links on
hover and pre-rendering (pre-fetching all sub-resources) on mousedown (instead
of waiting for mouse up).
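A minimal sketch of that idea, assuming plain same-origin links; both
`rel="prefetch"` and `rel="prerender"` are only hints the browser may ignore
(and `prerender` has since been deprecated in some browsers), so treat this as
illustrative rather than guaranteed behaviour.

```js
// Prefetch a link's destination when the user hovers over it, and ask the
// browser to prerender it (fetch sub-resources too) as soon as the mouse
// button goes down, instead of waiting for the full click.
function addHint(rel, href) {
  const hint = document.createElement("link");
  hint.rel = rel;   // "prefetch" or "prerender"; both are hints, not commands
  hint.href = href;
  document.head.appendChild(hint);
}

document.querySelectorAll("a[href]").forEach((link) => {
  link.addEventListener("mouseover", () => addHint("prefetch", link.href), { once: true });
  link.addEventListener("mousedown", () => addHint("prerender", link.href), { once: true });
});
```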

------
joncrane
I've recently started using a Firefox extension called uMatrix and all I can
say is, install that and start using your normal web pages and you'll very
quickly see exactly why web pages take so long to load. The number and size of
external assets that get loaded on many websites is literally insane.

~~~
colmvp
I've been using uMatrix for ages and it was baffling to me how some websites
that are literally just nice looking blogs have an unreal number (i.e. 500+)
of external dependencies.

~~~
jandrese
I love uMatrix but it can be a serious hassle to get an embedded video to play
sometimes. Sometimes I'll allow scripts from the embedding site and suddenly
there are dozens or hundreds of new dependencies popping up and not my video.
At this point I really have to ask myself if it is worth it. Maybe if I'm
lucky it's a YouTube video and I can track it down on YouTube's site, but if
not it's going to be a big headache and a lot of reloads before the stupid
thing plays.

~~~
setr
I just got in the habit of turning it off when I'm either too lazy to bother,
or I'm on a site I'll probably never visit again. Sites that are already set up
ofc generally stay that way.

The big headache is when you have a site half-set-up -- it's correct for all of
your usage, and then you try something new and you get a video that doesn't
load, and you sit there waiting until you realize uMatrix probably found
something new.

------
joncrane
I think this is a general problem with technology as a whole.

Remember when channel changes on TVs were instantaneous? Somehow along the way
the cable/TV companies introduced latency in the channel changes and people
just accepted it as the new normal.

Phones and computers were at one point very fast to respond; but now we
tolerate odd latencies at some points. Apps for your phone have gotten much
much bigger and more bloated. Ever noticed how long it takes to kill an app
and restart it? Ever notice how much more often you have to do that, even on a
5-month old flagship phone? It's not just web pages, it's everything. The
rampant consumption of resources (memory, CPU, bandwidth, whatever) has
outpaced the provisioning of new resources. I think it might just be the
nature of progress, but I hate it.

~~~
formerly_proven
> Somehow along the way the cable/TV companies introduced latency in the
> channel changes and people just accepted it as the new normal.

The technical reason is that digital TV is a heavily compressed signal [1]
(used to be MPEG2, perhaps they have moved on to h.264) with a GOP (group of
pictures) length that is usually around 0.5-2 seconds. When you switch
channels, the MPEG-2 decoder in your receiver needs to wait until a new GOP
starts, because there is no meaningful way to decode a GOP that's "in
progress".

[1] And the technical reason for the compression is that analog HD needs a lot
more bandwidth than analog NTSC/PAL/SECAM, while raw HD transmission would
need an absurd amount of bandwidth per channel (about a gigabit/s for
1080p30). So HD television pretty much requires the use of digital
compression. Efficient digital video compression requires GOP structures.

~~~
zozbot234
Most video players can decode an "in progress" stream just fine. This
obviously involves quite a few artifacts for the first 0.5–2 seconds or so,
but seeing artifacts is generally preferable to seeing a totally blank screen.

~~~
magicalhippo
Our cable box has three decoders. My gf can watch one channel and record two
others at the same time.

Yet does it use any of those extra decoders when not recording to proactively
decode the previous and next channel, or something smart like that? No, of
course not...

~~~
jlokier
Recording doesn't use a full decoder.

The incoming channel data stream is saved as-is. It will need a demultiplexer
to separate out one channel from the multi-channel data stream, but it won't
need to decode that stream, which is the intensive bit. Decoding happens when
you play it back later.

~~~
magicalhippo
Ah, I did check out the specs again and it does indeed say it has a bunch of
tuners, not decoders.

Mental slip then. Thanks for the correction, very interesting.

------
MaxBarraclough
Wirth's Law: _Software is getting slower more rapidly than hardware is
becoming faster_.

[https://en.wikipedia.org/wiki/Wirth%27s_law](https://en.wikipedia.org/wiki/Wirth%27s_law)

~~~
mnm1
This is often intentional. Take a look at any OS or software with animations.
Slowness for slowness' sake. The macOS Spaces switch has such a slow animation,
it's completely useless. Actually, macOS has a ton of animations to slow
things down, but luckily most can be turned off. Not the Spaces thing. Android
animations are unbearable and slow things down majorly. Luckily they can be
turned off, but only by unlocking developer mode and going in there. It's
clear whoever designed these things has never heard of UX in their lives. And
since these products are coming from companies like Google and Apple, which
have UX teams, it leads me to think that most UX people are complete idiots.
Or UX is simply not a priority at all and these companies are too stupid to
assemble a UX team for their products. Hard to say which is the case.

~~~
AkelaA
Or, perhaps, maybe you're just not the target audience and those animations
are designed as visual indicators for less experienced users?

Those animations are absolutely a product of well-researched UX design, it's
just design that's intended to make the UI more accessible by showing users
the flow of information and how the structure of the interface changes in a
visual manner, rather than design intended to address the needs of power
users. The animations used in the Spaces feature on macOS are a good example of
that, where apps and desktops slide and zoom around to make it absolutely
obvious that the apps you have open haven't just disappeared. That's quite
important for a fairly advanced desktop manipulation feature like that.

Modern operating systems are designed for broad audiences, and that includes
people who aren't as savvy with technology as we are. That means accepting
some level of tradeoff between the speed that pro users want and UI
accessibility that necessitates slowing things down somewhat. In the case of
desktop OSs there are still usually ways for power users to disable that stuff,
and of course Terminal for those who don't really need a UI at all. And then
there are a lot of different flavors of Linux that make no attempt at appealing
to a less technical audience.

But just because you're not the target audience doesn't mean the UX team are
"idiots" or that the companies are "stupid". The number of novice or casual
users is orders of magnitude higher than that of power users who care only
about efficiency, and for better or worse those users always come first.

~~~
mnm1
I'll believe they have UX teams when they offer an easily accessible option to
turn those things off. There's zero reason why they can't target both use
cases with a simple toggle to turn animations on and off. The stupidity is
compounded when this exists but is not easily accessible. Those that haven't
thought of that yet are indeed stupid (Apple). Some videogames have the same
issue with unskippable cut scenes. Am I not the target audience there either?
If not, then who is? Who wants to watch the same cut-scene a thousand times?
The UX is equally horrific in both cases and in both cases, clearly no thought
went into the UX whatsoever.

------
btbuildem
Part of the problem is analogous to traffic congestion / highway lane counts:
"if you build it, they will come". More lanes get built but more cars appear
to fill them. Faster connection speeds allow more stuff to be included, and
the human tolerance for latency (sub 0.1s?) hasn't changed, so we accept it.

Web sites and apps are saddled with advertising content and data collection
code; these things often get priority over actual site content. They use
bandwidth and computing resources, in effect slowing everything down.
Arguably, that's the price we pay for "free internet"?

Finally (and some others have mentioned this), the software development
practices are partially to blame. The younger generation of devs were taught
to throw resources at problems, that dev time is the bottleneck and not cpu or
memory -- and it shows. And that's those with some formal education; many devs
are self-taught, and the artifacts of their learning manifest in the code they
write. This is particularly true in the JS community, which seems hellbent on
reinventing the wheel instead of standing on the shoulders of giants.

------
weka
I was on AT&T's website the other day
([https://www.att.com/](https://www.att.com/)) because I am a customer and I
was just astonished how blatantly they're abusing redirection and just the
general speed of the page (i.e. it takes 5-10 seconds to load your "myATT"
profile page on 250Mbps up/down).

It's 2020. This should not be that hard. I've worked at a bank and know that
"customer data" is top priority but at what point does the buck stop? Just
because you can, doesn't mean you should.

------
jameslk
Hundreds of comments yet not one questions whether the premise of the
article might be flawed. They're using "onload" event times and calling this
"webpage speed" (there's no such thing as webpage speed btw[0]). It's
well known that onload is not a very reliable metric for visual perception of
page loading[1] (visual perception of loading = what most think of as "page
speed"); that's why we have paint metrics (LCP, FCP, FMP, SI, etc). Tools like
PageSpeed Insights/Lighthouse don't even bother to track onload.

In fact, HTTPArchive (the source of data the article uses) has been tracking a
lot of performance metrics, not just onload. Some have been falling, some have
been rising, and it depends on the device/connection. Also, shaving 1 second
off a metric can make a huge difference. These stats are interesting to ponder,
but you can't really make any sweeping judgements from them.

It looks like people just want to use this opportunity to complain about
JavaScript and third party scripts, but for above-the-fold rendering, this
isn't usually the only issue for most websites. Frequently it's actually CSS
blocking rendering or other random things like huge amounts of HTML, invisible
fonts, or below-the-fold images choking the connection. Of course, this
doesn't fit the narrative of server-side vs client-side dev very well, so
maybe that's why there's hundreds of comments here without any of them being
an ounce skeptical of the article itself.

[0].
[https://developers.google.com/web/fundamentals/performance/s...](https://developers.google.com/web/fundamentals/performance/speed-tools#myth_1)

[1]. [https://www.stevesouders.com/blog/2013/05/13/moving-beyond-w...](https://www.stevesouders.com/blog/2013/05/13/moving-beyond-window-onload/)

------
speeder
One thing that is bothering me is how browsers themselves are becoming
ridiculously slow and complicated.

I made a pure HTML and CSS site, and it still takes several seconds to load no
matter how much I optimize it. After I launched some in-browser profiling
tools, I saw that most of the time is spent with the browser building and
rebuilding the DOM and whatnot several times. The download of all the data
takes 0.2 seconds; all the rest of the time is the browser rendering stuff and
tasks waiting on each other to finish.

------
Spearchucker
Yeah. Because modern tech is bloat. Started on a JavaScript-based search tool
the other day. ALL the JavaScript is hand-coded. No libraries, frameworks,
packages. No ads. Just some HTML, bare, bare-bones CSS, and vanilla
JavaScript. Data is sent to the browser in the page, where the user can filter
as needed.

It's early days for sure, and lots of the code was written to work first and
be efficient second, so it will grow over the next few weeks. But even when
finished it will be nowhere near the !speed or size of modern web
apps/pages/things.

[https://www.wittenburg.co.uk/Rc/Tyres/default.html](https://www.wittenburg.co.uk/Rc/Tyres/default.html)

It is possible.

------
zelphirkalt
If anything, it has slowed down with some websites, or those websites did not
exist back then, because they would not have been possible.

Just this morning, when I opened my browser profile with Atlassian tabs
(Atlassian needs to be contained in its own profile), there were perhaps 7 or
8 tabs which were loaded, because they are pinned. It took approximately
15-20s on this 7th-gen Core i7, under 100% CPU usage on all cores at the same
time, to render all of those tabs. Such a thing used to be unthinkable. Only in
current times do we put up with such a state of affairs.

As a result I had Bitbucket showing me a repository page, Jira showing me a task
list, and a couple of wiki pages, which render something akin to markdown. Wow,
what an utter waste of computing time and energy for such a simple outcome. In
my own wiki, which covers more or less the same amount of actually used
features, that stuff would have been rendered within 1-2s and with no real CPU
usage at all.

Perhaps this is an outcome of pushing more and more functionality into
frontend client-side JS, instead of rendering templates on the server-side. As
a business customer, why would I be entitled to any computation time on their
servers and a good user experience?

------
austincheney
Not a surprise. Most people writing commercial front end code have absolutely
no idea how to do their jobs without 300mb of framework code. That alone, being
able to write to the basic standards and understand simple data structures,
qualifies me for my higher than average salary as a front end developer without
having to do any real work at work.

~~~
jerf
An uncompressed RGB 1920x1080 bitmap is 6,220,800 bytes. When your webpage is
heavier than a straight-up, uncompressed bitmap of it would be... something's
gone wrong.
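For anyone checking the figure, it's just width times height times 3 bytes per
RGB pixel:

```js
// 1920 x 1080 pixels, 3 bytes (R, G, B) per pixel
console.log(1920 * 1080 * 3); // 6220800 bytes, i.e. roughly 5.9 MiB
```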

We're not quite there, since web pages are generally more than one screen, but
we're getting close. Motivated searchers could probably find a concrete
example of such a page somewhere.

~~~
lstamour
What’s funny is that’s how Opera Mini achieves its great compression for 2G
and 3G network use... it renders mostly server-side, with link text and
position offsets/sizes, last I used it...

~~~
BenjiWiebe
Browsing the internet with opera mini actually feels pretty responsive, until
you come to a site that plain doesn't work.

------
temporama1
JavaScript is not the problem.

Computers are AMAZINGLY fast, EVEN running JavaScript. Most of us have
forgotten how fast computers actually are.

The problem is classes calling functions calling functions calling libraries
calling libraries... etc., etc.

Just look at the depth of a typical stack trace when an error is thrown. It's
crazy. This problem is not specific to JavaScript. Just look at your average
Spring Boot webapp - hundreds of thousands of lines of code, often to do very
simple things.

It's possible to program sanely, and have your program run very fast. Even in
JavaScript.

~~~
easterncalculus
I think the problem is that languages like Javascript and object oriented
languages in general actually incentivize this kind of design. Most of the
champions of OOP rarely ever look at stack traces or anything relating to
lower-level stuff (in my experience, in general). Then you take that overhead
to the browser and expect it to scale to millions of users. It just doesn't
make sense. No amount of TCO is going to fix the problem either.

APIs are going to be used as they're written, and as documented. So as much as
there is a problem with people choosing to do things wrong, I think the course
correction of those people is a strong enough force. At least in comparison to
when the design _incentivizes_ bad performance. There's basically nothing but
complaining to the sky when the 'right' way is actually terrible in practice.

------
tines
This assumes that the thing that should be held constant is complexity, and
that the loadtimes will therefore decrease. On the contrary, loadtime itself
is the thing being held (more or less) constant, and the complexity is the
(increasing) variable.

Progress is not being able to do the same things we used to do, only faster,
but being able to do more in the same amount of time.

These seem to be equivalent, but they're not, because the first is merely
additive, but the second is multiplicative.

------
draklor40
As a backend dev now working on frontend tasks, primarily with JavaScript
and TypeScript, I think I might have an insight. Server-side engineering is in
some sense "well-defined". Software such as the JVM and operating systems behave
in a rather well-defined manner, support for features is predictable, and by
front-end standards things move slowly, providing time for the developer to
understand the platform and use it to the best of his/her ability.

The browser platforms are a total mess. An insane number of APIs, a
combinatorial explosion of what feature is supported on what platform. And web
applications move fast. REAL fast. Features are rolled out in days, fixes in
hours, and frameworks come and go out of fashion in weeks. It is no longer
possible for devs to keep up with this tide of change, and they seem to end up
resorting to libraries for even trivial tasks, just to get around this
problem of fancy APIs and their incorrect implementations and backwards
compatibility. And needless to say, every dependency comes with its own
burden.

Web platforms are kind of a PITA to work with. On one hand Chrome/Google wants
to redefine the web to suit their requirements, and FF, the only other big
enough voice, really lags in terms of JS performance. Most devs nowadays end up
simply testing on Chrome and leaving it at that. My simple browser benchmarks
show anywhere between a 5-30% penalty in performance for FF vs Chrome.

Unless we slow down the pace of browser API changes and stop releasing a new
version of JS every year and forcing developers to adopt it, I guess the slow
web will be here to stay for a while.

------
alkonaut
Despite an increase in computer speed, software isn’t faster. It _does more_
(the good case) or it’s simply sloppy, but that’s not necessarily a bad thing
because it means it was cheaper/faster to develop.

Same with web pages. You deliver more and/or you can be sloppier in
development to save dev time and money. Shaking a dependency tree for a web
app, or improving the startup time for a client side app costs time. That’s
time that can be spent either adding features or building another product
entirely, both of which often have better ROI than making a snappier product.

~~~
tuatoru
Why are you valuing the time of a developer so much more highly than the time
of all the users of the web page?

Page load time affects every user; additional features only improve life for a
few of them.

~~~
jtsiskin
More features, which the user usually prefers over speed.

~~~
tuatoru
Not from my observation.

Most people seem to get more confused and hesitant when pages are loaded with
more features, most of which are irrelevant to their needs of the moment. (Of
course flat design makes this hesitation worse.)

And theory talks about "cognitive overload" and "choice paralysis".

------
CM30
You could probably say the exact same thing about video game consoles and
loading times/download speeds/whatever. The consoles got more and more powerful,
but the games still take about the same amount of time (or more) to load as they
used to, and take longer to download.

And the reasoning there is the same as for this issue for webpage speed or
congestion on roads; the more resources/power is available for use, the more
society will take advantage of it.

The faster internet connections get, the more web developers will take
advantage of that speed to deliver types of sites/apps that weren't possible
before (or even more tracking scripts than ever). The more powerful video game
systems get, the more game developers will take advantage of that power for
fancier graphics and larger worlds and more complex systems. The more road
capacity we get, the more people will end up driving as their main means of
transport.

There's a fancy term for that somewhere (and it's mentioned on this site all
the time), but I can't think of it right now.

~~~
avani
I think you may be referring to the Jevons paradox:
[https://en.wikipedia.org/wiki/Jevons_paradox](https://en.wikipedia.org/wiki/Jevons_paradox)

~~~
CM30
Yeah, that was it.

Thanks for the link!

------
strstr
This is caused by induced demand. This comes up a lot for car traffic [0]. If
you build wider roads, you will almost always just see an increase in traffic,
up to the point where the roads are full again. The metaphor is not perfect,
but I think it is fairly apt.

Expanding infrastructure increases what people can do, and so people do more
things. In some cases, it just decreases the cost of engineering (you can use
more abstractions to implement things more quickly, but at the cost of slower
loading sites). But in the end, you should not expect wider pipes to improve
speeds.

[0]
[https://www.bloomberg.com/news/articles/2018-09-06/traffic-j...](https://www.bloomberg.com/news/articles/2018-09-06/traffic-jam-blame-induced-demand)

------
jbob2000
It’s the marketing team’s fault. I proposed a common, standardized solution
for showing promotions on our website, but no... they wanted iframes so their
designers could use a WYSIWYG editor to generate HTML for the promotions. This
editor mostly generates SVGs, which are then loaded in to the iframes on my
page. Most of our pages have 5-10 of these iframes.

Can someone from Flexitive.com please call up my marketing coworkers and tell
them that they aren’t supposed to use that tool _for actual production code_?

Can someone also call up my VP and tell them they are causing huge performance
issues by implementing some brief text and an image with iframes?

Can someone fire all of the project managers involved in this for pushing me
towards this solution because of the looming deadline?

~~~
oblio
Is your company making money? If so, they're doing their job :-)

~~~
jbob2000
We're one of the most profitable companies in my country, and we're probably
on the S&P 500.

------
manigandham
The reason websites have gotten worse is because they don't make performance a
priority. That's all it is. Most sites optimize for ad revenue and developer
time (which lowers costs) instead.

------
lmilcin
And they will not.

The reason is that web designers treat newly improved performance as an excuse
to either throw in more load (more graphics, higher-quality graphics, more
scripts, etc.) or to let themselves produce faster at the cost of performance.

Nowadays it is not difficult to build really responsive websites. It just
seems designers have other priorities.

------
vendiddy
It frustrates me that the same applies to CPU power, RAM, and disk space. We
have orders of magnitude more of each, but the responsiveness of apps remains
the same. At least from my subjective experience.

If someone has a good explanation of what has happened, I'd love to know the
cause and what can be done to fix this.

I understand that some of this has gone to programmer productivity and
increased capabilities for our apps, but what we've gotten doesn't seem
proportional at all.

------
bangonkeyboard
I frequent one forum only through a self-hosted custom proxy. This proxy
downloads the requested page HTML, parses the DOM, strips out scripts and
other non-content, and performs extensive node operations of searching and
copying and link-rewriting and insertion of my own JS and CSS, all in plain
dumb unoptimized PHP.

Even with all of this extra work and indirection, loading and navigating pages
through the proxy is still much faster than accessing the site directly.

------
tonymet
I'm developing a tiny-site search engine. Upvote if you think this product
would interest you. The catalog would be sites that load in < 2s with minimal JS.

~~~
slx26
I'm making a site that's very light, minimal everything... except for a
section that will contain some games. How do you handle those cases?

~~~
tonymet
One of the thresholds I'm using is site size – so if all HTML + JS < 500kb
that would qualify.

------
mensetmanusman
I tested content blockers on iOS.

Going to whatever random media site without it enabled is a couple MB per page
load (the size of an SNES ROM... for text!).

With content blockers enabled it was a couple KB per page load.

Three orders of magnitude difference in webpage size due to data harvesting...

Now, imagine how much infrastructure savings we would have if suddenly web
browsing was even just 1 order less data usage. Would be fun to calculate the
CO2 emission savings, ha.

------
habosa
Here on HN we like to complain about JS frameworks and Single Page Apps. Yes,
they can be slow. But they also power some great interactive web experiences
like Figma or Slack that just aren't feasible to build any other way.

The low hanging fruit here is content websites (news, blogs, etc) which are
loaded down with hundreds of tracking scripts, big ads, and tons of JS that
has nothing to do with the content the user came to read.

Try loading this page (which is far from the worst):
[https://www.theverge.com/21351770/google-pixel-4a-review-cam...](https://www.theverge.com/21351770/google-pixel-4a-review-camera-specs-price)

Privacy Badger reported 30 (!!!) tracking scripts on that page. Even with PB
blocking those, it still takes ~15s before the page is usable on my MacBook
Pro with a fast connection.

It's just a bunch of text and some picture galleries. It loads like it's an
IDE.

------
paradite
In other news:

In spite of an increase in mobile CPU speed, mobile phone startup times have
not improved (in fact they became slower).

In spite of an increase in desktop CPU speed, the time taken to open AAA games
has not improved.

In spite of an increase in elevator speed, the time taken to reach the median
floor of a building has not improved.

My point is, the "webpage" has evolved the same way as mobile phones, AAA games
and buildings - it has more content and features compared to 10 years ago. And
there is really no reason or need to make it faster than it is right now
(2-3 seconds is a comfortable waiting time for most people).

To put things in perspective:

Time taken to do a bank transfer is now 2-3 seconds of bank website load and a
few clicks (still much to improve on) instead of physically visiting a branch
/ ATM.

Time taken to start editing a word document is now 2-3 seconds of Google Drive
load instead of hours of MS Office Word installation.

Time taken to start a video conference is now 2-3 seconds of Zoom/Teams load
instead of minutes of Skype installation.

~~~
SilasX
>My point is, "webpage" has evolved the same way as mobile phones, AAA games
and buildings - it has more content and features compared to 10 years ago. And
there is really no reason or need to making it faster than it is right now
(2-3 seconds is a comfortable waiting time for most people).

What features? I don't know anything substantive a site can deliver to me
today that it was not capable of 10 years ago. The last major advance in
functionality was probably AJAX, but that doesn't inherently require huge
slowdowns and was around more than 10 years ago.

The rest of your comparisons are dubious:

>Time taken to do a bank transfer is now 2-3 seconds of bank website load and
a few clicks (still much to improve on) instead of physically visiting a
branch / ATM.

This is the same class of argument as saying (per Scott Adams), "yeah, 40
mph may seem like a bad top speed for a sports car, but you have to compare
it to hopping". (Or to the sports cars of 1910.) Yes, bank sites are faster than
going to an ATM. Are they faster than bank sites 20 years ago? Not in my
experience.

>Time taken to start editing a word document is now 2-3 seconds of Google
Drive load instead of hours of MS Office Word installation.

Also not comparable: you pay the MS Word installation time-cost once, and then
all future launches are near instant. (This also applies to your Skype
installation example.)

~~~
paradite
> What features? I don't know anything substantive a site can deliver to me
> today that it was not capable of 10 years ago. The last major advance in
> functionality was probably AJAX, but that doesn't inherently require huge
> slowdowns and was around more than 10 years ago.

And... Hacker News, just in time, to the rescue:

[https://news.ycombinator.com/item?id=24054382](https://news.ycombinator.com/item?id=24054382)

~~~
SilasX
Okay, a site that was announced less than 24 hours ago. That's not what a
typical site looks like, and it doesn't demonstrate your claim that most of
these bloated sites are only bloated to provide advanced functionality they
couldn't provide otherwise. Did Buzzfeed or the typical news site just start
offering video editing?

------
vlovich123
Parkinson's law at work.

Employees building the web pages are rewarded for doing "work". Work typically
means adding code, whether it's features, telemetry, "refactoring" etc. More
code is generally slower than no code.

That's why you see something like Android Go for entry-level devices & similar
"lite" versions targeting those regions. These will have the same problem too
over time because even entry-level devices get more powerful over time.

The problem is that organizations don't have good tools to evaluate whether a
feature is worth the cost, so there's no back pressure except for the market
itself picking alternate solutions (assuming those are options - sometimes
they may not be if you're looking at browsers or operating systems, where
generally a "compatibility" layer has been defined that everyone needs to
implement).

------
oblio
While I agree with the idea and I am not happy about slow apps, the truth is,
this is focused on technical details.

People don't care about speed or beauty or anything other than the application
helping them achieve their goals. If they can do more with current tech than
they could with tech 10-20 years ago, they're happy.

~~~
Dahoon
>People don't care about speed

Every statistically backed piece of research on customer behaviour I have ever
seen says otherwise. The more you slow down the page or app, the less customers
like and use it or buy the product being sold. As someone with a homemade site
for our business I can say that it is extremely easy to be faster than 95% of
sites out there, and it makes a huge difference, also on Google. Getting a tiny
business with a homemade website into the top 1-3 on Google was mind-bafflingly
easy because everyone uses too many external sources and preschool-level code.
Especially the so-called experts. Most are experts in bloat.

~~~
oblio
If you're small and have actual competition or not that great of a market fit,
sure.

If you're Google, Facebook, Oracle, etc, nobody cares. They just endure it to
get what they really want.

------
the_gipsy
I've made a multiplayer webgame ([https://qdice.wtf](https://qdice.wtf)) that
is under 400kb _total_ download on first load [1]. Even when booting into a
running game it's not much higher.

Load times and bloat are one of my pet peeves, that's why I optimized a lot
for this, although there is _still_ room for improvement.

Everything is self hosted, no tracking bullshit, no external requests. I used
Elm, which apart from being nice for learning FP, has a very small footprint
compared to other DOM abstractions.

[1] Last time I looked, it might have grown a tiny bit due to UGC. I don't
have access to a computer rn.

------
throwaway0a5e
To quote an exec at a major CDN:

"wider pipe fit more shit"

(yes he actually said that, to an entire department, the context was that
people will fill the pipe up with junk if they're not careful and it made more
room to deliver value by not sucking)

~~~
thrownaway954
Exactly this. I remember when programmers took the time to make sure their
programs didn't take up a lot of memory. As we got more RAM, many became lazy
about memory optimization cause well... the computer has plenty. Same thing
here with webpages. There was a time when you needed to optimize your site
cause of the modem that everyone used. Now everyone has DSL or higher, so there
isn't an incentive to optimize your site.

------
jrnichols
Of course not. People remain convinced that the internet will cease to exist
without advertisements all over the place. Web pages are now 10MB+ in size,
making 20 different DNS calls, all of which add latency. And for what? To serve
up advertisements wrapped around (or laying themselves over) the content that
we came to read in the first place.

Maybe I'm just old, but I fondly remember web pages that loaded reasonably
fast over a 56k modem. These days, if I put anything on the web, I try to
optimize it the best I can. Text only, minimal CSS, no javascript if at all
possible.

I hope more people start doing that.

------
joshspankit
With respect for the people who talk about the technologies involved, server-
side vs client, bandwidth vs latency, etc, etc, etc.

I don’t think any of that is _really_ the core of it.

Humanity sent spaceships to the moon with way less power than a smart _watch_.

After watching tech evolve over my lifetime, the real issue feels like it’s
about the psychological choice:

_When more power is available we fill it with either less efficient code,
more layers of abstraction, or more features._

(Besides outliers) this seems to be true no matter what the tech, and is
especially obvious on the web.

------
kbuchanan
To me it’s more evidence that increased speed and reduced latency is not where
our real preferences lie: we may be more interested in the _capabilities_ of
the technology, which have undoubtedly improved.

~~~
ClumsyPilot
To me the increasing tendency of Boeings to crash is evidence that safety is not
where our real preferences lie.

To me the increasing tendency of junk stocks to get AAA ratings is evidence that
profitable investment is not where our real preferences lie.

To me the increased prevalence of obesity and heart disease is evidence that
staying healthy and alive is not where our real preferences lie.

------
flyGuyOnTheSly
Wages constantly increase due to economic prosperity (on average, I realize
they have dwindled in the past 50 odd years), and every single year the
majority of people have nothing in their savings accounts.

It's been that way since the dawn of time. [0]

This is a human economy problem, not a technological one imho.

If you give a programmer a cookie, she is going to ask for a glass of milk.

[0] [https://www.ancient.eu/article/1012/ancient-egyptian-taxes--...](https://www.ancient.eu/article/1012/ancient-egyptian-taxes--the-cattle-count/)

------
ilaksh
Here's an idea I posted on reddit yesterday. Seemed like it was shadowbanned
or just entirely ignored.

# Problem

Websites are bloated and slow. Sometimes we just want to be able to find
information quickly without having to worry about the web page freezing up or
accidentally downloading 50MB of random JavaScript. Etc. Note that I know that
you can turn JavaScript off, but this is a more comprehensive idea.

# Idea

What if there was a network of websites that followed a protocol (basically
limiting the content for performance) and you could be sure if you stayed in
that network, you would have a super fast browsing experience?

# FastWeb Protocol

* No JavaScript

* Single file web page with CSS bundled

* No font downloads

* Maximum of 20KB HTML in page.

* Maximum of 20KB of images.

* No more than 4 images.

* Links to non-fastweb pages or media must be marked with a special data attribute.

* Total page transmission time < 200 ms.

* Initial transmission start < 125 ms. (test has to be from a nearby server).

* (Controversial) No TLS (https for encryption). Reason being that TLS handshake etc. takes a massive amount of time. I know this will be controversial because people are concerned about governments persecuting people who write dissenting opinions on the internet. My thought is that there is still quite a lot of information that in most cases is unlikely to be subject to this, and in countries or cases where that isn't the case, maybe another protocol (like MostlyFastWeb) could work. Or let's try to fix our horrible governments? But to me if the primary focus is on a fast web browsing experience, requiring a whole bunch of expensive encryption handshaking etc. is too counterproductive.

# FastWeb Test

This is a simple crawler that accesses a domain or path and verifies that all
pages therein follow the FastWeb Protocol. Then it records its results to a
database that the FastWeb Extension can access.
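
A minimal sketch of what such a test could look like for a single page, in Python, assuming the limits above (the `check_page` helper is illustrative, not an existing tool, and a real test would also crawl, time the transfer, and record results to a database):

    # Hypothetical single-page check against the FastWeb limits above.
    import urllib.request
    from html.parser import HTMLParser

    class PageStats(HTMLParser):
        def __init__(self):
            super().__init__()
            self.scripts = 0
            self.images = 0
            self.external_css = 0

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "script":
                self.scripts += 1
            elif tag == "img":
                self.images += 1
            elif tag == "link" and attrs.get("rel") == "stylesheet":
                self.external_css += 1

    def check_page(url, max_html=20 * 1024, max_images=4):
        html = urllib.request.urlopen(url, timeout=5).read()
        stats = PageStats()
        stats.feed(html.decode("utf-8", errors="replace"))
        problems = []
        if len(html) > max_html:
            problems.append(f"HTML is {len(html)} bytes (limit {max_html})")
        if stats.scripts:
            problems.append(f"{stats.scripts} <script> tag(s) found")
        if stats.external_css:
            problems.append("external stylesheet found (CSS must be bundled)")
        if stats.images > max_images:
            problems.append(f"{stats.images} images (limit {max_images})")
        return problems

    for issue in check_page("http://example.com/"):
        print("FAIL:", issue)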

# FastWeb Extension

Examines links (in a background thread) and marks those that are on
domains/pages that have failed tests, or highlights ones that have passed
tests.

------
draaglom
Original data here:

[https://httparchive.org/reports/loading-
speed?start=earliest...](https://httparchive.org/reports/loading-
speed?start=earliest&end=latest&view=list)

The degree to which desktop load times are stable over 10 years is in itself
interesting and deserves more curiosity than just saying "javascript bad"

Plausible alternate hypotheses to consider for why little improvement:

* Perhaps this is evidence for control theory at work, ie website operators are actively trading responsiveness for functionality and development speed, converging on a stable local maximum?

* Perhaps load times are primarily determined by something other than raw bandwidth (e.g. latency, which has not improved as much)?

* Perhaps this is more measuring the stability of the test environment than a fact about the wider world?

[https://httparchive.org/faq#what-changes-have-been-made-
to-t...](https://httparchive.org/faq#what-changes-have-been-made-to-the-test-
environment-that-might-affect-the-data)

If this list of changes is accurate, that last point is probably a significant
factor -- note that e.g. there's no mention of increased desktop bandwidth
since 2013.

------
StopHammoTime
While I don't disagree about this problem, most of the comments in this thread
are lacking significant context and ignoring obvious problems with server side
rendering.

WordPress is arguably the best-known and most prominent example of SSR. It is
horrible, and a vanilla install of WordPress generally returns content in 2-3
seconds.

While JavaScript adds bloat to the initial page load, it generally reduces
significantly (or eliminates entirely) further page loads on a domain. For
example, if I have a Vue app, it might take an extra second to load, but then
it will never have to load any client-side assets again (technically).

The other problem with most of these arguments is that they are disingenuous
when it comes to payloads and computing. It may take a significant amount of
processing power to generate a JSON payload, but it will most certainly take
an even larger amount to generate all of the scaffolding that goes with a
normal HTML page. Redrawing the HTML on each page load also increases overall
network traffic, duplicates load across every page on the service (see
WordPress, again), and centralises front-end complexity in the backend.

------
seangrogg
I do feel that this is a multifaceted problem.

On one hand, yes, end-user expectations have gone up. Back in the early
aughts it was perfectly fine to wait ~8 seconds for an image to load block by
block, kicking the layout and content around as it did so - and that was the
status quo. It was fine. Nowadays if I don't get all icons and thumbnail-ready
images near-immediately I assume something is wrong at some layer.

Another factor is how things are going over the wire. It's easy to point to
web developers and say "Why not use SSR everywhere?" while they'll point back
and say "Client-side rendering lets the server scale better". As with most
such complaints, the truth is often somewhere in the middle - SSR should be
aggressively used for static content but if you have a non-trivial computation
that scales linearly it is worth considering offloading to the client,
especially if you're running commodity hardware.

Then there's the question of what we're doing. It very much used to be the
case that most everything I did was over an unsecured connection and virtually
all interactions resulted in page navigation/refresh - never anywhere close to
being below my perception. Nowadays, many actions are below my perception (or
at least eagerly evaluated such that it seems they are) while non-trivial
actions are often going through SSL, requests balanced across multiple app
servers, tokens being authed, and eldritch horrors are invoked by name in
UTF-8 and somehow it all gets back to me around the same time as those page
refreshes were back in the day.

This most certainly isn't to say that we don't have room for improvement: we
most certainly do. But like most systems, as the capabilities improve so,
seemingly, do the requirements and interactions that need to be supported over
it.

------
sriku
On a parallel front, it feels like something similar has happened with
computers too. Laptops have gotten better (more cores, more RAM, SSDs) over
the years, but the most frequent interaction still seems to be me waiting for
the computer to respond, because every tiny application or website now
consumes hundreds of megabytes of RAM and creates memory pressure.

------
redoPop
Load times in this article are attributed to httparchive.org, which gathers
data using WebPageTest. [1] By default, WebPageTest emulates a 5Mbps down /
1Mbps up connection speed for all tests, to provide a more stable basis for
performance comparisons. httparchive.org's load times therefore aren't
affected by improvements in general network speed.

Am I missing something here? httparchive.org is not an appropriate source for
the comparison this article makes. A large repository of RUM data would be
needed for that comparison.

Counterintuitively, the stability of page load times in httparchive.org
suggests that page performance hasn't improved or worsened enough to make much
difference on a 5Mbps connection.

[1] [https://httparchive.org/faq#how-is-the-data-
gathered](https://httparchive.org/faq#how-is-the-data-gathered)

------
mbar84
The worst case of this for me was a completely static site which (sans images)
loaded in under 100ms on my local machine. I inlined styles, gave all images a
width and height so there is no reflow when an image arrives, avoided chaining
of CSS resources, deferred execution of a single JavaScript bundle, gzip,
caching, the works. Admittedly it was a simple page, but hey, if I can't do it
right there, where then?

Anyway, it all went to s __t as soon as another guy was tasked with adding
share buttons (which I have never once in my life used and am not sure anybody
else has ever used).

I won't optimize any pages over which I don't have complete control. Maybe if
a project has a CI/CD setup that will catch any performance regressions, but
other than that, too much effort, thankless anyway, and on any project with
multiple frontend devs, the code is a commons and the typical tragedy is only a
matter of time.

------
greyhair
Get over it. The modern web sucks. My 2003 Thinkpad T41 was never a raging
powerhouse, but it was "usable". It is no longer usable. Nothing in the
hardware has changed, but the software, the browsers, and the web at large
have all changed drastically.

I do embedded system co-design. I write software for a living, but I work
closely with the hardware teams, including (at my last employer) ASIC
features. I have to swap hats between 'software' and 'hardware' all the time.
And there are clearly times that I throw the software team under the bus.

"Hey the last product ran in 128MB, but memory is cheap, so can we have 4GB
this time? We want to split all the prior pthreads across containers!"

You think I am joking?

Browsers and web content have done the same.

"But look at all the shiny features?"

I don't want the shiny. I just want the content, and fast.

------
mc32
Network latency. Bandwidth is the new MHz.

~~~
superkuh
This could be part of it. The shift to mobile computers, which are necessarily
wireless, means random round-trip times due to physics, which means TCP
back-off. That, combined with the tendency to require innumerable external
JS/CDN/etc. resources that each require setting up a new TCP connection, works
together to make mobile computers extra slow at loading webpages.

------
johannes1813
This reminds me a lot of a Freakonomics podcast episode
([https://freakonomics.com/2010/02/06/the-dangers-of-safety-
fu...](https://freakonomics.com/2010/02/06/the-dangers-of-safety-full-
transcript/)) where they discuss different cases where increased safety
measures just encouraged people to take more risk, resulting in the same or
even increased numbers of accidents happening. A good example is that as
football helmets have gotten more protective, players have started hitting
harder and leading with their head more.

Devs have been given better baseline performance for free based on internet
speeds, and adjust their thinking around writing software quickly vs.
performantly accordingly, so we stay in one place from an overall performance
standpoint.

~~~
pravus
This is known as Jevons paradox in economics and the classic example in modern
times is rates of total electricity usage going up while devices have become
ever more energy efficient.

[https://en.wikipedia.org/wiki/Jevons_paradox](https://en.wikipedia.org/wiki/Jevons_paradox)

------
zoomablemind
Another factor is a wider use of all sorts of CMS (WordPress etc) for content
presentation, combined with often slower/underpowered shared hosting and
script heavy themes.

On some cheap hosts it may take a second just to start up the server
instance, and that's before any of the outgoing requests are made!

~~~
commandlinefan
Yep - to the executives, saving a couple of (theoretical) hours of development
work is worth paying a few extra seconds per page load. Of course, the
customers hate it, but the customers can't go anywhere else, because the
executives everywhere else are looking for ways to trade product quality for
(imaginary) time to market.

------
rutherblood
There's a really funny thing that happens with websites on phones. I try
clicking something shortly after the page loads, and just when I click it,
bam! Something new loads on the page, all these elements shift up or down,
what I just clicked on isn't in its place anymore, and I end up clicking on
something completely different. All this happens in the split second between
the moment I look and decide to click on something and the moment my finger
actually does the tap. This is so, so common across the mobile web. Such a
stupid little thing, but also highly annoying. Even if we could collectively
solve this one little thing, I would consider our UI sensibility to have
improved over the years.

~~~
cel1ne
I know this from Android, but not from iOS.

------
bbulkow
I am surprised to not see a product reason.

There is one.

Engagement falls off when there is a delay in experience past a certain point,
usually considered around 100ms to 150ms, with extreme drop-off at a second or
higher. This has to do with human perception and can be measured through a/b
analysis and similar.

Engagement does not get better if you go faster past that point. Past that
point, you should have a richer experience, more things on the page, whatever
you want - or reduce cost by spending less on engineering. Certainly don't
spend more money on a 'feature' (speed) that doesn't return money.

Ad networks are run on deadline scheduling: find the best ad in 50 ms, rather
than any old ad as quickly as possible.

Haven't others who have been involved with engagement analysis found the same?

------
mdavis6890
There are three main contributing factors:

\- Users are not the customers, so there's little point in optimizing for
their experience, except to the extent that it impacts the number of users
your customers reach with ads.

\- Users do not favor faster websites, so as long as you meet a minimum
performance bar so they don't leave before the ads load, there's little to
gain from optimizing the speed.

\- For users that do care about load-time, it's hard to know before visiting a
page whether it's fast or not, and by that point the publisher has already
been paid to show you the ads.

A helpful solution would be to show the load time as a hover-over above links,
so that you can decide not to visit pages with long load times.

------
mostlystatic
HTTP Archive uses an emulated 3G connection to measure load times, so of
course faster internet for real users won't make the reported load times go
down.

Am I missing something?

[https://httparchive.org/faq](https://httparchive.org/faq)

------
swyx
Wirth's Law reborn: "Software is getting slower more rapidly than hardware is
becoming faster."

[https://en.wikipedia.org/wiki/Wirth%27s_law](https://en.wikipedia.org/wiki/Wirth%27s_law)

------
Tomis02
The problem of course isn't the internet "speed" but latency. ISPs advertise
hundreds of Mbps but conveniently forget to mention latency, average packet
loss and other connection quality metrics.

~~~
opportune
Correct me if I’m wrong but I believe ISPs use a combination of hardware and
software to “throttle down” network connections as they attempt to download
more data. For example if I try to download a 10GB file on my personal
computer I’ll start off at something like 40mbps and it will take 15 seconds
before they start allowing me to scale up to 300mbps. I assume that when
downloading things like websites, which should only take tens or hundreds of
ms, this unthrottling could also be a significant factor in addition to
latency, depending on what the throttling curve looks like.

Also ISPs oversell capacity, which they’ve probably always done, so even if
you’re paying for a large bandwidth that doesn’t mean you’ll ever get it.

------
XCSme
Could it also be that server resources in general are lower and that there
are more clients per instance than before?

With all this virtualization, Docker containers and really cheap shared
hosting plans it feels like there are thousands of users served by a single
core from a single server. Whenever I access a page that is cached by
Cloudflare it usually loads really fast, even if it has a lot of JavaScript
and media.

The problem with JavaScript usually occurs on low-end devices. On my powerful
PC most of the loading time is spent waiting for the DNS or server to send me
the resources.

------
wuxb
You will find gems just by checking the third-party cookies associated with
those websites. I can see cookies from dexdem.net, doubleclick.net, and
facebook.com on my chase.com account home page.

------
thewileyone
I used to support a web-based enterprise system used worldwide. After the
business management escalated a complaint that the system was slow, I showed
them that all the bells and whistles, doohickeys, widgets, etc. that they
insisted on as features took up nearly 10MB to download, while Amazon, in
comparison, took less than 1MB.

This was taken under advisement and then we got new feature requests the next
day that would just add more crap to the download size. But they never
complained again.

------
rootsudo
I remember reading a story about how engineers at YouTube started receiving
more tickets/complaints about connectivity in Africa after reducing the page
loading time.

They were confused, thinking that with the bandwidth limitations and their
previous statistics it didn't make sense - something about previously not
having useful statistics.

It turns out that by reducing the page size, they finally made it possible for
African users to load the page - just not the video.

I thought that was interesting; it puts it right up there with that email-at-
lightspeed copypasta.

------
climate-code
This is an example of Jevons paradox - increases in the efficiency of use of
a resource lead to increases in consumption of that resource -
[https://en.wikipedia.org/wiki/Jevons_paradox](https://en.wikipedia.org/wiki/Jevons_paradox)

I wrote about this here - [https://adgefficiency.com/jevons-
paradox/](https://adgefficiency.com/jevons-paradox/)

------
ErikAugust
Part of the reason I created Trim
([https://beta.trimread.com](https://beta.trimread.com)) was simply the
realization that I didn’t want to load 2-7 MB of junk just to read an article.

Trim allows you to often chop off 50% - 99% of the page weight without using
any in-browser JavaScript.

Example:
[https://beta.trimread.com/articles/31074](https://beta.trimread.com/articles/31074)

------
k__
I'm thinking about this problem often lately.

Just a few weeks ago I saw a size comparison of React and Preact pages. While
Preact is touted as a super slim React alternative, in real-life tests the
problem was the big components and not the framework.

This could imply that we need to slim down code at a different level of the
frontend stack. Maybe UI kits?

This could also imply that frontend devs simply don't know how to write
concise code or don't care as much as they say.

------
malwarebytess
It's a hard pill to swallow that, 20 years after I started using the internet,
websites perform worse on vastly superior hardware, especially on smartphones.

------
wilg
Is it really that surprising that developers would rather spend a performance
budget on adding new features instead of further improving performance?

------
ZainRiz
This sounds a lot like the old argument of developers not being careful about
how much memory/cpu they use. Engineers have been complaining about this since
the 70s!

As hardware improves, developers realize that computing time is way cheaper
than developer time.

Users have a certain latency that they accept. As long as the developer
doesn't exceed that threshold, optimizing for dev time usually pays off.

------
HumblyTossed
Because the focus is now on time to market and developer speed and anything
else anyone can think of _except_ the end user experience.

------
b0rsuk
Website Obesity Crisis, a witty talk by Maciej Cegłowski (2015):
[https://www.webdirections.org/blog/the-website-obesity-
crisi...](https://www.webdirections.org/blog/the-website-obesity-crisis/)

I would post the text version, but somehow the CDN is down.

------
KingOfCoders
Same with most computers. Recently migrated from an iMac Pro to a Ryzen 3900X
with Linux. Linux feels so much faster than the iMac, which when I use it now
feels very sluggish for an 8-core Xeon 32GB machine. This just shows how
computers are kept slow so you want to upgrade </foilhatoff>

------
sreekotay
I think this misses the point. Latency becomes the dominant factor very, very
quickly. Increases in connection speed only really help for large file
downloads (and sometimes not those) and increased user/device concurrency.

Edit: this also explains the REVERSE trend in mobile.

------
EGreg
Oh yeah try this URL on your mobile phone:

[https://yang2020.app/events](https://yang2020.app/events)

It has lazyloading of images, components, memoizing the tabs, batching
requests, the works. Actually it can be made a lot faster using browser
caching.

------
jorblumesea
The issue at its core is HTML. It was not designed for the complex, rich
interfaces and interactivity that modern web users want. So JS is used, which
is slow and needs to be loaded.

The heavy use of JS is basically just hacking around the core structure of the
internet and its HTML/DOM problems.

------
andirk
"Progressive Enhancement" touts using the basic building blocks first and
growing from there, but it hasn't been updated to explain when and where to
offload the computations. Are there any best practices in circulation about
balancing the workload?

------
zzo38computer
Try to avoid CSS, JavaScript, animations of any kind, and especially inline
pictures and videos. This can reduce the time needed to load it greatly.
(There are times where the things I listed are useful, but they should
generally be avoided.)

------
darkhorse13
Part of the problem is that modern JS frameworks make it incredibly easy to
mess up performance. I have seen mediocre devs (not bad, but not great) make a
mess of what should be simple sites. Not blaming the frameworks, but it is
still a problem to be addressed.

------
hpen
I think most software is built with some level of tolerance for performance,
and the stack, algorithms, and features implemented are chosen to meet that
tolerance. Basically, as hardware gets faster, it's seen as a way to make
software cheaper.

------
wintorez
I think we need to start differentiating between webpage speeds and web
application speeds. Namely, a webpage would work if I disabled JavaScript in
my browser, but a web application would not. By this definition, webpage
speeds have improved a lot.

------
fouc
How dare you! You dang web developers developing with your fancy high speed
internets. Stop that.

Turn on network throttling, make it about 1mbps (that's 125 KB/s, which is
insanely fast!).

Also turn off asset caching. Always experience the REAL speed of your dang
website!

Thanks :)

------
michaelcampbell
What's the old adage; software gets slower faster than hardware gets faster?

------
Konohamaru
Niklaus's Law strikes again.

~~~
schmudde
I think it's a good law, for what it's Wirth.

------
staycoolboy
throughput vs latency.

If I want to download a 1GB file, I do a TLS handshake once and then send
huge TCP packets. I can get almost 50MB/s from my AWS S3 bucket on my 1Gb
fiber, so it takes ~20 seconds.

However, if I split that 1GB up into 1,000,000 1KB files, I incur the
handshake penalty 1,000,000 times, plus all of the OTHER overhead from
nginx/apache and the file system or whatever is serving the requests, so my
effective bandwidth is significantly lower. I just did an SCP experiment, got
8MB/s average download speed, and cancelled the download.

The problem here is throughput is great with few big files, but hasn't
improved with lots of little files.
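
(Back-of-the-envelope, with assumed numbers - ~50MB/s of usable throughput and ~30ms of connection/request overhead per file, served serially with no connection reuse, which is the worst case - just to show the shape of it:)

    # Toy model: the same 1GB as one file vs a million tiny files.
    # 50 MB/s throughput and 30 ms per-file overhead are assumptions.
    THROUGHPUT = 50_000_000      # bytes per second
    PER_FILE_OVERHEAD = 0.030    # seconds (handshake, request, FS, ...)

    def transfer_time(num_files, bytes_per_file):
        payload = num_files * bytes_per_file / THROUGHPUT
        overhead = num_files * PER_FILE_OVERHEAD
        return payload + overhead

    print(transfer_time(1, 10**9))          # ~20 s: bandwidth-bound
    print(transfer_time(1_000_000, 1_000))  # ~30,020 s: overhead-bound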

------
jll29
As I told people in '93, it will all go downhill from here (when folks started
using GIFs instead of "<hr>" tags...)

------
jungletime
This would be a useful metric to use in ranking websites. The bloat of a web
page seems to be inversely related to the value of the contents. The best
websites have little in the way of graphics, but are information dense.

------
andy_ppp
This is a phenomenon of all progress though right? The more roads you have the
more cars you get, the faster a computer the slower the software. Andy’s law:
the faster something gets the lazier humans can become.

------
teabee89
There's a fancy name for this effect: Jevons paradox
[https://en.wikipedia.org/wiki/Jevons_paradox](https://en.wikipedia.org/wiki/Jevons_paradox)

------
MrStonedOne
I blame TCP startup.

And the fact that Chrome hasn't added HTTP/3 to mainline even as a flag, even
though the version their own sites use has been enabled by default in mainline
Chrome for years.

------
peetle
Despite an increase in speed, people insist on adding more to the web.

------
ffggvv
This isn't that surprising when you see that they mean “throughput” and not
“latency” when they talk about speed.

Webpages aren't super large files, so load time depends more on the latency of
the request than on Mbps.

~~~
em-bee
exactly. internet speed never was the issue. i am in a place where
international network speed is a fraction of the domestic speeds. (often less
than one mbit) yet websites are still just as fast. it all depends on how fast
the server responds to the request, and almost never on how much the site has
to load, unless there is a larger amount of images involved.

------
dariosalvi78
Server side rendering: page loads super fast, cool. You click on a menu: wait
for the new page to come. Click on another button: wait. Click, wait, click,
wait, click, wait...

------
snow_mac
Blame React, Angular or any of the other javascript libraries....

------
emptyparadise
I wonder how much worse things will get when 5G becomes widespread.

~~~
sumoboy
You know for sure your phone bill will increase.

------
calebm
"A task will expand to consume all available resources."

------
sirjaz
The larger problem is that the web was never meant to be used the way we use
it. We should be making cross-platform apps that use simple data feeds from
remote sources

------
Jeaye
Title should be "Despite an increase in Internet speed, webpage speeds have
not improved", since webpage speeds have not acted in spite of internet speed.

------
correct_horse
Blinn's law says: As technology advances, rendering time remains constant.
Usually applied to computer (i.e. 3D) graphics, but seems applicable here too.

------
azinman2
It’s much like the problem of induced demand in transportation: more capacity
brings more traffic. More JavaScript. More ad networks. More images. More
frameworks.

------
sirjaz
The problem is that we are trying to make websites do what they were never
meant to. We should be making cross-platform apps that use simple data feeds.

------
bilater
But could the argument be made that we are loading a shit load more content so
even though it feels slower you're getting a richer UX to work with?

~~~
Dahoon
Richer is often worse though. So the worst of both sides.

------
WalterBright
Compiler speeds haven't improved much, either. The reason is simple - as
computer speeds improved, we asked the compilers to do more.

------
JJMcJ
There are still sites with very simple HTML/JS/CSS, and they load so fast it's
almost like magic.

------
perfunctory
Every time I see a headline like this one I have to think about two things -
Jevons paradox and climate change.

------
MattGaiser
We discuss the ever-growing size of web pages here regularly, so this is not
all that surprising.

------
pmarreck
In spite of massive increases in processing power, boot times have also not
improved

------
MangoCoffee
> webpage speeds have not improved

we keep abusing it beyond what a web (HTML) page is supposed to do.

------
sushshshsh
Really? My text only websites that I've written and hosted for myself are
really snappy. I wouldn't know the feeling.

All I needed to do was spend a weekend scraping everything I needed so that I
could self host it and avoid all the ridiculous network/cpu/ram bloat from
browsing the "mainstream" web

------
dainiusse
The same as smartphones - CPUs have improved, but the battery still only lasts a day.

------
tekkiweb
Try to reduce the size of visuals and media like images, videos, etc.

------
zwaps
To give a negative example I recently came across, check out
[https://www.scmp.com/](https://www.scmp.com/)

Now, it's pretty much a normal news website in that it shows a long list of
articles, some pictures and then text.

I am running a standard laptop computer given to me by my company. My internet
connection is pretty fast. Even with ads blocked on the entire website, that
thing is slooow.

1\. The pictures have an effect where they are rendered in increasing quality
over time, supposedly so you see them earlier. This doesn't work, as they load
much more slowly than normal HTML pictures that load instantly given my
internet connection.

2\. The scrolling is more than sluggish. This is, in part, because the website
only loads new content after you scroll down. So instead of having a website
that loads and where you can just scroll, which would make TONS of sense for a
website where you quickly want to check the headlines, you have this terrible
experience where every scrolling lags and induces a new "loading screen".

3\. If you click on an article, it is loaded as a single page app with an
extra loading screen, which is somehow slow for some reason.

4\. Once in the article, the scrolling disaster continues. But now even the
text loads slowly while you scroll. How can you not just have the text load
instantly? It's a news website. I want to read! I don't want to scroll, wait
for the load, and then continue to read.

5\. There is a second scrolling bar besides my browser's scrolling bar. Why?
Who thought that's a good idea? The scrolling bar's top button disappears
behind the menu bar of the website. Why?

6\. To use this website, one needs to scroll through the whole article to get
it to load, then scroll back up, then read. Still, each time the menu bar
changes size due to scrolling, my computer gets sluggish.

7\. Javascript Pop ups. Great.

8\. Every time your mouse cursor moves accidentally over any element of the
website, gigantic pop ups show up out of nowhere and you can't continue
reading. Annoying!

This website presents news. It's not better at it than earlier ones, it's
worse. None of the things make the experience any better and it gives no more
benefit to reading news than older, plain html news websites. The reading
experience is an unmitigated disaster for no reason whatsoever. Who greenlit
this? Why?

If you are a web developer, you work in a business where the state of the art
has notably gotten worse. A lot worse. At this stage, I would be seriously
worried about the reputation of the profession if I were you. Sad!

------
nicbou
I spent a decent amount of time making my website how I want the web to be:
fast, straightforward and unintrusive.

Making it fast was pretty easy. Remove anything that isn't directly helping
the user, compress and cache everything else, and use HTTP2 Server Push for
essential resources. There were other optimisations, but that took me below
the 500ms mark. At ~300ms, it starts feeling like clicking through an app -
instant.

(it's [https://allaboutberlin.com](https://allaboutberlin.com))
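
(If you want to sanity-check a budget like that yourself, here is a quick-and-dirty sketch that measures time-to-first-byte and total download time for a single fetch. It only covers the network transfer of the HTML, not assets or rendering, so the 500ms/300ms figures above include more than this.)

    # Rough latency check for a single page fetch (no rendering, no assets).
    import time
    import urllib.request

    def timings(url):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            ttfb = time.perf_counter() - start   # headers received
            resp.read()                          # full body downloaded
            total = time.perf_counter() - start
        return ttfb, total

    ttfb, total = timings("https://allaboutberlin.com")
    print(f"TTFB {ttfb * 1000:.0f} ms, full download {total * 1000:.0f} ms")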

However, there's no point in serving slimy GDPR notices, newsletter prompts
and SEO filler text at lightning speed. Those add a lot more friction than an
extra 500ms of load time.

------
jozzy-james
Just for a little snark to defend client-side use - lemme know when you find a
responsive bin-packing algorithm I can do server-side that doesn't choke out
the DOM.

------
anticensor
Not news: Wirth's law has been known for a long time.

------
cozzyd
Oh but they have... assuming you leave ublock enabled :)

------
qazpot
Because the web sucks while Internet does not.

------
rayrrr
Moore's Law + Parkinson's Law = Stasis

------
giantg2
Quite frankly, this is bigger than the server vs client comments I've seen.
This is not some new phenomenon. The efficiency of code and architecture has
declined over time for at least the last 30 years. As compute and storage
costs have come down dramatically, the demand for labor has gone up. Who
decides what's really important in a project? The business. That comes down
to cost. If you can save money by using cheap hardware and cheap architecture,
then you save even more by spending your human resources on output rather
than on efficient code...

------
innocentoldguy
_cough_ JavaScript _cough_

------
mfontani
Ads rule everything around me

------
scoot_718
Just block javascript.

------
jiggawatts
An observation I've made over decades is that people stop optimising when
things get "good enough". That threshold is typically 200ms-2s, depending on
the context. After that, developers or infrastructure people just stop
bothering to fix issues, even if things are 1,000x slower than they "should
be" or "could be".

Call this the _performance perceptibility threshold_ , or PPT, for want of a
better term.

There's a bunch of related problems and effects, but they all seem to come
back to PPT one way or another.

For example, languages like PHP, Ruby, and Python are all notoriously "slow",
many times slower than the equivalent program written in C#, Java, or
whatever. When they were first used to write websites with minimal logic,
basically 90% HTML template with a few parameters pulled from a database, this
was _okay_ , because the click-to-render time was dominated by slow internet
and slow databases of the era. There was, a decade ago, an acceptable trade-
off between developer-friendliness and performance. But inevitably, feature-
creep set in, and now enormous websites are entirely written in PHP, with 99%
of the content dynamically generated. With rising internet speeds and dramatic
performance improvements in databases, PHP "suddenly" became a huge
performance pain point.

In that scenario, the root cause of the issue is the attitude that
"PHP/Python/Ruby" is _acceptable_ because lightweight code written in them
falls under the PPT - which is a false economy. Eventually people will want a
lot more out of them, they'll want heavyweight applications, and then having
locked themselves into the language is a mistake that cannot be unwound.

The most absurd example of this is probably Python -- designed for quick and
dirty lightweight scripting -- used for big data and machine learning, some of
the most performance intensive work currently done on computers.

Similarly, I see astonishingly wasteful network architectures, especially in
the cloud. Wind the clock back just 10 years, and network latencies were
vastly lower than mechanical drive random seek times. Practically "any"
topology would work. Everything split into subnets. Routers everywhere.
Firewalls between everything. Load balancers on top of load balancers.
Applications broken up into tier after tier. The proxy talking to the app
layer talking to a broker talking to a service talking to a database talking
to remote storage. Nobody cared, because the sum fell under the PPT. I've seen
apps with 1/2 second response times to a trivial query, but that's still
"acceptable". Multiply that by the 5 or so roundtrips for TCP+TLS for every
layer, because security must be end-to-end these days, and it's not uncommon to
see apps starting to approach the 2 second mark.
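
(Putting rough, assumed numbers on that stack-up - say six tiers, ~1ms per in-datacenter round trip, and a fresh TCP+TLS handshake at every hop - the connection overhead alone piles up before any actual work happens:)

    # Toy stack-up of per-tier connection overhead. All numbers assumed:
    # ~1 ms per in-datacenter round trip, ~3 RTTs for a fresh TCP + TLS
    # handshake, plus one request/response RTT per hop.
    RTT = 0.001
    TIERS = ["proxy", "app", "broker", "service", "database", "storage"]
    HANDSHAKE_RTTS = 3
    REQUEST_RTTS = 1

    overhead = len(TIERS) * (HANDSHAKE_RTTS + REQUEST_RTTS) * RTT
    print(f"{overhead * 1000:.0f} ms of pure connection overhead")
    # Add tens of ms of real work and queueing per tier and half-second
    # responses are easy to reach - while still sitting under the PPT.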

These days, typical servers have anywhere between 20 to 400 Gbps NICs with
latencies measured in tens of microseconds, yet apps are responding 10,000x
slower even when doing no processing. Why? Because everyone involved has their
own little problem to solve, and nobody cares about the big picture as long as
the threshold isn't exceeded. HTTPS was "easy" for a bunch of web-devs moving
into full-stack programming. Binary RPC is "hard" and they didn't bother,
because for simple apps it makes "no difference" as both fall under the PPT.

Answer me this: How many HTTPS client programming libraries (not web
browsers!) actually do TCP fast open _and_ TLS 1.3 0-RTT handshakes? How many
do that by default? Name a load balancer product that turns those features on
by default. Name a reverse proxy that does that by default.

Nobody(1) turns on jumbo frames. Nobody does RDMA, or SR-IOV, or cut-through
switching, or ECN, or whatever. Everybody has firewalls for no reason. I say
no reason, because if all you're doing is doing some ACLs, your switches can
almost certainly do that at wire-rate with zero latency overheads.

It always comes back to the PPT. As long as a design, network, architecture,
system, language, or product is under it, people stop caring. They stop caring
even if 1000x better performance is just a checkbox away. Even if it is
something they have already paid for. Even if it's free.

1) I'm generalising, clearly. AWS, Azure, and GCP actually do most of that,
but then they rate limit anyway, negating the benefits for all but the largest
VM sizes.

------
pea
This has gotten to the point for me where it is a big enough burning pain
point that I would pay for a service which provided passably fast versions of
the web-based tools I frequently have to use.

In my day-to-day as a startup founder I use these tools where the latency of
every operation makes them considerably less productive for me (this is on a
2016 i5 16GB MBP):

\- Hubspot

\- Gmail (with Apollo, Boomerang Calendar, and HubSpot extensions)

\- Intercom (probably the worst culprit)

\- Notion (love the app - but it really seems 10x slower than a desktop text
editor should be imo)

\- Apollo

\- LinkedIn

\- GA

\- Slack

The following tools I use (or have used) seem fast to me to the point where
I'd choose them over others:

\- Basecamp

\- GitHub (especially vs. BitBucket)

\- Amplitude

\- my CLI - not being facetious, but using something like
[https://github.com/go-jira/jira](https://github.com/go-jira/jira) over actual
jira makes checking or creating an issue so quick that you don't need to
context switch from whatever else you were doing

I know it sounds spoiled, but when you're spending 10+ hours a day in these
tools, latency for every action _really_ adds up - and it also wears you down.
You dread having to sign in to something you know is sluggish. Realistically I
cannot use any of these tools with JS disabled; the best option is basically to
use a fresh Firefox with uBlock (which you can't for a lot of Gmail
extensions). I tried using Station/Stack, but they seemed just as sluggish as
using your browser.

It's probably got a bunch of impossible technical hurdles, but I really want
someone to build a tool which turns all of these into something like an
old.reddit.com or Hacker News-style experience, where things happen in under
100ms. Maybe a stepping stone is a way to boot Electron in Gecko/Firefox (not
sure what happened to Positron).

The nice thing about tools like Basecamp is that because loading a new page
is so fucking fast, you can just move around different pages like you'd move
around the different parts of one page in an SPA. Browsing to a new page seems
to have this fixed cost in people's minds, but realistically it's often
quicker than waiting for a super interactive component to pull in a bunch of
data and render it. Their website is super fast, and I think their app is just
a wrapper around the website, but it's still super snappy. It's exactly the
experience I wish every tool I used had.

IMO there are different types of latency - I use some tools which aren't
"fast" for everything, but seem extremely quick and productive to use for some
reason. For instance, IntelliJ/PyCharm/WebStorm is slow to boot - fine. But
once you're in it, it's pretty quick to move around.

Can somebody please build something to solve this problem!

~~~
midrus
Ohh God, I can't upvote this enough. I feel the same. I work as a frontend dev
and I just can't believe the amount of stuff we do in users' browsers just
because it's convenient for us developers, or because asking backend devs to
do it would take longer, or because it's the way to do it in React. SPAs can
be faster of course, but most of the time they are not, and they are a lot
worse than their equivalent Rails or Django app because your company just
doesn't have the resources Facebook has. And even Facebook is terribly slow,
so I'm not sure what the benefits are at the end of the day.

Talking of reddit, I just cannot use it. I rely on old.reddit.com for now and
the day it goes away I will only use it from a native client on my phone, or
just not use it anymore.

I feel like I'm repeating myself in every single comment I make on this topic,
but I really believe that tools such as Turbolinks, Stimulus or (my favourite)
unpoly are highly underrated. If we put 20% of the effort we put into SPAs
into building clean, well-organized and tested traditional web applications,
we would be in a much better place, and faster (both in the sense of shipping
and of performance).

We should focus more on the end user and the business and a bit less on what's
cool for the developers.

~~~
pea
Yep, there are some websites which _should_ be SPAs because they are actual
applications - for instance, I can't imagine Google Docs or Trello as an MPA.

But, many websites are a graph of documents (like Reddit), so trying to model
them as an SPA just massively increases complexity and introduces some really
tricky problems. We moved from an SPA -> MPA and haven't looked back (with
Intercooler/Stimulus/alpine).

One of the main parts is that, because you don't need to manage and reconcile
state in two places, you have much less complexity. When we need a single
component that needs to be very interactive (for instance, we have an
interactive table viewer which allows sorting and searching), we embed a
little bit of React or whatever -- but that's kind of a last resort, and it's
as stateless as possible.

I think handling state sensibly in a pure SPA architecture is actually much
more complex than people give it credit for. A Redux + React + REST
architecture can be done properly - but it also introduces a huge number of
potential rabbit holes which have a high ongoing maintenance cost, especially
if you do not have a team of very experienced FE engineers.

New Reddit is a great testament to just how badly it can go when you fight
against "the web as a collection of documents" and what browsers originally
did. For instance, clicking on the background of a Reddit post navigates you
"back" in the SPA, instead of using your browser back button - it's actually
insane.

None of this is to say that templates can't be a bit painful themselves at
times too - not sure what happened to
[https://inertiajs.com/](https://inertiajs.com/), but I quite like the idea of
that approach too.

