
The web sucks if you have a slow connection - philbo
https://danluu.com/web-bloat/
======
ikeboy
>When I was at Google, someone told me a story about a time that “they”
completed a big optimization push only to find that measured page load times
increased. When they dug into the data, they found that the reason load times
had increased was that they got a lot more traffic from Africa after doing the
optimizations. The team’s product went from being unusable for people with
slow connections to usable, which caused so many users with slow connections
to start using the product that load times actually increased.

~~~
dba7dba
Funny anecdote about the freeway system of southern California.

When they were initially planning the system in the 1930s and '40s, they were
planning to have it in use for the next 100 years. So they built oversized
roads (like 10-lane freeways, with no traffic lights to stop for, that go
THROUGH the center of a major city).

When the system proved so car friendly, more and more people moved in and
bought cars. Within a short period of time (much shorter than 100 years), the
system was completely jammed.

Always look for unintended consequences...

~~~
chiph
The original designers of the interstates didn't want the roads to go through
the downtown areas. The idea was for the high-speed roads to go _near_ cities,
and have spur roads (3-digit interstate numbers that start with an odd number)
connect them - like the design of the original Autobahn.

But there was a coalition of mayors and municipal associations that pressured
Congress to have the roads pass through their towns (jobs! progress!).
President Eisenhower was not amused, but he found out too late to change the
design.

A consequence of this was the bulldozing of historically black-owned property
to make way for the new roads.

~~~
dba7dba
They didn't really NEED cars to move people around, because the Los Angeles
area already had a GREAT light rail system called the Red Line. The current
walkway on Venice Beach is what's left of that line. Can you imagine? An
above-ground light rail system running parallel to a beach in LA?

They RIPPED it out thanks to lobbying by car companies and tire companies. Yay
to lobbyists.

Now it takes a billion dollars to build a few miles of a subway/light rail
system that practically goes nowhere...

~~~
chiph
The Red Line is the new Metro system (which was destroyed in 1997's
_Volcano_). The electrified light rail from the 1930s was the LA Railway.

[https://en.wikipedia.org/wiki/Los_Angeles_Railway](https://en.wikipedia.org/wiki/Los_Angeles_Railway)

GM, Firestone, and several other companies were indicted in 1949 for
attempting to form a monopoly over local transit. The semi-urban-legend part
(it was never definitively proved there was a plot behind it all) is that they
ripped out the streetcars and replaced them with GM-made bus networks.

[https://en.wikipedia.org/wiki/General_Motors_streetcar_consp...](https://en.wikipedia.org/wiki/General_Motors_streetcar_conspiracy)

------
gabemart
Something I have had at the back of my mind for a long time: in 2017, what's
the correct way to present optional resources that will improve the experience
of users on fast/uncapped connections, but that user agents on slow/capped
connections can safely ignore? Like hi-res hero images, or video backgrounds,
etc.

Every time a similar question is posed on HN, someone says "If the assets
aren't needed, don't serve them in the first place", but this i) is
unrealistic, and ii) ignores the fact that while the typical HN user may like
sparsely designed, text-oriented pages with few images, this is not at all
true of users in other demographics. And in those demos, it's often not
acceptable to degrade the experience of users on fast connections to
accommodate users on slow connections.

So -- if I write a web page, and I want to include a large asset, but I want
to indicate to user agents on slow/capped connections that they don't _need_
to download it, what approach should I take?

~~~
curun1r
This seems like the kind of thing where we'd want cooperation from the browser
vendors rather than everyone hacking together some JS to make it happen. If
browsers could expose the available bandwidth as a media query, it would be
trivial to have different resources for different connections.

This would also handle the situation where the available bandwidth isn't
indicative of whether the user wants the high-bandwidth experience. For
example, if you're on a non-unlimited mobile plan, it doesn't take that long
to load a 10 MB image over 4G, but those 10 MB chunks add up to overage
charges pretty quickly, so the user may want to set their browser to report a
lower bandwidth amount.
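
Browsers have since started to expose something along these lines: Chromium-
based browsers have the (non-standard) Network Information API, with a
`saveData` flag and an `effectiveType` estimate on `navigator.connection`. A
minimal sketch of using it to gate an optional asset - the `.hero` selector
and the image path are made up, and browsers without the API just get the full
experience:

    // Sketch only: the Network Information API is non-standard and mostly
    // Chromium-only, so feature-detect before relying on it.
    const conn = navigator.connection;
    
    // Treat an explicit data-saver preference or a slow effective connection
    // type as a request for the lightweight experience.
    const wantsLight = !!conn &&
      (conn.saveData || /(^|-)2g$/.test(conn.effectiveType || ""));
    
    const hero = document.querySelector(".hero"); // hypothetical element
    if (hero && !wantsLight) {
      // Only inject the optional, heavy asset on capable connections.
      const img = new Image();
      img.src = "/assets/hero@2x.jpg"; // hypothetical asset path
      hero.appendChild(img);
    }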

~~~
rev_bird
>If browsers could expose the available bandwidth

I don't know why this seems like such an imposition, but I think I'd be
uncomfortable with my browser exposing information about my actual network if
it didn't have to. I have a feeling way more people would be using this to
track me than to considerately send me less data.

That said, browser buy-in could be a huge help, if only to add a low-tech
button saying, "request the low-fi version of everything if available." This
would help mobile users too -- even if you _have_ lots of bandwidth, maybe you
want to conserve.

~~~
TeMPOraL
Indeed; as a user, I don't want _the site_ to decide what quality to serve me
based on probing my device. That will only lead to the usual abuse. I want to
specify whether I want the "lightweight" version or the "full experience"
version, and have the page deliver the appropriate one on demand.

~~~
CaptSpify
I remember when websites used to have "[fast internet]" or "[slow internet]"
buttons that you could use to choose whether you wanted Flash or not. Even
though I had a high-speed connection, I chose slow because the site would load
faster.

------
Someone1234
I found this out the hard way.

T-Mobile used to offer 2G internet speeds internationally in 100+ countries,
included in Simple Choice subscriptions. 2G is limited to 50 kbit/s; that's
slower than a 56K modem.

While this is absolutely fine for background processes (e.g. notifications)
and even checking your email, most websites never loaded at these speeds.
Resources would time out, and the adverts alone could easily exceed a few
megabytes. I even had a few websites block me because of my "ad blocker",
because the adverts didn't load quickly enough.

Makes me feel for people in, say, rural India or other places still only at 2G
or similar speeds. It is great for some things, but not really usable for
general-purpose web browsing any longer.

PS - T-Mobile now offers 3G speeds internationally; this was just the freebie
at the time.

~~~
Jakob
Disable JavaScript. You’ll be surprised at how most of the web still works and
is much faster. Longer battery life on mobile, too.

~~~
zeveb
> You’ll be surprised at how most of the web still works and is much faster.

And you'll be more secure, and you'll retain more of your privacy.

I find 'this site requires JavaScript' to be another way of saying, 'the
authors of this site don't care about you, your security or your privacy, and
will gladly sell all three to the highest bidder.'

~~~
danappelxx
Well, that's quite unfair. JavaScript is also used for creating interactive
web applications, not just tracking users. Really, your attitude comes off as
unnecessarily aggressive.

~~~
Gracana
Obviously things like gdocs need JavaScript, but blogs and news sites and
forums sure don't.

~~~
jdormit
I think it depends on what the JavaScript is used for. I agree that blogs and
news sites should be static, but forums - and in general, sites with a high
degree of user interactivity - can see significant UX improvements with some
JavaScript, for things like asynchronous loading, changing the UI without
reloading the page, and even nice animations (although many of those can be
done in CSS these days). However, graceful degradation is very important:
disabling JavaScript on these sites shouldn't break them, merely impact the
UX.

[Edit] "blogs and news sites should be static" -> this should read "blogs and
news sites don't need JavaScript"
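
A minimal sketch of that kind of graceful degradation, assuming a plain HTML
form that posts normally and a server that can answer with an HTML fragment
(the element IDs and behavior are made up for illustration):

    // Progressive enhancement: the form is assumed to work as a normal
    // full-page POST when JavaScript is off; with JS on, we submit it
    // asynchronously and update the page in place instead.
    const form = document.getElementById("comment-form"); // hypothetical id
    
    if (form) {
      form.addEventListener("submit", async (event) => {
        event.preventDefault(); // only reached if the JS actually loaded
        try {
          const response = await fetch(form.action, {
            method: "POST",
            body: new FormData(form),
          });
          if (!response.ok) throw new Error("HTTP " + response.status);
          // Assume the server returns the new comment as an HTML fragment.
          const html = await response.text();
          document.getElementById("comments")
            .insertAdjacentHTML("beforeend", html);
          form.reset();
        } catch (err) {
          // On any failure, fall back to the classic non-AJAX submit.
          form.submit();
        }
      });
    }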

~~~
Gracana
Agreed, enhancements are good (and often nice on modern devices with all the
bells and whistles enabled), so long as they degrade nicely.

------
geforce
Sad thing is that most of the web sucks on rather fast connections too. Pages
weighing almost 5 MB, making multiple dozens of requests for libraries and
ads. Ads updating in the background, consuming ever more data.

I don't notice it much on my PC, since I've got an FTTH connection, but on LTE
and 3G it's very noticeable. Enough that I avoid certain websites. And that's
nowhere near slow by his standards.

I do agree that everyone would benefit from slimmer websites.

~~~
Terr_
I have JavaScript off by default, and about 80% of the time it simply makes
everything better.

Oh, sure, a few sites need JS (and get whitelisted) and some just have minor
layout quirks... But I can actually scroll down and read the text of a news
article rather than suffering through waiting times and input latency as
JavaScript churns.

~~~
dman
Same here - I would highly recommend that people at least try this once and
get a reminder of how fast sites can be.

------
etatoby
I design and write my company's framework, which other devs use to write
websites and webapps.

I base my work on existing technologies (lately Laravel, which means Symfony,
Gulp, and hundreds of other great libraries) but I always strive to:

1. Reduce the number of requests per page, ideally down to 1 combined and
compressed CSS, 1 JS that contains all dependencies, 1 custom font with all
the icons. Everything except HTML and AJAX should be cacheable forever and use
versioned file naming.

2. Make the JS as optional as possible. I will go _out of my way_ to make
interface elements work with CSS only (including the button to slide the
mobile menu, various kinds of tooltips, form widget styling, and so on.)
Whenever something needs JS to work (such as picture cropping or JS popups)
I'll make sure the website is usable and pretty, maybe with reduced
functionality or a higher number of page loads, even if the JS fails to load
or is turned off. Also, the single JS file should be loaded at the end of the
body.

2b. As a corollary, the website should be usable and look good both when JS is
turned off, and when it's turned on but still being loaded. This can be
achieved with careful use of inline styles, short inline scripts, noscript
tags, and so on.

3. Make the CSS dependency somewhat optional too. As a basic rule, the site
should work in w3m, as pointed out above. Sections of HTML that make sense
only when positioned by CSS should be placed at the end of the body.

I consider all of this _common sense,_ but unfortunately not all devs seem to
have the knowledge, skill, and/or time allowance to care for these things,
because admittedly they only matter for < 1% of most websites' viewers.

~~~
zelias
I don't completely agree. If you're working on an SPA that targets higher-
income (e.g. better internet) consumers, a developer could be forgiven for
doing as much as possible using JS. I choose to sacrifice the <1% of my target
users who have JS turned off or have poor connections to benefit the UX of the
other 99% of users. I think the time and resource investment in strictly
adhering to these guidelines is cost-prohibitive for many lean engineering
teams, particularly those at early-stage startups.

I get that the web was designed to be optimized for HTML/CSS first, JS last.
However, the web was also not originally designed to support web applications
as complex as those the marketplace currently supports. As the web matures
into the only universal application platform (competing with the various
native platforms), I think a paradigm shift is required -- towards replacing
as much markup with programmatic code as possible. Such a shift is necessary
for complex web applications to compete with native environments going
forward.

Of course, none of this applies if your organization just requires static
websites. Choose the right tool for the job, and all that.

------
SwellJoe
I travel full-time and my primary internet is 4G LTE. But, even though I
spend $250 per month on data, I still run out, and end up throttled to
128 kbps for the last couple days of the data cycle. The internet is pretty
much unusable at that rate. I can leave my email downloading in Thunderbird
for a couple of hours and that's usable (Gmail, however, is not very usable),
and I can read
Hacker News (but not the articles linked, in most cases). Reddit kinda works
at those speeds. But nearly everything else on the web is too slow to even
bother with. When I hit that rate cap, I usually consider it a forced break
and take a walk, cook something elaborate, and watch a movie (on DVD) or play
a game.

So, yeah, the internet has gotten really fat. A lot of it seems
gratuitous...but, I'm guilty of it, too. If I need graphs or something, I
reach for whatever library does everything I need and drop it in. Likewise, I
start with a framework like Bootstrap, and some JavaScript stuff, and by the
time all is said and done, I'm pulling a couple MB down just to draw the page.
Even as browsers bring more stuff into core (making things we used to need
libs for unnecessary) folks keep pushing forward and we keep throwing more
libraries at the problem. And, well, that's probably necessary growing pains.

Maybe someday the bandwidth will catch up with the apps. I do wish more people
building the web tested at slower speeds, though. Could probably save users on
mobile networks a lot of time, even if we accept that dial-up just can't
meaningfully participate in the modern web.

~~~
EvilTerran
Incidentally, you may find GMail's "basic HTML view" works better when your
connection's throttled:

[https://support.google.com/mail/answer/15049](https://support.google.com/mail/answer/15049)

And as for reddit, their old mobile view is still available at the "i."
subdomain - it's so much lighter-weight than the dreadful JS-laden one they
introduced a while back, it's the only way to use reddit on mobile IMO:

[https://i.reddit.com](https://i.reddit.com)

~~~
ashark
I use gmail's Basic HTML interface _all the time_. AJAXy gmail and Inbox
balloon to incredible levels of memory use pretty quickly, and are _slower_
for most interactions than the full-page loads on Basic HTML, which means that
someone somewhere lost track of WTF they were supposed to be doing all of this
for.

It's easily worth the loss of a couple features.

~~~
stuckagain
This is commonly stated but not true under all conditions. The full-blown
GMail UI has extensive latency-hiding capabilities. The basic HTML UI has no
latency-hiding features of any kind. If you are on a high-latency connection
but you have some bandwidth available, you will have a much better experience
with the full UI. Otherwise you face the full latency for every action.

The Inbox UI is for some reason irredeemable. It is slow under all conditions.
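
Not Gmail's actual code, of course, but one common form of latency hiding is
roughly the optimistic-UI pattern: update the view immediately, send the
request in the background, and roll back only if it fails. A rough sketch (the
endpoint and element are made up):

    // Optimistic update: the user sees the result instantly, even on a
    // high-latency connection; we only undo it if the request fails.
    async function archiveMessage(messageId, rowElement) {
      rowElement.hidden = true;
    
      try {
        const response = await fetch("/api/messages/" + messageId + "/archive", {
          method: "POST",
        });
        if (!response.ok) throw new Error("HTTP " + response.status);
      } catch (err) {
        rowElement.hidden = false; // roll back the optimistic change
        alert("Couldn't archive the message; please try again.");
      }
    }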

------
nommm-nommm
What has really baffled me lately is Chase's new website. They did a redesign
maybe 6 months ago, to make it "more modern" or something, I guess.

Now the thing just loads and loads and loads and loads. And all I want to do
is either view my statement/transactions or pay my bill! Or sometimes update
my address or use rewards points. That's not complicated stuff. I open it up
in a background tab and do other stuff in between clicks to avoid excessively
staring at a loading screen.

I just tried it out: going to chase.com with an empty cache took a full 16
seconds to load on my work computer and issued 96 requests to load 11 MB.
Why!?

I then log in. The next page (account overview) takes a full 32 seconds to
load. Yep, half a minute to see my recent transactions and account balances.
And I have two credit cards with zero recent transactions.

I am just baffled as to who signed off on it!! "This takes 30 seconds to load
on a high speed connection, looks good, ship it."

~~~
sonar_un
Chase's website is just awful for just about anything.

It's particularly terrible if you are ever trying to use award points. The
site is painfully slow, even on the fastest of connections.

~~~
enobrev
To be fair, the Chase website was awful before. It was just awful and slightly
faster.

------
iLoch
> Why shouldn’t the web work with dialup or a dialup-like connection?

Because we have the capability to work beyond that capacity now in most cases.
That's like asking "why shouldn't we allow horses on our highways?"

> Pretty much everything I consume online is plain text, even if it happens to
> be styled with images and fancy javascript.

No doubt; pretty much everyone who works on web apps for long enough
understands that it's total madness. The cost, however, of supporting people
so far behind that you can only serve them text is quite frankly
unmanageable. The web has grown _dramatically_ over the past 20 years, both
in terms of physical scale and supported media types.

The web is becoming a platform delivery service for complex applications. Some
people like to think of the web as just hyper text, and everything on it
should be human parse-able. For me, as someone who has come late to the game,
it has never seemed that way. The web is where I go to do things: work, learn,
consume, watch, play. It's a tool that allows me to access the interfaces I
use in my daily life. I think there's a ton of value in this, perhaps more
than as a platform for simply reading news and blogs.

I look forward to WebAssembly and other advancements that allow us to treat
the web as we once treated desktop environments, at the expense of human
readability. It doesn't mean we need to abandon older + simpler protocols,
because they too serve a purpose. But to stop technological advancement in
order to appease the lowest common denominator seems silly to me.

~~~
__jal
> Because we have the capability to work beyond that capacity now in most
> cases. That's like asking "why shouldn't we allow horses on our highways?"

Horses on highways would cause accidents. I have yet to see a fast-moving web
page crash into a slow-moving one and shut down the router. Analogies work
better when there is connective tissue between the concepts in play.

More generally, the vast bulk of the problem is not human readability or
interactivity over http, but more a matter of _insane amounts of unnecessary
gunk_ being included in web pages because of faulty assumptions about the
width of pipes.

More generally, I find myself moving in the opposite direction. I find that
many SaaS services' interests don't align with mine, so I'm going back to
local applications. I don't trust others with most of my data, so the only
service that sees much of it only sees encrypted blobs (for offsite backup).
I've always run my own mail, and have slowly been expanding the services I
host as I bring more of this stuff in-house. And so on. But I realize I'm in a
minority.

But the nice thing is that it gives me an intranet and an "other" grouping
that is very straightforward, so the browser instances that touch untrusted
(not-mine) services can run in a "bastion" VM, locked down nicely and reset
to a pristine state at will, not to mention allowing some stupid networking
tricks that are sometimes useful.

~~~
iLoch
> More generally, the vast bulk of the problem is not human readability or
> interactivity over http, but more a matter of insane amounts of unnecessary
> gunk being included in web pages because of faulty assumptions about the
> width of pipes.

Doesn't affect the vast majority of users.

> But I realize I'm in a minority.

Yes, your statements are pretty anecdotal and don't really relate to the vast
majority of internet users.

I'm sure your setup works great for you, but it sounds like a ton of overhead,
none of which is required if you have fast internet and don't give a shit
about what's going on (like nearly everyone who uses the internet.)

~~~
TeMPOraL
> _don't really relate to the vast majority of internet users._

It's not that they don't. It's that you don't care.

Because why should you care about something that doesn't meaningfully
increase ad revenue or sales? Why should you care that 2 extra seconds of
page load on a fat pipe, and a fraction of a cent of extra electricity
burned, multiplied by a million of your US users, add up to over 500
man-hours and a few kilograms of coal wasted? Not to mention the site being
unreliable or unusable in trains, rural areas and larger buildings in which a
user doesn't have Wi-Fi access.

And the problem wouldn't be as big if it were just you. The problem is,
everyone else thinks the same way, so all the waste mentioned above _adds
up_. All because people won't refrain from adding useless gunk - even though
it often takes more work to put it on your site than to leave it out in the
first place.

~~~
ericd
And the funny thing is that it's even been shown that increasing speed
increases usage and revenue/sales, so there's not even that excuse. Slow
pages break flow, which causes people to realize that they've already wasted
too much time on your site and were supposed to have done xyz 15 minutes ago.

------
diggan
Something sticks out looking at the table: how can some sites simply FAIL to
load? I mean, there is something inherently wrong with our web today, where
my internet is very slow and _could_ load a page in 80 seconds if I just left
it alone, but the server itself may have configured its timeout to be 60
seconds. So I can never load the page?!

The assumption here is that both ends of the connection are on Earth. With
these hard timeout limits, how will anything even remotely work when we are
an interplanetary species, or even just browsing from orbit around Earth?

~~~
joeyh
I chatted about just this timeout issue with an engineer from a major CDN
while he was at my house enjoying the dialup. Seems like simply a matter of
resource management; slow connections do use more resources. Most CDN
customers don't care or don't know that a few percent of the US population is
getting their web browsing broken by timeouts, so there's no push back.

(NASA has their wacky ways around the issue for ISS residents, something like
VNC to a ground-based browser IIRC.)

~~~
rmc
> _(NASA has their wacky ways around the issue for ISS residents, something
> like VNC to a ground-based browser IIRC.)_

Wait, VNC? Won't that use oodles more bandwidth than proxying HTTP?

~~~
crooked-v
It's about lag, not bandwidth.

~~~
ttepasse
I remember astronaut Alexander Gerst saying somewhere that the VNC was also
there for security reasons. Keep in mind that most infrastructure on the ISS
was installed in the mid-00s and that the ThinkPads were possibly running
Windows XP and IE 6 then.

~~~
milesrout
I certainly bloody hope they're not running Microsoft software on the
International Space Station.

~~~
eggsome
They were in 2001:
[https://m.theregister.co.uk/2001/04/27/nt_4_0_sp7_available/](https://m.theregister.co.uk/2001/04/27/nt_4_0_sp7_available/)

This is one of my favorite NT4 tidbits :)

------
whiddershins
After spending a month in Mexico, including regions with spotty/inconsistent
service from one minute to the next, I think the problem goes deeper.

Browsers are IMO terrible at mitigating intermittent and very slow
connections. Nothing I browse seems to be effectively cached other than Hacker
News. Browsers just give up when a connection disappears, rather than holding
what they have and trying again in a little bit.

The only thing I used which kept working was Dropbox. Dropbox never gives up;
it just keeps trying to sync, and eventually it will succeed if there is any
possibility of doing so.

I understand the assumptions of the web are different than an app like
Dropbox, but I think it might be a good idea to reexamine those assumptions.

~~~
kalleboo
Back in my dialup days (90's), I used to use Opera since it had great tools
for dealing with poor connections. E.g. IIRC you could have it only show
images that were already cached, with a handy button to async load in new
images that weren't already displayed.

------
20years
Most of the web really sucks on fast internet connections too, thanks to so
many web developers thinking every dang thing needs to be a single-page app
using a heavy JavaScript framework. Add animation, badly optimized images and
of course ads, and it becomes really unbearable.

We keep repeating the same mistakes, just in a different way.

~~~
amiga-workbench
I saw a sarcastic comment a while back saying that webdevs should be forced
to work on a Pentium II machine and they would cut their bullshit. I laughed
and moved on.

But after seeing many examples where sites were built on huge iMacs with no
care for users running off a battery, on a slower network connection or with
an average 1366x768 display, I somewhat agree with the sentiment.

~~~
scaryclam
I tend to run web frontends in lynx (or links) to see if they can degrade well
enough. If the core user flows don't/can't work then there's a big problem
with the UI.

------
E6300
> The main table in this post is almost 50kB of HTML

Just for fun, I just took a screenshot of that table and made a PNG with
indexed colors: 21243 bytes.

~~~
chrismorgan
And converted to using single-character class names and reducing the CSS
needed, it can be down to about 3KB, sans-compression.

(I manually minified the whole source; the original is 53313 bytes, 12438
gzipped, while my minified source is 25628, 10124 gzipped. Most of the bloat
in the tables compresses really well, as is common with such things.)

~~~
wingerlang
Out of curiosity, how did you do it? I did it as well, and I used Sublime's
multi-cursor functionality plus some manual work to replace classes where
appropriate. Mainly because I saw an interesting problem and I like (love)
using the pseudo-automation tool that is Sublime Text 2.

Just curious how you went about it - whether it was /all/ manual or some
interesting technique.

My result was not too shabby: 7 kB, I think, when I stopped because I was
running out of time.

~~~
chrismorgan
Mostly fairly manual, with a bunch of regular expressions and things like
sorting the CSS block by background-color (Vim: `:sort /{/`). It was tempting
to slurp it in Python, gargle it about a bit and spit it out neatly
refactored, but I didn’t do it that way. A small quantity of Vimscript would
also have been fairly straightforward. But no, I did it the hard way out of
the wrong type of laziness. (Why did I do it at all? Who knows.)

------
Filligree
Not related to the contents of the article, but please add a max-width styling
to your paragraphs. 40em or so is good.

~~~
minikites
I don't disagree, but your web browser doesn't need to fill your entire
screen.

~~~
Filligree
It, um, only fills two-thirds.

That's a compromise. Too many sites don't deal well with thin browsers, but I
need space for my terminals. This width usually works, although I sometimes
have to do a bit of horizontal scrolling to get the article fully visible. (As
opposed to the sadly inevitable sidebars.)

Margins are good, though.

------
schoen
Joey Hess (joeyh) has been writing about this for a long time (because he uses
dial-up at his home). Here is a recent thread about a 2016 blog post on this:

[https://news.ycombinator.com/item?id=13397282](https://news.ycombinator.com/item?id=13397282)

~~~
robocat
> Please, please, if your site requires AJAX to work at all, then retry failed
> AJAX queries.

Anyone here have information on the most reliable heuristics to do retries?

Or information on the implementations used by say Gmail or Facebook?

~~~
niyazpk
I have seen
[https://www.wikiwand.com/en/Exponential_backoff](https://www.wikiwand.com/en/Exponential_backoff)
used pretty regularly in many places.
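
For what it's worth, a minimal sketch of a retrying fetch with exponential
backoff and jitter - the delays and retry count are arbitrary, not taken from
any particular site:

    // Retry a fetch with exponential backoff plus jitter:
    // ~0.5s, ~1s, ~2s, ~4s ... up to maxRetries attempts.
    async function fetchWithRetry(url, options = {}, maxRetries = 5) {
      for (let attempt = 0; ; attempt++) {
        try {
          const response = await fetch(url, options);
          // Retry on server errors as well as network failures.
          if (response.status >= 500) throw new Error("HTTP " + response.status);
          return response;
        } catch (err) {
          if (attempt >= maxRetries) throw err;
          const delay = 500 * 2 ** attempt * (0.5 + Math.random()); // jittered
          await new Promise((resolve) => setTimeout(resolve, delay));
        }
      }
    }
    
    // Usage:
    // const data = await fetchWithRetry("/api/messages").then((r) => r.json());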

~~~
robocat
I don't see how that is relevant.

1. There is extra connection information available, or information can be
sampled. E.g. query to see if anything is responding.

2. Our user just wants to get the action done ASAP. It's not necessary to be
a good citizen; our user just wants it to work.

3. Heuristics depend on what works in practice. HTTP/S is a complex, layered
protocol, so it is hard to know what is right.

4. Connection conditions are extremely varied: mobile connection type,
overseas location, ISP, IPv6, proxies, VPNs, etc. all affect the connection
parameters, so finding a reasonable heuristic is hard.

5. Sampling connection information is difficult, because when the connection
fails you also fail to log it.

------
fenwick67
By far the worst site I regularly use, from a page loading perspective, is my
local newspaper.

It takes about 10 seconds before it loads to a usable state on a T1
connection.

If I pop open an inspector, requests go on for about 30 seconds before they
die down. It's about 8MB.

[http://www.telegraphherald.com/](http://www.telegraphherald.com/)

~~~
hvidgaard
If you wait 10 seconds to get the news, you are more patient than me. I would
quickly go elsewhere unless they deliver high-quality news.

~~~
fenwick67
Most of my news I do get from elsewhere, but it's really the only local news
source.

------
tetha
I might need a reality check here because this is feeling weird.

I'm currently building a web-based application to store JVM threaddumps. This
includes a JS-based frontend to efficiently sort and filter sets of JVM
threads (for example based on thread names, or classes included in thread
traces). Or the ability to visualize locking structures with d3, so you can
see that a specific class is a bottle neck because it has many locks and many
threads are waiting for it.

I'm doing that in a Ruby/Vue application because those choices make the app
easy. You can upload a threaddump via curl and share it with everyone via
links. You can share sorted and filtered thread sets, and you can share
visualizations with a mostly readable link. This is good because it's easy to
- automatically - collect and upload threaddumps, and it's easy to
collaborate on a problematic locking situation.

So, I'd call that a fairly heavy web-based application. I'm relying on JS,
because JS makes my user experience better. JS can fetch a threaddump, cache
it in the browser, and execute filters based on the cached data pretty much as
fast as a native application would. Except you can share and link it easily,
so it's better than visualvm or TDA.

But with all that heavyweight, fast-moving web bollocks... Isn't it natural
to think about web latency? To me it's the only sensible thing to
webpack/gulp-concat/whatever my entire app so all that heavy JS is one big
GET. It's the only sensible thing to fetch all information about a threaddump
in one GET, just to cache it and have it available. It's the only right thing
to do, or else network latency eats you alive.

Am I _that_ estranged by having worked on one low-latency, high-throughput
application? To avoid confusion, the threaddump storage is neither
low-latency nor high-throughput. I'm talking Java with 100k+ events/s and
< 1ms in-server latency there.

------
Tade0
Kudos to the author for making the post readable using a 32kbps connection.

My apartment does not have a landline, not to mention any other form of
wired communication, so my internet connection is relegated to a Wi-Fi router
that's separated from me by two walls (friendly neighbour) and a GSM modem
that, after I use up the paltry 14 GB of transfer it provides, falls back to
a 32 kbps connection.

Things that work in these circumstances:

- Mobile Facebook (Can't say I'm not surprised here).

- Google Hangouts.

- HN (obviously).

- A few other videoconferencing solutions (naturally in audio-only mode).

Things that don't work, or barely work:

- Gmail.

- Slack (OK, this one sort of works, but is not consistent).

- Most Android apps.

- GitHub.

EDIT: added newlines.

~~~
molsongolden
Have you tried the HTML version of Gmail?

[https://mail.google.com/mail/h/](https://mail.google.com/mail/h/)

I've used this on my kindle keyboard while traveling but the free data speed
might still have been faster than 32kbps.

------
Entangled
Can't browsers provide a service like

txt://example.com

that shows web content in plain text, no images, no javascript, nothing,
something like readability but directly without loading the whole page first?

It would also be good for mobile connections.

* Wikipedia should be the first site to offer that txt: protocol, Google second.

* Btw, hacker news is the perfect example of a text only site.

------
franciscop
I totally agree. I used to have a really bad mobile connection up until a
few years ago (in Spain), and still, when I use up all my mobile data it
reverts to 2G.

So I know the pain and decided I wouldn't do the same to my users as a web
developer. I created these projects from that:

- Picnic CSS: [http://picnicss.com/](http://picnicss.com/)

- Umbrella JS (website currently under maintenance):
[http://github.com/franciscop/umbrella](http://github.com/franciscop/umbrella)

Also I wrote an article on the topic:

- [https://medium.com/@fpresencia/understanding-gzip-size-836c7...](https://medium.com/@fpresencia/understanding-gzip-size-836c74b66c0b)

Finally, I also have the domain [http://100kb.org/](http://100kb.org/) and
intended to do something about it, but then I moved out of the country and
after returning things got much better and now I have decent internet so I
lost interest. If you want to do anything with that domain like a small
website competition just drop me a line and I'll give you access.

~~~
tluyben2
Where did you live in Spain? In Spain my mobile internet is far better than,
let's say, in parts of the UK I work, let alone in China. And I live in the
mountains down south, an hour away from the nearest city. Best so far have
been Thailand + Cambodia. Just blazing fast, even in the rainforest with
multiple laptops/phones tethered and cheap as chips. If I can have anywhere
between 3G/4G stable, everything I need (including almost all heavy sites work
fine); in the south of Spain I get enough to load heavy sites and Skype,
download torrents, watch Netflix per device connection. In Cambodia I could do
all that in the rainforest, away from everything, for a fraction of the price,
with 4 devices tethered. I was impressed. The connection here in Hong Kong I'm
on now is worse than that, and that's in the middle of the city.

But yes, developers (including me) not accounting for slow connections is a
pet peeve of mine. As I often do it myself, I do understand the issue: it is
client constraints, time/money constraints and audience. But it does annoy me
when often-used sites (notably airline sites and banking sites) are top-heavy
and their apps time out because _yes_, I do often have a bad connection.

~~~
franciscop
In Valencia, but this was around 4 years ago and I had a data plan that was
also 3-4 years old because I used wifi almost everywhere. As I was a student
back then the only problem was the bus from my home to the university and
back.

I was with Hacker Paradise for 3 months through SE Asia and I totally agree. I
have screenshots yet-to-tweet comparing the great packages from Thailand with
the prices in Spain and it's absolutely ridiculous.

~~~
tluyben2
Not only Spain; most of the EU.

------
jaclaz
A unit of measure I find appropriate is the "Doom", 2015 prediction:

[https://twitter.com/xbs/status/626781529054834688](https://twitter.com/xbs/status/626781529054834688)

------
jlardinois
> In the U.S., AOL alone had over 2 million dialup users in 2015.

I've seen this figure a few times before, and I wonder every time who these
users are. Specifically I'm curious what the breakdown is between people who

- Really don't have a better option available (infrastructure in this
country is unbelievably bad in some places, so I wouldn't be surprised at a
large size for this group)

- Are perfectly happy with the dialup experience so they don't switch to
something better

- Don't know there are better options so they stay with dialup

- Don't even realize they never cancelled AOL and are still having it
auto-debited every month

- Some other option I didn't think of

------
gwu78
"Pretty much everything I consume online is plain text..."

Yes.

My kernel, userland, third party software and configuration choices, the
entire way in which I use the computer, are optimized for consuming plain
text.@1

As a consequence, the web is very fast for me compared to a user with a
graphical browser. This is why every time some ad-supported company claims
they are offering a means to "make the web faster" it makes them appear to me
as even more dishonest. They are, at least indirectly, the ones who are
responsible for slowing it down. They are promising to fix a problem they
created, but will never really deliver on that promise. Conflict of interest.

@1 I find there is no better way to optimize for fast, plain-text web
consumption than to work with a slow connection. It is like when a batsman
warms up with weights on the bat: when he takes the weights off, the bat
feels weightless, and the velocity increases. When I spend a year or so on a
slow connection and adjust everything I do to be as bandwidth-efficient as
possible, then when I get on a "fast" connection, the speed is incredible.

I also use the same technique with hardware, working with a small,
resource-constrained computer. When I switch to a larger, more powerful one,
such as a laptop, the experience is that I instantly have an enormous
quantity of _extra_ memory and screen space, for free. I do not need an
HDD/SSD to work; my entire system _and storage_ fits easily in memory.

Now if I do the opposite, if everyday I only worked on a large, powerful
computer with GB's of RAM with a fast connection, then switching to anything
less is going to be an adjustment that will require some time. I would spend
significant time making necessary adjustments before I could get anything else
done.

------
AngeloAnolin
"Google’s AMP currently has > 100kB of blocking JavaScript that has to load
before the page loads"

Wasn't Google claiming that by using AMP you can actually make web pages load
faster, since it is a stripped-down form of HTML?[1]

From what I am hearing from the author (Dan), bare HTML with minimal JS and
CSS should (in theory/reality?) load pages faster.

[https://moz.com/blog/accelerated-mobile-pages-whiteboard-
fri...](https://moz.com/blog/accelerated-mobile-pages-whiteboard-friday)

------
smacktoward
Looking at that first table, one question jumps out at me: what the heck is
Jeff Atwood doing on pages at _Coding Horror_ that makes them weigh 23MB?

I mean, I'm all for avoiding premature optimizations, but 23MB for one page is
just... wow.

EDIT: As a sanity check, I just tried loading the CH home page from a cold
cache myself. Total weight: _31.26MB_. Yowch.

~~~
accountface
Appears to be mostly lack of image optimization (and he loves gifs). A common
issue with blogs.

~~~
codinghorror
Images are meticulously optimized, problem is, retina is expensive in file
size.

~~~
minikomi
Seems that images such as the superman image [0] or pinball image [1]
currently on the front page are much much larger than they should be -- body
max-width is 700 (70% of 1000px). Even for retina that's overkill. If you want
to get really fancy you could restrict all (served) image widths to under
700px and make a 1400px @2x version to use in a srcset.

[0] [https://blog.codinghorror.com/content/images/2017/01/help-
ke...](https://blog.codinghorror.com/content/images/2017/01/help-keep-your-
school-all-american.jpg)

[1] [https://blog.codinghorror.com/content/images/2016/11/pro-
pin...](https://blog.codinghorror.com/content/images/2016/11/pro-pinball-
timeshock-ultra-4k-1.jpg)

------
nedt
One thing that isn't mentioned is webfonts. On 2G I can load the whole page,
CSS, JS and some images, but I can't read anything because the fonts aren't
loaded yet. Here is a gallery of a couple of examples:
[https://imgur.com/gallery/wfjoT](https://imgur.com/gallery/wfjoT)

------
stevoski
My team has just started work on a new SaaS product. We are taking articles
like this to heart and aiming to keep pages light and fast. We are using very
little JavaScript.

Let's see if the market rewards us or punishes us for this approach...

~~~
tropo
There are more than enough other ways for the market to punish you. :-/

Your approach helps with reliability (fewer 3rd-party and browser needs) and
accessibility (workable with lynx and screen readers) too. Latency makes
people want to scream.

It's also important to remember that, to the customer, you are just another
browser tab. The customer's computer is not dedicated to you. They could have
100 or more other tabs open. The customer may even have reason to open more
than one copy of your site simultaneously, with same or different login, and
same or different browser. Bogging down their computer makes them unhappy and
resentful.

------
roadbeats
Not just the web; mobile apps also suck when you have a slow connection. For
example, you can't open iTunes when you're on GPRS. It tries to connect to
Apple Music and locks you on a screen with a big Apple logo. Same with
Spotify. Just try your apps on GPRS :) I camp every weekend, so I noticed how
much they suck a long time ago.

------
andrewstuart2
Did most of the web suck when we were on 28k or 56k modems? I'd argue that it
didn't, and yet even with the light weight of pages back then, they loaded
far more slowly than today's pages (even heavy ones) do over our much faster
connections.

So really, I think what the author is observing is that _having experienced
high-speed reliable connections_ , it is very disappointing to move to a much
slower connection. For the emerging tech markets, I can imagine the experience
would not be great if the load was long enough to cause timeouts and
connection failures, but at the same time, the 99% experience, as it probably
was when the web was born, is "holy crap look at everything I have access to
now!"

Yes, there are some really terribly optimized and redirect-happy sites out
there and yes, you should do everything you can to make your page speedy.
Everybody benefits when you do. I think, though, that this is more of a case
of "let's be thankful for and aware of what we have," and "if you suddenly
have a slower connection you might find yourself annoyed" more than "most
sites suck on slow connections."

~~~
dragonwriter
> Did most of the web suck when we were on 28k or 56k modems?

Yes, lots of the early web sucked over dialup.

~~~
cgh
While waiting for some JS-laden crapfest to load earlier, it occurred to me
that I haven't heard the term "World Wide Wait" in many years. But here I am
experiencing it all over again.

------
zeveb
> Pages are often designed so that they’re hard or impossible to read if some
> dependency fails to load. On a slow connection, it’s quite common for at
> least one dependency to fail. After refreshing the page twice, the page
> loaded as it was supposed to and I was able to read the blog post, a fairly
> compelling post on eliminating dependencies.

 _slow clap_

His data on steve-yegge.blogspot.com is particularly unfortunate: Steve's
(excellent) posts are almost completely pure text, and there's no reason for
them to fail to download or display, except that Google demands that one
execute JavaScript in order to get a readable page.

> if you’re browsing from Mauritania, Madagascar, or Vanuatu, loading
> codinghorror once will cost you more than 10% of the daily per capita GNI.

Maybe the social-justice angle can convince some people to shed their
megabytes of JavaScript and embrace clean, simple, _static_ pages? There's
probably some kid in rural Ethiopia who might have been inspired to create
great things, if only he'd been able to read Steve Yegge's blog.

> The “ludicrously fast” guide fails to display properly on dialup or slow
> mobile connections because the images time out.

 _slow clap_

> Since its publication, the “ludicrously fast” guide was updated with some
> javascript that only loads images if you scroll down far enough.

Incidentally, is there any way we can enforce the death penalty against people
who load images with JavaScript? HTML already has a way to load images in a
page: it's the <img> element. I shouldn't be required to hand code execution
privileges over to any random site on the Internet in order to view text or
images.

------
hnarn
A lot of people here are talking about how 2G connections are "almost
unusable" and how this should be optimized server-side and so on. I'd just
like to point out that there are browsers that cater to this specific
demographic (slow connections).

Ever since the days of running Java applications on my old Sony Ericsson
phone, Opera Mini has been my favorite. As far as the browser is concerned,
the website can be as heavy as it wishes -- it will pass through Opera's
proxy and be compressed according to user preferences. This could mean not
loading any images (nothing new), or loading all images at very low quality.
You can also select whether you want things like external fonts and JS to
load, or if you want to block those too. When I moved to a new country my
first SIM card had one of those "unlimited but incredibly slow" plans. Opera
Mini was a life saver.

I guess my point is that we shouldn't get stuck in optimization paralysis if
there is no sound and standardized server-side way to solve this issue (and
there doesn't seem to be). It would be nice if browsers had a way to tell web
servers that they're operating under low bandwidth, like the do-not-track
flag, but AFAIK this does not exist.

Until that exists - and I don't mean to suggest we go back to the days of
"Made for IE9" here - maybe some responsibility needs to be shifted to the
client side. As long as you design your websites in a sane way, they will
pass through these low-bandwidth proxies with flying colors. Maybe you don't
need to spend hundreds or thousands of man-hours optimizing your page when
you could insert a discreet indicator at the top of the screen, for anyone
taking longer than X seconds to load, suggesting that there are many browsers
available for low-bandwidth connections and that they might want to try them
out?
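
A rough sketch of what such a client-side hint might look like - the
threshold, wording and styling are placeholders, not a recommendation:

    // If the page hasn't finished loading within a few seconds, quietly
    // suggest a lighter-weight alternative. Meant to be inlined early in the
    // document so it runs even while other resources are still downloading.
    const SLOW_THRESHOLD_MS = 8000; // arbitrary cutoff
    
    const slowTimer = setTimeout(() => {
      const note = document.createElement("div");
      note.textContent = "Still loading? A low-bandwidth browser such as " +
        "Opera Mini may work better on this connection.";
      note.style.cssText = "position:fixed;top:0;left:0;right:0;padding:4px;" +
        "background:#ffd;font:14px sans-serif;text-align:center;z-index:9999";
      (document.body || document.documentElement).appendChild(note);
    }, SLOW_THRESHOLD_MS);
    
    // Cancel the notice once everything has actually loaded.
    window.addEventListener("load", () => clearTimeout(slowTimer));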

------
amelius
But HN almost never sucks, even on slow connections. That's why, when I'm on
mobile, I only read the comments and not the articles :)

By the way, here's how we can collectively make the web faster, safer and more
fun to use: [1]

[1]
[https://news.ycombinator.com/item?id=13584980](https://news.ycombinator.com/item?id=13584980)

------
gumby
I was exasperated by his mobile example. Why? This is my life with Comcast
(the faster of the two "choices"!) in Palo Alto. I also have Comcast in my ski
house in the sticks and it's faster than Palo Alto. But my wired connection is
so slow that I sometimes use my phone on LTE to read a page that hangs on
Comcast.

------
jayajay
Lately my Pixel has been achieving sub-kBps speeds on a very good Wi-Fi
connection (a laptop in the same room gets 100 MBps), and it reminded me of
the old days with dial-up on Win 98 -- but worse. The estimated download time
for the LinkedIn app (70 MB... gg) was a whopping 6 months! What a great way
to get me to guzzle up my mobile data.

------
anigbrowl
I really wonder how much time designers and developers actually spend on
thoughtful testing vs. A/B or automated testing. Sometimes the problems on
websites just seem so... _clueless_.

My current pet hate is news sites that float up a modal window asking me to
turn off my ad blocker because bidness. OK, I turn off AdBlock Pro for that
domain, turn off HTTP switchboard, and it still won't load. Why? I dunno, try
again, still won't load. OK, guess I'm never coming back. Obviously it must be
some other extension, but without any technical details how can I tell?

For that matter why did anyone think it was ever a good idea to float dialogs
over web pages to get people to share (not submit) their email address? Has
anyone ever looked at how poorly these display on mobile devices? Or how
making it hard to close floating dialogs is a really good way to annoy people?

------
atbentley
> if we just look at the three top-35 sites tested in this post, two send
> uncompressed javascript over the wire, two redirect the bare domain to the
> www subdomain, and two send a lot of extraneous information by not
> compressing images

So uncompressed javascript and images are bad, but I thought
apex-domain-to-www-subdomain redirection was an optimisation, as the apex
domain can often only point to a single server but the subdomain can point to
a range of geographically well-distributed CDNs. So rather than going to
North America for every request, the browser only needs to do it once and the
rest can come from a regional CDN. Am I misunderstanding something, or does
this also break down on a slow connection?

~~~
Symbiote
The apex domain can only use A records, i.e. point directly to an IP address.
It can have multiple A records, ebay.com does so:

    
    
      host ebay.com
      ebay.com has address 66.135.216.190
      ebay.com has address 66.211.162.12
      ebay.com has address 66.211.181.123
      ebay.com has address 66.211.185.25
      ebay.com has address 66.211.160.86
      ebay.com has address 66.135.209.52
    

Without a CNAME (alias) record, eBay needs to control the DNS resolution
itself. Most people using a CDN don't, so they must use a subdomain.

~~~
atbentley
Ah, I was unaware that there could be multiple IPs on an A record, thanks
for that. If I'm understanding this right, though, the extra IPs would just
be for redundancy and resilience, and cannot be relied on for geographic
routing? In this case ebay.com redirects to www.ebay.com.

~~~
deftnerd
It's not that there are multiple IPs in the A record; it's that there are
multiple A records, each with an IP address.

For geographic routing, there is a clever trick that can be utilized using a
technology called Anycast. Anycast is basically a way of assigning the same
IP address to multiple machines, so requests to that IP address result in
connecting to the one that's closest to you, route-wise.

Providers sometimes use Anycast DNS name servers and configure them to return
different IP addresses depending on which name server people connect to.

So, if someone wants to determine the IP address of eBay, their DNS client
connects to ns1.ebay.com and asks "hey, what are the IP addresses for the A
records for ebay.com?", and ns1.ebay.com replies with the list.

But ns1.ebay.com might be an Anycast DNS Name Server that's close to them and
it provides the list of IP addresses closest to that name server. Someone on
another continent might reach a name server with the same name and ip address,
but it's a different machine in a different data center. It would provide a
list of IP addresses on that continent.

I do something similar with one of my sites. I rent three VPS's from buyvm.net
(who has Anycast setup) that have the same IP address and are located in Las
Vegas, New Jersey, and Luxembourg. I pay less than $10 a month in total and
run my DNS name servers there.

Clients that connect to the name server in Las Vegas get an IP pointing to a
Digital Ocean load balancer in San Francisco proxying data from a few front-
end VPS's.

Clients that connect to the name server in New Jersey get an IP pointing to an
OVH Canada load balancer near Montreal.

Clients that connect to the name server in Luxembourg get an IP pointing to an
OVH load balancer in the North of France.

The result is a responsive service that has amazingly low latency for the US
and the EU. Gonna try to set up some infrastructure in Singapore soon to make
things faster for Australia and Asia.

------
gimagon
Could a lighter-weight website serve more users for the same dollar of
bandwidth as a bloated website?

It seems to me there's a business strategy, where rather than pushing for more
ads, a website pushes for lighter weight and promises its few advertisers a
wider audience.

~~~
yellowapple
I actually had a similar idea about radio stations.

Currently, FM radio stations are so typically clogged with commercials that I
just switch back and forth whenever the music stops. The sole exception in my
area is KZTQ "Bob FM", which has a neat policy: 60 minutes (ish) of nonstop
music (aside from their normal station ID stuff), followed by at most two or
three commercials, then repeat. I've found that the commercial breaks are
short enough that I'm more willing to actually listen to them, since I know
that the music will be back in less than a minute or so.

I reckon that has a significant value-add in terms of ad impressions, and thus
could offset the normally-decreased ad revenue by charging more per ad.

------
jmcdiesel
As someone who had fiber internet and then had to spend a year and a half on
1.5 Mbps DSL... (hell)... I can say I agree that it sucks.

I can also say that at no point did I feel entitled for it to work better for
me. I don't understand this level of entitlement (I don't like your ads, I
don't like your layout, I don't like your visual effects...)... just leave
the site.

The modern web isn't simple static pages... its not going to revert to that,
either. We're developing actual applications in the browser now... those
aren't easily translated to static, simple pages...

This is today's "grumpy old engineer" argument...

------
mmagin
The other issue I have with web page bloat: memory-constrained mobile devices
are able to cache far fewer pages than a desktop computer, and navigating
among multiple tabs, etc. gets slowed down to internet connection speed.

------
warcher
I'm gonna read the article, I promise, but is the title really "If your
internet is bad, the internet is bad"?

~~~
warcher
I'm not gonna lie, if my fat high-res site images make life a lil harder in
Vanuatu but convert a bunch of black-turtleneck d-bags in San Francisco to
customers, I know which side of the bread the butter is on.

~~~
legostormtroopr
I think that's probably part of the point of the article. I would read it
less as a statement like "make your webpages smaller", and more as "be aware
of the bloat of modern webpages".

If your target audience has great internet, then ignore optimising for size.
But be aware that people travel, and your market may change, so what is OK in
SF may become unusable if they go on holiday, move offices or need to work
off roaming data due to an outage.

~~~
warcher
Ah, I'm just joking around. Mostly. ;) I agree with and mostly implement the
vast majority of google's recommendations vis a vis site weight and speed
(when I have time/budget to do so), because I regard making sites _fast_ as a
signifier of competent professionalism. Any hack can make a sucky, slow, heavy
website. Making a website that really cooks is one of several things I use to
justify my rate. ;)

------
Swizec
Slow connection is okay, it's just slow. Now a spotty connection, or high
latency, _that's_ the killer.

Webapps that make 50 requests to download all the JavaScript and CSS and talk
to the API and get 3 images really really really don't behave well when 12 of
those 50 requests fail or take 30 seconds to complete. Honestly, I'd rather
have slow internet than packet lossy internet.

Still don't know why, but my Xfinity router routinely gets into a state where
it drops the first 10 or so packets of any request. The first `ping 8.8.8.8`
takes 3 seconds, the rest are the usual 0.1 second. Terrible.

~~~
riskneural
What do you mean by latency in this context?

~~~
richardwhiuk
Probably really means highly variable latency, i.e. jitter, where the RTT
spikes horribly, which will cause packets to be assumed dropped.

------
meriobrudar
Wow, really? Who knew overuse of JS and fancy graphical effects where they're
not needed could negatively impact user experience? Could it be that all the
web devs using 20 CDNs, cramming 900 frameworks, 100 externally provided
analytics, advertisement providers and fancy layout eye-candy were wrong all
along? What a surprise!

I'm already sick of visiting a webpage and having it not load ANYTHING if I
don't enable scripts on it. At least load the god damn text, I don't care if
it'll look like trash, just don't show me a blank page...

The irony is that everyone calls for people not to use Flash, and then they go
out of their way to recreate the abysmal experience without it, so really
nothing has changed as far as UX goes. Remember when pages didn't load at all
unless you had Flash installed? Well here's some nostalgia for you: they won't
load unless you run all the JS on the page, and then you get to "enjoy" a
bloated joke of a website, but Jesus does it have eye-candy!!!

~~~
minxomat
Every time I get angry about this I'll open
[https://purecss.io/](https://purecss.io/) or
[http://skytorrents.in](http://skytorrents.in) and look at the source. It's a
form of meditation to browse fast websites.

~~~
solidr53
[http://motherfuckingwebsite.com](http://motherfuckingwebsite.com)

~~~
aembleton
That uses Google Analytics

------
markplindsay
See also: The Website Obesity Crisis[0] by Maciej Ceglowski

[0]
[http://idlewords.com/talks/website_obesity.htm](http://idlewords.com/talks/website_obesity.htm)

------
gkya
It sucks if you have a fast connection too, because then your CPU and RAM
suffer instead. And as you add addons to rectify the many offending web pages,
the performance penalty of those quickly equals that of the crappy JS. I was
so happy with Xombrero as my browser, but it's stagnant and insecure now. I do
like my Firefox, but with all the blocking addons it's slow, and without them
it's slower (not that it's its fault).

------
mathgenius
I wonder if there is room for a product, a kind of browser-in-a-website, that
would eat those big-ass webpages (server-side) and spit out just the text and
(heavily compressed) jpegs. With a little layout to match the original
website. Something like how streaming services adaptively subsample data, or
like how NX tries to compress the X window protocol. Obviously this would be
patchy, but it could be much better than "FAIL".

------
mueslix
Instead of making sites that try to predict the unpredictable, I'd rather ask
whether TCP is still the right tool to use.

There shouldn't be a reason for a big page with many resources not to load -
it should just be slower. Yet I can make the same observations as soon as my
mobile signal drops to EDGE: the internet is essentially unusable as soon as
there's packet loss involved and the round-trip times increase. Interestingly,
mosh often still works beautifully in such scenarios. So instead of focusing
on HTTP/2 or AMP (and other hacks) to make the net faster for the best-case
scenario, I'd rather see improvements that make it work much more reliably in
less than perfect conditions. Maybe it's time for a TCP2 with sane(r) defaults
for our current needs.

~~~
Uristqwerty
Rather than a "TCP2" that, based on name alone, would be far too likely to aim
for semi-backwards compatibility but tweak a few things to be slightly better
in general but mostly just better for the specific use cases of the one or
three top contributing companies, why not just push for the adoption of one of
the existing alternative transport layer protocols?

For example, there's SCTP. From what little I've read about it, it seems as if
it has most of the benefits of both TCP and UDP, with the main downside that
some firewalls and routers may need to be upgraded. Being an existing
protocol, however, there are already working implementations and some amount
of network support. Maybe it's even fully usable as-is today!

~~~
tepmoc
SCTP can't go through NAT (there is an IETF draft in the works for that). But
SCTP already exists in your favorite browser: Chrome and Firefox both use
usrsctplib ([https://github.com/sctplab/usrsctp](https://github.com/sctplab/usrsctp))
to provide SCTP over UDP for WebRTC.

But SCTP over UDP over DTLS (or just SCTP over DTLS) also needs to happen, as
you can't use TLS with SCTP's unordered mode or multihoming.

SCTP is slowly gaining traction in userspace, beyond living only in mobile
operator networks (LTE).

------
georgehaake
I'm out in the country enough that I have 3-meg area WiFi, with a wife who
enjoys streaming and Facebook and two boys who enjoy online gaming and
streaming. Not much left for me. At least it's all-you-can-eat, and it avoids
satellite.

Oh, we find Amazon, IMDb and Facebook are the biggest pigs on a slow
connection.

------
bsukn
It only sucks if you've experienced a fast connection.

We generally don't target hardware from 98, why should we target bandwidth
from 98? Current smartphones and computers are really powerful, and most
applications are targeted towards those devices. Native apps don't have this
insane requirement to support hardware from 2 decades ago.

The web is so much more than text in 2017. And before you whine about the ads
and useless stuff, go read a tabloid and whine about the waste of paper, or
try to watch TV and whine about the electricity and time you're wasting
watching advertisements.

Media has always been and always will be like that.

The time spent on backwards compatibility and optimizations is usually not
worth it anyway.

Do I think mostly-text sites should be 5 MB? Obviously not.

~~~
therealmarv
For you it's bandwidth from '98. If you go outside your obviously modern
country, it's a 2017 problem, and it will stay this way for a long time. It's
not so much about backwards compatibility... it's more like "keep it working"
with slow bandwidth. Posting this from the Philippines, where I'm currently
happy with a stable 750 Kb/s connection.

------
crispyambulance
> Let’s load some websites that programmers might frequent... All tests were
> run assuming a first page load...

ehh, but is that really a good test for sites people "frequent"?

What happens to the heatmap when we're talking about subsequent page loads!

------
oregontechninja
My main source of clients is people suffering from website bloat because they
have no idea how to build a website. They jump on every shiny JavaScript
library they see and load 8 different versions of Bootstrap and then 5 fonts
from various sources, all from CDNs. I wish I were exaggerating, but it's such
a mess. In every single case, 90% was garbage, and all they really needed was
a nice semantic CSS sheet. Unless you are developing a web app, or 100% need
your AJAX calls, you don't need JavaScript. Is this the same for others, or am
I just in a less technically inclined area?

------
ge96
Yeah, I take my 100 Mbps connection for granted. I'm developing an
image-oriented web app for the Philippines, and holy crap, the one guy was
lucky to get 0.3 Mbps.

So... I had to severely redo the code to pull 50px-wide images, blur them in,
and only load the visible ones (depending on screen dimensions), then add a
2-second max refresh thingy (yeah, I'm just making this loader-interrupter
thing). It's been a mess; I feel pretty stupid sometimes. Why can't I get
this... JavaScript. Yeah, I am lucky to have Google Fiber (and I have the
cheaper plan too).
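A minimal sketch of the pattern described above, in TypeScript for the
browser: show a tiny blurred placeholder up front and swap in the full image
only when it scrolls near the viewport. The markup convention (a `data-full`
attribute and a `lazy` class) and the 200px margin are illustrative
assumptions, not anything from the comment.

```typescript
// Hypothetical markup: <img src="photo-50px.jpg" data-full="photo-1200px.jpg" class="lazy">
// Swap the 50px placeholder for the full asset once the element nears the viewport.
const lazyImages = document.querySelectorAll<HTMLImageElement>('img.lazy');

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    if (img.dataset.full) {
      img.src = img.dataset.full;     // fetch the real image only now
      img.classList.remove('lazy');   // CSS can remove the blur on this class change
    }
    obs.unobserve(img);               // each image only needs this once
  }
}, { rootMargin: '200px' });          // start fetching slightly before it scrolls into view

lazyImages.forEach(img => observer.observe(img));
```

On a 0.3 Mbps link this keeps the initial payload to the handful of tiny
placeholders that are actually on screen.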

------
tlow
Quora is unusable on a slow connection. It literally shows a popup that
obscures content if you lose high speed connectivity or drop packets.

However, the web is even worse if you have no connection at all. This is
important because if we provide internet access at a municipal level, we can
reach 100% adoption among our pluralistic educational system and progress to
primary learning materials that are web based (CA 60119 for example prohibits
any primary educational materials not available to all students both in the
classroom AND AT HOME).

------
EGreg
I have a different suggestion.

Build software that can work on a distributed architecture. So people in
Ethiopia can run their stuff on intranets and mesh networks and only
occasionally send stuff around the world.

What broadband has really caused is this assumption that the computer is
"always online". Apps often break when not online. When in reality there
shouldn't even be "online/offline" but rather "server reachable/unreachable".
And you should be building offline first apps, with sync across instances.
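As one concrete, browser-only flavor of that idea, here is a minimal
service-worker sketch of "offline first": serve from a local cache when
possible, and treat a failed fetch as "server unreachable" rather than a
broken app. The cache name and asset list are made up for illustration; real
sync across instances would need much more than this.

```typescript
// service-worker.ts - cache-first sketch; asset list and cache name are illustrative.
const CACHE = 'app-shell-v1';
const ASSETS = ['/', '/app.css', '/app.js'];

self.addEventListener('install', (event: any) => {
  // Pre-cache the app shell so the UI can come up with no connection at all.
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event: any) => {
  // Cache first, network as fallback; a failed fetch means "server unreachable",
  // not "the app is broken".
  event.respondWith(
    caches.match(event.request).then(cached =>
      cached ??
      fetch(event.request).catch(
        () => new Response('Server unreachable', { status: 503 })
      )
    )
  );
});
```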

------
kemps4
I live in a rural area. There are three options for internet - satellite
(limited data allowance - but decent speed), dial-up or a local ISP with a
Motorola canopy system. I chose the last option. I get 100 KB/sec max download
speed (on a good day). Divide by the 4-5 people in the house regularly using
the Internet and it gets really slow, really quick. Many times I just give up
and shut the computer off or I browse using Lynx.

And nope - no cell phone signal here either..

------
samuell
> The flaw in the “page weight doesn’t matter because average speed is fast”
> is that if you average the connection of someone in my apartment building
> (which is wired for 1Gbps internet) and someone on 56k dialup, you get an
> average speed of 500 Mbps. That doesn’t mean the person on dialup is
> actually going to be able to load a 5MB website.

As someone mentioned below too, the median value would make much more sense in
this case (which it often does, it seems).

~~~
AstralStorm
The median doesn't make sense either. Dialup is not as common as other
connections anymore; you would get a pretty high ADSL speed as the median.

Generally the distribution of bandwidth is multimodal, and similarly for
latency.

------
LyalinDotCom
YESSS!! Ever have your 4G connection drop to shit? Well, imagine that, but
24/7, on your wired connection. That's what many people live with today :(

------
joeyh
I'm very impressed with Dan's methodology here, and it matches my own
experiences with dialup.

One thing I wonder about: it seems many dialup ISPs these days provide some
kind of "accelerator", probably a web proxy that avoids some of the issues
with timeouts, perhaps compresses some content, etc. So it might be that many
of the remaining dialup users don't experience quite as many problems as Dan
found.

------
bambax
> _A pure HTML minifier can’t change the class names because it doesn’t know
> that some external CSS or JS doesn’t depend on the class name._

After everything has been parsed, it would know (the browser knows).

Couldn't a proxy service produce super lightweight, compiled web pages? I seem
to remember Opera used to offer something along those lines, but I may be
wrong.

Would there be commercial value in building such a tool?
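A deliberately naive sketch of the proxy idea, in TypeScript on Node, just to
show the shape of it: fetch the requested page and relay a slimmed-down
version. The port, the URL-in-the-path convention, and the regex-based script
stripping are all illustrative assumptions; a real service (roughly what Opera
Mini/Turbo and Googleweblight did) would parse the DOM, recompress images, and
inline critical CSS instead of regexing HTML.

```typescript
// slim-proxy.ts - fetch a page and strip <script> blocks before relaying it.
// Usage (illustrative): run the server, then browse to
//   http://localhost:8080/https://example.com/
import * as http from 'http';
import * as https from 'https';

http.createServer((req, res) => {
  let target: URL;
  try {
    target = new URL((req.url ?? '').slice(1));   // the path carries the real URL
  } catch {
    res.writeHead(400);
    return res.end('expected /https://host/path');
  }

  https.get(target, upstream => {
    let body = '';
    upstream.on('data', chunk => (body += chunk));
    upstream.on('end', () => {
      // Crude: drop inline and external scripts. Real transcoders parse the DOM.
      const slim = body.replace(/<script[\s\S]*?<\/script>/gi, '');
      res.writeHead(200, { 'content-type': 'text/html; charset=utf-8' });
      res.end(slim);
    });
  }).on('error', () => {
    res.writeHead(502);
    res.end('upstream fetch failed');
  });
}).listen(8080);
```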

------
Aoyagi
But like, if you don't fill your website with megabytes of useless bloat,
you'll get called out, because "it's 2017".

------
0xc001
I think about this a lot. And I think it's really easy for a page weight
argument to fall into an "old man yells at cloud" tone. But I also want the
industry to move towards simpler HTML and such, so, I've been thinking up an
argument that companies will buy. I'm really bad at it though. Maybe the extra
African market will open up new ad revenue?

------
dangoldin
Shameless plug but I did something similar in 2014 and used PhantomJS to
analyze the content of the top 1000 Alexa sites:
[http://dangoldin.com/2014/03/09/examining-the-requests-made-...](http://dangoldin.com/2014/03/09/examining-the-requests-made-by-the-top-100-sites/)

------
kyleblarson
I live in a very remote town in the North Cascades in Washington state and
work remotely in development. I'm on a 1.5 Mbps DSL connection, and while it's
slow, it's consistent, and I rarely have issues with Skype / Hangouts / Slack
/ Git / normal work. Downloading large data dumps is another story, but you
learn to plan ahead.

------
tmaly
I remember dial-up on a really slow modem back in the BBS days.

I was reminded of the slow connection with T-Mobile 2 years ago while in the
Philippines. They give you free data in 120 countries, but it's throttled.

This was my main motivation for rewriting my side project using highly
optimized css and not a large framework that uses web fonts and bloated
libraries.

------
ddebernardy
Try it with a bad connection _and_ a 1st-gen iPad. :-)

You basically need to disable JS altogether to have a chance to even view many
websites. And some, well, just crash the browser regardless.

It's amazing how much the web evolved in the past few years...

There used to be a time when supporting 10+ year old browsers was a matter of
course. No longer.

------
Sir_Cmpwn
Website I'm currently working on has no JS and weighs an average of 15 KiB per
page. Loads in <20ms.

------
0mp
There is a project called txti which provides free hosting for simple
websites edited in Markdown: [http://txti.es](http://txti.es)

The idea is to make the content available to all web users, as fast
connections are not as common as we might think.

------
Shivetya
I will be blunt: you would be amazed at the sites that suck even when you have
1 Gbps. I always used to think "damn, my DSL is slow", until I was at 1 Gbps
and some sites did not improve, and many of the applications I have which can
update themselves are throttled.

------
SnowingXIV
I feel a good solution to this problem, or at least one that covers a fair
number of users, is having your website work well with Safari Reader. Even on
fast connections, I often find myself loading up a page in Reader instead.

------
logicallee
Tangentially related:

As it affects web apps, some of this is a conscious choice by network designers.
First, click on your profile on Hacker News and turn on Showdead. You can then
read this thread and my comment in it:

[https://news.ycombinator.com/item?id=13597673](https://news.ycombinator.com/item?id=13597673)

While the poster wasn't a web engineer specifically (or didn't say so), the
point is that much of the web architecture isn't built around front-loading
payloads, but instead around eventually getting there, through the magic of
TCP/IP and letting users wait for a few dozen seconds as pages load.

I disagree with it and think these engineers are wrong and make the wrong
decisions (optimize for the wrong things), and that this makes everyone worse
off.

Thanks for listening. (Happy to discuss any replies here.)

------
johnnydoe9
Can confirm, am using horrible internet right now. Googleweblight is a
lifesaver for reading articles; not sure why it hasn't been mentioned, but I
recommend that everyone facing speed issues try it.

------
kordless
I just realized this is why network speed is growing at a lower rate than
compute. Even though they both continue to grow in capacity, the rates of
acceleration are different.

------
dguillot
The worst of all, I think, is NHL.com. It appears to me that they have been
asked to be "responsive" in terms of viewability instead of functionality.
Good luck using that site.

------
known
AKA
[https://en.wikipedia.org/wiki/Tragedy_of_the_commons](https://en.wikipedia.org/wiki/Tragedy_of_the_commons)

------
julianj
Why hasn't someone implemented a kind of low-bandwidth accessibility option?
(Or is there one?) I would imagine this would be akin to multipart text-only
email.
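One partial answer that does exist is the Save-Data client hint: browsers with
a data-saver mode send a `Save-Data: on` request header, and the server can
choose to respond with a lighter variant. A minimal Node/TypeScript sketch
(the port and the two response bodies are illustrative):

```typescript
import * as http from 'http';

http.createServer((req, res) => {
  // Header names are lowercased by Node; the value is "on" when the client asks for less data.
  const saveData = String(req.headers['save-data'] ?? '').toLowerCase() === 'on';

  res.writeHead(200, {
    'content-type': 'text/html; charset=utf-8',
    'vary': 'Save-Data',   // tell caches the response differs by this hint
  });
  res.end(saveData
    ? '<p>Text-only version: no hero images, no web fonts.</p>'
    : '<p>Full version with all the trimmings.</p>');
}).listen(8080);
```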

------
tlanc
It does. I'm on 2G, and HN and the article site are the only usable things
I've encountered today [on T-Mobile's intl roaming thing].

------
coin
> or one of the thirteen javascript requests timed out

There's the root cause. Why do I need to download executables just to read
static content?

------
kzrdude
Posting from wifi on a plane over the UK: this is apparently not slow
internet, I can read the usual bloated news and blog pages.

------
dsfyu404ed
This applies server side too. Note what sort of sites do and don't go down
when they make the HN or Reddit front page.

------
uvince
Regardless of connection speed, it also sucks if you try using LinkedIn's new
website. Nothin' but progress bars.

------
realPubkey
And that's why we need to adopt offline-first.

------
noway421
This post can be seen as exceptional if only because the page itself loads
instantaneously. Nothing extra. Bravo.

------
Esau
Bloat: it's not just for operating systems.

------
beautifulfreak
Why not make a site that proxies other sites, but retransmits them as fast-
loading? Isn't traffic=dollars?

------
amazon_not
Do you know how to make the web not suck on a slow connection?

Ssh into a shell account and use a text based browser :)

------
k__
What is the page size one should not exceed?

I mean, yes, as small as possible. But are there some size budgets?

For 3G, 2G etc.
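There's no single canonical number, but you can back a budget out of a target
load time: at a nominal throughput, a page of S kilobytes takes roughly
S * 8 / kbps seconds to transfer, ignoring latency and packet loss (which on
2G often dominate). A rough TypeScript sketch; the throughput figures are
assumptions rather than anything from this thread:

```typescript
// Back-of-the-envelope page budgets. Nominal throughputs are rough assumptions;
// real-world 2G/3G latency and loss make things worse than this.
const links: Record<string, number> = {
  '2G (EDGE)': 200,    // kbps
  '3G':        1000,
  'DSL':       1500,
};

const secondsToLoad = (pageKB: number, kbps: number) => (pageKB * 8) / kbps;

for (const [name, kbps] of Object.entries(links)) {
  // Inverting the formula: a ~2 s target allows roughly kbps / 4 KB of page weight.
  const budgetKB = (2 * kbps) / 8;
  console.log(`${name}: 500 KB page ≈ ${secondsToLoad(500, kbps).toFixed(1)} s, ` +
              `2 s budget ≈ ${budgetKB.toFixed(0)} KB`);
}
```

By that crude measure a 5 MB page is over three minutes of pure transfer on
EDGE, before latency and retries even enter the picture.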

------
Lxr
We need more sites like this! Absolutely no bloat, so nice to use.

------
ainiriand
The web sucks, but it sucks less if you have a fast connection.

------
dwighttk
How much weight would a little CSS to make the text not full-width add to that
page?

------
songco
And the GFW (Great Firewall)...

------
dotchev
This is exactly one of the problems IPFS will solve by serving content from
local peers.

------
Pica_soO
Wasnt this "backwards" compatability the reason blizzard was always so
succesfull? Using old but sturdy tech, that would work on the slowest of
machines.

Actually one could make a whole slowMo WebStandard from this. No Pictures,
just svgs, no constant elaborate javascript chatter, no advertising. No
videos, no music, no gifs, just animated svgs. Actually, that would be
something lovely. Necessity begets ingenubeauty.

~~~
amiga-workbench
I've been very tempted to start publishing content on the Gopher protocol;
it's immune to the cancers of the modern web.

------
vegabook
You don't need to travel from Wisconsin to Washington to experience a slow
internet connection.

Try any mainstream commute on South West Trains from Wimbledon to Waterloo
(London) and you'll a) _still_ get blackouts for about 1/4 of the 25-minute
trip (this is one of the most densely populated areas in Europe - no excuses)
and b) at 3 of the 4 stations you stop at, your vaunted 4G connection will
drop to 1998 speeds due to contention. I generally curse the complex sites in
these situations, because you'll easily be waiting 30-90 seconds (firmly in
your heatmap's red zone) for a full load at least once per commute.

Incidentally, kudos on a perfectly communicative yet lightweight web page
(50Kb).

~~~
drjasonharrison
Agreed. It's not just "third world countries" that have slow connections.
Low-powered devices have slow connections. Places where there are lots of
people with portable devices have slow connections. This is _now_, not a long
time ago, nor far far away. The more wearables and IoT become a "thing", the
more you're going to find that trying to get more interactions by saving on
transmission and client CPU load is worth the investment.

------
zump
This guy's posts are insufferable for constantly namedropping where he works.
Ugh.

------
andrewclunn
reddit.com takes 7.5 seconds to load on FIOS? I must be reading this table
wrong.

------
lordCarbonFiber
I'm torn here, in a way. On the one hand, light page weights and other such
optimizations make the internet better for everyone; on the other, there's a
certain point where designing your product to target the technology of 3
decades ago (we forget 1990 was 27 years ago) gets a little absurd.

I think the greater tragedy is not that the web is bloated (an issue for
sure), but that so much of America has internet worse than 3rd world mobile
2G.

~~~
SerLava
The bloat affects us today. A page that's impossible to load on dialup is also
going to make broadband viewing extremely sluggish and unresponsive. Each
click feels like a risk, especially when even the "back" button comes with 200
ad scripts that have to spin up again.

------
dsfyu404ed
What do we care? The vast majority of our target audience lives in a city with
fast internet.

(I'm not putting /s because there are actually people who think this is a
reasonable opinion in the general case.)

~~~
Spivak
It seems totally reasonable if you're not operating a company to be
altruistic.

* Pushing all the rendering to the client makes development easier, eases the transition to native apps, and uses fewer resources on the back end.

* The fancy site drives more conversions and makes the stakeholders happy.

* Not having fast internet is a crude filter for disposable income and losing those users probably goes unnoticed and might even increase the value of ad placements.

------
ldev
Well 3G is as low as you can get somewhere deep in the woods, not really a
problem...

~~~
ribasushi
I take it you don't walk in the woods much...

