
After 1 minute on my modem (2016) - BuuQu9hu
https://1-minute-modem.branchable.com/
======
joeyh
Since I temporarily have HN's attention with this side blog of mine, can I
suggest one simple tweak:

Please, please, if your site requires AJAX to work at all, then retry failed
AJAX queries. Use exponential backoff or whatever, but don't let the AJAX query
fail once and leave the page unusable.

This happens _all the freaking time_ when I'm on dialup, and there's nothing
more annoying than having filled out a form or series of forms only to have
the submit button break because it used AJAX to do a sanity check and threw an
exception because the server timed out after some absurdly short (dialup-wise)
period of time while the client was sending the request.
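
A minimal sketch of such a retry wrapper, using the browser's fetch API (the function name, attempt count, and delays here are arbitrary illustrations, not from any particular library; note that fetch rejects only on network-level failures such as timeouts):

    // Retry a failed request with exponential backoff instead of letting
    // the page break on the first timeout.
    async function fetchWithRetry(
      url: string,
      init?: RequestInit,
      attempts = 5,
      baseDelayMs = 1000,
    ): Promise<Response> {
      for (let i = 0; i < attempts; i++) {
        try {
          return await fetch(url, init); // rejects on network failure/timeout
        } catch (err) {
          if (i === attempts - 1) throw err; // out of retries, give up
          // 1 s, 2 s, 4 s, 8 s... generous enough for a dialup round trip
          await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
        }
      }
      throw new Error("unreachable");
    }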

~~~
comboy
It's not just an easy JS fix though. It complicates things quite a lot on the
server side. Your timed-out request could have reached the server already, so
the state could be different now. From the server's perspective it's not clear
whether it was a retry or another request (and some actions are meant to be
repeatable).

Now that I think about it, generating a request UUID and passing it to the
server would allow it to quickly skip duplicates (it would also need to cache
responses so it can resend them).

I'm also curious how often this problem occurs statistically (not just in your
case, but for an average user of popular sites).

~~~
akiselev
Wouldn't it be easier to add a retry=true field to the request, at least if
you're implementing exponential backoff in an existing codebase? Then you just
hash the request (minus the retry field) in your cache layer and send the saved
response again. You'll need a little extra code to update timestamps on the
client side, but the server side can be implemented as middleware in most
frameworks. Expire the cache after a reasonable total timeout and make the
client do a new read if it tries to retry after the cache has expired the
entry.

~~~
comboy
I think you need a client-side UUID instead of just retry=true for requests
that can be repeated with the same params.

E.g. /increment?retry=true - the server gets 3 of them; how many times should
it increment? Is it just that the user still hasn't received the first
response, or should we increment 3 times because the client actually went
send/fail/retry/ok three times?

But with a UUID it can still easily be implemented as middleware.
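
A rough sketch of such a middleware, assuming an Express-style server; the X-Request-Id header, the in-memory Map, and the replay logic are all illustrative choices, not a standard, and a real version would also need to handle retries that arrive while the first attempt is still in flight:

    import express from "express";

    const app = express();
    // Maps a client-generated request UUID to the response already sent.
    // A real implementation would expire entries and bound memory use.
    const seen = new Map<string, { status: number; body: string }>();

    app.use((req, res, next) => {
      const id = req.header("X-Request-Id"); // hypothetical client-set UUID
      if (!id) return next(); // no UUID: treat as an ordinary request

      const cached = seen.get(id);
      if (cached) {
        // Duplicate of a request we already handled: resend the old
        // response instead of repeating the action (e.g. incrementing twice).
        return res.status(cached.status).send(cached.body);
      }

      // Capture the outgoing response so a retry of this UUID can be replayed.
      const originalSend = res.send.bind(res);
      res.send = ((body?: unknown) => {
        seen.set(id, { status: res.statusCode, body: String(body) });
        return originalSend(body);
      }) as typeof res.send;

      next();
    });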

~~~
tripzilch
But you shouldn't use a GET request for incrementing, right? Do people (or
popular frameworks' defaults) really do that? Frequently?

~~~
comboy
The OP was talking about having to refill forms; I assumed those weren't being
submitted using GET.

------
krylon
When I went online for the first time (nearly twenty years ago - time goes by
pretty fast!), I did so on a 14.4 kbit/s modem.

That was no fun even back then. After a year, I upgraded to ISDN, which was a
lot faster (64 kbit/s), but once I got to use a faster line, even ISDN seemed
awfully slow.

And the trend continues to this day. Once one has a faster connection, one
gets used to it in no time. And just like many companies and individuals have
solved problems with slow software by throwing more/faster hardware at it,
these days we solve the problem of web sites making inefficient use of
bandwidth by throwing more bandwidth at it.

Which might not even be such a bad thing - I would not like going back to
programming in an environment where I have to triple-check every variable to
see if I can shave off a couple of kilobytes, either.

But even a fast connection gets clogged at times, and even on a 16 Mbit/s DSL
line, I have seen pages looking broken because the request for the CSS timed
out or something like that.

Maybe taking more care not to waste bandwidth should be considered a form of
courtesy. People on slow/saturated lines will thank you for it, and people on
fast lines will be amazed at how snappy that website loads. (And of course,
there's always trade-offs involved; I do not demand web developers sacrifice
everything else to efficient bandwidth usage; but it should not be ignored,
either.)

~~~
SubiculumCode
I remember when I upgraded to a 14.4k modem, and boy was I pleased after
dialing in with a 9600 baud modem for years.

~~~
peterclary
Luxury. I started out on 1200/75. I used to dream of 2400, let alone 9600!

('ad to lick code clean wi' tongue, etc. etc.)

~~~
grkvlt
Was that PRESTEL, maybe? Did you ever take advantage of the ability to have
the modem train as the server side of a 1200/75 connection? That way, you
could _upload_ at the blistering speed of 1200 baud, with the downside of
trying to navigate PRESTEL or its ilk at the staggeringly slow (worse than the
original IBM teletypes) speed of 75 baud! Good times...

And, after leaving University, the only Internet connection I could use (I
believe Demon Internet in the UK was starting up, so dial-up was beginning to
appear, but the faster University connections were for students only) was
their free 2400 baud connection to the X.25 PAD (translatable to TCP/IP
through SLIP once a shell connection was established), and I resigned myself
to this being the only Internet access I would ever have without a return to
academia!

------
vortico
If you use vimperator
([http://www.vimperator.org/](http://www.vimperator.org/)) on Firefox, put
this in your ~/.vimperatorrc so you can disable CSS with the "s" character
(and re-enable it with Shift-S). It removes 99% of bullshit from web pages and
allows you to read articles the way Tim Berners-Lee intended, guaranteed!

    nmap s :emenu View.Page Style.No Style<CR>
    nmap <S-s> :emenu View.Page Style.Basic Page Style<CR>

~~~
throwanem
If you don't use Vimperator, reader mode's usually good for this, too.

~~~
csydas
My only issue with reader mode is that many sites seem to beat it with the
stupid "load rest of article" button, so you can't just load it all through
reader.

On iOS I usually try to load articles in between stops on the subway and
frequently cannot get just the article without being bothered by the rest of
the nonsense on the page. I am open to the idea that I might just be using
reader mode wrong, but I'd have it on by default if I could be assured I'd
actually get a full article each time.

------
waterhouse23
This is awesome, and actually a pretty neat way of evaluating websites.

A surprising number of people are still on low-bandwidth connections; while
it's probably not reasonable to optimize for them, it's at least worth
considering that market occasionally.

~~~
adrianN
If you optimize for people on low bandwidth connections, you automatically
also optimize for those with limited mobile data. And as a bonus, your website
becomes faster for everybody with a fat pipe too.

~~~
vortico
"Optimize" is somewhat the wrong word. "Straying from temptation" would be
more accurate, since all garbage on webpages is placed there by a website
manager's overzealous want to increase popularity or profit.

~~~
rtpg
I understand some people don't use things like cloud software, but the
internet isn't just blogs and newspaper websites.

Sometimes styling is added to make web app UX better. Sometimes Javascript is
used so that, even if the first load takes more time, subsequent actions will
use _less bandwidth_. Sometimes more stuff is put on the page because 95% of
users want that information to also show up.

This is obvious and pedantic, but it also counters half of the comments on
these sorts of stories.

~~~
vortico
My point _especially_ applies to web apps. I just loaded facebook.com, and it
took 228 requests, 8,825 KB, and 17.0 s. Scrolling through the page is laggy
while a stressful number of muted videos start playing and mouse-hover events
start firing.

Compare this with mbasic.facebook.com, which is 22 requests, 107 KB, and 1.48
s. There are some small UX problems with basic HTML Facebook that I would
recommend they improve on (placement of links/buttons, omnipresent header),
but overall it is a much better experience for me since I feel much more in
control. Same with basic HTML Gmail vs full Gmail.

My point is that it is absolutely possible to not give in to shitty bloated
web trends driven by the expectation to increase popularity, while making a
quality, profitable website.

~~~
semi-extrinsic
> mbasic.facebook.com

Mind blown. It even has messaging that works without having to install
privacy-invading Messenger. You have just improved my facebook UX by a country
mile.

~~~
Nexxxeh
The combination of mbasic for Messenger, and m for normal Facebook browsing,
is fine on Android. I don't install the awful FB apps on my phone any more,
and I'm a fairly heavy Facebook user.

------
suhith
This is gold.

I've seen so many of these, even on websites with lots of traffic. Websites
have to be written with the way they load in mind too, especially on mobile
data. I've found the Chrome DevTools feature where you can throttle bandwidth
comes in super handy for this.

~~~
Laforet
For resilience testing I highly recommend Clumsy, even if it only runs on
Windows:

[https://jagt.github.io/clumsy/](https://jagt.github.io/clumsy/)

------
pmontra
That exoscale screenshot is very similar to what I see with NoScript on a 100
Mb/s connection before I temporarily allow their JS.

What's nice about NoScript is that I can turn on their JS but keep the
scripts from other sites turned off. Apparently they only use
googletagmanager. uBlock doesn't report any blocked scripts, so it's a rare
well-behaved site.

~~~
nitrogen
Google Tag Manager is sometimes (often?) used to load every other third-party
script, so if you allow GTM to load you'll probably see a bunch more scripts.
As a web developer, GTM was the bane of my existence: marketing could make
changes that significantly broke the site, while decreasing user privacy and
page speed, and the devs would take the blame.

------
tracker1
There is/was an internet news website, 15seconds.com IIRC, that was so named
because that's how long the average person would wait for a page to load back
in the 90s, when dialup was common. I think people should try setting Chrome
DevTools to 2G speed now and then, so they know the pain they're causing for a
lot of people on wireless without a good/stable connection.

------
tomrod
This is a fantastic way to assess website functionality. It would drive me
insane in day-to-day use.

In all reality, I just want to dump the modern web's approach. CSS,
Javascript, you name it. Give me simple HTML and text ads, if you need ads.
Give me pictures when I want them, with descriptive captions. I agree with the
intent of the blog--quit making crappy ads and bloated sites!

I use elinks often, and find its text-based approach easier to comprehend.
What are your thoughts?

~~~
mysterydip
Agreed! I try to keep my site light. I'd like to see a return of the web to a
content layer where the browser can choose how to present it. In the end we
want to read content. We don't care about someone's favorite scrolling method
or menu system. Why can't I set up my browser preferences to be "show me
websites with a light blue background, navigate through a menu bar
horizontally across the top," etc. I don't think it would happen with current
inertia, but maybe that's an area for a niche browser.

~~~
jff
As I understand it, early on there was sort of an expectation that _users_
would be defining stylesheets and applying them to websites, exactly like you
said: "use light blue background, menu bar horizontally across the top,
paragraph text should be 16 pt." etc.

There's nothing _wrong_ with site-provided CSS, but I strive to make my own
stuff work with or without CSS.

------
2bluesc
Could probably automate this on a Linux VM using netem[0].

[0]
[https://wiki.linuxfoundation.org/networking/netem](https://wiki.linuxfoundation.org/networking/netem)
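
As a quick illustration (the device name and numbers are placeholders, not a recommendation), something like `tc qdisc add dev eth0 root netem delay 200ms loss 1% rate 56kbit` would simulate roughly dialup-class latency, loss, and bandwidth.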

------
nandhp
For anyone wondering, the spinning Unicode symbol mentioned is F01E,
corresponding to fa-repeat in Font Awesome:
[http://fontawesome.io/icon/repeat/](http://fontawesome.io/icon/repeat/) Font
Awesome also has a bunch of spinner icons which OP is probably seeing on other
sites:
[http://fontawesome.io/icons/#spinner](http://fontawesome.io/icons/#spinner)

It really is unfortunate that there is no way to have these widely-used
resources (Font Awesome, jQuery, etc.) cached on a long term basis across all
sites that use them. (Though arguably this is easily achieved for fonts, which
can be installed system-wide.)

~~~
mschuster91
> It really is unfortunate that there is no way to have these widely-used
> resources (Font Awesome, jQuery, etc.) cached on a long term basis across
> all sites that use them.

There is - Google, Cloudflare and jQuery offer a CDN. The problem with CDNs is
that you then have a SPOF and an external dependency.

> Though arguably this is easily achieved for fonts, which can be installed
> system-wide.

Oh please no, this always leads to problems sooner or later - for example,
graphic designers tend to have LOTS of fonts installed, from all possible
sources. The pirated fonts especially tend to freak out in lots of different
ways, and if there's a local() in the @font-face rule, things break and you
get weird bug reports...

------
andygambles
Currently I only have a 1.2M connection at home. It reveals how
bandwidth-intensive many websites are that simply do not need to be.

An ad blocker is a must.

------
jasonlfunk
I understand the concern. Websites can become too bloated. They can require
too many resources or make poor use of bandwidth.

But this also seems like complaining about the trouble of driving a
horse-drawn carriage on the interstate. Sure, there are lots of people around
the world still on low-speed networks - just like there are people who still
use horses as their primary mode of travel. And maybe there should be a way to
accommodate them, but let's not pretend that the advances in website
technology are only a detrimental problem that needs to be solved.

~~~
ivan_gammel
A significant share of web requests is now made over mobile connections,
which, depending on a lot of factors, can be as bad as 2G. We don't use
modems, of course, but we still need websites that work on low bandwidth.

------
jackmoore
Can someone tell me why SVGs render gigantic while a page is first loading? I
often see this even at modern connection speeds.

~~~
vortico
Two reasons come to mind. It's likely that the designer providing the SVG
logo scaled it completely arbitrarily, which might be 1000 pixels wide; the
CSS `width:10px` in another file hasn't loaded yet, so the <img> tag holding
the SVG uses the intrinsic size of the SVG file. Another possibility is that a
flexbox or similar grid system is used, and the container holding the <img>
tag is told to stretch its contents to the full width of the flex item. If the
content in the following flex item is very large, the SVG will be compressed
to the proper small size, but if there is no content loaded yet in the
following flex item, the flexbox will stretch just the first item.

~~~
jackmoore
Thanks. I looked into it further on my own project and found my specific
issue: the SVG width and height attributes were missing from my webpack-built
bundle even though they were part of the source SVG file. The SVG loader I was
using (svg-inline-loader) was stripping width and height by default.
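
For anyone hitting the same thing, a sketch of the fix, assuming svg-inline-loader's removeSVGTagAttrs option (which defaults to true and strips width/height from the root <svg> tag; verify against the loader's README for your version):

    // webpack.config.js (illustrative, untested)
    module.exports = {
      module: {
        rules: [
          {
            test: /\.svg$/,
            loader: "svg-inline-loader",
            // default is true, which removes width/height from the <svg> tag
            options: { removeSVGTagAttrs: false },
          },
        ],
      },
    };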

------
gjkood
If all the important content of your website can be rendered in a timely
fashion through a text browser like Lynx, then you will have catered to the
lowest common denominator.

Granted, that is a very retro concept.

------
captn3m0
If anyone is actually suffering from dialup speeds and using Chrome, you
should try out my extension to disable web fonts:
[http://github.com/captn3m0/disable-web-fonts](http://github.com/captn3m0/disable-web-fonts).
It blocks all network requests for font files. There are also a couple of
other tips in the README for improving page-load performance over slow
networks.

I wrote it when I was suffering terrible speeds over mobile internet (EDGE) a
couple of years back.

------
matt_morgan
Somewhere between Linux, Firefox, uBlock, etc., I see a lot of this stuff on
my fast connection as well. Vox looked like that to me for a few months, maybe
a year or two ago.

------
udfalkso
If you're on a Mac, the Network Link Conditioner is a great way to test your
stuff on a simulated slow connection.

[https://medium.com/@YogevSitton/use-network-link-conditioner...](https://medium.com/@YogevSitton/use-network-link-conditioner-when-testing-your-app-bad18ecad877)

~~~
siliconwrath
Another easy way to play around with network conditions is included in Chrome
dev tools: [https://developers.google.com/web/tools/chrome-devtools/netw...](https://developers.google.com/web/tools/chrome-devtools/network-performance/network-conditions)

------
mntmn
This reminds me: for a while I've been looking for a good configurable proxy
solution to clean up/filter the web on my server, especially for browsing on
old devices (Amiga and such). I would like to reduce websites to their
content, stripping all CSS, background images, scripts and such. Any
recommendations?

~~~
icomefromreddit
Privoxy ([https://www.privoxy.org/](https://www.privoxy.org/)):

> Privoxy is a non-caching web proxy with advanced filtering capabilities for
> enhancing privacy, modifying web page data and HTTP headers, controlling
> access, and removing ads and other obnoxious Internet junk. Privoxy has a
> flexible configuration and can be customized to suit individual needs and
> tastes. It has application for both stand-alone systems and multi-user
> networks.

> Privoxy is Free Software and licensed under the GNU GPLv2.

I'm from Bangladesh; my connection is ~256 kbps and my PC is slow (256 MB RAM
and 1.6 GHz). If not for Privoxy, I couldn't browse the internet.

------
l0b0
Excellent site which brings us to an obvious question: At what point should we
as developers consider a site _good enough?_ There's an infinite tail of worse
and worse speeds and latencies. At some point it makes business sense to stop
optimising, and for businesses with lots of users that point is inevitably
_before_ supporting 100% of users. So how do I prove to the business where the
90th and 99th percentiles are, within some reasonably scientific measure of
uncertainty?

Aside: I just checked the site I'm working on. When throttling Chromium to
GPRS speeds (500 ms, 50/20 kb/s) the main page has all the text on it by 16
seconds after a hard refresh.
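
One way to get those percentiles (a sketch; the /rum endpoint and payload fields are made up for illustration) is to collect real-user load times with the Navigation Timing API and compute the distribution server-side:

    // Report each real user's page load time so the 90th/99th percentiles
    // can be computed from field data rather than guessed.
    window.addEventListener("load", () => {
      // Wait a tick so loadEventEnd is populated.
      setTimeout(() => {
        const [nav] = performance.getEntriesByType(
          "navigation",
        ) as PerformanceNavigationTiming[];
        if (!nav) return;
        const payload = JSON.stringify({
          url: location.pathname,
          loadTimeMs: Math.round(nav.duration), // navigation start to load end
        });
        // sendBeacon still delivers if the user navigates away mid-request
        navigator.sendBeacon("/rum", payload);
      }, 0);
    });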

------
dgudkov
Developer axiom #1: if it works on my computer, then it works for everyone
else.

~~~
JustSomeNobody
[https://rlv.zcache.ca/works_on_my_machine_round_sticker-r56c...](https://rlv.zcache.ca/works_on_my_machine_round_sticker-r56ce1cc314be46efbe749e9c58c761d5_v9waf_8byvr_512.jpg)

------
robocat
> Please, please, if your site requires AJAX to work at all, then retry failed
> AJAX queries.

Anyone have data on the best ways to do this? Or information on the
implementations used by say Gmail or Facebook?

------
colanderman
I don't see a mention of the question of utmost importance – at what baud rate
are you connecting?

------
therealmarv
Google Data Compression on Chrome helps a lot, at least on HTTP connections.
Too bad this does not work for HTTPS sites, where the web designers do not
test and optimize enough for low bandwidth.

~~~
kalleboo
Or Opera Mini, which renders pages server-side and sends a minimal
representation to your phone. They even managed to convince Apple they're not
a browser, so there's an iOS version.

In my 56K days, regular Opera was my browser of choice, since it had a very
useful toggle for loading images or not (or showing only cached ones, or
letting you load them in afterwards without a painful page refresh cycle).

~~~
tracker1
You can have a browser; you just can't use your own JS engine, only Safari's.
Opera Mini also proxies everything through Opera's servers and delivers a
better mobile representation.

~~~
kalleboo
[https://developer.apple.com/app-store/review/guidelines/](https://developer.apple.com/app-store/review/guidelines/)

> 2.5.6 Apps that browse the web must use the appropriate WebKit framework and
> WebKit Javascript.

It reads to me like you're not allowed to use your own HTML renderer either.

------
cyrusmg
I have seen something similar on Ryanair.com - this is why I chose React
instead of Angular when I was looking for a new frontend framework.

------
lupin_sansei
I wonder how much difference surfing with Adblock would make while connecting
over a modem?

------
trevyn
What browser is this?

~~~
truncheon
Uh, the point is that, on a slow connection, everything is going to suck in
unpredictable ways, because of server-side bloat and the trendiness of
developers deploying 5 MB JS libraries.

Worrying about whether the browser was recent or supported misses the point.
Any browser can suffer these sorts of problems on a 56k dial-up connection.
(3G mobile data is frequently throttled to 128 kbit/s by popular ISPs, btw.)

~~~
frumiousirc
Uh, reading a whole slew of things into a simple question is missing the
point.

------
abpavel
Twitter/FB logos are SVG, and are rendered at whatever resolution is needed,
i.e. SVG does not have a "full scale".

------
jordache
Please make your site Lynx compatible.

------
Fifer82
I don't see the problem. It isn't 1996 and I don't care about people who turn
JS off. This tiny percentage of people is dwarfed by IE9 users, whom I don't
support either.

What is the moral of the story?

~~~
eponeponepon
The moral of the story is that some web people don't care about things which,
had they a deeper understanding of what they do, they would intuitively grasp
they should care for. QED.

~~~
Fifer82
So I will ask my boss on Monday if he minds that we revisit all our projects
and refactor for the 0.01% of hipsters?

~~~
wtbob
If you have to revisit your projects to ensure that they work properly, then
you wrote them improperly in the first place.

