Please, please, if your site requires AJAX to work at all, then retry failed AJAX queries. Use exponential backoff or whatever (see the sketch below), but don't let a single failed AJAX query leave the page unusable.
This happens all the freaking time when I'm on dialup, and there's nothing more annoying than having filled out a form or series of forms only to have the submit button break: it used AJAX to do a sanity check, the server timed out after some absurdly short (by dialup standards) period while the client was still sending the request, and the script threw an exception.
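Something along these lines is usually enough. This is just a sketch; the retryingFetch name, the attempt limit, and the backoff values are purely illustrative:

    // Retry a failed request with exponential backoff instead of letting one
    // network hiccup break the page.
    async function retryingFetch(
      url: string,
      init?: RequestInit,
      maxAttempts = 5,
    ): Promise<Response> {
      let delayMs = 500;
      for (let attempt = 1; ; attempt++) {
        try {
          const res = await fetch(url, init);
          // Retry server-side failures; 4xx responses won't improve on retry.
          if (res.status < 500 || attempt === maxAttempts) return res;
        } catch (err) {
          // fetch rejects on network errors (what XHR reports as status 0).
          if (attempt === maxAttempts) throw err;
        }
        await new Promise((resolve) => setTimeout(resolve, delayMs));
        delayMs *= 2; // 0.5s, 1s, 2s, 4s, ...
      }
    }

The submit handler can then await retryingFetch("/sanity-check", { method: "POST" }) (endpoint name made up) and only surface an error after several attempts have failed.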
And then there are many, many users worldwide who use a smartphone (small screen) and connect over GPRS or EDGE (as slow as a modem). Bonus points when overblown sites with dozens of ads and trackers crash the browser on a mobile device because they ate all your RAM for lunch. Every additional JS file is a pain. Tip: keep the total JS file size below 300KB.
Side rant: can I take a moment to call out WHATWG for deciding to specify that all networking errors in XMLHttpRequest get status: 0 and absolutely no explanation anywhere in the response object (see https://fetch.spec.whatwg.org/#concept-network-error)? It makes it an absolute nightmare to diagnose problems and support our users. I suppose in a world of fail whales and cute cat pics, leaving the user in the dark as to why something broke is now standard practice, but at least in those cases there's something server-side to let techs know what's going on. Instead, I get random calls from users complaining about this, and the best I've got is that they tried to leave the page while the form was still saving (which triggers the error handler too, since it's the exact same code) or that their internet connection dropped ever so briefly, because the next words out of their mouth are "but I can view other websites".
> can I take a moment to call out WHATWG for deciding to specify that all networking errors in XMLHttpRequest get status: 0
Do the W3C specs have anything to say on the matter?
> The error flag indicates some type of network error or request abortion
And in their spec for "network errors" at https://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/#reque... , unless you're using it in synchronous mode, it just sets the error flag. There's an onerror callback, but it doesn't appear to get any more information about what went wrong than the WHATWG version.
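For what it's worth, here's the complaint in miniature (a sketch; the endpoint is made up): no matter what actually went wrong on the network, the error handler only ever sees status 0 and an empty statusText.

    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/save");
    xhr.onerror = () => {
      // Fires for DNS failure, timeout, connection reset -- all collapsed into one signal.
      console.log(xhr.status);     // 0
      console.log(xhr.statusText); // "" -- no hint of which failure it was
    };
    xhr.send();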
Now that I think about it, generating some request UUID and passing it to the server could allow it to quickly skip duplicates (it would also need to cache responses for resending them).
I'm also curious how often this problem occurs statistically (not just in your case, but for an average user of popular sites)
In HTTP-semantic terms, you're always allowed to retry GETs, and they're always supposed to be idempotent (or not change state at all, but that's a lost battle.)
There's no reason that the browser shouldn't default to automatically retrying (with back-off) GET XHRs, save for how many sites are built without a proper understanding of HTTP semantics.
E.g. the server gets 3 copies of /increment?retry=true: how many times should it increment? Is it just that the user still hasn't received the first response, or should we increment 3 times because the client went through send/fail/retry/ok 3 times?
But with this UUID it can still easily be implemented as middleware.
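A rough sketch of that idea (all names here, including handleIncrement and the /increment endpoint, are hypothetical): the client generates one UUID per logical request and resends it unchanged on every retry, and the server answers duplicates from a response cache instead of repeating the side effect.

    let counter = 0;
    const responseCache = new Map<string, string>();

    // requestId comes from the client, e.g. crypto.randomUUID(), generated once
    // per logical request and reused across retries.
    function handleIncrement(requestId: string): string {
      const cached = responseCache.get(requestId);
      if (cached !== undefined) {
        return cached; // already processed: replay the old response, don't increment again
      }
      counter += 1; // the side effect happens exactly once per requestId
      const body = JSON.stringify({ counter });
      responseCache.set(requestId, body);
      return body;
    }

In the send/fail/retry/ok scenario above, the counter ends up at 1 because all three deliveries carry the same requestId; a real middleware would also need to expire cache entries.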
Turn up retries on TCP. Increase timeouts on the servers. Etc...
(disclaimer: my web developer days were a lifetime ago)
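If the backend happens to be Node, "increase timeouts on the servers" could look roughly like this (the values are illustrative, not recommendations):

    import http from "node:http";

    const server = http.createServer((_req, res) => {
      res.end("ok");
    });

    // Give slow clients (dial-up, congested mobile) more time before the
    // server drops the connection.
    server.headersTimeout = 65_000;   // ms allowed to receive the request headers
    server.requestTimeout = 120_000;  // ms allowed for the whole request
    server.keepAliveTimeout = 30_000; // ms an idle keep-alive connection stays open

    server.listen(8080);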
Wasn't this basically the idea with CSS (and user CSS), "semantic" HTML, and JS as progressive enhancement? Those were pretty exciting ideas (IMHO), and with modern CSS there's no reason such sites shouldn't look good either... whatever happened to that?
Yeah, we should not be visiting those sites. Give our attention and our money to the sites that care about their users.
I guess one good thing about AMP is now I know what sites to do my best not to use.
I suppose it'd be great if pampered first-worlders got to disable it for themselves.
No one who uses dial-up wants to use dial-up.
But the modern web is hopeless on slow connections. Some sites are pretty bad even at standard cable internet speeds. Site developers who work with gigabit fiber connections should try their pages at 10 Mbps or slower, and at 3G speeds, out of courtesy to the real world.
The copper cable bringing ADSL to my house broke and it is prohibitively expensive to fix. I haven't had internet connectivity this bad since 1999, but I can still work alright with mosh + tmux. Browsing the web isn't very nice, though.
I regularly pull 60/30 Mbps on LTE. Broadband is 50/5. :)
As I said, if you actually look at the data in aggregate, you'll notice that people love to use their mobile devices for reading the web on commutes. Millions and millions of people devote their morning/afternoon commute to reading the internet on their mobile devices, and mobile coverage in those conditions is nowhere close to "universal LTE without packet loss". A lot of cities (heck, even Caltrain in SV) have spotty commuter-line coverage, especially when cells get overloaded by a large number of clients. Those people will get a crappy experience if you just assume everyone has LTE.
Is anecdotal gut feeling, rather than data, really how you design and optimize your software for end users? :/
In particular, the latency can make your browsing experience rival dial-up.
Yes, 2G isn't dial-up modem speed, but it isn't useful either.