Please, please, if your site requires AJAX to work at all, then retry failed AJAX queries. Use exponential backoff or whatever, but don't let one failed AJAX query leave the page unusable.
This happens all the freaking time when I'm on dialup, and there's nothing more annoying than having filled out a form or series of forms only to have the submit button break because it used AJAX to do a sanity check and threw an exception because the server timed out after some absurdly short (dialup-wise) period of time while the client was sending the request.
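Even a dumb wrapper like this sketch would go a long way (the retry count and delays here are made up; tune them for your own app):

// Retry a request a few times with exponential backoff before giving up.
async function fetchWithRetry(url, options, maxRetries = 5) {
  let delay = 500;                                   // start at half a second
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const res = await fetch(url, options);
      if (res.ok || res.status < 500) return res;    // success, or a 4xx that retrying won't fix
    } catch (err) {
      // network error or timeout - fall through and retry
    }
    await new Promise(resolve => setTimeout(resolve, delay));
    delay *= 2;                                      // exponential backoff
  }
  throw new Error('request failed after ' + (maxRetries + 1) + ' attempts');
}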
And then there are many, many users worldwide who use a smartphone (small screen) and connect over GPRS or EDGE (as slow as a modem). Bonus points when overblown sites with dozens of ads and trackers crash your browser on a mobile device because they ate all your RAM for lunch. Every additional JS file is a pain. Tip: keep the total JS file size below 300 KB.
Side rant: can I take a moment to call out WHATWG for deciding to specify that all networking errors in XMLHttpRequest get status: 0 and absolutely no explanation anywhere in the response object (see https://fetch.spec.whatwg.org/#concept-network-error), making it an absolute nightmare to diagnose problems and support our users? I suppose in a world of fail whales and cute cat pics, leaving the user in the dark as to why something broke is now standard practice, but at least in those cases there's something server-side to let the techs know what's going on. I get random calls from users complaining about this, and the best I've got is that they tried to leave the page while the form was still saving (which triggers the same error handler) or that their internet connection dropped ever so briefly, because the next words out of their mouth are "but I can view other websites".
> can I take a moment to call out WHATWG for deciding to specify that all networking errors in XMLHttpRequest get status: 0
Do the w3c specs have anything to say on the matter?
> The error flag indicates some type of network error or request abortion
And in their spec for "network errors" at https://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/#reque... unless you're using it in synchronous mode, it just sets the error flag. There's an onerror callback, but it doesn't appear to get any more information about what went wrong than the WHATWG version.
Now that I think about it, generating some request UUID and passing it to the server could allow it to quickly skip duplicates (it would also need to cache responses for resending them).
I'm also curious how often this problem occurs statistically (not just in your case, but for an average user of popular sites)
In HTTP-semantic terms, you're always allowed to retry GETs, and they're always supposed to be idempotent (or not change state at all, but that's a lost battle.)
There's no reason that the browser shouldn't default to automatically retrying (with back-off) GET XHRs, save for how many sites are built without a proper understanding of HTTP semantics.
E.g. /increment?retry=true, and the server gets 3 of them - how many times should it increment? Is it just that the user still hasn't received the first response, or should we increment 3 times because it was send/fail/retry/ok 3 times on the client side?
But with this UUID it still can be easily implemented as a middleware.
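For instance, a rough sketch of such a middleware (Express-style; the header name and in-memory Map are assumptions - a real version would need expiry and shared storage):

// Idempotency sketch: the client sends a UUID with every mutating request,
// and the server replays the cached response for duplicates instead of
// re-running the handler.
const express = require('express');
const app = express();
const seen = new Map();                      // request id -> { status, body }

app.use((req, res, next) => {
  const id = req.get('X-Request-Id');
  if (!id || req.method === 'GET') return next();        // GETs are safe to retry anyway
  if (seen.has(id)) {
    const cached = seen.get(id);
    return res.status(cached.status).json(cached.body);  // duplicate: replay, don't re-run
  }
  const originalJson = res.json.bind(res);
  res.json = (body) => {                     // capture the response before it goes out
    seen.set(id, { status: res.statusCode, body });
    return originalJson(body);
  };
  next();
});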
Turn up retries on TCP. Increase timeouts on the servers. Etc...
(disclaimer: my web developer days were a lifetime ago)
Wasn't this basically the idea with CSS (and user CSS), "semantic" HTML, and JS as progressive enhancement? Those were pretty exciting ideas (IMHO), and with modern CSS there's no reason they shouldn't look good either... whatever happened with that?
Yeah, we should not be visiting those sites. Give our attention and currency to the sites that care about their users.
I guess one good thing about AMP is now I know what sites to do my best not to use.
I suppose it'd be great if pampered first-worlders could disable it for themselves.
No one who uses dial-up wants to use dial-up.
But the modern web is hopeless on slow connections. Some sites are pretty bad even at standard cable internet speeds. Site developers who work with gigabit fiber connections should try their pages at 10 Mbit/s or slower, and at 3G speeds, out of courtesy to the real world.
The copper cable bringing ADSL to my house broke and it is prohibitively expensive to fix. I haven't had internet connectivity this bad since 1999, but I can still work alright with mosh + tmux. Browsing the web isn't very nice, though.
I regularly pull 60/30 speeds on LTE. Broadband is 50/5. :)
As I said - if you actually look at the data in aggregate, you'll notice that people love to use their mobile devices for reading the web on commutes. Millions and millions of people devote their morning/afternoon commute to reading the internet on their mobile devices, and mobile coverage in those conditions is nowhere close to "universal LTE without packet loss". A lot of cities (heck, even Caltrain in SV) have spotty commuter-line coverage, especially when cells get loaded by a large number of clients. Those people will then get a crappy experience if you just assume everyone has LTE.
Is anecdotal "feeling" data really how you design and optimize your software for end users? :/
In particular, the latency can make your browsing experience rival dial up.
yes, 2g isn't dialup modem speed, but it isn't useful, either
That was no fun even back then. After a year, I upgraded to ISDN, which was a lot faster (64kbit), but once I got to use a faster line, even ISDN seemed awfully slow.
And the trend continues to this day. Once one has a faster connection, one gets used to it in no time. And just like many companies and individuals have solved problems with slow software by throwing more/faster hardware at it, these days we solve the problem of web sites making inefficient use of bandwidth by throwing more bandwidth at it.
Which might not even be such a bad thing - I would not like going back to programming in an environment where I have to triple-check every variable to see if I can shave off a couple of kilobytes, either.
But even a fast connection gets clogged at times, and even on a 16MBit DSL line, I have seen pages looking broken because the request for getting the CSS timed out or something like that.
Maybe taking more care not to waste bandwidth should be considered a form of courtesy. People on slow/saturated lines will thank you for it, and people on fast lines will be amazed at how snappy that website loads. (And of course, there's always trade-offs involved; I do not demand web developers sacrifice everything else to efficient bandwidth usage; but it should not be ignored, either.)
This isn't limited to just the 'developing' world. Last year I was living in Italy and my phone service was provided by Vodafone. When visiting some popular websites (the biggest I remember was anything owned by Gawker Media) I would be redirected to unrelated spam sites a few seconds after the page loaded. Hitting back did nothing, even going back to the search results and pressing the link again did the same. If I fired up my VPN it worked fine though :D
I have no doubt that certain countries (probably in Africa) may be worse, but I think it's a mistake to look at the third world the way you're portraying it. In many cases, they're not deprived or years-behind in the way you've described. In many cases, they're significantly ahead of us.
That might be part of why I've found newer programmers seem to write more complex and inefficient code than older ones, who tend to come up with simple and efficient solutions naturally. I think everyone should experience using and programming in a constrained environment, if only to train this skill of what's efficient and what isn't.
Granted, a decade of that was chasing hardware trends, but the earlier optimizations stack too.
FWIW - when I can, I like to use a - relatively - resource-constrained machine to test my code on, so performance hiccups one would not even notice on a high-end desktop PC become obvious. I used to use a SparcStation for that, but it went the way of all earthly possessions. So now, I use a Raspberry Pi, and an old laptop (it has a "Built for Windows Vista" sticker, so it must have been built before 2009/Q3) for this purpose. It is a luxury I have when programming in my spare time, to use this as the bar - if I cannot get a program to run reasonably well on those machines, I consider it a failure (or a candidate for optimization, when I am in the mood...).
I started on 2400 and had 28.8k through high school on bad phone lines. I used to play solitaire while waiting for web pages to load. Some would take minutes!
I now have gigabit fiber and everything is so fast. A lot of sites are bloated and may take seconds to load on slower DSL but it's still orders of magnitude faster.
Nothing quite like watching characters being drawn at typing speed :) At least on IRC it wasn't too bad unless it was a busy channel.
e.g. find ~ | baud.pl -300
('ad to lick code clean wi' tongue, etc. etc.)
And, after leaving University, the only Internet connection I could use (I believe Demon Internet in the UK was starting up, so dial-up was beginning to appear, but the faster University connections were for students only) was their free 2400 baud connection to the X.25 PAD (translatable to TCP/IP through SLIP once a shell connection established) and I resigned myself to this being the only Internet access I would ever have without a return to academia!
Bought the coupler and a Volker Craig VT100 terminal (complete with still-attached security chain) for $50 CAN from an Admin at my university who had no doubt pinched it from his employer.
This allowed me to dial in and work on programming assignments at the last hour, when the labs were full but open dial-up ports always available.
Don't know when I got rid of the coupler - I felt like a high-tech dude dialing the number, listening for the carrier and then placing the receiver in the cups.
That's called "embedded development" and sometimes it's fun to visit, but I'm glad not to live there.
To be fair, most of the problems these people seem to face are relatively simple (at least a lot of the time) compared to writing multi-threaded web servers with two database backends (one relational, one NoSQL) using 47 different varieties of XML to represent the same data, but in slightly different ways that only become apparent once one has spent a few weeks reading the specs before going to sleep, until one dreams of the perfect file or something.
I know embedded developers have their own fair share of troubling problems; I am so glad that even the worst bug I could ever come up with would be unable to endanger human lives. I start sweating like a hippo just thinking about a user being really unhappy with a piece of code that I wrote; I seriously don't want to know what it's like if your bug has killed people.
(Embedded development covers a wide range of problems, of course, but inevitably, some of those are going to have to do with transporting human beings across an ocean, safely, or something like that. I do not envy the people that have to write that kind of code.)
With an embedded system you know and control every component of the system, right down to the CPU bugs, and as a result your application code is much, much simpler. You can reason about and understand everything; there are no black boxes.
The safety thing is not a big problem in practice. Your companies have lawyers, regulatory teams, risk assessors, standards and code reviews. You will thoroughly test everything to ridiculous detail (in my experience, 30:1 ratio of test:production code). Mistakes happen, but by the time they manifest in reality so many people will have had a hand in it that you won't feel personally responsible.
It's also a maturity thing. At some point, 1) you're not chasing new "needs" and 2) you know how much a few extra megabits actually buys you.
When I keep seeing people asking for optical fiber, I go a bit wild. Turns out they all want multiple HD streams and 50 GB Steam games (I wonder if Steam has lazy game-content streaming somewhere...). It's a fad luxury. Not long ago I think we would have been perfectly happy with 2x 720p, which is possible on a 5 Mbps line. I'm still happy when I get a Linux ISO in 10 minutes or so. I don't think I'd be crazy joyful if it came in 20 seconds.
I upgraded from 16mbit/s to 50mbit a couple years ago and didn't really notice the difference. Now I could get 400mbit but didn't bother.
For example, I used to live in a house where there were 5 of us. I really wanted to subscribe to Netflix, because it was 1/10th the cost of cable TV and suited my needs even better. But the connection just wasn't strong enough for two people to stream video at once so it meant we had to work out a timetable of who could watch movies/TV when, swap blocks if some live event one of us wanted to watch came on, etc, and it just wasn't worth the inconvenience.
Most people live with other people.
I read 'screaming'.
nmap s :emenu View.Page Style.No Style<CR>
nmap <S-s> :emenu View.Page Style.Basic Page Style<CR>
On iOS I usually try to load articles in between stops in the subway and frequently cannot get just the article without being bothered by the rest of the nonsense on the page. I am open to the idea that I might just be using reader mode wrong, but I'd have it on by default if I could be assured I'd actually get a full article each time.
Alt, V, Y, B to enable site-provided CSS again (View > Page Style > Basic Page Style)
map s :set invusermode <CR>
I'd love to be able to control umatrix via vimperator though
A surprising number of people are still on low-bandwidth connections; while it's probably not reasonable to optimize for them, it's at least worth considering that market occasionally.
This is obvious and pedantic, but it also counters half of the comments on these sorts of stories.
Compare this with mbasic.facebook.com, which is 22 requests, 107 KB, and 1.48 s. There are some small UX problems with basic HTML Facebook that I would recommend they improve on (placement of links/buttons, omnipresent header), but overall it is a much better experience for me since I feel much more in control. Same with basic HTML Gmail vs full Gmail.
My point is that it is absolutely possible to not give in to shitty bloated web trends driven by the expectation to increase popularity, while making a quality, profitable website.
Mind blown. It even has messaging that works without having to install privacy-invading Messenger. You have just improved my facebook UX by a country mile.
It also downscales all the media, making it hard to appreciate a lot of things.
The main facebook site makes a lot of media requests to preload the things you're going to start reading immediately.
The main Facebook site doesn't attempt to scale to the connection speed, which is unfortunate, but it offers a much better experience for those who have the bandwidth to pull in the pictures, especially for the primary use case (scroll a bunch to see a bunch of people's stuff).
I wouldn't want Facebook to scale things to my bandwidth. That gives me less control over what I see because the system is making mostly arbitrary decisions for me.
I spent the holidays at my parents'; they're hooked to 1024 kbps ADSL, so a theoretical 128 kB/s. Effectively it hovers between 80k and 120k, which is fine…-ish: the connection will regularly jump to ~7% packet loss and 2000±1000 ms ping. Downloading at 80k is one thing; downloading at 80k with a 3 s round trip and 1 packet in 15 missing, things get much trickier, especially when most pages have a really high number of resources.
And nothing better is currently available short of cellular (they do get OK 4G, at least when the weather doesn't screw with it), though they're supposed to get fibre a year or two down the line.
Although that doesn't mean you should go ahead and throw in lodash + underscore + ramda + jquery + jquery extensions all on top of react/redux. Just because you're using a modern framework doesn't mean it needs to be huge. You can build a basic app with preact and hit a total min+gz size of around 20-30 KB.
Yes, that's more than server-rendered simple AMP-style HTML... on the flip side, you don't have to round-trip for every screen either. It's also a LOT smaller than some of the behemoths out there, and even then, a lot of the time images are the biggest hit in bandwidth.
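For what it's worth, a "basic app" in that sense is just a few lines of your own code on top of the library; the bulk of the 20-30 KB is preact itself. A made-up example:

// Hello-world sized preact app.
import { h, render } from 'preact';

function App(props) {
  return h('h1', null, 'Hello, ' + props.name);
}

render(h(App, { name: 'world' }), document.body);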
If and only if you do the SPA right. Compare Twitter (slow to load, slow to interact with, battery-draining, issues with maintaining scroll position, and all of these get worse the farther you scroll down) to the SPA version of https://mobile.twitter.com/ (fast to load and interact with) to the static version of Mobile Twitter (even faster, no infinite scroll, works on Dillo).
I've seen so many of these, even on websites with lots of traffic. Websites have to be written with the way they load in mind too, especially on mobile data.
I've found the Chrome DevTools feature where you can throttle bandwidth comes in super handy for this.
What's nice about NoScript is that I can turn on their JS but keep the scripts from other sites turned off. Apparently they only use googletagmanager. uBlock doesn't report any blocked scripts, so it's a rare well-behaved site.
I use elinks often, and find it's text-based approach easier to comprehend. What are your thoughts?
There's nothing wrong with site-provided CSS, but I strive to make my own stuff work with or without CSS.
It really is unfortunate that there is no way to have these widely-used resources (Font Awesome, jQuery, etc.) cached on a long term basis across all sites that use them. (Though arguably this is easily achieved for fonts, which can be installed system-wide.)
There is - Google, Cloudflare and jQuery all offer CDNs. The problem with CDNs is that you then have a SPOF and an external dependency.
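One common mitigation is to fall back to a local copy when the CDN fails. A rough sketch (the jQuery version and local path here are just examples):

// Load jQuery from a CDN, falling back to a self-hosted copy if the CDN is unreachable.
(function () {
  var cdn = document.createElement('script');
  cdn.src = 'https://code.jquery.com/jquery-3.1.1.min.js';
  cdn.onerror = function () {
    var local = document.createElement('script');
    local.src = '/js/jquery.min.js';       // assumed local path
    document.head.appendChild(local);
  };
  document.head.appendChild(cdn);
})();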
> Though arguably this is easily achieved for fonts, which can be installed system-wide.
Oh please no, this always leads to problems sooner or later - for example, graphics designers tend to have LOTS of fonts installed, from all possible sources. Especially the pirated fonts tend to freak out in lots of different ways, and if there's a local() in the font-face rule, things break and you get weird bug reports...
Ad Blocker is a must.
But this also seems like complaining about the trouble with driving a horse-drawn carriage on the interstate. Sure, there are lots of people around the world still on low-speed networks - just like there are people who still use horses as their primary mode of travel. And maybe there should be a way to accommodate them, but let's not pretend that the advances in website technology are only a detrimental problem that needs to be solved.
You're lucky. Last designer I asked to provide a SVG image had obviously exported from a vector drawing tool to a bitmap, AND THEN converted to SVG. So you kinda had a whole SVG object for each pixel.
It never occurred to him that a 40 MB SVG file might indicate a problem (when the point of having a vector format was partly to save memory).
I wonder how big can the SVG DOM be in browsers before things start to get painfully slow
Usually happens with SVGs whose internal dimensions differ wildly from the display dimensions.
e.g. Internal canvas of 1000px x 1000px, display size of 10px.
Granted that is a very retro concept.
I wrote it when I was suffering terrible speeds over mobile internet (EDGE) a couple of years back.
> Privoxy is a non-caching web proxy with advanced filtering capabilities for enhancing privacy, modifying web page data and HTTP headers, controlling access, and removing ads and other obnoxious Internet junk. Privoxy has a flexible configuration and can be customized to suit individual needs and tastes. It has application for both stand-alone systems and multi-user networks.
> Privoxy is Free Software and licensed under the GNU GPLv2.
I'm from Bangladesh; my connection is ~256 kbps and my PC is slow (256 MB of RAM and 1.6 GHz). If it weren't for Privoxy, I couldn't browse the internet.
# deny any request whose URL path matches a regex listed in blocked.extensions.acl
acl Blocked_extensions urlpath_regex -i "/etc/squid3/blocked.extensions.acl"
http_access deny Blocked_extensions
Aside: I just checked the site I'm working on. When throttling Chromium to GPRS speeds (500 ms, 50/20 kb/s) the main page has all the text on it by 16 seconds after a hard refresh.
Anyone have data on the best ways to do this? Or information on the implementations used by say Gmail or Facebook?
In my 56K days, the regular Opera was my browser of choice since it had a very useful toggle for loading images or not (or showing only cached ones, or letting you load them in afterwards without a painful page refresh cycle)
It reads to me like you're not allowed to use your own HTML renderer either.
Worrying about whether the browser was recent or supported misses the point. Any browser can suffer these sorts of problems on a 56k dial-up connection. (3G mobile data is frequently throttled to 128 kbit/s by popular ISPs, btw.)
What is the moral of the story?