After 1 minute on my modem (2016) (branchable.com)
409 points by BuuQu9hu on Jan 14, 2017 | 156 comments



Since I temporarily have HN's attention with this side blog of mine, can I suggest one simple tweak:

Please, please, if your site requires AJAX to work at all, then retry failed AJAX queries. Use exponential backoff or whatever but don't let the AJAX query fail once and the page be unusable.
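The retry-with-backoff idea can be sketched in a few lines. This is a minimal illustration, not any particular site's code; the `withRetry` name and delay constants are made up for the example:

```javascript
// Retry a request-like async function with exponential backoff.
// `doRequest` is any function returning a promise (e.g. a fetch call);
// the retry count and base delay here are illustrative defaults.
async function withRetry(doRequest, { retries = 4, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doRequest();
    } catch (err) {
      if (attempt >= retries) throw err;        // give up eventually
      const delayMs = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Then a sanity-check request becomes something like `withRetry(() => fetch('/check'))`, and a transient timeout no longer permanently breaks the submit button.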

This happens all the freaking time when I'm on dialup, and there's nothing more annoying than having filled out a form or series of forms only to have the submit button break because it used AJAX to do a sanity check and threw an exception because the server timed out after some absurdly short (dialup-wise) period of time while the client was sending the request.


Just as an anecdote, your blog site is kind of broken on smaller screens (tablets). The useless right sidebar covers almost half of the screen and overlaps the content (only a bit more than half of it is visible).

And then there are many, many users worldwide who use a smartphone (small screen) and connect over GPRS or EDGE (as slow as a modem). Bonus points when overblown sites with dozens of ads and trackers crash the browser on a mobile device because they ate all your RAM for lunch. Every additional JS file is a pain. Tip: keep the total JS file size below 300KB.


And it doesn't happen only on dialup either, but also on tethered and mobile connections while commuting - which is when most people are using their mobile devices to browse around.


Yeah, found this out the hard way with customers complaining that they hit save, get an error code "0", and then couldn't save again because we had disabled the button after submitting.

Side rant: can I take a moment to call out WHATWG for deciding to specify that all networking errors in XmlHttpRequest get status 0, with absolutely no explanation anywhere in the response object (see https://fetch.spec.whatwg.org/#concept-network-error)? It makes it an absolute nightmare to diagnose problems and support our users. I suppose in a world of fail whales and cute cat pics, leaving the user in the dark as to why something broke is now standard practice, but at least in those cases there's something server-side to let techs know what's going on. Instead, I get random calls from users complaining about this, and the best I've got is that they tried to leave the page while the form was still saving (this triggers the error handler too, since it's the exact same code) or that their internet connection dropped ever so briefly - because the next words out of their mouth are "but I can view other websites".


  can I take a moment to call out
  WHATWG for deciding to specify 
  that all networking errors in 
  XmlHttpRequest get status: 0
I think this is one of those times where people should be ignoring the spec.

Do the w3c specs have anything to say on the matter?


According to w3c:

> The error flag indicates some type of network error or request abortion

and in their spec for "network errors" at https://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/#reque... unless you're using it in synchronous mode, it just sets the error flag. There's an onerror callback, but it doesn't appear to get any more information about what went wrong than the WHATWG version.


It's not just an easy JS fix, though. It complicates quite a lot on the server side. Your timed-out request could have reached the server already, so the state would be different now. From the server's perspective it's not clear whether it was a retry or another request (and some actions are meant to be repeatable).

Now that I think about it, generating some request UUID and passing it to the server could allow it to quickly skip duplicates (it would also need to cache responses for resending them).

I'm also curious how often this problem occurs statistically (not just in your case, but for an average user of popular sites).


Mind you, we're talking about "sanity checks" here, not actions: presumably these are XHR GETs.

In HTTP-semantic terms, you're always allowed to retry GETs, and they're always supposed to be idempotent (or not change state at all, but that's a lost battle.)

There's no reason that the browser shouldn't default to automatically retrying (with back-off) GET XHRs, save for how many sites are built without a proper understanding of HTTP semantics.


Wouldn't it be easier to add a retry=true to the request, at least if you're implementing exponential backoff in an existing codebase? Then you just hash the request (minus the retry field) in your cache layer and send the saved response again. You'll need a little extra code to update timestamps on the client side, but the server side can be implemented as middleware in most frameworks. Expire the cache after a reasonable total timeout and make the client do a new read if they try to retry after the cache has cleared it.


I think you need client side UUID instead of just retry=true for requests that can be repeated with the same params.

E.g. /increment?retry=true, and the server gets 3 of them - how many times should it increment? Is it just that the user still hasn't received the first response, or should we increment 3 times because it went send/fail/retry/ok 3 times on the client side?

But with this UUID it still can be easily implemented as a middleware.
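A framework-agnostic sketch of that middleware idea. The names and the in-memory Map are assumptions for illustration; a real deployment would use a shared store with expiry (e.g. Redis) rather than process memory:

```javascript
// Deduplicate retried requests by a client-generated request ID.
// Wraps a handler so that replays of the same ID within the TTL
// get the cached response instead of re-running the action.
function makeIdempotentHandler(handler, ttlMs = 60000) {
  const seen = new Map(); // requestId -> { response, expiresAt }
  return (requestId, payload) => {
    const now = Date.now();
    const cached = seen.get(requestId);
    if (cached && cached.expiresAt > now) {
      return cached.response;            // retry: resend the saved response
    }
    const response = handler(payload);   // first time: actually run the action
    seen.set(requestId, { response, expiresAt: now + ttlMs });
    return response;
  };
}
```

The client generates one UUID when the user hits submit and reuses it for every retry of that submission, so even a non-idempotent action like an increment runs at most once per UUID.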


but you shouldn't use a GET request for incrementing, right? do people (or popular frameworks' defaults) really do that? frequently?


The OP was talking about having to refill the form, I assumed they weren't submitted using GET.


This is at the TCP/HTTP layer, not app logic. Your machine and the servers you're connected to don't have the right settings for dialup.

Turn up retries on TCP. Increase timeouts on the servers. Etc...


I've had this happen on a cell phone modem! My (large) image uploads to an online flow-chart SaaS started failing. Open up the Chrome dev tools: turns out they all failed at exactly 20.0 seconds - obviously a timeout issue. I had a maddening support experience in which the support tech kept insisting that he couldn't reproduce the problem from his (high-bandwidth) end. I asked him to please just ask an engineer to raise the 20-second timeout, but he stonewalled me. So frustrating.


Then again, as a user I actually find a perpetual loader spinner animation far more annoying than a simple "oh noes server is under heavy load please come back later".


I'm curious what you think about AMP. Do you think it is a good solution for this? (Not implying that AMP solved this problem, but rather having some company as an intermediary to serve you the page correctly, instead of relying on the website's owner.)


AMP isn't a solution for anything. Google should NOT be 'fixing' the web for anyone.


I don't disagree with you, but there are so many bad websites that something should be done. AMP is a really bad solution to a real problem. Walled-garden pages, such as Facebook pages for companies and the like, are equally bad in my opinion. Some time ago I had the idea that, maybe, the website layout could be controlled by the user, and only the data controlled by the website's owner. The web would become boring, I know, but the alternative is just as bad: broken websites.


> Some time ago I had the idea that, maybe, the website layout could be controlled by the user, and only data are controlled by the website's owner. The web would become boring, I know,

(disclaimer: my web developer days were a lifetime ago)

wasn't this basically the idea with CSS (and user CSS), "semantic" HTML and JS as progressive enhancement? those were pretty exciting ideas (IMHO), and with modern CSS there's no reason they shouldn't look good either... what ever happened with that?


> I don't disagree with you, but there are so many bad websites that something should be done.

Yeah, we should not be visiting those sites. Give our attention and currency to the sites that care about their users.

I guess one good thing about AMP is now I know what sites to do my best not to use.


As a third world Google user, I appreciate that they save me a lot of frustration and money with AMP.

I suppose it'd be great if pampered first-worlders get to disable it for themselves.


My point is that the developers of the sites that are so slow should be figuring out why and not leaning on the crutch that is AMP.


The problem AMP is trying to fix is very real. It's just a shitty, power-grabbing fix.


[flagged]


"It hurts when I move my arm." - "Don't move your arm then."

No one who uses dial-up wants to use dial-up.


It's still possible to be reasonably productive on dial-up if you're working in a screen or tmux session over ssh.

But the modern web is hopeless on slow connections. Some sites are pretty bad even at standard cable internet speeds. Site developers who work with gigabit fiber connections should try their pages at 10Mb or slower and 3G speeds out of courtesy to the real world.


With mosh instead of plain SSH, working with a crappy connection becomes pretty fluid.

The copper cable bringing ADSL to my house broke, and it is prohibitively expensive to fix. I haven't had Internet connectivity this bad since 1999, but I can still work alright with mosh + tmux. Browsing the web isn't very nice, though.


10Mb? I celebrate anything over 100KBps. This is anecdotal, but in Uganda (third world) most personal connections (MiFi/modem) are way slower than that.


Your mobile phone == dial-up. Just less reliable, and it drops more packets.


At my home, mobile is much faster than my cable modem, although latency is variable.

I regularly pull 60/30 speeds on LTE. Broadband is 50/5. :)


In 1999 with a BTCellnet brick and WAP, I agree, but at no point in the almost 20 years since then has it been as slow as dial-up in any country I have visited. Maybe I am just lucky, but it seems far-fetched.


How do you define "fast"? Yes, most of the time you (the anecdotal you) probably have LTE coverage everywhere, but that coverage is not universal even in high-tech places like Silicon Valley, and it can vary wildly when just moving around for a bit.

As I said - if you actually look at data in aggregate, you'll notice that people love to use their mobile devices for reading the web on commutes. Millions and millions of people devote their morning/afternoon commute time to reading the internet on their mobile devices, and mobile coverage in those conditions is nowhere close to "universal LTE without packet loss". A lot of cities (heck, even Caltrain in SV) have spotty commuter-line coverage, especially when cells get loaded due to a large number of clients. Those people will get a crappy experience if you just assume everyone has LTE.

Is anecdotal gut feeling really how you design and optimize your software for end users? :/


The throughput might be better than dial up but latency is worse, reliability can be awful (especially when you're between two cell base stations) and the connection can go down for a minute or more at any time.

In particular, the latency can make your browsing experience rival dial up.


Maybe you've never left the cities. I just took a 3000-4000 mile long train trip across the country. I'd estimate that I had cell phone coverage for less than 1% of the trip.


Well, on second thought let's say less than 5%. 1% would only be about an hour of that trip.


I ran into plenty of 2G service in Germany and Belgium last year. Perhaps T-Mobile just has horrible service, but I was frequently on slow 3G at best, especially west of Cologne. It made 4G/LTE really nice when I found it, though :)

Yes, 2G isn't dial-up modem speed, but it isn't useful, either.


When I went online for the first time (nearly twenty years ago - time goes by pretty fast!), I did so on a 14.4 kbit modem.

That was no fun even back then. After a year, I upgraded to ISDN, which was a lot faster (64kbit), but once I got to use a faster line, even ISDN seemed awfully slow.

And the trend continues to this day. Once one has a faster connection, one gets used to it in no time. And just like many companies and individuals have solved problems with slow software by throwing more/faster hardware at it, these days we solve the problem of web sites making inefficient use of bandwidth by throwing more bandwidth at it.

Which might not even be such a bad thing - I would not like going back to programming in an environment where I have to triple-check every variable to see if I can shave off a couple of kilobytes, either.

But even a fast connection gets clogged at times, and even on a 16MBit DSL line, I have seen pages looking broken because the request for getting the CSS timed out or something like that.

Maybe taking more care not to waste bandwidth should be considered a form of courtesy. People on slow/saturated lines will thank you for it, and people on fast lines will be amazed at how snappy that website loads. (And of course, there's always trade-offs involved; I do not demand web developers sacrifice everything else to efficient bandwidth usage; but it should not be ignored, either.)


Well balanced. And keep in mind that many of the emerging web users across the world are on mobile connections which, even if using 3G or 4G or higher, are in practice much slower than advertised and latency-ridden, due to load and outright corruption by the service providers. Not to mention that metered bandwidth is still the norm outside select areas.


> outright corruption by the service providers

This isn't limited to just the 'developing' world. Last year I was living in Italy and my phone service was provided by Vodafone. When visiting some popular websites (the biggest I remember was anything owned by Gawker Media) I would be redirected to unrelated spam sites a few seconds after the page loaded. Hitting back did nothing, even going back to the search results and pressing the link again did the same. If I fired up my VPN it worked fine though :D


I'm curious what proportion of sites you were confident you were visiting over HTTPS.


Perhaps, and all I have is anecdotal experience, but having spent most of the last year traveling around Southeast Asia, I was stunned by how much better their wireless service was. For $10-$15/mo, I was getting more data, at faster speeds and with less latency than I get here in the US for roughly 3x the cost. The coverage was also pretty excellent.

I have no doubt that certain countries (probably in Africa) may be worse, but I think it's a mistake to look at the third world the way you're portraying it. In many cases, they're not deprived or years-behind in the way you've described. In many cases, they're significantly ahead of us.


On the other hand, as long as the latency is similar, the SSH experience is pretty much the same regardless of bandwidth --- the exception being commands that produce plenty of output.

> Which might not even be such a bad thing - I would not like going back to programming in an environment where I have to triple-check every variable to see if I can shave off a couple of kilobytes, either.

That might be part of why I've found newer programmers seem to write more complex and inefficient code than older ones, who tend to come up with simple and efficient solutions naturally. I think everyone should experience using and programming in a constrained environment, if only to train this skill of what's efficient and what isn't.


You are not controlling for the programmer's age as a variable. An equally valid explanation could be that newer programmers have less experience to draw from, where older programmers have more experience and know what works and what doesn't. Or, alternatively, newer programmers might just be a much larger pool of people, and perhaps more people stop programming as they get older than start, so that the average skill of an older programmer is much higher than you might otherwise expect (since all the low-skill programmers quit when they were younger).


This. My day-to-day code has probably doubled in efficiency every 2-4 years since the mid 90's.

Granted, a decade of that was chasing hardware trends, but the earlier optimizations stack too.


> I think everyone should experience using and programming in a constrained environment

FWIW - when I can, I like to use a (relatively) resource-constrained machine to test my code on, so performance hiccups one would not even notice on a high-end desktop PC become obvious. I used to use a SparcStation for that, but it went the way of all earthly possessions. So now I use a Raspberry Pi and an old laptop (it has a "Built for Windows Vista" sticker, so it must have been built before 2009/Q3) for this purpose. It is a luxury I have when programming in my spare time to use this as the bar - if I cannot get a program to run reasonably well on those machines, I consider it a failure (or a candidate for optimization, when I am in the mood...).


Your point is valid but the load time of modern sites is nowhere near comparable to dialup.

I started on 2400 and had 28.8k through high school on bad phone lines. I used to play solitaire while waiting for web pages to load. Some would take minutes!

I now have gigabit fiber and everything is so fast. A lot of sites are bloated and may take seconds to load on slower DSL but it's still orders of magnitude faster.


I remember when I upgraded to a 14.4k modem, and boy was I pleased after dialing with a 9600 baud modem for years.


I remember 300 baud connections! :) because the "fast" 2400 ones were all in use.

Nothing quite like watching characters being drawn at typing speed :) least on IRC it wasn't too bad unless it was a busy channel


Brendan Gregg's "baud.pl" script[0] is great for re-living those nostalgic moments.

e.g. find ~ | baud.pl -300

[0]: http://www.brendangregg.com/Specials/baud


Luxury. I started out on 1200/75. I used to dream of 2400, let alone 9600!

('ad to lick code clean wi' tongue, etc. etc.)


Was that PRESTEL, maybe? Did you ever take advantage of the ability to have the modem train as the server side of a 1200/75 connection? That way, you could upload at the blistering speed of 1200 baud, with the downside of trying to navigate PRESTEL or its ilk at the staggeringly slow (worse than the original IBM teletypes) speed of 75 baud! Good times...

And, after leaving University, the only Internet connection I could use (I believe Demon Internet in the UK was starting up, so dial-up was beginning to appear, but the faster University connections were for students only) was their free 2400 baud connection to the X.25 PAD (translatable to TCP/IP through SLIP once a shell connection was established), and I resigned myself to this being the only Internet access I would ever have without a return to academia!


Well, I first wrote 2700 baud, and then I had a sudden worry that I was remembering wrong. I honestly think I started at 2700 baud as a 10-year-old.


I'll pitch in here with a 300 bps acoustic coupler claim, back when bps == baud [0].

Bought the coupler and a Volker Craig VT100 terminal (complete with still-attached security chain) for $50 CAN from an Admin at my university who had no doubt pinched it from his employer.

This allowed me to dial in and work on programming assignments at the last hour, when the labs were full but open dial-up ports always available.

Don't know when I got rid of the coupler - I felt like a high-tech dude dialing the number, listening for the carrier and then placing the receiver in the cups.

[0]http://www.tldp.org/HOWTO/Modem-HOWTO-23.html


> I would not like going back to programming in an environment where I have to triple-check every variable to see if I can shave off a couple of kilobytes

That's called "embedded development", and sometimes it's fun to visit, but I'm glad not to live there.


Some of my friends work in embedded development, so I have at least a second-hand feeling of what it's like to have to get the code work with only 64k of RAM.

To be fair, most of the problems these people seem to face are relatively simple (at least a lot of the time) compared to writing multi-threaded web servers with two database backends (one relational, one NoSQL) using 47 different varieties of XML to represent the same data, but in slightly different ways that only become apparent once one has spent a few weeks reading the specs before going to sleep, until one dreams of the perfect file or something.

I know embedded developers have their own fair share of troubling problems; I am so glad that even the worst bug I could ever come up with would be unable to endanger human lives. I start perspiring like a hippo just thinking about a user being really unhappy with a piece of code that I wrote; I seriously don't want to know what it's like if your bug has killed people.

(Embedded development covers a wide range of problems, of course, but inevitably, some of those are going to have to do with transporting human beings across an ocean, safely, or something like that. I do not envy the people that have to write that kind of code.)


I do both embedded development (day job) and web development (night job(s)), and it's not that embedded tasks are simpler, it's that you don't have to deal with a ridiculous stack of third-party code that is constantly changing.

With an embedded system you know and control every component of the system, right down to the CPU bugs, and as a result your application code is much, much simpler. You can reason about and understand everything; there are no black boxes.

The safety thing is not a big problem in practice. Your companies have lawyers, regulatory teams, risk assessors, standards and code reviews. You will thoroughly test everything to ridiculous detail (in my experience, 30:1 ratio of test:production code). Mistakes happen, but by the time they manifest in reality so many people will have had a hand in it that you won't feel personally responsible.


Thank you, that was fascinating to read!


I remember having to wait at a friend's flat in 2004 that only had a 56K line. For the average webpage, online chat, and even the occasional mp3 download it felt alright. Not sluggish, not cringey, not anger-inducing.

It's also a maturity thing. At one point 1) you're not chasing new "needs" and 2) you know how much a little more buys you.

I keep seeing people asking for optical fiber and I'm going wild. Turns out they all want multiple HD streams and 50GB Steam games (I wonder if Steam has lazy game-content streaming somewhere...). It's a fad luxury. Not long ago I think we would have been nicely happy with 2x 720p, which is possible on a 5Mbps line, I think. I'm still happy when I get a Linux ISO in 10 minutes or so. I don't think I'd be crazy joyful if it came in 20 seconds.


You can stream anything at any bitrate, but 2x 720p on 5 Mbps would be a very extreme case. That would be a third to a quarter the bitrate of a 20-year-old DVD, or 3.5 - 7% the bitrate of a Blu-ray disc. That's less than half the 'Extremely Low' preset of the Sorenson encoder, the lowest you can go without defining custom options.
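Rough numbers behind that comparison. The DVD and Blu-ray figures below are typical ballpark bitrates, not exact specs:

```javascript
// Back-of-envelope bitrate math for two simultaneous 720p streams
// sharing a 5 Mbps line.
const lineMbps = 5;
const perStreamMbps = lineMbps / 2;          // 2.5 Mbps per stream

const dvdMbps = 8;                           // typical DVD video bitrate
const blurayMbps = 36;                       // typical Blu-ray video bitrate

const vsDvd = perStreamMbps / dvdMbps;       // roughly a third of DVD
const vsBluray = perStreamMbps / blurayMbps; // roughly 7% of Blu-ray
```

So 2.5 Mbps per stream is indeed in the "a third to a quarter of a DVD" range, depending on which DVD bitrate you assume.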


why do you even need to stream more than a single movie at a time?

I upgraded from 16mbit/s to 50mbit a couple years ago and didn't really notice the difference. Now I could get 400mbit but didn't bother.


You'd want to stream more than one movie at a time if you had more than one person in a house.

For example, I used to live in a house where there were 5 of us. I really wanted to subscribe to Netflix, because it was 1/10th the cost of cable TV and suited my needs even better. But the connection just wasn't strong enough for two people to stream video at once so it meant we had to work out a timetable of who could watch movies/TV when, swap blocks if some live event one of us wanted to watch came on, etc, and it just wasn't worth the inconvenience.

Most people live with other people.


Some households contain more than one person.


As other said, couples, families.


Fair enough, didn't think of that! We're actually 3 here but my wife and I always watch movies together & the baby isn't big on streaming.


I remember the first time my wife and kids made a game unplayable and resulted in a loss for my team. It feels like I perpetually need a faster connection. And it wasn't long after that they forced me into getting a wireless router that supported N because they were saturating the g one we had.


> the baby isn't big on streaming.

I read 'screaming'.


If you use vimperator (http://www.vimperator.org/) on Firefox, put this in your ~/.vimperatorrc so you can disable CSS with the "s" character (and re-enable it with Shift-S). It removes 99% of bullshit from web pages and allows you to read articles the way Tim Berners-Lee intended, guaranteed!

    nmap s :emenu View.Page Style.No Style<CR>
    nmap <S-s> :emenu View.Page Style.Basic Page Style<CR>


Thanks. This is actually really great. Not only for testing my own web sites, but also for getting rid of a lot of the bullshit on other sites. I'll try to remember I have this shortcut now!


If you don't use Vimperator, reader mode's usually good for this, too.


My only issue with reader mode is that many sites seem to beat it with the stupid "load rest of article" button, so you can't just load it all through reader.

On iOS I usually try to load articles between stops on the subway, and frequently cannot get just the article without being bothered by the rest of the nonsense on the page. I am open to the idea that I might just be using reader mode wrong, but I'd have it on by default if I could be assured I'd actually get a full article each time.


Opera has, or used to have, a user-styles feature that you can use to do something similar. I used to dump the site's CSS and use my own.


It also has a disable images button.


Is there a significant difference between this function and simply hitting the standard View -> Page Style -> No Page Style menu?


Nope, it's just quicker. Also, my menu bar is hidden and I don't know how to get to it.


Press alt to see the menu.


Alt-vyn to disable site styles

Alt-vyb to enable site provided css again


Neat, thanks. Works identically with the slightly smaller fork called Pentadactyl (http://5digits.org/pentadactyl/).


For me it turns every page completely blank.


You can get similar results by toggling usermode:

    map s :set invusermode <CR>
Then you can just use the same key to switch back and forth.


I've actually just been disabling css via umatrix, but yes. I've found that disabling JS makes sites load at decent speeds, and disabling CSS makes sites readable again.

I'd love to be able to control umatrix via vimperator though


That's... such a good idea.


Thank you, this is terrific!


This is awesome, and actually a pretty neat way of evaluating websites.

A surprising number of people are still on low bandwidth connections, while it's probably not reasonable to optimize for them, it's at least worth considering that market occasionally.


If you optimize for people on low bandwidth connections, you automatically also optimize for those with limited mobile data. And as a bonus, your website becomes faster for everybody with a fat pipe too.


"Optimize" is somewhat the wrong word. "Straying from temptation" would be more accurate, since all the garbage on webpages is placed there by a website manager's overzealous desire to increase popularity or profit.


I understand some people don't use things like cloud software, but the internet isn't just blogs and newspaper websites.

Sometimes styling is added to make web app UX better. Sometimes Javascript is used so that, even if the first load takes more time, subsequent actions will use less bandwidth. Sometimes more stuff is put on the page because 95% of users want that information to also show up.

This is obvious and pedantic, but it also counters half of the comments on these sorts of stories.


My point especially applies to web apps. I just loaded facebook.com, and it took 228 requests, 8,825 KB, and 17.0 s. Scrolling through the page is laggy while a stressful amount of muted videos start playing and mouse-hover events start firing.

Compare this with mbasic.facebook.com, which is 22 requests, 107 KB, and 1.48 s. There are some small UX problems with basic HTML Facebook that I would recommend they improve on (placement of links/buttons, omnipresent header), but overall it is a much better experience for me since I feel much more in control. Same with basic HTML Gmail vs full Gmail.

My point is that it is absolutely possible to not give in to shitty bloated web trends driven by the expectation to increase popularity, while making a quality, profitable website.


> mbasic.facebook.com

Mind blown. It even has messaging that works without having to install privacy-invading Messenger. You have just improved my facebook UX by a country mile.


The combination of mbasic for Messenger, and m for normal Facebook browsing, is fine on Android. I don't install the awful FB apps on my phone any more, and I'm a fairly heavy Facebook user.


mbasic.facebook.com is nice for when you want to save the bandwidth, but the experience is nowhere near as nice as the main website. It only loads the first 2 stories or so, requiring you to click forward many times.

It also downscales all the media, making a lot of things hard to appreciate.

The main facebook site makes a lot of media requests to preload the things you're going to start reading immediately.

The main Facebook site doesn't attempt to scale to the connection speed, which is unfortunate, but it offers a much better experience for those who have the bandwidth to pull in the pictures - especially for the primary use case (scrolling a bunch to see a bunch of people's stuff).


I can see 8 stories on each page on mine, but I agree the images are smaller than I'd like. You have to click the image to get to the image page and then click "View Full Size" to see the full image. But with vimperator, it's much faster to get to that point than it sounds. On the other hand, simply clicking on videos delivers the full raw video, no slow Facebook video player/wrapper needed.

I wouldn't want Facebook to scale things to my bandwidth. That gives me less control over what I see because the system is making mostly arbitrary decisions for me.


I fought so hard for this at one of my current customers. But, no, we have Mbyte+ image maps for our pricing plan selection pages.


Low bandwidth is but one component, the other two are delay and packet loss.

I spent the holidays at my parents'; they're hooked to 1024kbps ADSL[0], so a theoretical 128kB/s. Effectively it hovers between 80k and 120k, which is fine…-ish: the connection will regularly jump to ~7% packet loss and 2000±1000ms ping. Downloading at 80k is one thing; downloading at 80k with a 3s round-trip and 1 packet in 15 missing, things get much trickier, especially when most pages have a really high number of resources.

[0] and nothing better is currently available short of cellular (they do get ok 4G, at least when the weather doesn't screw with it), though they're supposed to get fibre a year or two down the line


>while it's probably not reasonable to optimize for them

It's not an optimization; it's a best practice on the web: server-side render and use JavaScript for progressive enhancement.


The web has moved on from progressive enhancement. Current practise is client-side HTML rendering.


No, no it hasn't. Just because there are a bunch of terrible front-end developers trying to push that notion doesn't make it true. Any halfway competent front-end developer will be able to create sophisticated websites that are usable for the vast majority of their users, and they'll be using something like progressive enhancement to do it.


Client rendering doesn't have to be so bad... I actually did do it as an optimization back in the dialup days. React is pretty nice, and if you really want to reduce payload, you can build against inferno or preact for similar results at a smaller payload, but since React packaged JS is usually less than a single moderately sized image, it's really less of an issue.

Although that doesn't mean you should go ahead and throw in lodash + underscore + ramda + jquery + jquery extensions on top of react/redux. Just because you're using a modern framework doesn't mean it needs to be huge. You can build a basic app with preact, min+gz, at a total size around 20-30kb.

Yes, that's more than server-rendered simple AMP-style HTML... on the flip side, you don't have to round-trip for every screen either. It's also a LOT smaller than some of the behemoths out there, and even then, a lot of the time images are the biggest hit in bandwidth.


That’s really not true for the majority of sites, in particular nearly every document site. There’s no one-size-fits-all best practice, but in general using a SPA is hurting a significant majority of users.


Unfortunately.


I find it really fortunate. SPAs can use much, much less data for the same content, which is a big plus when you're on a data-limited connection, as most users in the developing world are. If you spend much time on a site they also demand less CPU and thus less battery (vdom updates are less intense than full page renders), which helps when you're on a mobile connection, and they feel much more responsive when you're on a high-latency connection, which you are again if you're in the developing world or on mobile. Server-side rendering feels like getting a faster and lighter initial load in exchange for heavier, slower, more battery-draining usage overall.


> Server-side rendering feels like getting a faster and lighter initial load in exchange for heavier, slower, more battery-draining usage overall.

If and only if you do the SPA right. Compare Twitter (slow to load, slow to interact with, battery-draining, issues with maintaining scroll position, and all of these get worse the farther you scroll down) to the SPA version of https://mobile.twitter.com/ (fast to load and interact with) to the static version of Mobile Twitter (even faster, no infinite scroll, works on Dillo).


That's fine for SPAs, what sucks is how many other sites are built that way. :/


To be fair, the market is considered occasionally but a for-profit site isn't going to go out of its way to cater for those people with little money.


This is gold.

I've seen so many of these, even on websites with lots of traffic. Websites have to be written taking into consideration the way they load, too, especially on mobile data. I've found the Chrome DevTools feature where you can throttle bandwidth comes in super handy for this.


For resilience testing I highly recommend Clumsy even if it only runs on Windows:

https://jagt.github.io/clumsy/


That exoscale screenshot is very similar to what I see with NoScript on a 100 Mb/s connection before I temporarily allow their JS.

What's nice about NoScript is that I can turn on their JS but keep the JS from the other sites turned off. Apparently they only use googletagmanager. uBlock doesn't report any blocked script, so it's a rare well-behaved site.


Google tag manager is sometimes (often?) used to load every other third-party script, so if you allow gtm to load you'll probably see a bunch more scripts. As a web developer, gtm was the bane of my existence because marketing could change and break the site significantly, while decreasing user privacy and page speed, and the devs could take the blame.


There is/was an internet news website, 15seconds.com IIRC, that was so named because that's how long the average person would wait for a page to load, back in the '90s when dialup was common. I think people should try setting Chrome DevTools to 2G speed now and then, so they know the pain they're causing for a lot of people on wireless without a good/stable connection.


This is a fantastic way to assess website functionality. It would drive me insane on day to day use.

In all reality, I just want to dump the modern web's approach. CSS, Javascript, you name it. Give me simple HTML and text ads, if you need ads. Give me pictures when I want them, with descriptive captions. I agree with the intent of the blog--quit making crappy ads and bloated sites!

I use elinks often, and find its text-based approach easier to comprehend. What are your thoughts?


Agreed! I try to keep my site light. I'd like to see a return of the web to a content layer where the browser can choose how to present it. In the end we want to read content. We don't care about someone's favorite scrolling method or menu system. Why can't I set up my browser preferences to be "show me websites with a light blue background, navigate through a menu bar horizontally across the top," etc. I don't think it would happen with current inertia, but maybe that's an area for a niche browser.


As I understand it, early on there was sort of an expectation that users would be defining stylesheets and applying them to websites, exactly like you said: "use light blue background, menu bar horizontally across the top, paragraph text should be 16 pt." etc.

There's nothing wrong with site-provided CSS, but I strive to make my own stuff work with or without CSS.


Since traveling in Asia and getting elinks into muscle memory I now use it even on good connections, it's so much more satisfying to get consistently quickly rendered text into my eyeballs.


Could probably automate this on a Linux VM using netem[0]

[0] https://wiki.linuxfoundation.org/networking/netem
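For the record, a rough sketch of what that might look like (assuming an interface named eth0; run as root; netem's rate option needs a reasonably recent kernel):

```
# Simulate a bad link: 300ms ±100ms delay, 7% packet loss, ~100kbit throughput
tc qdisc add dev eth0 root netem delay 300ms 100ms loss 7% rate 100kbit
# Restore normal networking when done
tc qdisc del dev eth0 root
```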


For anyone wondering, the spinning Unicode symbol mentioned is F01E, corresponding to fa-repeat in Font Awesome: http://fontawesome.io/icon/repeat/ Font Awesome also has a bunch of spinner icons which OP is probably seeing on other sites: http://fontawesome.io/icons/#spinner

It really is unfortunate that there is no way to have these widely-used resources (Font Awesome, jQuery, etc.) cached on a long term basis across all sites that use them. (Though arguably this is easily achieved for fonts, which can be installed system-wide.)


> It really is unfortunate that there is no way to have these widely-used resources (Font Awesome, jQuery, etc.) cached on a long term basis across all sites that use them.

There is - Google, Cloudflare and jQuery offer a CDN. The problem with CDNs is that you then have a SPOF and an external dependency.

> Though arguably this is easily achieved for fonts, which can be installed system-wide.

Oh please no, this always leads to problems sooner or later - for example, graphics designers tend to have LOTS of fonts installed, from all possible sources. Especially the pirated fonts tend to freak out in lots of different ways, and if there's a local() in the font-face rule, things break and you get weird bug reports...


Currently I only have a 1.2M connection at home. It reveals how bandwidth-intensive many websites are that simply do not need to be.

Ad Blocker is a must.


I understand the concern. Websites can become too bloated. They can require too many resources or be poorly optimized to reduce bandwidth.

But this also seems like complaining about the trouble with driving a horse-drawn carriage on the interstate. Sure, there are lots of people around the world still on low speed networks - just like there are people who still use horses as their primary mode of travel. And maybe there should be a way to accommodate them, but let's not pretend that the advances in website technology are only a detrimental problem that needs to be solved.


A significant share of web requests is now performed via mobile connections, which, depending on a lot of factors, can be as bad as 2G. We don't use modems any more, of course, but we still need web sites that work on low bandwidth.


Can someone tell me why SVGs are gigantic while first loading? I often see this even at modern connection speeds.


Two reasons come to mind: It's likely that the designer providing the SVG logo scales the logo completely arbitrarily, which might be 1000 pixels wide. The CSS `width:10px` in another file hasn't loaded yet, so the <img> tag holding the SVG uses the absolute size of the SVG file. Another possibility is that a flexbox or similar grid system is used, and the container holding the <img> tag is told to stretch its contents to the full width of the flex item. If the content in the following flex item is very large, the SVG will be compressed to the proper small size, but if there is no content loaded yet in the following flex item, the flexbox will stretch just the first item.


Thanks, looked into it further on my own project and found my specific issue: the svg width and height attributes were missing from my webpack-built bundle even though they were part of the source svg file. The svg loader I was using (svg-inline-loader) was stripping width and height by default.


Because people who write SVGs usually don't know about the viewBox attribute[0]. They just use Illustrator to draw the shape and then export it.

[0]https://sarasoueidan.com/blog/svg-coordinate-systems/
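A minimal illustration (hypothetical markup, not from the linked article): with both a viewBox and explicit width/height attributes, the browser knows the image's intrinsic size before any CSS arrives, so it doesn't render at the full canvas size while styles load.

```
<!-- viewBox defines the internal coordinate system; width/height give
     the intrinsic display size used before (or without) CSS -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1000 1000"
     width="32" height="32">
  <rect x="100" y="100" width="800" height="800" fill="navy"/>
</svg>
```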


> They just use illustrator to draw the shape and then export it.

You're lucky. Last designer I asked to provide a SVG image had obviously exported from a vector drawing tool to a bitmap, AND THEN converted to SVG. So you kinda had a whole SVG object for each pixel.

It never occurred to him that a 40 MB SVG file might indicate a problem (when the point of having a vector format was partly to save memory).


I think your designer may have taken "pixel perfect" a little too literally.


Lots of <rect>'s in that SVG :D

I wonder how big can the SVG DOM be in browsers before things start to get painfully slow


They really don't have to be [0].

Usually happens with SVGs whose internal dimensions differ wildly from the display dimensions.

e.g. Internal canvas of 1000px x 1000px, display size of 10px.

[0] https://oddsquare.surge.sh/10/


If all the important content of your website can be rendered in a timely fashion through a text browser like Lynx, then you will have catered to the lowest common denominator.

Granted that is a very retro concept.


If anyone is actually suffering from dialup speeds and using Chrome, you should try out my extension to disable web fonts: http://github.com/captn3m0/disable-web-fonts. It blocks all network requests for font files. The README also has a couple of other tips for improving page-load performance over slow networks.

I wrote it when I was suffering terrible speeds over mobile internet (EDGE) a couple of years back.


Somewhere between Linux, Firefox, uBlock, etc. I see a lot of this stuff on my fast connection as well. Vox looked liked that to me for a few months, maybe a year or two ago.


If you're on a mac the Network Link Conditioner is a great way to test your stuff on a simulated slow connection.

https://medium.com/@YogevSitton/use-network-link-conditioner...


Another easy way to play around with network conditions is included in Chrome dev tools: https://developers.google.com/web/tools/chrome-devtools/netw...


This reminds me: for a while now I've been looking for a good configurable proxy solution to clean up/filter the web on my server, especially for browsing via old devices (Amiga and such). I would like to reduce websites to their content, stripping all CSS, background images, scripts and such. Any recommendations?


Privoxy (https://www.privoxy.org/):

> Privoxy is a non-caching web proxy with advanced filtering capabilities for enhancing privacy, modifying web page data and HTTP headers, controlling access, and removing ads and other obnoxious Internet junk. Privoxy has a flexible configuration and can be customized to suit individual needs and tastes. It has application for both stand-alone systems and multi-user networks.

> Privoxy is Free Software and licensed under the GNU GPLv2.

I'm from Bangladesh, my connection is ~256 kbps and my PC is slow (256 MB RAM and 1.6 GHz); if not for Privoxy, I couldn't browse the internet.


Squid can do this easily. Just set up an ACL that blocks requests for file extensions you don't want.

  acl Blocked_extensions urlpath_regex -i "/etc/squid3/blocked.extensions.acl"
  http_access deny Blocked_extensions
The contents of /etc/squid3/blocked.extensions.acl would be something like:

  \.css$
  \.css\?.*$
  \.js$
  \.js\?.*$
  \.woff$
  \.woff\?.*$
  \.eot$
  \.eot\?.*$
  \.svg$
  \.svg\?.*$
It's also easy to run squid in transparent proxy mode if you want to support older devices that don't let you manually specify a proxy.


Excellent site which brings us to an obvious question: At what point should we as developers consider a site good enough? There's an infinite tail of worse and worse speeds and latencies. At some point it makes business sense to stop optimising, and for businesses with lots of users that point is inevitably before supporting 100% of users. So how do I prove to the business where the 90th and 99th percentiles are, within some reasonably scientific measure of uncertainty?

Aside: I just checked the site I'm working on. When throttling Chromium to GPRS speeds (500 ms, 50/20 kb/s) the main page has all the text on it by 16 seconds after a hard refresh.


The developer axiom #1: If it works on my computer then it works for everyone else.



> Please, please, if your site requires AJAX to work at all, then retry failed AJAX queries.

Anyone have data on the best ways to do this? Or information on the implementations used by say Gmail or Facebook?
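A minimal sketch of the exponential-backoff approach the parent comment describes, in plain JavaScript; `withRetry` and `doRequest` are placeholder names (not Gmail's or Facebook's actual implementation), and `doRequest` stands in for whatever AJAX call you're making:

```javascript
// Retry a request-returning function with exponential backoff plus jitter.
// doRequest is any function returning a Promise (e.g. a fetch() wrapper).
async function withRetry(doRequest, { retries = 5, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doRequest();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries: surface the error
      // Delays: ~500ms, 1s, 2s, 4s, ... plus up to 250ms of random jitter
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

On dialup you'd also want a generous timeout on the underlying request itself; backoff between attempts only helps if each attempt is given enough time to succeed in the first place.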


I don't see a mention of the question of utmost importance – at what baud rate are you connecting?


Google Data Compression on Chrome helps a lot, at least on HTTP connections. Too bad this does not work for HTTPS sites where the web designers do not test and optimize enough for low bandwidth.


Or Opera Mini, which renders pages server-side and sends a minimal representation to your phone. They even managed to convince Apple they're not a browser, so there's an iOS version.

In my 56K days, the regular Opera was my browser of choice since it had a very useful toggle for loading images or not (or showing only cached ones, or letting you load them in afterwards without a painful page refresh cycle)


You can have a browser, you just can't use your own JS engine for it, only Safari's; it also proxies everything through Opera and delivers a better mobile representation.


https://developer.apple.com/app-store/review/guidelines/

> 2.5.6 Apps that browse the web must use the appropriate WebKit framework and WebKit Javascript.

It reads to me like you're not allowed to use your own HTML renderer either.


Interesting. I tested Opera Mini after your comment and it seems kind of "dangerous" to me. It also saves a lot of bandwidth on HTTPS connections, so I assume it intercepts and server-renders those connections too. Saves a lot of bandwidth for sure, but this is a privacy nightmare.


I have seen something similar on Ryanair.com; this is why I chose React instead of Angular when I was looking for a new frontend framework.


I wonder how much difference surfing with Adblock would make while connecting over a modem?


What browser is this?


Uh, the point is that, on a slow connection, everything is going to suck in unpredictable ways, because of server-side bloat, and the trendiness of developers deploying 5MB js libraries.

Worrying about whether the browser was recent or supported misses the point. Any browser can suffer these sorts of problems on a 56k dial-up connection. (3G mobile data is frequently throttled to 128 kbps by popular ISPs, btw)


Uh, reading in a whole slew of things into a simple question is missing the point.


The first few screenshots appear to be Firefox; further down the page is Chrome.


That looks like Firefox.


Twitter/FB logos are SVG, and are rendered at whatever resolution is needed. I.e., SVG does not have a "full scale".


please make your site lynx compatible


I don't see the problem. It isn't 1996 and I don't care about people who turn JS off. This tiny percentage of people is dwarfed by IE9 users, whom I don't support either.

What is the moral of the story?


The moral of the story is that some web people don't care about things which, had they a deeper understanding of what they do, they would intuitively grasp they should care for. QED.


So I will ask my boss on Monday if he minds that we revisit all our projects, and refactor for the 0.01% of hipsters?


If you have to revisit your projects to ensure that they work properly, then you wrote them improperly in the first place.



