The web sucks if you have a slow connection (danluu.com)
1269 points by philbo on Feb 8, 2017 | 598 comments



>When I was at Google, someone told me a story about a time that “they” completed a big optimization push only to find that measured page load times increased. When they dug into the data, they found that the reason load times had increased was that they got a lot more traffic from Africa after doing the optimizations. The team’s product went from being unusable for people with slow connections to usable, which caused so many users with slow connections to start using the product that load times actually increased.


Hah! A Jevons Effect[1] in a web site's bandwidth!

[1] When an increase in the efficiency with which a resource is used causes total usage to increase. https://en.wikipedia.org/wiki/Jevons_paradox


Wow, uncanny resemblance to Giffen goods:

https://en.wikipedia.org/wiki/Giffen_good


From the Wikipedia article I gather that the only Giffen goods actually shown to exist turned out to be Veblen goods, and are thus disqualified as Giffen goods. It seems to me that Giffen goods are a theoretical thing that has never actually been shown in the real world (as the article states, all of the proposed examples were discarded).

The case of website cost going down and demand going up seems pretty standard.


You didn't prove that Giffen goods are equivalent to Veblen goods, there.

If a package of ribeye steak normally sells for $2.99 and doesn't sell well, but then its price changes to $6.99 (and nothing else changes) and demand increases, that steak is a Giffen good. The ribeye steak is not conspicuous consumption (unless your definition of status is really loose and includes posting photos of your food to Instagram). What happened there is straightforward: demand for ribeye steak is elastic, and people read the low price as a signal of low quality. When it increased to price parity with higher-end brands, people assumed quality parity as well.

Conversely, a mechanical watch is specifically optimized to be good at timekeeping in the most functional yet cost-inefficient ways (e.g. assembled by hand in a white gold case, with a hand-decorated guilloche dial and a proprietary in-house movement, etc.). That is precisely conspicuous consumption, and thus it's a Veblen good.

One good's demand increases because of quality signaling, the other good's demand increases due to status signaling. The point of there being two definitions is the nuance in why consumers purchase luxury items. Theoretically, people don't shop at Trader Joe's just to brag to their upper-class friends that they shop at Trader Joe's (this is not a great example, but take away the specific brand and you get the gist).


The steak in your example might not be a veblen good, but it still isn't a giffen one.

The article states that a necessary condition is that "The goods in question must be so inferior that the income effect is greater than the substitution effect". The reason people are buying more when the price rises is not the income effect (which would be because they have less money since the price went up, so demand for inferior goods increases), but rather because they now have evidence/reason to believe that the good is high quality.

That's not giffen according to the definition given, which includes a causal factor for the demand curve.


IANAEconomist (but I could play one on TV)

It is simply not true that a good must be "inferior" to be a Giffen good (unless you adopt a special meaning of inferior, which, since it's not necessary to do, I won't agree to). The classic example (a thought experiment, without regard to whether it actually happened) is potatoes in poverty-stricken Ireland: a poor person's diet would be mostly potatoes (inexpensive econ-utility (compared to steak): calories, fills the belly) with some meat a few meals a week (expensive econ-utility: protein, iron, B vitamins, tasty, "not potato", even a touch Vebleny).

So, arbitrary budget example: let's say $20 at the grocery gets you $15 of potatoes covering every day and $5 of steak one day a week. If the price of potatoes goes up, you need to reduce something, but you need to eat every day so you can't reduce potatoes, so you reduce your steak consumption; but now you have some extra money, which you spend on even more potatoes. Price of potatoes went up, consumption of potatoes went up. <-- there is already a theoretical problem there; you could reduce steak just enough to keep potatoes equal, so let's just say you can buy a steak or not buy a steak, no half steaks, OK? I'm just trying to make the point "what is a Giffen good", not trying to prove whether Giffen goods exist or not.
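
To make that arithmetic concrete, here's a toy JavaScript sketch; the prices, the $20 budget, and the "no half steaks" rule are assumptions from the example above, not real data:

    // Toy version of the budget example: buy the steak only if the usual
    // 15 kg of potatoes is still affordable, otherwise spend it all on potatoes.
    function weeklyBasket(potatoPrice, budget, steakPrice) {
      var steaks = (budget - steakPrice) / potatoPrice >= 15 ? 1 : 0;
      var potatoesKg = (budget - steaks * steakPrice) / potatoPrice;
      return { steaks: steaks, potatoesKg: Math.round(potatoesKg * 10) / 10 };
    }

    weeklyBasket(1.0, 20, 5);  // { steaks: 1, potatoesKg: 15 }
    weeklyBasket(1.2, 20, 5);  // { steaks: 0, potatoesKg: 16.7 } <- price up, quantity up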

So, potatoes are not "inferior" to steaks; both are requirements for a balanced diet. I suppose a technical econ-definition of inferior could be designed to mean something along the lines of "inferior is defined to rule out your example, aaight".

In any case, while Giffen goods probably can't exist in a market for any length of time, the concept is completely understandable as a short-term, reasonable thing that occurs: I go to the store with cash intending to buy an "assemble your own" burrito with guac, the price of beans went up, I don't have the cash at hand now to get the guac, but it's not a burrito at all without the beans, so I leave out the guac... but it turns out that by leaving out the guac, I can get a larger-size burrito: consumption of beans just went up at the same time as the price. This effect happens for sure... does it happen enough to counteract the people who would leave out the beans and keep the guac? Can the "substitution of beans for guac" function always be seen to be continuous and differentiable? <-- perhaps not; burrito shops like to have overly expensive add-ons for 2nd-order price discrimination, so the price of guac might very well be "quantized" at an absurdly high level, and does that make beans not a Giffen good? ...

My point is, the way you guys are arguing this leaves too much out, can't be answered, and Wikipedia at this level of analysis is too unreliable.


Inferior good just means that demand increases as income goes down. Potatoes in your hypothetical are inferior, since if you have less money you can't afford steak and so buy more potatoes instead.


So that's what I mean: that term-of-art definition is intertwined with dependent variables of the definition of Giffen goods, so it would be no wonder if the ideas get tied together even if the concepts are not related on their face.

Terms of art annoy me (the legal profession and philosophy are full of them, overloaded (in the OOP sense) onto preexisting words) because IANALinguist but I could play one on TV without rehearsing. So my point is: if you want a narrow morphology for a word, don't recycle a word that has broad meanings; invent a new word that is precise, like econ-inferior. Then at least when a person doesn't understand what you say, they will think to themselves "maybe I should look up the definition" as opposed to actually believing you said something different than you did.

Nobody can live on potatoes alone; you'd die. Nor can anybody live on steak alone. Neither good can be said to be precisely econ-inferior to the other, only econ-inferior over some delta range of prices and/or time (and assuming demand, etc). But the whole question of Giffen goods is also valid only over some delta, so as long as they are different deltas, the definitions would not be in conflict (and vice versa, for all the variations of that).


Fair enough, but this is a standard term taught in Econ 101 (at least, I was taught the econ meaning of inferior good in my first econ class).


I've studied econ at the graduate level at MIT after having taken it as an undergrad as well, and I have a degree in Finance, so I didn't mean to imply that I don't know what I'm talking about. But I know a lot of other topics as well, I've always objected to terms of art in one field being easily confused with terms from other areas, and hell if I can remember what an inferior good is 20 yrs later. My point was not that you didn't know what you were talking about; I joined in because, between the two of you I replied to, I didn't think your discussion was benefiting the rest of HN as much as it could, since many of those people have not taken any econ at all. I was trying to Econ 100 the discussion, without losing the flavor of what is interesting about Giffen goods. And I think that if researchers are going to "prove" that Giffen goods don't exist in aggregate (<-- not the Macro term of art), they need to also address the obvious short-term circumstances (as I tried to describe) where it's clear that the underlying principle is actually operating, whether it has an effect on market clearing or not, because people can go one extra week without meat, they just can't do it forever.

Not trying to argue, just trying to clarify what I came upon. Econ theory, I think, is sound but requires many simplifying assumptions to teach and learn, and then when we talk about whether Giffen goods actually exist or not it's easy to lose track of simplifying assumptions like "long term" or "substitution".

cheers.


> It seems to me that Giffen goods are a theoretical thing that has never been actually shown in real world

In practical terms (even if it may not fit the theoretical definition of a Giffen good), spare time in certain circumstances is quite obviously a Giffen good. Once your income increases (which means the opportunity cost, i.e. price, of your spare time increases), you are willing to work less, i.e. consume more spare time. Of course, this is not a universal rule, but I think it is obvious that for _many_ people this is the case. If it were not the case, there is no way people in sweatshops would work longer hours than the Western middle class.


Giffen goods are likely to exist only in communities of extreme poverty, where the cheapest things you buy dominate your spending. That's why the effect was only found in an experiment performed on people living at subsistence level:

https://en.wikipedia.org/wiki/Giffen_good#cite_note-4

I bet you could find it in some video game economies.


How are you distinguishing between Giffen and Veblen goods? If you define Veblen in such a way that all Giffen goods are Veblen and then say that disqualifies them then of course you'll find there are no Giffen goods.


According to the article:

> To be a true Giffen good, the good's price must be the only thing that changes to produce a change in quantity demanded. A Giffen good should not be confused with products bought as status symbols or for conspicuous consumption (Veblen goods)

Veblen goods = Goods for which demand rises with price because they are status symbols.

Giffen goods = Goods for which demand rises with price, minus Veblen goods.

However, none of the examples there hold up; to me this signifies that the only goods for which the law of demand does not apply are status symbols.


Isn't a Giffen good then just something where one infers quality from price? I've seen that happen many times with my own eyes, so hard to believe they've never been identified. It's possible to price something so cheap, people assume there's a catch.


No. The example given of a Giffen good is a high-calorie food that's exceptionally low status. Thus, when its price falls, people will demand less of it, as they can afford to replace some of their consumption of that food with more expensive, better food. Workers replacing some proportion of their bread or potato intake with meat, as the price of that bread or potato drops, say.

The idea is that the good is the lowest quality way of fulfilling some need - so people buy it because they can't afford anything else.


> Isn't a Giffen good then just something where one infers quality from price?

No, it's an inferior good (in the economic sense) for which the (negative) income effect of a price increase outweighs the substitution effect.

What you are describing is a good that has a positive elasticity of demand with respect to income (or, technically, two different goods, because the higher price represents a different good altogether - one with a higher status symbol).


How so? I don't really see the similarity


Both describe unexpected effects when trying to extrapolate from price and quantity involved in individual use cases.

Jevons: Quantity required per use goes down, so you might expect total demand to decrease. Instead total consumption goes up.

Giffen: Price per use goes up, so you might expect total demand to decrease. Instead, total consumption goes up.

Either could increase consumption by displacing available substitutes, though that's not necessarily the case with Jevons. They are indeed different phenomena, they just have some similarities.


And now, with both of those things, Baader-Meinhof is going to be triggering every hour for the next month, at least.


> And now with both of those things Baader-Meinhof is going to be triggering

You do know that Andreas Baader and Ulrike Meinhof were the main founders of the terrorist organization RAF (Rote Armee Fraktion; Red Army Faction) in Germany? The RAF was also the reason that dragnet ("grid") investigations were used in the 1970s (where lots of innocent people were wrongfully accused) after a series of RAF terror attacks, in the wake of which some constitutional principles were quashed. The German term for that period is "Deutscher Herbst" (German Autumn; https://en.wikipedia.org/wiki/German_Autumn).

These experiences led (indirectly) to the rise of a completely new party (Die Grünen; the Green Party) and are (besides the experiences with the two dictatorial regimes on German soil in the 20th century) one of the reasons why data privacy is taken very seriously in Germany.

Thus mentioning the RAF, Andreas Baader or Ulrike Meinhof to (in particular older) Germans is perhaps like mentioning Al-Qaeda, 9/11, Mohammed Atta etc. to US citizens.


So triggering^2


I'm not sure why this gets a special term. It sounds like basic supply and demand. If you decrease the price of something by increasing the efficiency of production, you will obviously capture more of the demand curve. What am I missing?


A lot of people naively assume that if you can use a resource more efficiently, then total use will go down, "because you don't need as much, right?"

See: the entire popular support for efficiency mandates.

(Edit: Also, this very example -- I certainly didn't expect that a faster site would allow that many more users: my model was more "either they want to see your site, or they don't", i.e. inelastic demand.)

The (common) error is to neglect the additional uses people will put a resource to when its cost of use goes down. ("Great news! We get free water now! Wha ... hey, why are you putting in an ultra-thirsty lawn??! You don't need that!")

Also, I wouldn't call it basic supply and demand; depending on the specifics (inelasticity of demand, mainly), total usage may not actually go up with efficiency.


This sounds like the reason widening roads doesn't usually ease congestion.

Which, really, can be summed up by my favorite Yogi Berra-ism "No one goes there nowadays, it’s too crowded."


> This sounds like the reason widening roads doesn't usually ease congestion.

It usually does actually.

What is happening there is that you have different demand levels at different congestion levels. If you alleviate some congestion by widening the road then demand goes up.

That is only a problem if the demand without congestion is higher than what even the wider road can handle. As long as the new road can handle the higher but still finite demand you get when there is no congestion, there is no problem.

In other words, as long as you make the road wide enough for the congestion-free demand level to begin with, that doesn't happen.


That's technically true, but it assumes away the core, ever-present problems:

- It may not be physically possible to add enough lanes to e.g. handle everyone who would ever want to commute into L.A.

- Even if that road were correctly sized, it still has to dump the traffic into the next road, through the next intersection point. If you've increased the capacity of the freeway but none of the smaller road networks that the traffic transitions to, you've just moved the bottleneck, not eliminated it. And that too may be physically impossible.

In any practical situation car transportation efficiency does not scale well enough that you can avoid addressing the demand side.


It isn't physically impossible to use eminent domain to seize all the property around the roads and then build 32 lane roads all over Los Angeles.

That is a separate question from how stupid that is in comparison to the alternative of building higher density residential housing closer to where people work and with better mass transit.

But if people don't want to do that either, you have to pick your poison.

And there really are many cases (Los Angeles notwithstanding) where adding one lane isn't enough but adding two is and where that genuinely is the most reasonable option.


Also, one thing that's often forgotten is that roads take up space. A lot of it. You make your roads bigger to accommodate more people, and all of your buildings wind up farther apart as a result. When buildings are farther apart you have to drive farther, meaning that everyone's journeys are longer, meaning more traffic... and on and on it goes.

14 percent of LA county (not just city!!) is parking. http://www.citylab.com/commute/2015/12/parking-los-angeles-m...

I'm trying to find a better source, but at one point supposedly 59 percent of the central business district was car infrastructure (parking, roads, etc.) http://www.autolife.umd.umich.edu/Environment/E_Casestudy/E_...

I mean, at what point do you just build a 400 square mile skid pad with nothing else there just to "alleviate traffic"? Hell, that's practically what Orange County is already.


Gosh, 3.3 parking spaces per car (CityLab article). That'd be some space to free up when they're self-driving.


Assuming "parking spaces" include one's home space (like garage or reserved spot), 3 should be expected, at least: home, work, and wherever you're visiting.


Now assume Uber et al own fleets of self-driving 9-passenger minivans. During peak commuting hours they're completely full because they pick up different passengers who have the same commute, and that way you eliminate the parking space both at home and at work.

The rest of the day they don't actually park anywhere; they just stay on the road operating, carrying one or two passengers at a time instead of eight or nine. Or half of them stay on the road operating and the other half go off and park in some huge lot out where land is cheaper until demand picks up again.

Then instead of 3.3 spaces per car you can have <1, and most of them can be in low land cost areas.

It's actually kind of like dynamically allocated mass transit.


The more likely scenario is self-driving cars cooperating with each other to drive from start to finish without any stops. Whether on freeways or local streets, cooperation amongst vehicles will raise the average speed and the volume of vehicles you can process through a given area. Pools work to a certain degree if everyone is starting and ending at the same location. When they aren't, it actually takes longer than driving by yourself.

Final point: you can certainly build out fewer parking spaces, but pre-existing spaces won't go away without redevelopment.


They're already empty most of the time, for what it's worth.


You're right -- I should have said "feasibly" rather than "physically" above. It's certainly physically possible, but requires a tradeoff I don't think many people would actually sign off on: blowing 3 years of budget for multi-deck freeway tunnels and having twelve-lane streets for most of the city, a parking garage for every block, and 95% of the city allocated to roads.


> It isn't physically impossible to use eminent domain to seize all the property around the roads and then build 32 lane roads all over Los Angeles.

This might be an extreme example that won't work for other reasons, but generally adding more lanes will increase demand, so you still won't have enough lanes.


The Wired article posted below has a pretty good rebuttal on those ideas.


No, that is still only the short term new equilibrium. What happens is that roads with unused capacity (or at capacity, but acceptable congestion) get busier as activity increases around those roads, because of the excess capacity/low congestion. Of course it's more complex than just that; it heavily depends on the spatial relationship with job activity centers within commuting distance, social expectations, economic characteristics and many more, but the core tenet remains - adding roads is not a long term solution for congestion, spatial planning is.


Right, widening roads doesn't increase speeds for existing commuters, it serves more commuters at the same speed.


All the cars in this new lane are not on another road adding to the traffic, though. Maybe it eases the traffic elsewhere. There is a finite number of cars, after all.



Scott Aaronson has some examples of that in his post:

http://www.scottaaronson.com/blog/?p=418

For example:

> Why are even some affluent parts of the world running out of fresh water? Because if they weren’t, they’d keep watering their lawns until they were.


The supply and demand "law" refers to the observation that the price of a good settles at a point where the available supply (which increases as the price goes up) matches the demand (which decreases as the price goes up).

Jevons Paradox is only tangentially related. It is based on the observation that sometimes using a resource more efficiently results in higher overall consumption. For example, say 40 kg of lithium is needed for the batteries of an electric car. At some point, 4000 tonnes are produced annually, enough for 100,000 electric cars per year. Now a new battery comes on the market that needs only 20 kg of lithium. Should the lithium producers be worried that the lithium demand will drop, since only 2000 t will be needed for the 100,000 electric cars? Maybe. But if Jevons Paradox comes into play, the annual production of electric cars might triple as their cost drops due to lower lithium usage, and the new demand will then settle at 6000 tpa. So, paradoxically, reducing the amount of lithium in each battery could be good news for lithium producers.

Whether or not Jevons Paradox occurs depends on the elasticity of supply-demand curves, in this case the curves for lithium and for electric cars.
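
A quick sketch of that arithmetic (every number here is assumed, purely for illustration):

    // Jevons in miniature: halve the lithium per car, and if cheaper batteries
    // triple car production, total lithium demand still rises.
    var kgPerCarOld = 40, kgPerCarNew = 20;
    var carsOld = 100000, carsNew = 300000;      // assumed demand response
    console.log(kgPerCarOld * carsOld / 1000);   // 4000 tonnes/year before
    console.log(kgPerCarNew * carsNew / 1000);   // 6000 tonnes/year after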


Not to mention that reducing the cost of batteries may lead to new classes of devices suddenly making sense as battery-powered (instead of corded or gasoline-powered), leading to increased demand for batteries.


Well, that's because it's often not the case. Take the two cases:

* Engines get more efficient (fewer litres per kilometre traveled). Does the total amount of petrol consumed go down or up?

* Flushes get more efficient (less water / successful flush). Does the total amount of water consumed go down or up?

Both of these involve more efficient use of a consumable quantity. Often, however, more efficient engines lead to more traveling and larger vehicles, whereas more efficient flushing usually leads to reduced total water consumption.

The fact that gains from efficiency can be outraced by the induced demand can seem paradoxical. And "seemingly paradoxical" is the only thing that makes anything labelled a "paradox" interesting.


> What am I missing?

The paradox is that they tried to reduce demand to reduce consumption, but accidentally reduced price, so increased consumption.

The bit you're missing is that duality: an action intended to reduce demand could reduce price instead. Applying the rules of supply and demand happens as a step after categorising the action; the fact that the action was miscategorised led to a misprediction.


Because it's easier to say "some-name effect" than the half paragraph or so that describes it.

This is the basic reason for naming anything; after all, a car is just a fossil-fuel internal-combustion kinetic-conversion wheeled people-and-goods pilotable transportation platform, but saying "a car" is just easier :)


Well, if you'd follow the link you'd see that Jevons identified this phenomenon in the mid 1800's with respect to coal usage. It probably wasn't so obvious then.


To put it another way, marginal efficiency increased but total efficiency went down, which is (to some) unintuitive. It's certainly rare to observe!


I saw a similar effect with really fast disks. If you make the kernel faster at passing requests down to the disk, a simple benchmark with one request at a time will be faster. However, with many requests at the same time and less time spent processing them, you now have more time to poll the disk for completed requests. Each time the kernel polls the disk, it will typically see fewer completed requests than before your optimizations, and overall this can actually result in decreased throughput.


That's like saying 'deadweight loss' "shouldn't be a term, because it's basic supply and demand".

There's a clear and identifiable trend or pattern resultant of the general model, there's no reason not to assign it a shorthand way of being referred to in discussion or study.


> I'm not sure why this gets a special term.

It gets a special term because it was coined in 1865, before most of modern economics was codified and this was a cutting edge finding. You may as well ask why Newton's laws get a special term, because they're all just obvious basic equations in physics that high school students are taught.


By giving it a special term, additional commentary/analysis can coalesce around it, such as that "governments and environmentalists generally assume that efficiency gains will lower resource consumption, ignoring the possibility of the paradox arising".


Is there a negative counterpart to that?

I remember turning an algorithm upside down, making it so fast that it went from "number-crunching wonder" to "users saw nothing, this software is meh".


Microsoft UX research found that with adding rows in Excel. It was instant, so users weren't quite sure if it had happened, or if it had happened correctly. Now it animates for that reason.


I believe that this is partly for ergonomic reasons: it's hard to track a grid changing instantaneously, and animation allows for "analogous" traceability.


Yep. I see the commonality as being "it's so fast it's hard to tell any work was done" - whether the user is trying to gauge whether their actions had any effect or whether something is worth paying for.


I've tried using Google products from Africa (Ethiopia... most recently this January), and generally, it is outright unusable. JS-heavy apps like GMail will never load properly at all.

This is while the connection in itself is not THAT bad. I usually use a 3G/4G mobile connection and it generally works excellently, with pretty quick load times, for everything other than JavaScript-heavy web apps.

I have a hard time understanding why this issue is not paid more attention. Ethiopia alone has some 99 million inhabitants, with smart phone usage growing by the hour. Some sources say "the country could have some 103 million mobile subscribers by 2020, as well as 56 million internet subscribers" [1].

[1] https://www.budde.com.au/Research/Ethiopia-Telecoms-Mobile-a...


In Ethiopia's case, it's not so much the connection speed in Addis. There's a great deal of interference from the national Deep Packet Inspection filters that leads to timed-out requests, reset TCP connections, etc.

JS-heavy apps make a lot of requests to background servers and should one of those requests fail, apps will hang. It's quite frustrating and I would often load pages with the console open to see which requests have failed so I'm not left wondering what happened.


I may get flamed for pointing this out either by people who are offended by the viewpoint, or by those who find it so bleeding obvious as to not be worth stating, but those page hangs (and I know exactly what you mean) are really down to poorly architected and implemented front-ends rather than an inherent flaw with JavaScript-heavy apps and pages.

Any time you do an XHR you can supply both a success and a failure callback and, if you care at all about your users, the failure callback can come in handy for error recovery, handing off to a retry mechanism, etc.
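
For instance, a minimal sketch (the URL handling, timeout value, and retry count here are illustrative, not from any particular library):

    // XHR with a timeout, a failure callback, and a simple retry -
    // the point is just that the failure path is handled instead of hanging.
    // (A real version would also check xhr.status and add backoff.)
    function getWithRetry(url, onSuccess, onGiveUp, retriesLeft) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.timeout = 10000;                        // fail fast on flaky links
      xhr.onload = function () { onSuccess(xhr.responseText); };
      xhr.onerror = xhr.ontimeout = function () {
        if (retriesLeft > 0) {
          getWithRetry(url, onSuccess, onGiveUp, retriesLeft - 1);
        } else {
          onGiveUp();                             // tell the user; don't hang
        }
      };
      xhr.send();
    }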

Modern web apps can be a lot more like fat client apps, just running in a browser. Even there, there's no inherent need for them to be unusable, even over relatively slow connections. A lot of it comes down to latency, and the number of requests going back and forth between client and server, often caused by the sheer quantity of assets many sites load (I'm looking at YOU, almost every media site on the Internet).

I seem to spend my life citing this paper, from 1996, but "It's the latency, stupid" is still relevant today: http://www.stuartcheshire.org/rants/latency.html.


Nothing controversial here, it's common sense. Most web stuff is built by total amateurs figuring things out as they go.


I'd like to complement: most _stuff_ is built by total amateurs winging it.


Nonsense, email is an impressively well designed and logical protocol ;)

From


There can be a whole lot of reasons for this and it kind of makes sense. What doesn't make sense is that, for such a big company and such a big product, that's the best Google/Gmail can do. I can understand if scaling the Gmail backend is tough. I can appreciate Gmail's feature set of spam filtering and tagging, but on the UI feature set I don't see anything so revolutionary that it should make it (according to the Chrome task manager) the heaviest tab in my browser at ~500MB. I think that's to the point of shameful.

I think the standard of what's considered slow, bloated and complex has become absurd. I think if the processor companies released processors today that, say, improved single-thread performance 10 times in two years, the Gmails and Facebooks of the world would eat all that up with marginal improvement in functionality. I'm talking about the client side; on the server side, yeah, they may do 10 times more complex analysis, though most likely 80% of it will go to feeding us more accurate ads.


That's why fastmail is such a breath of fresh air. It has lots of features and is wicked fast.


Agreed. I'm actually really impressed at how fast and responsive the UI is. As a user, it's probably one of the most responsive and functional UX's I've used in years.


>and should one of those requests fail, apps will hang

That also happens on 'good' connections, when some crappy ISP router drops the packet without any ICMP. The request fails only after a TCP timeout, which is large enough to be noticed. I cannot understand why asynchronous JS requests do not involve smart, adaptive, human-oriented timeouts and why this problem is still not solved in general. TCP timeouts are simply insane nowadays.
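
One way to bolt a human-scale timeout on top of the TCP defaults, as a sketch (the 8-second figure and the showRetryBanner() hook are made up):

    // Race the request against a timer; the underlying connection isn't
    // cancelled, but the UI can give up and tell the user much sooner.
    function fetchWithTimeout(url, ms) {
      var timer = new Promise(function (resolve, reject) {
        setTimeout(function () { reject(new Error('timeout')); }, ms);
      });
      return Promise.race([fetch(url), timer]);
    }

    fetchWithTimeout('/api/data', 8000)
      .then(function (res) { return res.json(); })
      .catch(function () { showRetryBanner(); }); // hypothetical UI hook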


That's good to know, Thanks!


GMail has a HTML-only version that is much more lightweight and usable on slow / flaky connections.

It's worth it to memorize or bookmark the address, in case you ever need it:

http://mail.google.com/mail/h/

(on mobile browsers you need to "request desktop version" and then paste the address again, before you can see it)


Thank you for sharing this link. Here is a related HN discussion: https://news.ycombinator.com/item?id=7513388

My problem with accessing the low-bandwidth Google tools with archaic browsers (http://dplus-browser.sourceforge.net/, etc.) is that Google still requires the high-bandwidth login.

Are you aware of any alternative login URLs or authentication mechanisms?


I just tried the two-year-old 3.4-dev copy of NetSurf I had buried on this computer, and was able to log in to Gmail's Basic HTML.


It's not only about connection speed but also about infrastructure. If you look at this map https://cloud.google.com/about/locations/ you'll see that your packets have a looong way to go to reach their data centers. AWS is no better than Google on this point. Guess it's not bankable.


That only adds about 200-300ms RTT I'd guess. I live in India and use many websites which are hosted in the US, and they work fine.


(Hello from Kenya)

There are usually CDN nodes in India. CloudFront has edge nodes there, and Google's CDN does too.

There's also a Mumbai AWS datacentre.

When you get far away from the common edges, it gets real noticeable.


~$ ping imgur.com

PING imgur.com (151.101.40.193) 56(84) bytes of data.

64 bytes from 151.101.40.193 (151.101.40.193): icmp_seq=1 ttl=53 time=342 ms

imgur.com works fine

~$ ping python.org

PING python.org (23.253.135.79) 56(84) bytes of data.

64 bytes from 23.253.135.79 (23.253.135.79): icmp_seq=1 ttl=48 time=267 ms

python.org works fine

news.ycombinator.com and reddit.com also work fine even though I'm logged in (there's about a 300ms and 700ms delay, respectively, in the Network tab of Chrome's devtools for news.ycombinator.com and reddit.com).


PING imgur.com (151.101.12.193): 56 data bytes
64 bytes from 151.101.12.193: icmp_seq=3 ttl=51 time=499.633 ms
64 bytes from 151.101.12.193: icmp_seq=65 ttl=51 time=330.021 ms
64 bytes from 151.101.12.193: icmp_seq=66 ttl=51 time=557.491 ms
64 bytes from 151.101.12.193: icmp_seq=67 ttl=51 time=478.380 ms
64 bytes from 151.101.12.193: icmp_seq=68 ttl=51 time=400.365 ms
Request timeout for icmp_seq 69

PING python.org (23.253.135.79): 56 data bytes
64 bytes from 23.253.135.79: icmp_seq=0 ttl=44 time=615.871 ms
64 bytes from 23.253.135.79: icmp_seq=1 ttl=44 time=539.681 ms

I'm on an island lost in the middle of the Indian Ocean. But pings weren't that different (50 ms more or less) on the continent (I went to RSA and Namibia).


Gmail worked surprisingly well from Antarctica


> I have a hard time understanding why this issue is not paid more attention. Ethiopia alone has some 99 million inhabitants, with smart phone usage growing by the hour. Some sources say "the country could have some 103 million mobile subscribers by 2020, as well as 56 million internet subscribers" [1].

How much disposable income will they have though? Most web products like those you describe are produced by businesses looking to make money.


You may be surprised. Certainly the number of people with disposable income is smaller than in many other places, but those who have disposable income often pay more.

Tax on cars is upward of 200% and traffic is becoming a major issue. When it comes to services, another major challenge is that there are no widely accepted payment mechanisms besides cash and checks. Debit cards only work with ATMs and some very select retailers.


This is a perfect example of why "average" metrics for such values aren't that great and are often overused as vanity metrics.

A nice chart showing how many users fall into each bucket of load time would be far more useful - one where you could easily change the bucket size anywhere from 0.1 ms to 1 second, so this kind of 'digging' wouldn't even be a second thought.
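
Something as simple as this sketch gets you most of the way (the bucket size and the shape of the input data are assumed):

    // Bucket page-load times instead of averaging them, so slow-connection
    // users show up as a visible tail rather than vanishing into a mean.
    function histogram(loadTimesMs, bucketMs) {
      var buckets = {};
      loadTimesMs.forEach(function (t) {
        var label = Math.floor(t / bucketMs) * bucketMs;
        buckets[label] = (buckets[label] || 0) + 1;
      });
      return buckets;  // e.g. { 0: 9120, 1000: 4031, ..., 30000: 212 }
    }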


>This is a perfect example of why "average" metrics for such values aren't that great and are often overused as vanity metrics.

The average human has, on average, one testicle and one ovary, but there aren't many humans who can actually fit this description.


Funny anecdote about the freeway system of southern California.

When they were initially planning the system in the 1930s and '40s, they planned to have the system in use for the next 100 years. So they built oversized roads (like a 10-lane freeway, with no stopping for traffic lights, that goes THROUGH the center of a major city).

When the system proved so car-friendly, more and more people moved in and bought cars. Within a short period of time (much shorter than 100 years), the system was completely jammed.

Always look for unintended consequences...


The original designers of the interstates didn't want the roads to go through the downtown areas. The idea was for the high-speed roads to go near cities, and have spur roads (3-digit interstate numbers that start with an odd number) connect them - like the design of the original Autobahn.

But there was a coalition of mayors and municipal associations that pressured Congress to have the roads pass through their towns (jobs! progress!). President Eisenhower was not amused, but he found out too late to change the design.

A consequence of this was the bulldozing of historically black-owned property to make way for the new roads.


They didn't really NEED cars to move people around, because the Los Angeles area already had a GREAT light rail system called the Red Line. The current walkway on Venice Beach is what's left of that line. Can you imagine? An above-ground light rail system running parallel to a beach in LA?

They RIPPED it out thanks to lobbying by car companies and tire companies. Yay to lobbyists.

Now it takes a billion dollars to build a few miles of a subway/light-rail system that practically goes nowhere...


The Red Line is the new Metro system (which was destroyed in 1997's Volcano). The electrified light-rail from the 1930's was the LA Railway.

https://en.wikipedia.org/wiki/Los_Angeles_Railway

GM, Firestone, and several other companies were indicted in 1949 for attempting to form a monopoly over local transit. The semi-urban legend part (it was never definitively proved there was a plot behind it all) was the ripping out of the streetcars, replacing them with GM-made bus networks.

https://en.wikipedia.org/wiki/General_Motors_streetcar_consp...


> A consequence of this was the bulldozing of historically black-owned property to make way for the new roads.

Indeed. A couple years ago, the city of St Paul actually formally apologized for exactly this, destroying the primarily black Rondo neighborhood with freeway I-94.

http://www.usatoday.com/story/news/local/2015/07/17/rondo-ap...


This is known as induced demand: https://en.wikipedia.org/wiki/Induced_demand


I've been reading through some early texts on the coal and oil industry. One notes that, at then-current rates of consumption, the coal reserves of the United States would supply over one million years' consumption.[1]

The current proven North American coal reserve is less than 300 years at current utilisation rates: http://www.bp.com/en/global/corporate/energy-economics/stati...

It's amazing what a constant rate of growth can accomplish. Also the overwhelming tendency for lowered costs to induce increased demand -- the Jevons paradox.
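
For a sense of scale, a back-of-the-envelope sketch (the 3% growth rate is assumed, just for illustration):

    // Summing a geometric series: if consumption grows by g per year, reserves
    // worth R years of *current* use last log(1 + g*R) / log(1 + g) years.
    function yearsUntilDepleted(reserveYearsAtCurrentUse, g) {
      if (g === 0) return reserveYearsAtCurrentUse;
      return Math.log(1 + g * reserveYearsAtCurrentUse) / Math.log(1 + g);
    }

    yearsUntilDepleted(1e6, 0);     // 1,000,000 years at flat consumption
    yearsUntilDepleted(1e6, 0.03);  // ~349 years at 3% annual growth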

If you want people to use less of something, increase the cost, not the efficiency.

________________________________

Notes:

1. Henri Erni, Coal Oil and Petroleum: Their origin, history, geology, and chemistry, 1865. p. 15.

https://archive.org/stream/coaloilpetroleum00erni#page/14/mo...


One way to plan for the next 100 years when building roads, without causing people to buy cars, is to have a 10-lane-wide verge on one side of the road. Have a row of 30-storey buildings on one side of a 5-lanes-each-way road, but on the other side the buildings are all set back by at least a 10-lane width, which is used for car parking, 1-storey buildings, public spaces, etc.

If there's ever a need to widen the road, it can be done without demolishing any tall buildings. I see this in new road layouts in China all the time. Of course, under the road will be a new subway system -- another disincentive for people to buy cars.


Sorry for saying the same thing in two comments, but that (like the Google case) looks like a Jevons effect (where induced demand is a special case):

https://en.wikipedia.org/wiki/Jevons_paradox


Isn't that a good unintended consequence? They built out infrastructure which attracted lots of people & jobs. Today, SoCal is home to world-leading firms in entertainment & aerospace. They also have top-tier research and educational institutions.


Which is why, when discussing infrastructure upgrades in our hackerspace, I keep reminding people that infrastructure is an enabler - it should not be built merely to support current needs; it needs a healthy margin to enable people to do more. People always find interesting ways to use up extra capacity.


There is a limit to that. People point out all the growth that overbuilt freeways caused and argue we need to build more freeways. Nobody ever asks if the trend will continue. They want to bring that same growth to small middle-of-nowhere towns, but it isn't clear that will happen.

Also lost in the conversation is opportunity cost: sure, people drove more and that drove growth in those areas. But what if the roads had not been built - what would have happened instead? We don't know, but it is fun to speculate. (Maybe railroads would still be the most common mode of transport?)


The unintended consequence I'm talking about is that they expected it would take 100 years for the system to be utilized at full capacity, but in reality it took just 20-30 years...



I always wonder about this. You have to reach saturation eventually, right?


This isn't an example of Braess' paradox.


I think this is the same anecdote: http://blog.chriszacharias.com/page-weight-matters


One wonders how a user that takes 2 minutes to load 98KB is actually able to watch a video.

Even by the most optimistic estimations, a video that is a few minutes long at 480p will weigh in at 10 megabytes, meaning it'll take them OVER 3 HOURS to download the entire thing.
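
The arithmetic, for what it's worth (the throughput comes from the figures quoted above; the 10 MB clip size is an assumption):

    // ~98 KB in 2 minutes is roughly 0.8 KB/s; a ~10 MB clip at that rate:
    var bytesPerSec = 98 * 1024 / 120;             // ~836 B/s (~6.5 kbit/s)
    var videoBytes  = 10 * 1024 * 1024;
    (videoBytes / bytesPerSec / 3600).toFixed(1);  // "3.5" hours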

You would probably be able to browse (slowly), read comments, but not actually do much else.


This was the baseline experience everywhere in the 90s: people would just do something else while they waited for things to download over dialup. Clients for things like email, Usenet, browsers, etc. commonly had batch modes where you could queue large downloads so you could basically see what's new, select a bunch of large things, and then let it download while you got a cup of coffee / dinner / slept.


There was a time, when Netflix was young and I had slow internet, that if you queued up a movie and then paused it, it would continue to load. So you'd pick a movie, queue it up (literally) and then go make snacks and get situated. When the bar looked long enough you'd start watching. Then if it stalled (which it would do like clockwork every evening around 8 pm) you'd take an intermission.

By the time I got stuck with slow internet again (boycotting Comcast), they had removed that feature and it really sucked. Now I have fiber and the only drama is around Netflix only allowing you to stream a couple movies at once.


Yes, back when things actually buffered properly on the internet. The way YouTube buffers now: not actually loading more than a few seconds of the video ahead of where I'm watching, even when it KNOWS my internet is spotty, really frustrates me.


You could install the YouTube Plus browser extension and check "disable DASH playback" in the settings. Then if you start playing a video for a second or two and pause it, it will buffer the entire thing. The only downside is I think it reduces the maximum video quality for all videos to 720p.


I have a 1920x1200 monitor, so yeah I don't think so lol.


So do I and it's what I do. I can't really see much difference between 720p and 1080p on YouTube.


The sense I get is that it's an elaborate way to soften people up for abandoning their ISPs and going with Google Fiber. There's a whole system of alerts and pages for basically saying 'LOL ur internetz sux', and it's plausible to me that they'd get that in place in markets they've not currently entered yet.


This drives me INSANE.

I'd rather let it buffer for 10 min than watch it in 360p because you guys can't figure out your buffering. The same goes for Netflix!


They try to match the resolution to your bandwidth. Sometimes that's the right solution, but not always. When the playback stalls right at the big reveal it can be painful, but that doesn't mean I want to watch the minecraft edition of the movie the whole time


And, as most things, I'd rather have the choice. If you want to default to auto-configured resolution, that's fine. But give me the choice to override if I want!


Of course, you could spring for the high capacity Netflix plan.


Alas, I bought a new TV just before 4K was a thing. It saved me the temptation of buying one while they were so expensive. The HD level is good enough for me.


Oh yes, I remember my teenage days of happily waking up in the morning knowing that our ISDN connection surely had finished the download of that album or live recording I had started the night before via Soulseek / DC++ / Audiogalaxy.


Not sure what your comment has to do with mine.

I used the Internet in the 90s, and dial-up specifically as recently as 2004, and have quite a good memory of the experience. Internet at dial-up speed was an extremely valuable commodity, to the point of disabling images in the browser and only downloading the most essential things (which is about as far from a YouTube video as you can get) - like documents and zipped installers (after researching that it was what you actually needed) - and checking email.

Internet "videos" didn't really even exist as content before ~2005 and Youtube. The biggest player before them was Break.com, which posted a whopping 10-15 videos a day.

While someone may have spent several hours waiting for a key software installer to download, almost no one would do the same for a video of dubious quality and content, certainly not in the 90s.

________________________________________________________________________________________________________________________

To illustrate my point even further: the speed we're talking about isn't even dial-up speed. It's 6.5 kbit/second, which makes it almost an order of magnitude slower than a 56k modem! 10 times slower!

And people are actually suggesting that someone would spend that valuable bandwidth and take days to load a video...


> While someone may have spent several hours waiting for a key software installer to download, almost no one would do the same for a video of dubious quality and content, certainly not in the 90s.

I vividly remember waiting hours to download a video in the '90s. The Spirit of Christmas short that spawned South Park, and that news story about the exploding whale, were viral videos that predated modern video sites by many years. You'd download it off of an FTP server somewhere. At one point, the majority of my hard drive was devoted to half a dozen videos that I'd show to everyone who came over.

Basically, watching a video on your computer felt new and exciting and worth waiting for. I'd never do that today even if I were stuck on a slow pipe, but at the time it was oddly compelling.


Or that stupid dancing baby CGI that blew up in the late nineties for some reason.


It was a demo of Autodesk's Biped bone-animation software. It really took off when the producers of Ally McBeal licensed the baby to appear on their show.

That always reminds me: Earlier that decade, a major plot arc of Beverly Hills 90210 featured a nightclub called Peach Pit After Dark. The door of the club had a flying toaster, from the PC screensaver After Dark.


Hmm, either your experience or your memory of the nineties / dial-up is different from mine and that of all my (quickly polled) friends. We all built a library of painfully obtained 320x200, horribly overcompressed video... ahh, the memories of RealPlayer. Some video was down to 180p... heck, I still have videos that were one or two megabytes in size in my archives - that I darn tootin' well waited hours to download :)


Please don't mention RealPlayer again. I feel ill and queasy thinking of that piece of software, its million rewrites and the CPU hog they released for Linux (at least on my lame hardware).

I remember using "download accelerators" to try and grab files faster back in the day. Who knew they were just doing 4 simultaneous downloads of ranges of the same file eh?

Ah I feel old


As I recall, the download managers were primarily useful because they could resume files that didn't finish.

Incredibly useful for enormous files like visual basic 6, which I spent like...a month or so downloading in the late 90s.


TIL the RealPlayer brand still exists... http://www.real.com


Can confirm, I remember downloading anime clips at 6 or so hours apiece! Not even full episodes, clips! And it was amazing!

And before there was YouTube there was flash videos/animations on Newgrounds. The people I hung with back then were into animutations.


We drink ritalin!


> almost no one would do the same for a video of dubious quality and content, certainly not in the 90s.

Umm. Porn?


RealPlayer video rips of my favorite TV shows for me, all in wonderful 160p!


A year or two ago, I went through some old files that had somehow followed me all through highschool. Among them were a handful of music and video files I got from friends passing around burned CDs. I was quite amused to play some random episode of Dragonball Z and have it pop up a postage-stamp sized video on my relatively high-res modern screen!


Or to have your kid say "Dad, why are there video thumbnails in the archive directory and where are the actual videos?"


I spent like a week downloading individual music videos over 28K. People will do a lot of things


Two minor points: GPRS is 36-112 Kbps and EDGE is faster still, so the 90s modem comparison seems apt to me. Latency is terrible, so the player page loading slowly while the actual video plays fine is entirely plausible, since streaming is the best-case network traffic profile. Other technological improvements help, too: H.264 is much better than codecs like MPEG-1 that were used for things like the infamous exploding whale video.

The bigger point was simply that people will wait for things they want. Not being able to load instantly changed the style of interaction but not the desire to listen to or watch things and there's far more content available than there used to be. Tell teenagers that the cool music video is available and you'll find a lot of slow downloads over their entire day at school, work, etc.


IIRC in 1997 (was it 1996?) this thing called webcasting made lifecasting possible; JenniCam and AmandaCam come to mind. Then there was Stileproject and its treasure trove of videos, not to mention the internet porn videos that people burned onto CDs to sell to people without internet.

Internet video had been a thing for a while before YouTube was released, hoping to capture enough of it by offering to host it for free, in the hope of being bought later by one of the big players.


> Not sure what your comment has to do with mine.

Someone doesn't agree with your comment, but rather than come back with a refutation they voted you down instead :)

And someone else doesn't agree with my comment... modded down -4.


Likely because your comment doesn't add anything to the conversation, and the guidelines ask not to comment on downvotes.

https://news.ycombinator.com/newsguidelines.html


If I cared about karma or being modded down, I wouldn't post on the Internet, at all.

If people chose to download porn over 28.8k modems, that's their choice. Just seems like a waste of time, that's all. Probably quicker just to take your dad's nudie mags rather than wait 6 hours for a 400x300 jpeg.


> it'll take them OVER THREE HOURS

I remember frequently spending three hours downloading 5MB files over dial-up in the late 90s. Mostly software, not videos+, but it really just felt like a regular thing back then.

+ Computers back then barely had the power to decode video - or even audio - in realtime, unless it was the entirely uncompressed kind. I recall ripping a CD to WAV and finding out halfway through that my 2GB hard drive was now 100% full.


I think your memory of the late 90's is actually from the early 90's ;)

I remember the first realtime mp3 player on Windows, Fraunhofer IIS' WinPlay3, which launched in '95. Then Winamp came out in '97 and blew our minds.

https://en.wikipedia.org/wiki/WinPlay3


You needed about a 100 MHz CPU to play back MP3s without skipping. That seemed to hold both on my PPC Mac and my Pentium Windows machine.


I used a Pentium at 133 MHz at the time, and it struggled to play MP3. The tracks would stutter.


I played MP3s on a 486 DX4/100 MHz. There was a setting in Winamp I had to turn on (I think it was quality-related), otherwise it would stutter and be generally unlistenable. Even with that setting turned on, it pegged the CPU at about 100%; the computer was unusable for anything else while the MP3 was playing. Trying to seek in the track would sometimes crash Winamp.

My Pentium 166, on the other hand, was much more capable; I could multitask while playing MP3s (I used to play MP3s and Quake at the same time).


I think somewhere in that timeframe, standard soundcards adopted support for hardware playback of mp3 files.


That would be interesting to hear more about.

AFAIK all sound drivers in Windows accepted standard PCM data.


I'm not sure about that, but 'mmx' did come to fruition with early pentiums.


I had a Soundblaster 16 in the 486


Wait a sec guys.. are you sure you had the turbo button on your machine active?


I used a Pentium 100 MHz and it would play just fine. Encoding them would take real-time (i.e. a 3 minute song would encode in 3 minutes), but playing back was fine. I even listened to music while doing other things. This was with Winamp.


My Intel box may have been a 166 MHz machine. Either way, this assumed you weren't trying to do anything else. The PPC Mac was definitely 100 MHz though - maybe Apple marketing wasn't lying about perf/clock back in those days.


Yes, I remember using MOD and XM files instead, as attempting to play a low-bitrate MP3 on a Pentium 100 (I think?) Elonex laptop meant 100% CPU.

Even tried it on mpg123 and mpg321 on RedHat on a 486 DX66 - I was poor. Didn't fare any better.


You would have much better chances with Opus and its integer decoder today.


I used a Pentium 150MHz to play mp3s and do other stuff at the same time. It worked just fine. This was with Winamp in around 1997 I believe.


Seems about right; I could play mp3s on my Libretto 30, but only with the Fraunhofer decoder, Winamp wasn't quite optimised enough.


Yes, my point was specifically about videos, and I've elaborated on it further in the post above.

I, too, remember quite vividly waiting 30-40 minutes to download a 5MB installer for WinAmp and ICQ.

I honestly wouldn't even know where to look for videos in the 90s internet. Most people probably didn't have the upload speed to even consider sharing them online.


I think shockwave.com hosted Flash animated videos. Also, I remember downloading Troops from TheForce.net; it seems like they had several other fan-made videos. I also remember watching the Star Wars Episode I trailer on StarWars.com (I believe it required the QuickTime plugin); so anyway, those are some sites hosting video content in the 90s.


I got some from Sony's BBS system where fan groups shared (links to) music videos.


> three hours downloading 5MB files over dial-up in the late 90s

Ah Napster.


I remember my family's pre-PPC Mac could play MP2, but not MP3.


A Pentium II can play back a DVD at 24 fps, no problem.


That's because the Pentium II chip isn't decoding the video. Try it with a software decoder some time; it hardly works. I had to disable video scaling before my 366 would stop dropping frames.


I'm guessing you've never downloaded porn from usenet over a 2400 baud modem.


I have not. That was a little bit before my time. The first modem I had the luxury of using was a 33.6K.


Perhaps you shouldn't be asserting yourself as an authority on what did or did not happen if you are too young to have experienced it in the first place.

I'm very young, only 27, and know that I missed a full decade of early internet culture and can't speak to it. Even given that, I had a 14.4 and fondly remember downloading a music video for hours. (No porn on the modems personally, I think I was barely adolescent when we upgraded to cable)


Erotic literature?


Literature? alt.binaries.


ascii art.


They don't watch videos (speaking from experience). But you do not and have never needed to watch TV to be well educated. The same is true of the internet. The tragedy that the post points out is that text and diagrams are being artificially weighed down with video-like anchors for no good reason.


When I had a slow connection I loved tools like youtube-dl, because they didn't expect to be used interactively. Network tools like browsers that just assume they can monopolize your time are probably the most frustrating things in these situations.


YouTube-dl helped a lot when I was living in a rural town a few years ago. I would download a bunch of tutorials over the weekend when I visited my parents in the city and watch them over the course of the week. Seems strange to say now, because this was the case only 3-4 years ago and only a few hundred km from where I currently live. Today I can't imagine watching in anything less than HD.


I save the links to download later at some other place and view offline. Before that, I had ssh access to a server from a friend, who copied what I downloaded (he also copies whole Debian repositories for me).


YouTube still has 240p mode for that very reason, they just do not advertise it in the UI.

Modern codecs make this super-compressed video and sound relatively watchable.


In my observation (of people who still use dial-up here in the US), they usually just do something else (like make dinner or watch TV) while the video loads.


I live in the first world and I rarely watch videos at 480p. I have 100GB of bandwidth a month, and typically view YouTube at 360p.


> Even by the most optimistic estimations, a video that is a few minutes long at 480p will weigh in at 10 megabytes, meaning it'll take them OVER 3 HOURS to download the entire thing.

Right there in the same link, it says that the video page was a megabyte. It's the sentence directly after the 'two minutes' bit. The sentence even has some 'all caps' words in it - even skimming, your eye is drawn to it. Why on earth did you stop reading halfway through the penultimate paragraph?


If you have that kind of speed, you watch YouTube in 144p!


I have done that many times and it's doable for most things that don't have hard-coded subs.


3 hours is so low in the grand scheme of things. It used to take 3 hours to download a jpg over dialup ...


I used to rely on PC Plus and cover CDs for all of my software needs. I remember needing Qt for Linux (it was all new to me) and spending hours downloading a 16MB file, only to find I had grabbed the non-devel RPM.

We take file sizes for granted these days.


Why do you need 480p? 144p/240p was the standard on YouTube less than 10 years ago.


As someone who lives in Africa, hoorah! More of this please. For me the best feeling is visiting a web page that is almost entirely text based. It loads in a few seconds, which is quite a rare experience these days.


Page Weight Matters by Chris Zacharias. http://blog.chriszacharias.com/page-weight-matters


This was in relation to YouTube's Project Feather: the YouTube site did not even load for them before, and when it did, they started watching more videos even though it took more than 20 seconds to load!


Read somewhere it was YouTube


Something I have had at the back of my mind for a long time: in 2017, what's the correct way to present optional resources that will improve the experience of users on fast/uncapped connections, but that user agents on slow/capped connections can safely ignore? Like hi-res hero images, or video backgrounds, etc.

Every time a similar question is posed on HN, someone says "If the assets aren't needed, don't serve them in the first place", but this is i) unrealistic, and ii) ignores the fact that while the typical HN user may like sparsely designed, text-orientated pages with few images, this is not at all true of users in different demographics. And in those demos, it's often not acceptable to degrade the experience of users on fast connections to accommodate users on slow connections.

So -- if I write a web page, and I want to include a large asset, but I want to indicate to user agents on slow/capped connections that they don't _need_ to download it, what approach should I take?


This seems like the thing that we'd want cooperation with the browser vendors rather than everyone hacking together some JS to make it happen. If browsers could expose the available bandwidth as a media query, it would be trivial to have different resources for different connections.

This would also handle the situation where the available bandwidth isn't indicative of whether the user wants the high-bandwidth experience. For example, if you're on a non-unlimited mobile plan, it doesn't take that long to load a 10mb image over 4G, but those 10mb chunks add up to overage charges pretty quickly, so the user may want to set his browser to report a lower bandwidth amount.


Here in Greece, the internet is plenty fast (in bandwidth), but everything is far away, so there's lots of latency in opening every page. Going to the US on a trip, it's striking how much faster every website loads, there's no 300ms pause at the start anywhere.

Because I got frustrated at fat websites downloading megabytes of useless things, I decided to start an informational site about this very thing:

http://www.lightentheweb.com/

It's not ready yet, but I'm adding links to smaller alternatives to popular frameworks, links to articles about making various parts of a website (frontend and backend) faster, and will possibly add articles by people (and me) directly on the site. If anyone has any suggestions, please open an issue or MR (the site is open source):

https://gitlab.com/stavros/lighten-the-web/issues


Very interesting.

I would suggest swapping the current structural aesthetic of "come in and look around" for the somewhat more widespread approach of having one or more calls to action and making the homepage fully sketch out the points you want to make.

FWIW, I say this out of pragmatism. I don't mind the "welcome! browse!" approach myself, but it won't appeal to the demographic you're trying to reach: people who themselves are being paid to eat/sleep/dream modern web design.

Another thing I would recommend is using every single trick in the book to make the site fast. For example you could get ServiceWorkers caching everything for future visits (with maybe AppCache on top just because) and use the HTML5 history API so you can preload all the site text (say, in an XHR that fires after page load) and use that to make it feel like navigation is superhumanly fast.

TL;DR, use this as your playground to learn how to make sites load better. Voila, the site will be stupidly fast, and it will self-describe too, which is kind of cool. And you'll wind up with a bunch of knowledge you could use for consulting... and then you could use the site as the home base for that, which would be even cooler.

(I realize you just started this project, and that the above suggestions are in the "Rome wasn't built in a day" category)


It's funny that you mention that, because I just wanted to have a site I could optimize to hell, and it seemed apt to make an informational site about optimization for that. AppCache is obsolete and harmful now (yes, already), and I should link to the articles that talk about that; thanks for reminding me.

As for the "come browse" approach, you're definitely right, and I don't intend the finished site to look like this, but I'm also not sure how to structure the content. What do I send the user to first? Maybe I'll write a tutorial hitting all the bullet points with links, though (eg add caching to static media, bundle them, don't use heavy libraries, load js async if you can, etc etc).

Thank you very much for your feedback!


I've wanted to play around with some similar ideas for a while too, actually. I have a few loose high-level ideas - I know I want it to feel like an app, but I want to use plain JS; I want to leverage everything modern browsers can support, while remaining backward-compatible (!); I want to try odd things like using Lua inside nginx for everything (or even writing my own web server), or programmatically reorganizing my CSS and JS so runs of similar characters are grouped together and gzipping has the best effect. I also have a hazy idea of what I want the site to be about (not a content site, some sort of interactive thing) but I haven't resolved all the "but if you make it about X, it doesn't really nail all those bits about Y you wanted" stuff yet. Anyway.

Thanks for the note that AppCache is now out of the picture. I actually think I remember reading something vaguely about it being not the greatest, but I didn't know it was actively harmful. Do you mean in a security sense or it just being bad for performance?

I wasn't sure what to say about the content structure thing at first, but then I thought: take the pragmatic approach. Gather piles and piles and piles of actual content and dump it either on the site itself or your dev version. Notions about structure, presentation and content will likely occur in the process of accumulating (or writing) what's on the site.

As for what kind of content to put up, I would suggest focusing heavily on links to (and/or articles about) pragmatic, well-argued/well-reasoned arguments for lightening page load, and the various kinds of real-world metrics that are achieved when people make the investment to do that.

An obvious example: it's one thing to say "I recommend http://vanilla-js.com!", it's quite another to say "YouTube lightened their homepage weight from 1.5MB to 98KB and made it possible for brand new demographics in 3rd-world countries to experience video playback (http://blog.chriszacharias.com/page-weight-matters). Also, the reason the site feels so fast now when you click from one video to the next is that the platform only pulls in the new video URL, comments and description - the page itself never reloads."

Regarding where to start, I was thinking that a mashup/ripoff of halfway between https://developers.google.com/web/fundamentals/ and MDN might be an interesting target to aim for. I'm definitely not saying to [re-]do that much work (although I wouldn't be protesting if someone did... some of those Fundamentals tutorials are horribly out of date now), I'm just saying, the way that info is presented could do with cleanup and you can always run rings around them in various ways (layout, design, navigational hierarchy) because of bureaucracy blah blah... but you could do worse than aiming for something that feels like those sites do. Except you'd be focusing on making everything as lightweight as possible, and you would of course make the site your own as time went by. Maybe what I'm trying to get at here is that nobody's done a full-stack (as in, "bigger picture") top-to-bottom "here's how to do everything lightweight, and here are a bunch of real resources" sort of site yet, and I'm suggesting the lightweight-focused version of Google Web Fundamentals... :/

On a related note, I've come across a few websites that are nothing more than a bunch of links and a tiny bit of text describing some technical/development issue or whatever. They almost feel like spam sites, except they talk about legitimate issues and are clearly written by a person.

I'm sure these people mean well, but the low-text high-link format (or the "I'm going to rewrite what's in this link in my own words" approach) doesn't work for blog sites (possibly because of WordPress's 10000-clicks-to-get-a-high-level-overview browsing model...tsk) and similar - I'm trawling for actual text when I'm on a site like that, if you give me a link I'm not even on your website anymore.

You've probably seen sites like that too. (Note that I'm slightly griping here, I don't see your site as similar at all. I think I got a bit off track, I was trying to demonstrate the exact opposite of the direction I would suggest you go in. :P)

Also, I just thought of https://www.webpagetest.org and https://jsperf.com. Arguably microoptimization-focused, but I thought I'd mention them anyway.


> Do you mean in a security sense or it just being bad for performance?

It is bad for performance. I was going to link you to the article, but I figured I'll add it to the site :) http://www.lightentheweb.com/resources/

> Regarding where to start...

Ah, good idea, thank you. Yes, I'm not really in a position to rewrite content that exists (it would take too much time), but I would like to at least index it sensibly.

> nobody's done a full-stack (as in, "bigger picture") top-to-bottom "here's how to do everything lightweight

That's exactly what I'm aiming for, with clear steps and possibly a checklist (good idea!) on what to do.

> I've come across a few websites that are nothing more than a bunch of links

I think that's hard to avoid when making an informational site, but the links could possibly be embedded into article-style copy, making it not look as spammy. I'll keep that in mind, thank you.

> I was trying to demonstrate the exact opposite of the direction I would suggest you go in

Haha, yes, I know what you mean, and the links will be the "read more" material. I'd like to add some original content and non-time-sensitive guides to the fundamentals.

> Arguably microoptimization-focused, but I thought I'd mention them anyway.

Those are great, thank you!


Can I suggest you add caching to the css, javascript and logo?


I will, it's still early so I hadn't taken a look. To be honest, since this is hosted on Netlify I was kind of assuming they'd be sending caching headers for all static files, but I see that they aren't.

I'll look into it, thank you!

EDIT: Netlify would be setting everything properly if I hadn't turned that off... Luckily they have great support!


Whole thing's super fast now; it loads in 55ms. I assume most of that is ping (I'm in Australia).


Fantastic, thanks!


Thanks for the links to lightweight CSS and JS libs; I actually need exactly that right now for a project.

Gitlab link is 500-ing, unfortunately.


Ah, oops :/ Seems to be okay now, let me know if there's something else you need! I'm looking for ideas on how to organize the docs/site at the moment.


>If browsers could expose the available bandwidth

I don't know why this seems like such an imposition, but I think I'd be uncomfortable with my browser exposing information about my actual network if it didn't have to. I have a feeling way more people would be using this to track me than to considerately send me less data.

That said, browser buy-in could be a huge help, if only to add a low-tech button saying, "request the low-fi version of everything if available." This would help mobile users too -- even if you have lots of bandwidth, maybe you want to conserve.


Indeed; as a user, I don't want the site to decide what quality to serve me based on probing my device. It'll only lead to the usual abuse. I want to specify that I want a "lightweight" or "full experience" version, and have the page deliver the appropriate one on demand.


I remember when websites used to have "[fast internet]" or "[slow internet]" buttons that you could use to choose if you wanted flash or not. Even though I had a high-speed, I chose slow because the site would load faster.


It doesn't have to be your actual bandwidth. The values could be (1) high quality/bandwidth, (2) low quality/bandwidth, (3) average. The browser can determine that automatically with an option to set it if you want to (e.g. for mobile connections).

That should solve most problems without giving away too much information. But an extra button would probably just confuse people.


Progressive resources would help a lot here. We have progressive JPEGs and (I might be wrong) PNGs; you could set your UA to low-fi mode and have it only download the first layer of the JPEG.


I think if someone wants to track you, the bandwidth is not the first thing they'll be looking at.

It's just another signal, but there's already a few tens of them, so adding one more is not going to make a significant difference.


If you consider every identifying piece of information as a bit in a fingerprint, it makes more than a significant difference; it makes an exponential difference. Consider the difference between 7 bits (128 uniques) and 8 bits (256 uniques) and then 15 bits (32K uniques) and 16 bits (65K uniques). Every additional bit makes a difference when fingerprinting a browser.


This sounds just like the idea that the website should be able to know how much power you had left on your device, so it could serve a lighter version of the webpage.

I think the arguments against are pretty much the same.


You can get the network type or downlink speed in Firefox and Chrome using the NetworkInformation interface: https://developer.mozilla.org/en-US/docs/Web/API/NetworkInfo...

Then you can lazy load your assets depending on that condition.
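
A minimal sketch of that, assuming the visitor's browser actually exposes the effectiveType/downlink fields described on that MDN page (support varies a lot); the .hero element and image path are made up:

    // Prefixed fallbacks for older Firefox/Chrome builds.
    var connection = navigator.connection ||
                     navigator.mozConnection ||
                     navigator.webkitConnection;

    var looksSlow = false;
    if (connection) {
        // effectiveType is 'slow-2g' | '2g' | '3g' | '4g';
        // downlink is an estimate in Mbit/s.
        looksSlow = /2g/.test(connection.effectiveType || '') ||
                    (connection.downlink || Infinity) < 1;
    }

    if (!looksSlow) {
        // Only fetch the heavy asset on a decent-looking connection.
        var img = new Image();
        img.src = '/assets/hero-large.jpg';
        document.querySelector('.hero').appendChild(img);
    }

If the API isn't there at all, this just falls through to the lightweight default, which is probably the safer failure mode anyway.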


One hack-y way to do it would be to load it via JavaScript. For example, see this Stack Overflow answer [0]. Obviously not a great solution, but it works if you're dying for something.

I bet people w/ slow connections are much more likely to disable javascript, though.

    // Time from navigation start to DOMContentLoaded, in milliseconds.
    let loadTime = window.performance.timing.domContentLoadedEventEnd -
                   window.performance.timing.navigationStart;
    if (loadTime > someArbitraryNumber) {  // threshold in milliseconds
        // Connection looks slow: disable loading heavy things
    }
[0] http://stackoverflow.com/questions/14341156/calculating-page...


It's too late to disable loading heavy things at that point - the loading is already started.

Do the opposite, start loading heavy things if the page loaded quickly.

A clean way would be to set one of two classes, connection-slow or connection-fast, on the body element. Then you could use those classes in CSS to choose the correct assets for background images, fonts and so on.
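
A rough sketch of that (the 2000 ms threshold and the asset names are arbitrary):

    // Small inline script; runs once the initial page has loaded.
    window.addEventListener('load', function () {
        var t = window.performance.timing;
        var elapsed = t.loadEventStart - t.navigationStart;
        document.body.className +=
            elapsed > 2000 ? ' connection-slow' : ' connection-fast';
    });

The stylesheet then opts in to the heavy stuff only for the fast case, e.g. body.connection-fast .hero { background-image: url(hero-large.jpg); } - browsers won't fetch a background image for a rule that never matches.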


Well, I meant to leave the heavy things out of the HTML you send over to the browser and then inject them only if they can be loaded.

So, yeah totally agree with you. Should have been clearer.


>start loading heavy things if the page loaded quickly

...and not loaded from cache. You need a way to determine this reliably. AFAIK there's no way to determine this for the main page itself.


There is a proposed API for that.

https://wicg.github.io/netinfo/

And like most such APIs, it has been kicked around for a long time and it has only been adopted by Chromium on Android, ChromeOS and iOS. It'd be great if it were more widely adopted...


Yay, more browser fingerprinting data points!


Well, to be fair, as the spec notes, you can already fingerprint on speed by timing how long an AJAX call takes.

Also, "on a shit home DSL connection" doesn't really distinguish me from millions of other UK residents.


Yes it does, when used in combination with the other data points.

That said, I always get "unique" on those fingerprinting tests. You can't be "extra unique," so I guess I don't mind it.


> Every time a similar question is posed on HN, someone says "If the assets aren't needed, don't serve them in the first place", but this is i) unrealistic, and ii) ignores the fact that while the typical HN user may like sparsely designed, text-orientated pages with few images, this is not at all true of users in different demographics. And in those demos, it's often not acceptable to degrade the experience of users on fast connections to accommodate users on slow connections.

This is prejudice. People use Craigslist, for example. If the thing is useful, people will use it. If there's a product being sold, and if it's useful to the potential clientele, they'll buy it. Without regard to the UI.

In the past ten years while my connection speed increased, the speed at which I can browse decreased. As my bandwidth increased, all the major websites madly inflated.

> So -- if I write a web page, and I want to include a large asset, but I want to indicate to user agents on slow/capped connections that they don't _need_ to download it, what approach should I take?

Put a link to it with (optionally) a thumbnail.


> People use Craigslist, for example. If the thing is useful, people will use it. If there's a product being sold, and if it's useful to the potential clientele, they'll buy it. Without regard to the UI.

Craigslist achieved critical mass in the 90s, so it's not a good example. Many useful products disappear because they can't attract enough users to become sustainable. A nice UI can affect users' credibility judgments and increase the chance that they'll stick around or buy things [1].

[1] http://dl.acm.org/citation.cfm?id=1315064


Random idea: Get the current time in a JS block in the head, before you load any CSS and JS, and compare it to the time when the dom ready event fires. If there's no real difference, load hi-res backgrounds and so on. If there is a real time difference, don't.


Wouldn't that be measuring latency more so than bandwidth? You'd run the danger of confusing a satellite internet connection (high(ish) bandwidth, high latency) with a third-world, low bandwidth connection.


Satellite ISPs have low data caps and/or charge a lot per GB of transfer. Avoiding unnecessary downloads seems like the correct behavior in this case.

I think the best solution would be an optional http header. That way, the server could choose to send a different initial response to low-bandwidth users. If connection speed is solely available via JavaScript API or media query, then only subsequent assets can be adapted for users on slow connections.
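
For what it's worth, Chrome's Save-Data client hint is already roughly this: a request header the server can key off. A minimal Node.js sketch of the idea (the page-builder functions are hypothetical stand-ins for your own rendering):

    const http = require('http');

    // Hypothetical page builders; substitute your own rendering.
    const renderFullPage = () => '<html>...full experience...</html>';
    const renderLitePage = () => '<html>...lightweight version...</html>';

    http.createServer((req, res) => {
        // Chrome sends "Save-Data: on" when the user enables data saving.
        const lite = (req.headers['save-data'] || '').toLowerCase() === 'on';
        res.setHeader('Vary', 'Save-Data');  // keep shared caches honest
        res.setHeader('Content-Type', 'text/html; charset=utf-8');
        res.end(lite ? renderLitePage() : renderFullPage());
    }).listen(8080);

The downside is that only some browsers send it, so you'd still want a sane lightweight default for everyone else.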


In the ideal future, FLIF [0] would become a standard, universally supported image and animation format. Almost any subset of a FLIF file is a valid, lower-resolution FLIF file. This would allow the browser - or the user - to determine how much data could be downloaded, and to display the best-quality images possible with that data. If more bandwidth or time became available, more of the image could be downloaded. The server would only have one asset per image. Nice and simple.

[0] http://flif.info/


We outsource this to CloudFlare and their Mirage service: https://support.cloudflare.com/hc/en-us/articles/200403554-W...


I think this is an important question.

Like another reply to your comment, I thought about having a very small JS script in the header putting `Date.now()` in a global, then on page load having another script check the amount of time that had passed to see if it was worth downloading the "extra" at all. But then again, where do you put the threshold? Has anyone tried this with some degree of success?


Design your UX so that any large assets can be requested at will by the user, and indicate the file size? That way it's the user's choice if they want to load that large video over their slow network, etc.


Most users on fast connections are not going to enjoy explicitly clicking to download every background image, font, etc. For videos it might make more sense, but there are many more optional assets to deal with.


Background images and fonts are examples of things probably not needed at all. I already have fonts on my computer, I don't need yours.


I have to agree with the original comment here:

> Every time a similar question is posed on HN, someone says "If the assets aren't needed, don't serve them in the first place", but this is i) unrealistic, and ii) ignores the fact that while the typical HN user may like sparsely designed, text-orientated pages with few images, this is not at all true of users in different demographics.


I think you're underestimating how perceptibly slow the internet has become for a lot of people. They don't realise they're downloading 2MB of JavaScript; they don't realise what JavaScript or CSS are. They'll say things like "I think my computer's getting old" or "I think I have a virus". More often than not this is just because their favourite news site has become so slow and they can't articulate it any better than that. All they want to do is read their text-oriented news sites with a few images.


I don't think there is an easy way to tell the browser not to download something because the connection is slow. Progressive enhancement can work well for giving users a basic page that loads quickly with minimal assets while also downloading heavier content in the background that renders later. That's still different than putting a timer on a request to download the content (which would require JS to detect the slow connection).

If you make a page well it should render quickly under any network condition, slow or fast. As an example, you could try serving pictures quickly by providing a placeholder picture which uses lossy compression to be as small as possible. It could be base64 encoded so it's served immediately even over a slow connection. Then, after the page is rendered, a request could go out to download the 0.5MB image and a CSS transition could fade the image in over the placeholder (there's a rough sketch of this below). People on fast connections wouldn't notice a change because it would load right away, while people on a 40kbit 2G connection would be OK with your page too.

The requests to download larger content will still go out over a slow connection but the user won't suffer having to sit through seconds of rendering. Maybe similar to how people have done mobile-first responsive design, people could try doing slow-first design. Get everything out of the critical rendering path and progressively enhance the page later.
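
A rough sketch of the placeholder-then-swap idea above (the class names, paths and truncated base64 data are made up):

    <!-- tiny, heavily compressed placeholder inlined as base64 -->
    <img class="lazy" alt="Product photo"
         src="data:image/jpeg;base64,/9j/4AAQSkZJRg..."
         data-src="/images/product-full.jpg">

    <script>
    // After first render, swap in the full-size images;
    // a CSS transition on img.lazy can fade them in.
    window.addEventListener('load', function () {
        var images = document.querySelectorAll('img.lazy');
        Array.prototype.forEach.call(images, function (img) {
            var full = new Image();
            full.onload = function () {
                img.src = full.src;
                img.classList.add('loaded');
            };
            full.src = img.getAttribute('data-src');
        });
    });
    </script>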


I think srcsets are a reasonable proxy for this. Serve extremely optimized images to mobile devices, and the full images to desktops.

It isn't perfect - you'll get some mobile devices on wifi that could have consumed the hero images and some desktop devices still on dial up, but it's still a lot better than doing nothing.
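
Roughly (the widths, filenames and breakpoint are illustrative):

    <img src="hero-480.jpg"
         srcset="hero-480.jpg 480w,
                 hero-960.jpg 960w,
                 hero-1920.jpg 1920w"
         sizes="(max-width: 600px) 100vw, 960px"
         alt="Hero image">

The browser picks the smallest candidate that covers the layout width times the device pixel ratio, so a small phone never fetches the 1920px file, even though, as noted, it says nothing about the actual connection.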


Server-side logs and sessions. You should be able to tell users' devices/browsers and what bandwidth they get, then calculate the average speed they use. You could then create a tiered service where media quality and JS features can be adjusted. You would periodically process logs to make sure the grouping is still correct. As an additional feature, users could choose in their settings which tier they want to use.

On the client side, you can achieve some of this with heavy use of media queries. https://msdn.microsoft.com/en-us/library/windows/apps/hh4535... You can basically adjust the resolution of assets, or disable them, based on screen quality. This is under the assumption that someone with a retina screen will have decent internet.
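
For example (the breakpoint and file names are made up):

    /* default: everyone gets the small background */
    .hero { background-image: url(hero-small.jpg); }

    /* only wide, high-density screens fetch the big one */
    @media (min-width: 1200px) and (min-resolution: 2dppx) {
        .hero { background-image: url(hero-large.jpg); }
    }

Browsers only download the background image of the rule that actually applies, so the override doubles as a crude proxy for "probably has decent internet".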


There are two ways:

1. Serve the AMP version of your page (https://www.ampproject.org) which uses lazy loading and other optimizations

2. Use the Network Information API (https://developer.mozilla.org/en-US/docs/Web/API/NetworkInfo...) to load heavy assets only on high bandwidth connections.


Here's a proxy: buy a 1st-gen iPad, turn JS off, and then use it to browse the site.

If it crashes the device, you're way off.

If it's appreciably slow or clunky, find ways to improve it.

Iterate until it's fast and usable.


Which demographic likes large images that are not necessary for the task at hand?


Simple. Design your webpages to load only the functional, lighter-weight, essential stuff by default. Then use JavaScript to add in all the large assets you want. Users with slow connections can browse with JavaScript turned off.


I like this solution.

Unfortunately there is little consistency in how well browsers accommodate JS-less browsing. Firefox removed the ability to switch off JS from its standard settings panel. Brave lets you switch JS on/off for each domain from the prominent lionface button panel.


I found out this the hard way.

T-Mobile used to offer 2G internet speeds internationally in 100+ countries, included in Simple Choice subscriptions. 2G is limited to 50 kbit/s; that's slower than a 56K modem.

While this was absolutely fine for background processes (e.g. notifications) and even checking your email, most websites never loaded at these speeds. Resources would time out, and the adverts alone could easily exceed a few megabytes. I even had a few websites block me because of my "ad blocker", because the adverts didn't load quickly enough.

Makes me feel for people in places like rural India that are still only at 2G or similar speeds. It is great for some things, but not really usable for general-purpose web browsing any longer.

PS - T-Mobile now offers 3G speeds internationally; this was just the freebie at the time.


Disable JavaScript. You’ll be surprised at how most of the web still works and is much faster. Longer battery life on mobile, too.


I use NoScript. The web is much less annoying by default, and I can still enable scripts for those sites where I think it might be useful.

There are a good number of sites now which have entirely given up on progressive enhancement and simply don't show you anything without JS... but I generally find I just don't care, and just close the tab and look at the next thing instead.


I found that, usually, the ones that don't show anything when javascript is disabled are the ones loading scripts from ajax.google.com.... it might appear to be so because google is so much larger (or maybe Google did that on purpose)


I started using "Image On/Off" and "Quick Javascript Switcher" plugins to easily toggle images and js while traveling South America in 2014 to increase speed and save costly bandwidth.

Still using them for the side effects. It's nice to be able to start reading an article immediately without waiting for the jumping around of content to stop, and to actually read to the end without having modal dialogs shoved down my throat.


> You’ll be surprised at how most of the web still works and is much faster.

And you'll be more secure, and you'll retain more of your privacy.

I find 'this site requires JavaScript' to be another way of saying, 'the authors of this site don't care about you, your security or your privacy, and will gladly sell all three to the highest bidder.'


Well, that's quite unfair. JavaScript is also used for creating interactive web applications - not just tracking users. Really your attitude comes off unnecessarily aggressive.


There are exceptions where JS is needed. They are exceptions though. A vast majority of the sites I see now are web-pages that think they need to be SPAs. Sorry, sucks to be them, but if they didn't mis-design, I wouldn't mis-interpret their intentions.


Obviously things like gdocs need JavaScript, but blogs and news sites and forums sure don't.


I think it depends on what the JavaScript is used for. I agree that blogs and news sites should be static, but forums - and in general, sites with a high degree of user interactivity - can see significant UX improvements with some JavaScript, for things like asynchronous loading, changing the UI without reloading the page, and even nice animations (although many of those can be done in CSS these days). However, graceful degradation is very important - disabling JavaScript on these sites shouldn't break them, merely impact the UX.

[Edit] "blogs and news sites should be static" -> this should read "blogs and news sites don't need JavaScript"


Agreed, enhancements are good (and often nice on a modern devices with all the bells and whistles enabled), so long as it degrades nicely.


How can one know that the first time they view a webpage? I just enable some particular websites to run JS because I know that they are indeed interactive web applications that I want to (read: have been constrained to) use. Also, most interactive web applications could stay just as interactive even if they reduced the amount of JS and CSS libs, fonts, images, icons, videos and other stuff that they thoughtlessly pull in. I disabled font loading on webpages and all the search boxes are now an "fl" ligature for me, though many times I find that it's no more cryptic than before I disabled fonts, because the weird icons some people invent are just as meaningless to me as random letters. I've gotten used to the fact that an identity sign means a menu, but every other day someone invents another one, so now I can't click anything without fear and uncertainty, as most of the time no one bothers to put a tooltip or a little label.


Is it more aggressive than the uses to which JS is being put these days?


Practically speaking, I think it's much more appropriate to just assume the admins are lazy.


I'll respectfully disagree with you.

It takes more work to have a bloated JS mess of a site than to have a small, simple, clean site. If they were lazy, they wouldn't have gotten to that spot in the first place.


Not really. It's very easy to get bloat if you integrate ad-networks, analytics tools, social media tools etc. willy-nilly without looking at all the resources they fetch.

The lazy approach WILL lead to bloat.

No news agency is running a plain jane HTML website.


You're kind of proving my point. If you add these things in, it's more work. If you make a plain HTML site, which is what these sites should be doing, then you aren't going to add that stuff in, which means less work.


I think you're forgetting content creators that aren't developers - although to be fair I don't see why you can't create an interface for the user that spits out / retroactively updates old pages/links/images.

There is definitely a trade-off between ease-of-use and cost-of-use and I feel this gap is bridged by the content created by those who could not publish bare bones.


> I think you're forgetting content creators that aren't developers

I don't understand what you mean by that. Content creators don't need to be developers for us to use simple, reliable systems.

> There is definitely a trade-off between ease-of-use and cost-of-use and I feel this gap is bridged by the content created by those who could not publish bare bones.

Yes, but I personally find the "ease of use" to be worse on heavy, slow, bulky sites. If content is "easier to use", then why are people constantly angry at slow, non-responsive interfaces? I see and feel this all the time, yet it's somehow "easier to use"? I don't see people complain when sites are fast, responsive and simple. Everyone's top complaint is that their computer/phone is "soooo slooow". Why is this, when we have extremely fast computers?


If you're hand-coding it either way, maybe, but at least in my personal experience it's much faster/less effort to drop Bootstrap and jQuery on the page and get to something acceptable looking than to hand-code just the 50 lines of JS/CSS I actually need. Obviously there are many benefits to the latter approach, especially in the long run, but it's definitely not the lazier approach.


> hand code just the 50 lines of js/CSS I actually need

That's the problem. If you do legitimately need it, then yeah, it might be, but my experience says you probably don't need that.


Quite the opposite, actually: people don't know how to set up a website, let alone make a simple, static one. Many websites are created on services like Squarespace, WordPress (used by many as a CMS), other CMSes, Blogger, etc. And even for those who know how to edit text files, it's easy to start with a tutorial and end up with a 1MB+ hello-world website.


I can agree with this. I was talking about people who know how to code, but you make a good point.


Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

Antoine de Saint-Exupery


Yeah, but it takes diligence to know that.


Or they assumed everyone nowadays has a browser with JavaScript, and don't care about people who won't accept using JS, or don't even know those people exist.


Cannot help but chuckle at the irony when I forget to allow goog/gstatic when using gmaps and get the non-blank-page stating "without js all that's left is a blank page".


This is the main reason why Brave is my default browser: you can set the default to disable JavaScript and enable it only on sites which actually do something useful with it. My data usage dropped something like 1GB the first month I switched.


Chrome on Android has this exact same setting. I currently have 26 sites allowed to run Javascript.


Hacker News even works without JavaScript; the whole page just reloads every time you upvote something.


Well, that's just the default way interactivity in websites works - submitting forms.


You could have used an iframe for each button instead of a normal form to prevent the reload of the page. Using an iframe with data: should take no longer to load than a normal form.


If only legacy didn't exist, and I can't think of a way to toggle iframe and form without js =\

I guess we're almost all on evergreen browsers now anyway...


FWIW, I just tried this in Firefox (set javascript.enabled=false) and went to my bank's website to see how it would fare. Firefox crashed. Tried again with no other tabs open, still crashed. Crash report sent.

OTOH, in Chrome the website actually works fine and feels more snappy with JS disabled. So, thanks for the tip!


Can you share the crash report IDs from your Firefox's about:crashes page? Can you share a link to your bank's crashing web page? I'd like to try to reproduce the crash. Thanks!


Sent by mail. Thanks for looking into it!


This crash is Firefox bug 1328861: https://bugzilla.mozilla.org/show_bug.cgi?id=1328861


Note though that disabling JavaScript can also slow down many sites. One good use of JavaScript is to detect the speed of the user's connection and then load in smaller and lower-quality assets. Disabling JavaScript can result in the default assets being loaded, which rapidly offsets the benefit of not loading that JS. Other sites will load in portions of the content first and use JS to load in extra chunks as requested, but load in the entirety of available content if JS is disabled, slowing down initial pageload enormously.


I can't think of a single website I know that uses JS to intelligently load thing via connection-speed sniffing. It's a nice thought, but it doesn't happen. There used to be JS fills for responsive imagery -- it was never connection-speed based, but viewport based -- but this is all browser-native these days. Some things might provide simpler assets via CDN-based UA sniffing.


Yeah when I used to run over my T-Mobile data allotment (in the US) and they dropped me to whatever speed they throttle you to when your "high speed" data is gone, Google Maps wouldn't load, Facebook wouldn't load, YouTube wouldn't load. I remember using all of those things back in the days when a 3G connection was a luxury, back when Windows was the best smartphone platform. What happened between then and now that suddenly nothing works?


High paying customers are going to have high speed connections. No one will talk about it, but it's discrimination. If you are on a slow connection, they don't want your kind on their site. If you try, they will mock you for not knowing your place.


No one will talk about it, but it's discrimination.

This will make a few segments of people cringe because, much like the topic of racism, there's a school of thought that it only counts when things are exhibited in severe forms like water cannons, attack dogs or restrictive housing covenants, or otherwise people being directly told 'no' because of superficial attributes like race, gender or sexual orientation.

But as a tech guy who's slowly pivoting towards law, I've long held the belief that technology will become the next battleground for civil rights, and has the potential to even change (in the sense of expanding the definition of) how we talk about civil rights. Think along the lines of people being left behind when it comes to accessing the information they need to request public resources as more and more cities move towards online-only forms, or even the use of "entitlement programs" to pay for internet access (http://www.usnews.com/news/articles/2016-03-31/fcc-expands-o...).

Now it may not be active discrimination in the sense that one will be outright told 'no', but disparate impact deserves to be at the table of discussing this sort of thing.


It seems more plausible to me that they just don't want to take the trouble to support low-bandwidth connections than that they're actively pumping up the space to keep out poor people.


> What happened between then and now that suddenly nothing works?

The average expectation changed. Back in the day, everyone was on $SLOW_SPEED, so pages were designed for it. Nowadays they can, and do, design pages for higher speeds.


Yes, and the average page size has skyrocketed, despite there being no more actual content (i.e. text). Instead we have animations of images, text fading in as we scroll down a page, and lots of JavaScript doing who knows what.

Kind of makes me miss the old plain HTML days - much less CPU intensive too.


But they should still make sure low speeds work. Try convincing some privileged 20-something developer of that, though.


It's really not about a privileged youngster, it's about business priorities. Most of the time the cost benefit ratio doesn't justify the effort to optimise for high latency high packet-loss connections. Let's say 1% of your potential users use such connections. It only makes sense to support them if your total userbase is a large enough number. For Google, it's a no-brainer. For other sites, it's something to consider.

At work we did something similar a few years ago with our Android app. We dropped support for Android 2.3 users because we only had a couple hundred of them and it didn't justify the developer cost to maintain it. WhatsApp only dropped support a month ago. I don't think that was because they were somehow less privileged than us.

The casual ageism in your comment is unbecoming. You could reconsider it.


> Let's say 1% of your potential users use such connections. It only makes sense to support them if your total userbase is a large enough number. For Google, it's a no-brainer. For other sites, it's something to consider.

You're making it usable for that 1%, but you're also making it better for the other 99%.


You aren't necessarily, though. Efforts spent optimizing the existing functionality are not being spent adding new features.


But you're optimizing things people use frequently, not adding things they probably won't use. Adding features usually has diminishing returns as well.


Even if you just want to polish or optimize existing functionality, bandwidth usage may not be the biggest bottleneck for all or most users.


I could, but I've worked with too many examples that only care about writing new code in whatever is the latest hotness and moving on. They[0] don't want to fix their bugs. They don't care about anything but "works on my machine." They certainly don't care about using bandwidth.

[0] The ones I've worked with


Bring it up! They're newer to development than you are; they're newer to life. They're far more likely to have always had high-speed internet growing up, and not to have had that visceral experience. The initial reaction will probably be negative, but an initial negative reaction to a perceived increase in scope/work is basically a universal human trait; it's surmountable.


What is an ageism? People are always inventing new ways to get offended...


I've heard about ageism (like racism, but for people of different ages) since around 2000-2001. Really around the time baby boomers started getting close to retirement age and some companies decided it was a better deal to fire them or lay them off than pay the pensions they had earned, and also with the DotCom boom where startups would only hire 20-somethings. It's not really a new term.


> What happened between then and now that suddenly nothing works?

Single Page Applications with dozens of MB of Javascript, Google AMP (which has a JS runtime taking several minutes to load on 2G), and so on.


> which has a JS runtime taking several minutes to load on 2G

source? the entire goal of AMP is to load pages quickly


Minutes might be slightly overselling it but AMP has a bit over 100KB of render-blocking JavaScript alone before you get the actual content.

Here's the current top story when I hit news.google.com in a mobile browser:

https://news.google.com/news/amp?caurl=https%3A%2F%2Fwww.was...

Loading that in a simulated 2G connection takes about 80 seconds and at least 30 seconds of that is waiting to display anything you care about. Looking at the content breakdown shows why: ~200KB of webfonts, 1.2MB of JavaScript, 275KB of HTML, etc.

https://www.webpagetest.org/result/170208_DV_R2R1/2/details/...

https://www.webpagetest.org/result/170208_5Y_R404/1/details

Loading the same page without JavaScript pulls the content render time down into a couple seconds, still over 2G:

https://www.webpagetest.org/result/170208_5Y_R404/1/details/...


On my throttled 2G connection, it’s ~2½ minutes, and because I rarely visit pages with AMP, it’s never cached.


I live in a major city, have an iPhone 6S with good LTE coverage according to benchmarks, etc. and still routinely have AMP take 15+ seconds to render after the HTML has been received. I don't know if that's Mobile Safari applying strict cache limits or an issue in Google's side but the sales pitch isn't delivering.


I have all of those as well, and AMP takes < 1s for me to load pages.

Sounds like either a configuration issue on your end or maybe your wireless carrier.


Note that I did not say it always happens — when everything is cached, it performs as well as any other mobile-optimized site — or that it's specific to my device/carrier – it also happens on WiFi, Android, etc.

The problem is simply a brittle design which depends on a ton of render-blocking resources. The assumption is that those will be cached but my experience is simply that fairly regularly I'll click on a link, see the page title load (indicating the HTML response has started), and then have to wait a long time for the content to display. Many news sites also load a ton of stuff but since fewer of them block content display waiting for JavaScript, the experience under real-world wireless conditions is better in the worst case and no worse in average conditions.


Well, it would have to be downloaded once. It'd stay cached after that though so it's not really a concern. They use version numbers for cache busting.


On mobile cache sizes are very limited, and with the size of modern web pages it has to get reclaimed regularly. You can't rely on caching to solve poor performance.


I was in rural China with an EDGE connection on Google Fi last month.

Hacker News was pretty much the only site I visit that could reliably load quickly. m.facebook.com had a slight wait but was still bearable. I had to leave my phone for 10 or 15 minutes to get Google News.

WeChat and email worked well.

Everything else was horrible, especially ad networks that would ping pong several requests or load large images.

Opera has a compression proxy mode that helped a bit when it worked but it was still painful.

For search results, Stack Overflow, and YouTube, it was easier to ssh into an AWS node and use elinks/youtube-dl.

Using SSH as a SOCKS proxy/compression was insanely slow due to something with the Great Firewall.


> PS - T-Mobile now offers 3G speeds internationally; this was just the freebie at the time.

I don't think this has changed, at least not in general. The included roaming package is still free international 2G roaming everywhere except Mexico and Canada (which get free 4G), with "high-speed data pass" upgrades available for a daily or weekly fee if you want faster. They did have a promotion for the 2nd half of 2016 (initially for the summer, then extended through the end of the year), where international 3G, and in a few areas 4G/LTE, was free without buying the upgrade passes for most of Europe and South America [1]. But that's now over, and I believe it's back to free 2G internationally now.

[1] https://newsroom.t-mobile.com/news-and-blogs/t-mobiles-endle...


On the new "One Plan" it's now 128Kbps, and 256Kbps if you pay for the One Plus International plan ($25/mo).


I use T-Mobile as my ISP because the only landline choice in my apartment building is AT&T and I absolutely refuse to do business with them. I regularly hit the monthly bandwidth cap on my plan and get booted down to 2G.

I live in California -- this is not just something people internationally are dealing with.

Annoyingly, T-Mobile's own website doesn't work properly when you're throttled to 2G speed. Found that out the hard way when I ran out of minutes on Thanksgiving and couldn't talk to my family, and couldn't load their website to add more minutes.


I mainly used it for things like slack, skype and emails, and mapping.

With iOS9+ content blockers and things like Google AMP, I think the web is a lot more usable.

Apps tend to be less bloated in terms of bandwidth as well, since they usually don't load as many assets on request.


You have just discovered why apps are so good, they can download content in small amounts.


My 35Mbit cable got shaped down to 0.25 Mbit/s yesterday because we went over our download limit. It was like having no connection. I just gave up using it.

I hate the all-or-nothing approach to shaping. At least give me 5Mbit or something!


5mbps is a perfectly fine connection, they might as well not throttle you at all then. If they want to give you barely-usable internet, about 500kbps might be reasonable. 250kbps is quite slow indeed.


I wouldn't call 5mbps "perfectly fine", but I could do basic web browsing and email etc. And that's my point. I don't want to be shaped down to a barely-usable connection. Why do they need to shape at all? The only argument is congestion. And if there's congestion, they should shape us down to a reasonable level like 5mbps. No reason it should be all or nothing, 35Mbit or zero.


Having used both, I'll take the 2G mobile over the 56k modem every time.


I just looked up EDGE. It's crazy to think that the first iPhone topped out at double the speed of a 56k modem. And that I actually used my iPhone on that network sometimes, when 3G wasn't available.


I had a blackberry a bit before the first iPhone. I remember getting an update wirelessly that was something like 3MB and just thought "Good, it should only be about 10 minutes this time."


FYI, I had the same connection and I'm pretty sure T-Mobile simulates 2G by switching 3G on and off to get the correct speed on average. Breaks a lot of stuff. Almost unusable!


It's what makes me wish designers and developers would work with artificial constraints. Sure, it's easy to design and develop without really thinking of bandwidth constraints, but reality is you are and will always be a better developer and designer by setting artificial bandwidth constraints in your mind and choices.

Seeking out or thinking as though you have bandwidth constraints can push you to find better solutions and thereby make your services better. The West, and the tech centers in particular, are really rather blinded by gluttonous bandwidth that keeps eating up greater and greater amounts of data with only marginal improvements in outcome or user experience.


I say this so much that I should probably just copy/paste it in the future but...

I used to work at a place that had a <1 Mbps modem and a ~7-year-old desktop. If their software didn't work on that, it needed to be optimized. I wish more places would test this way. Your site may work fine in downtown SF, but that doesn't mean it's going to work well anywhere else.


Databases too. Hosting the database on a fast machine with a lot of RAM and an SSD will hide performance problems that should be immediately apparent.


With games, it's way easier to see problems in the profiler on the minimum spec PC than it is on your dev machine. Everything is magnified.


Chrome's Developer Tools has throttling options immediately available in the Network tab.


UX guy here. I've always kept performance in mind. One of my pet phrases is that speed is part of design.

I've gotten a lot of blank stares.

That's why more designers don't bother: decision makers usually respond only to look/flashiness/branding.


Sad thing is that most of the web sucks on rather fast connections too. Pages weigh in at almost 5MB of data, making multiple dozens of requests for libraries and ads. Ads update in the background, consuming ever more data.

I don't notice it much on my PC, since I've got a FTTH connection, but on LTE and 3G, it's very noticeable. Enough that I avoid certain websites. And that's nowhere near slow by his standards.

I do agree that everyone would benefit from slimmer websites.


I have Javascript off-by-default, and about 80% of the time it simply makes everything better.

Oh, sure, a few sites need JS (and get whitelisted) and some just have minor layout quirks... But I can actually scroll down and read the text of a news article rather than suffering through waiting times and input-latency as Javascript churns.


Same here - I would highly recommend people to at least try this once and get a reminder of how fast sites can be.


Firefox on Android supports uBlock Origin.


I also use an add-on called Decentraleyes. It caches various common scripts from popular CDNs within the add-on itself so your device doesn't need to make any network requests for them. It was originally meant as a privacy tool, but the caching seems to be at least as valuable.


Figure out a few interesting/useful websites that work fine without Javascript. Try browsing those for half an hour, then switch Javascript back on and browse your usual websites. You'll probably notice it's so much slower, even with FTTH, because of network load but also CPU (and marginally RAM, though modern browsers are mostly to blame for that).


I notice it in my browser's memory usage.


I design and write my company's framework, that other devs use to write websites and webapps.

I base my work on existing technologies (lately Laravel, which means Symfony, Gulp, and hundreds of other great libraries) but I always strive to:

1. Reduce the number of requests per page, ideally down to 1 combined and compressed CSS, 1 JS that contains all dependencies, 1 custom font with all the icons. Everything except HTML and AJAX should be cacheable forever and use versioned file naming.

2. Make the JS as optional as possible. I will go out of my way to make interface elements work with CSS only (including the button to slide the mobile menu, sketched at the end of this comment, plus various kinds of tooltips, form widget styling, and so on). Whenever something needs JS to work (such as picture cropping or JS popups) I'll make sure the website is usable and pretty, maybe with reduced functionality or a higher number of page loads, even if the JS fails to load or is turned off. Also, the single JS file should be loaded at the end of the body.

2b. As a corollary, the website should be usable and look good both when JS is turned off, and when it's turned on but still being loaded. This can be achieved with careful use of inline styles, short inline scripts, noscript tags, and so on.

3. Make the CSS dependency somewhat optional too. As a basic rule, the site should work in w3m, as pointed out above. Sections of HTML that make sense only when positioned by CSS should be placed at the end of the body.

I consider all of this common sense, but unfortunately not all devs seem to have the knowledge, skill, and/or time allowance to care for these things, because admittedly they only matter for < 1% of most website's viewers.
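
A minimal sketch of the CSS-only slide menu mentioned in point 2, using the checkbox hack (the class names are made up):

    <!-- markup: the checkbox comes before the nav it controls -->
    <input type="checkbox" id="nav-toggle" class="nav-toggle">
    <label for="nav-toggle" class="nav-label">Menu</label>
    <nav class="nav">...navigation links...</nav>

    /* CSS: park the checkbox off-screen, slide the nav when checked */
    .nav-toggle { position: absolute; left: -9999px; }
    .nav { transform: translateX(-100%); transition: transform .2s; }
    .nav-toggle:checked ~ .nav { transform: translateX(0); }

No JS involved, and with CSS off the nav degrades to a plain visible list of links.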


I don't completely agree. If you're working on an SPA that targets higher-income (e.g. better internet) consumers, a developer could be forgiven for doing as much as possible using JS. I choose to sacrifice the <1% of my target users who have JS turned off or have poor connections to benefit the UX of the other 99% of users. I think the time and resource investment in strictly adhering to these guidelines is cost-prohibitive for many lean engineering teams, particularly those at early-stage startups.

I get that the web was designed to be optimized for HTML/CSS first, JS last. However, the web was also not originally designed to support web applications as complex as the marketplace currently supports. As the web matures as the only universal application platform (to compete with various native platforms), I think a paradigm shift is required -- towards replacing as much markup with programmatic code as possible. Such a paradigm shift is required for complex web applications to compete with native environments going forward.

Of course, none of this applies if your organization just requires static websites. Choose the right tool for the job, and all that.


I would say a lot of people disregard internet speeds when creating websites. Just check the source of any random website, and you're going to see jquery loaded, where it's either not being used at all, or using a single function in the entire library.


I think your 2, 2b, and 3 are absolutely spot on, and it is amazing the extent to which web developers go out of their way not to do these things.

I'm going to slightly disagree with 1, though. It's somewhat important now, but should become less so as HTTP/2 gets more widely used.


I travel full-time and my primary internet is 4G LTE. But, even though I spend $250 per month on data, I still run out, and end up throttled to 128kbps for the last couple of days of the data cycle. The internet is pretty much unusable at that rate. I can leave my email downloading in Thunderbird for a couple of hours and that's usable (Gmail, however, is not very usable), and I can read Hacker News (but not the articles linked, in most cases). Reddit kinda works at those speeds. But nearly everything else on the web is too slow to even bother with. When I hit that rate cap, I usually consider it a forced break and take a walk, cook something elaborate, and watch a movie (on DVD) or play a game.

So, yeah, the internet has gotten really fat. A lot of it seems gratuitous...but, I'm guilty of it, too. If I need graphs or something, I reach for whatever library does everything I need and drop it in. Likewise, I start with a framework like Bootstrap, and some JavaScript stuff, and by the time all is said and done, I'm pulling a couple MB down just to draw the page. Even as browsers bring more stuff into core (making things we used to need libs for unnecessary) folks keep pushing forward and we keep throwing more libraries at the problem. And, well, that's probably necessary growing pains.

Maybe someday the bandwidth will catch up with the apps. I do wish more people building the web tested at slower speeds, though. Could probably save users on mobile networks a lot of time, even if we accept that dial-up just can't meaningfully participate in the modern web.


Incidentally, you may find GMail's "basic HTML view" works better when your connection's throttled:

https://support.google.com/mail/answer/15049

And as for reddit, their old mobile view is still available at the "i." subdomain - it's so much lighter-weight than the dreadful JS-laden one they introduced a while back, it's the only way to use reddit on mobile IMO:

https://i.reddit.com


I use gmail's Basic HTML interface all the time. AJAXy gmail and Inbox balloon to incredible levels of memory use pretty quickly, and are slower for most interactions than the full-page loads on Basic HTML, which means that someone somewhere lost track of WTF they were supposed to be doing all of this for.

It's easily worth the loss of a couple features.


This is commonly stated but not true under all conditions. The full-blown GMail UI has extensive latency-hiding capabilities. The basic HTML UI has no latency-hiding features of any kind. If you are on a high-latency connection but you have some bandwidth available, you will have a much better experience with the full UI. Otherwise you face the full latency for every action.

The Inbox UI is for some reason irredeemable. It is slow under all conditions.


I just tried it. My GOD that's quick. I think I'll stick with the basic HTML version of Gmail.


Reddit appears to be actively trying to hurt the mobile experience. They bought the best iPhone reddit app, and removed it from the store entirely. They are also currently trialling a version of the mobile site that does not work at all if you aren't logged in.

Personally I'd prefer they just show ads on mobile than make the experience suck on purpose.


I used to get the desktop version on my tablet (Nexus 7, 2013). It worked fine, though swapping to .compact was a little easier to use.

For the last month, it's given me the new mobile version. It never remembers that I don't want to try their app (and the opt-out link is both tiny and right under the giant "yes please" button).

But the worst part? I'm on a fast home connection, and the mobile site gives the same loading/network experience as being in the Welsh countryside.


It's a shame that their oldest mobile version, the original m.reddit.com is no longer available. It was truly the most compact way to experience the site. Barely more than a list of links.


Add .compact to the end of any reddit link (before the query string) and you can still access it. A few minor things are broken, but it still works well enough that I prefer it over the modern mobile site.


I think they mean the version before that. That version was REALLY barebones.


Any tips on low-bandwidth login pages for GMail?

Sometimes I can't even get to the HTML view because of the login process!


I thought Sprint and T-Mobile had unlimited plans in the $60-80 range for LTE?


Not really.

I have accounts with both. T-Mobile has "unlimited" for the phone, but for hotspots, there are no unlimited plans (this may not be true anymore; I think if you get the new One plan, and add the $25 international option, it includes unlimited 4G LTE data, even for hotspots). The unlimited plan for phones also de-prioritizes customers that use over a certain amount of data in a month; but it's never the device usage that is a problem for me.

Sprint is similar, only even more restrictive in their "unlimited" plans. After 28GB, they throttle the device. Hotspot usage is severely restricted (2GB in the default "unlimited" plan) unless it is specifically a plan for a hotspot (not a phone acting as a hotspot).

There was no unlimited hotspot plan on any carrier at the time I signed up for all of my plans.

Sprint was the best deal per-GB when I hit the road this time around, so I have a 40GB plan on a hotspot from Sprint, and 16GB from T-Mobile spread across two devices (a hotspot and a phone that can act as a hotspot). I end up using all 56GB most months. Each provider gets about $125/month from me.

T-Mobile further complicates things by offering Binge On, which allows me to watch Netflix without burning as much data (the video itself doesn't use data, but all of the meta data, and browsing Netflix does, so once I'm out of data, it's impossible to actually watch anything, even with Binge On).

Data over 3G/4G is complicated as hell, is what I'm trying to say, and it's going to cost a fortune if it's your primary method of getting on the internet. I need to actually confirm with the T-Mobile folks that the One plan plus the International add-on provides actual unlimited data. If it does, it'll allow me to shrink my Sprint plan by a bunch, and stop running out of data.

Also worth noting: T-Mobile used to have a smaller network than Sprint (so much so that when I was traveling in the past, even though I had a grandfathered in unlimited plan on T-Mobile, that they finally made me switch off of a few years ago, I had a Clear hotspot, as well, to fill in the coverage gaps). But, the reverse is true now. T-Mobile's network is also faster in most locations. With the new bands they've put online, T-Mobile reaches further into out-of-the-way places.

In short, "unlimited" is a lie (or was; T-Mobile may actually have an unlimited data plan, now, though I wouldn't be surprised if it still de-prioritizes heavy users...and if "heavy" means some ridiculously small number like 28GB in a month).

Edit: It used to be possible to use a tethering app on a rooted phone to work around such limits. Both networks detect hotspot usage (somehow), even with a rooted phone.


They can still throttle you after a certain level of consumption. With a recent T-Mobile promo, you could have 4 lines for the price of 2, each with its own 4G allotment... so if I ever experience throttling, I can switch to an alternate device.


What really has baffled me lately is Chase's new website. They did a redesign around, maybe 6 months ago, to make it "more modern" or something, I guess.

Now the thing just loads and loads and loads and loads. And all I want to do is either view my statement/transactions or pay my bill! Or sometimes update my address or use rewards points. That's not complicated stuff. I open it up in a background tab and do other stuff in-between clicks to avoid excessively staring at a loading screen.

I just tried it out, going to chase.com with an empty cache took a full 16 seconds to load on my work computer and issued 96 requests to load 11MB. Why!?

I then login. The next page (account overview) takes a full 32 seconds to load. Yep, half a minute to see my recent transactions and account balances. And I have two credit cards with zero recent transactions.

I am just baffled as to who signed off on it!! "This takes 30 seconds to load on a high speed connection, looks good, ship it."


Chase's website is just awful for just about anything.

It's particularly terrible if you are ever trying to use award points. The site is painfully slow, even on the fastest of connections.


To be fair, the Chase website was awful before. It was just awful and slightly faster.


> Why shouldn’t the web work with dialup or a dialup-like connection?

Because we have the capability to work beyond that capacity now in most cases. That's like asking "why shouldn't we allow horses on our highways?"

> Pretty much everything I consume online is plain text, even if it happens to be styled with images and fancy javascript.

No doubt, pretty much everyone who works on web apps for long enough understands that it's total madness. The cost, however, of supporting people so far behind that you can only serve them text is quite frankly unmanageable. The web has grown dramatically over the past 20 years, both in terms of physical scale and supported media types.

The web is becoming a platform delivery service for complex applications. Some people like to think of the web as just hyper text, and everything on it should be human parse-able. For me, as someone who has come late to the game, it has never seemed that way. The web is where I go to do things: work, learn, consume, watch, play. It's a tool that allows me to access the interfaces I use in my daily life. I think there's a ton of value in this, perhaps more than as a platform for simple reading news and blogs.

I look forward to WebAssembly and other advancements that allow us to treat the web as we once treated desktop environments, at the expense of human readability. It doesn't mean we need to abandon older + simpler protocols, because they too serve a purpose. But to stop technological advancement in order to appease the lowest common denominator seems silly to me.


> Because we have the capability to work beyond that capacity now in most cases. That's like asking "why shouldn't we allow horses on our highways?"

Horses on highways would cause accidents. I have yet to see a fast-moving web page crash into a slow-moving one and shut down the router. Analogies work better when there is connective tissue between the concepts in play.

More generally, the vast bulk of the problem is not human readability or interactivity over http, but more a matter of insane amounts of unnecessary gunk being included in web pages because of faulty assumptions about the width of pipes.

More generally, I find myself moving in the opposite direction. I find that many SaaS services' interests don't align with mine, so I'm going back to local applications. I don't trust others with most of my data, so the only service that sees much of it only sees encrypted blobs (for offsite backup). I've always run my own mail, and have slowly been expanding the services I host as I bring more of this stuff in-house. And so on. But I realize I'm in a minority.

But the nice thing is that it gives me an intranet and "other" grouping that is very straightforward, so that the browser instances that touch untrusted (not-mine) services can run in a "bastion" VM, locked down nicely and reset to a pristine state at will, not to mention allowing some stupid networking tricks that are sometimes useful.


> I have yet to see a fast-moving web page crash in to a slow-moving one and shut down the router.

To be pedantic, if you have a threaded server (thread per connection) slow clients can cause problems.

https://en.wikipedia.org/wiki/Slowloris_(computer_security)


> Horses on highways would cause accidents.

You are right, they are not the best, but people make do: http://www.mapministry.org/news-and-stories/amish-buggy-acci...


> More generally, the vast bulk of the problem is not human readability or interactivity over http, but more a matter of insane amounts of unnecessary gunk being included in web pages because of faulty assumptions about the width of pipes.

Doesn't affect the vast majority of users.

> But I realize I'm in a minority.

Yes, your statements are pretty anecdotal and don't really relate to the vast majority of internet users.

I'm sure your setup works great for you, but it sounds like a ton of overhead, none of which is required if you have fast internet and don't give a shit about what's going on (like nearly everyone who uses the internet.)


> don't really relate to the vast majority of internet users.

It's not that they don't. It's that you don't care.

Because why should you care about something that doesn't meaningfully increase ad revenue or sales? Why should you care that the 2 extra seconds of page load on a fat pipe, and the fraction of a cent of extra electricity burned, when multiplied by a million of your US users, add up to over 500 man-hours and a few kilograms of coal wasted? Not to mention the site being unreliable or unusable in trains, rural areas, and larger buildings where a user doesn't have Wi-Fi access.

And the problem wouldn't be as big if it were just you. The problem is, everyone else thinks the same way, so all the waste mentioned above adds up. All because people are too lazy not to put in useless gunk, which often takes more work to add to a site than to leave out in the first place.


And the funny thing is that it's even been shown that increasing speed increases usage and revenue/sales, so there's not even that excuse. Slow pages break flow, which causes people to realize that they've already wasted too much time on your site and were supposed to have done xyz 15 minutes ago.


I don't think your claim that it doesn't affect the vast majority of users is correct. There are well over a billion people in India alone. You might argue that you were only making claims about US users, but a lot of sites have no reason not to be global.


If you've ever traveled internationally, you'll know that a lot of English-language sites become unbearably slow to use over mediocre hotel wifi, let alone cellular. Lightweight sites like HN become relatively MUCH more pleasant to use.


>That's like asking "why shouldn't we allow horses on our highways?" //

We do allow horses on our "highways" in the UK, not motorways but other highways. It's a terrible analogy though as you can't have progressive enhancement of a road for the vehicle capabilities as you can a website.

>But to stop technological advancement in order to appease the lowest common denominator seems silly to me. //

For websites where that's appropriate, it seems perverse to me not to serve simple text and then offer enhanced capabilities when the web client can make use of them.

You don't have to stop advancing the technology; having radio broadcasts doesn't hold back VR/AR. But if everything you have to convey can be carried in an audio stream, then purposefully designing a site to be hostile to clients that can only consume an audio stream is, to me, wrong. Sure, add an immersive environment where one can play on a VR beach whilst listening to your "24/7 wave noises", but don't make it so that simple audio access is impossible if the primary content only requires that.

In other words don't require webgl so I can see your store opening times.


Text articles are probably the most widespread type of content on the web. Most web sites are not web apps. But many developers want to re-construct web sites into web app architectures even when there's no benefit to the end user.

I posted the links below on a previous discussion about AMP. They are two examples of basic, javascript-free web pages with text content. There's about 2500+ words on these test pages, but the page weight is still much smaller than, for example, a medium article with one tenth the number of words (250).

Try loading them on your mobile on a 3G (or slower) connection. Do they load fast or slow?

Version A: http://interfacesketch.com/test/energy-book-synopsis-a.html

Here is an identical version to the above but one that loads custom fonts (approx 40kb extra).

Version B: http://interfacesketch.com/test/energy-book-synopsis-b.html


Version B could probably be optimized here by not loading two very similar fonts.

You can also try loading the font locally first, to avoid the download if it's installed on the user's system.

Finally, unicode-range lets you avoid the download completely if that character isn't included on the page. Not a likely outcome on an English page, but a good practice regardless.

Webfonts are tough to optimize, but not impossible. Right now there are solutions that use JavaScript to load the font in the background so it's non-blocking (e.g. loadCSS[1]), but it's not ideal when trying to keep overhead down. The situation should improve once font-display[2] becomes standardized.
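
As an alternative sketch (using the standard CSS Font Loading API rather than loadCSS; the font name, path, and CSS class here are made up), the font can be fetched without blocking first render, so text shows in a fallback face until it's ready:

    // Load a webfont asynchronously; text renders in the fallback font until it arrives.
    const face = new FontFace("BodyFont", "url(/fonts/bodyfont.woff2)"); // made-up name/path
    face
      .load()
      .then((loaded) => {
        document.fonts.add(loaded);
        // CSS only switches font-family when this class is present, so there's no blocking.
        document.body.classList.add("fonts-loaded");
      })
      .catch(() => {
        // Font failed to load: keep the fallback font, nothing breaks.
      });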

For what it's worth though, I find Version B looks much nicer.

[1] https://github.com/filamentgroup/loadCSS

[2] https://css-tricks.com/font-display-masses/


Thanks for trying out these test pages.

Version B has two different font weights from the same family: Regular and Medium/Semi-bold. Version A relies on the fonts already installed on the user's computer.

Dropping the semi-bold font weight would save approx 23k, but having a regular and bold font weight felt like the minimal styles needed to support the page.

Dropping the header image would save 40k. (Note: the header image hasn't been optimised using something like the HTML srcset attribute which can load different picture sizes for different devices).


It could avoid loading the font entirely. Why does every webpage assume I want to use their fonts?


about:config -> browser.display.use_document_fonts = 0

I don't load fonts. I either don't notice a difference, or the fancy pants fonts are hard to read.


Ever run into trouble on sites that use fonts for icons, like the Bootstrap ones?


It's a pain in the arse. Icon fonts are a Wrong Solution.


> Because we have the capability to work beyond that capacity now in most cases

Except if you are in a rural area of a developed country, or in practically any undeveloped country. Most of Africa, Asia and South America have abysmal internet connections, and rural parts of Australia (even a few hours drive from a major city) are only able to get satellite internet.

Tech people often fail to realise how divided and limited access to the internet actually is once you leave 'tech hubs'.


Absolutely, but those are secondary markets for most American businesses, because there is an abundance of issues with delivering any application to those areas, whether it's political, cultural, financial, etc. Priority for most businesses and website owners is to serve the people you know you can serve first, then work on supporting those other areas.

Twitter, like most companies, exists to make money. If you're busy shaving off every bit you can from your requests, you're spending a lot of money. You're also losing money because I'd expect you wouldn't be serving ads, etc. as well.

Most blogs that exist to make money aren't targeting those without good internet either, so I don't really see the problem.


Rural areas are secondary markets for most American businesses?

This isn't just about Twitter. It's about making sure that if you are helping build a local Mom & Pop shop's online presence - which is where all business is nowadays - then you build it in a way that their customers might care about.

There is a world outside of cities, and a lot of people live there. The tech world needs to wake up to that, because those people in the places where Ubers don't go and GrubHub doesn't deliver exist (and vote) too.


They vote, yes, but do they buy? Do they significantly contribute to a company's economy (even potentially)?

If they do, companies will be happy to spend time and resources in serving lighter versions of their content. But if they don't, there's no reason, from the POV of a company, to employ resources in something that doesn't generate revenue.

If there's money, there's will.


The web is becoming a platform delivery service for complex applications.

No, it's not. Yes, there are MMORPGs that run in the browser using WebGL.[1] But very, very few pages use all that capability. Most web pages today would work just fine in HTML 3.2.

And what is this thing with running over ten trackers on one page?

[1] http://www.webglgames.com/


As soon as WebAssembly is stable and available, I'm predicting we'll see a dramatic shift away from HTML + JS as the target for most web apps. I don't mean blogs, I mean people who are trying to build websites that work like apps.

Most webpages today that would work fine in HTML 3.2 aren't built as massive JS web apps. If they are, it may serve a purpose (better UI/UX for most of their users being a major one.)


We see a gazillion web apps (and pages that are web apps but shouldn't be) because of people who are not willing to learn anything but JS. Good luck dragging them to C/C++.


I don't think the point is that everything should be human parse-able. But most things on the web are not complex applications. Dan Luu isn't trying to use Google Maps over HSCSD, he's trying to read hypertext blogs and Twitter. Do you seriously think Twitter qualifies as a "complex application"?


For a long time, you had to load 2+ MB of data to see a 140-character tweet. Twitter actually recently fixed this; the tweet text is available in the title now, so it will be available very early in the page load.

Not only does this make twitter usable on dialup again (it was effectively unusable ever since they switched from a simple html page to a massive "application" that you have to re-download every time they deploy), but it lets you search through tweets you've read in the browser history.

Getting this stuff right is not rocket surgery.


I love that they are keen on keeping compatibility with archaic SMS but not so much with slower connections.


1. SMS isn't archaic.

2. 140-char limit isn't about SMS.


1. 1992 is archaic. 2. Officially it was; now it's just stupid and unnecessary.


SMS isn't archaic any more than writing is archaic. They're both not brand new, and they're both widely used today.

VHS is archaic.


The 140-character limit was INITIALLY because of SMS, but now it's just the standard format for the medium. The whole point of Twitter is the 140-char limit.


The concept behind Twitter? No. Twitter? Maybe..? They have a user experience they want to deliver to 9X% of their users, so they optimize for that. For most of Twitter's users, their connection speeds aren't the limiting factor in their experience. If you optimize for the lowest common denominator, that 9X% almost certainly gets a worse experience.


* There is no proof in your statement that currently 9X% of their users have a fast internet connection.

* Even so, it would simply be an artefact of them never bothering to optimize for people with slower internet connections. If they did optimize, a lot of their traffic would come from the so called "lowest common denominator" just as the blog post says it did for Google.

* Finally, I find the term "lowest common denominator" misrepresentative because it implies that optimization needs to cater to the slowest connection on earth, which is clearly not the case. If the average speed is higher than that of 90% of internet connections (as per the Akamai report cited in the blog post), then the distribution of internet speeds is clearly skewed, and there's value in catering to at least the median, if not the minimum, speed.


It's not always a tradeoff. Several sites stopped serving CSS during the Superbowl (https://twitter.com/jensimmons/status/828415747625992192). At a smaller scale, the same thing happens to some sites that reach the HN front-page.


So a couple sites go down on a single day of the year because they were unprepared and suddenly it's a bad idea to depend on CSS?

Seems like costly optimization with almost no benefit to me.


It depends on exactly what the costs are. But in that example, those sites are spending millions on Super Bowl ads, and it's probably their highest-traffic day of the year, so it's not "almost no benefit".


So what's up with that character limit?


> Because we have the capability to work beyond that capacity now in most cases. That's like asking "why shouldn't we allow horses on our highways?"

Adding to your analogy, the JS bloat mentioned in the article is like driving a semi-truck carrying only one carton of oranges. It's a lot of extra waste for a very slight benefit.


While I agree with others here that dial-up-friendly sites don't "collide" with complex web-apps (thus rendering this argument somewhat void), I sadly think the underlying problem is much more mundane:

Why are word processors not orders of magnitudes faster than two decades ago? Same reason. No one wants to pay the additional costs of achieving that.

Practically speaking, this means that either Random Local Newspaper Inc. knows their potential online reader base exactly and has deemed that the additional effort wouldn't pay off, or they have no idea of the potential customers that would flock to their site if they did put in the effort, and thus don't miss them. Add on top of that the fact that much of the internet is based on (perhaps only assumed) prestige (or loss of it, if your page doesn't have the most modern features A through Z or looks like it's from the '90s) or is extremely short-lived (a newspaper doesn't care about yesterday's news), and this theory pretty much explains it all.

Cynical, I know, but hey... :-/

Edit: Also: We can have nice things. No one with a 56k connection will honestly try to watch Netflix. A much more interesting question would be: How could we create incentives for big corporations to optimize their pages for connections with a small bandwidth? After all, much of the web is also built on ad deals, which also almost certainly don't target those people.


> Why are word processors not orders of magnitudes faster than two decades ago?

Because even in the 1990s, word processors weren't orders of magnitude slower than the person sitting at the keyboard, which is the ultimate limiting factor on word processing speed.


Which is, essentially, the same explanation. I would very much like a fast, Vim-style modern word processor (the new cursor behavior in Word drives me insane btw). But I'm not the main target. I alone wouldn't pay the bills.

The same goes for almost any website: people with 56k don't consume many digital goods, and returns from ads are most likely almost non-existent (especially if the ads are huge themselves). That easily explains a huge part of the web.

Thus, either connectivity around the globe has to be improved (that's the route Facebook seems to be choosing, albeit with debatable conditions for their new "customers"), or other incentives have to be created for anyone hosting something on the web to attract people with low-bandwidth connections.


I very frequently had to sit and wait for my word processor to catch up when typing quickly, well into the mid-2000s.


In the US, we do generally allow horses on our highways. https://asci.uvm.edu/equine/law/roads/roads.htm

The Amish frequently do so in my area.


> The cost however, in supporting people so far behind as to only be able to serve them text is quite frankly unmanageable.

No, no it's not, it's really not. You're already writing your SPAs with a REST backend, right? Well, guess what: static HTML & REST go together like burgers & beer! All you need to do is add an HTML content renderer to your REST backend, and you have _something_ people can interact with.
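
As a sketch of what that can look like (assuming an Express-style backend; the route and the loadArticle helper are hypothetical), the same endpoint can answer with JSON for the SPA and plain HTML for everything else:

    import express from "express";

    const app = express();

    // Hypothetical data access; stands in for whatever the REST backend already does.
    async function loadArticle(id: string) {
      return { title: `Article ${id}`, body: "..." };
    }

    app.get("/articles/:id", async (req, res) => {
      const article = await loadArticle(req.params.id);
      if (req.accepts(["html", "json"]) === "json") {
        res.json(article); // the SPA and API consumers get JSON
      } else {
        // Everyone else (lynx, w3m, curl, humans without JS) gets usable HTML.
        res.send(`<!doctype html><title>${article.title}</title>
          <article><h1>${article.title}</h1><p>${article.body}</p></article>`);
      }
    });

    app.listen(3000);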

> The web is becoming a platform delivery service for complex applications. Some people like to think of the web as just hyper text, and everything on it should be human parse-able. For me, as someone who has come late to the game, it has never seemed that way. The web is where I go to do things: work, learn, consume, watch, play. It's a tool that allows me to access the interfaces I use in my daily life.

True fact: learning, consuming & watching are all well-supported by static HTML.

> But to stop technological advancement in order to appease the lowest common denominator seems silly to me.

Part of the problem is that we're not advancing technologically: we're getting bogged down in the La Brean morass which is modern web development. HTML is a terrible but acceptable markup language; CSS is an ugly but somewhat acceptable styling language; the DOM is as hideous as a Gorgon; JavaScript is approximately the worst language ever; the combination of all the above is grotesque, and an ongoing indictment of our entire industry.

A cross-platform application-distribution standard sounds pretty awesome, but that's not what web pages are supposed to be, and it's not what web browsers should deliver. The web is a web of hyperlinked documents. It's right there in its name. And anyone who demands that his readers enable a somewhat cross-platform, highly insecure, privacy-devouring application-distribution tool in order to read his text is welcome to take a long walk off of a short pier.


HTML is certainly not a 'terrible but acceptable markup language'. Used properly, it's a _fine_ markup language. CSS is brilliant, at least in its original incarnations: it's bloated beyond repair now.

Javascript, the DOM, yeah agreed.


>The web is a web of hyperlinked documents.

The web used to be a web of hyperlinked documents. This has not been the case for a long time now. Webapps have evolved to enable widespread communication, collaboration, gaming, social media, and so much more.

It's the single-largest open platform that's available from nearly any device in the world. It's a little more than a document viewer.


'Web apps' are still irrelevant to the mainstream web user. Unless you think that Facebook counts as a 'web app'. Facebook is a great example of a website that very much feels like a website: it's full of hyperlinks.


>Because we have the capability to work beyond that capacity now in most cases. That's like asking "why shouldn't we allow horses on our highways?"

More like 'why shouldn't we allow people on our streets?' Which is a good question. It's a question we answered. With a yes.

>I look forward to WebAssembly and other advancements that allow us to treat the web as we once treated desktop environments, at the expense of human readability.

If you want to make a desktop programme, make a desktop programme. Don't ruin the web.

At my job, I work on a web app. It runs only on Android devices through a web view in a native app. It could have been a native app, and would have felt more natural on an Android device, been faster, and probably more secure too.


It's not just about plain text vs. rich content. It's about doing things at least reasonably efficiently. I commented about Gmail taking 500MB in Chrome in another comment. I just checked Mail.app; it started with 200MB. I wonder what the energy budget of all these bloated websites and their parsing is across millions of computers. Server-side computation allows economies of scale and might have saved CPU cycles for the actual work, but I'm sure many times more than that has been wasted by a bloated web.

I don't disagree that the web and web applications provide a lot of convenience, and I don't want to lose that. But I don't think for a moment that things can't be significantly more efficient.


> Why shouldn’t the web work with dialup or a dialup-like connection? Because we have the capability to work beyond that capacity now in most cases. That's like asking "why shouldn't we allow horses on our highways?"

As a counterpoint, I have a 1G FTTH connection, 8GB of ram, but only a dual-core 1.4 GHz Haswell (Celeron 2955U) and an iffy SSD, so I get a terrible web experience if I have more than one tab open.


>only

Sounds like you need an adblocker, Firefox ESR + ublock Origin should be enough to make a 10yo single-core machine usable for common web browsing


I work at a company whose revenue primarily comes from advertisements; it feels unethical to run an adblocker while ads are paying my salary. (And also would blind me to the experience that most users have; even ignoring the nice connectivity and ram that I have)


I have to applaud you for the second point. With many pages I see nowadays just from googling things like Vim keybindings (I am sadly not allowed to install an ad blocker on some systems), I've started to heavily doubt that anyone maintaining those sites has seen them through a user's eyes.


Something that sticks out looking at the table: how can some sites simply FAIL to load? There is something inherently wrong with our web today when my internet is very slow and _could_ load a page in 80 seconds if I just left it alone, but the server has its timeout configured at 60 seconds. So I can never load the page?!

The assumption here is that both ends of the connection are based on Earth. With these hard timeout limits, how will anything even remotely work when we are an interplanetary species, or even in orbit around Earth?


I chatted about just this timeout issue with an engineer from a major CDN while he was at my house enjoying the dialup. Seems like simply a matter of resource management; slow connections do use more resources. Most CDN customers don't care or don't know that a few percent of the US population is getting their web browsing broken by timeouts, so there's no push back.

(NASA has their wacky ways around the issue for ISS residents, something like VNC to a ground-based browser IIRC.)


That's pretty smart of NASA; things like a caching HTTP proxy still wouldn't work in some cases, given that sites can expect your browser to make a given AJAX request within X ms of requesting the page.

I wonder if there's still a more "API level" way to handle things, though, rather than making your computer into a dumb frame buffer client with extremely low responsiveness to typing/scrolling.

Maybe they could run a headless browser on Earth, and use a protocol like the Chromecast does to synchronize its DOM state to a "browser proxy" in space—like a higher-level, domain-specific version of the X11 protocol. That'd still have latency for JavaScript-based webapp UI, though... maybe the JS could be split and its state synchronized so that the "server" handles timer triggers, while the "client" handles input events.


> (NASA has their wacky ways around the issue for ISS residents, something like VNC to a ground-based browser IIRC.)

Wait, VNC? Won't that use oodles more bandwidth than proxying HTTP?


It's about lag, not bandwidth.


I remember astronaut Alexander Gerst saying somewhere that the VNC was also for security reasons. Keep in mind that most infrastructure on the ISS was installed in the mid-00s and that the Thinkpads were possibly running Windows XP and IE 6 then.


I certainly bloody hope they're not running Microsoft software on the International Space Station.


They were in 2001: https://m.theregister.co.uk/2001/04/27/nt_4_0_sp7_available/

This is one of my favorite NT4 tidbits :)


I'm surprised they need anything. I get my home internet from a satellite in geosynchronous orbit and it works fine other than the latency. No human has gone that far from Earth since Apollo in the 1970s, so my home internet has to be worse than that of anyone NASA cares to give internet to. (Though I have no idea what bandwidth NASA has.)


If they wanted to use a geosynchronous satellite from the ISS, it would be occluded by the earth half of the time and the other half of the time they'd have to track it with a satellite dish over the course of the 45 minutes (out of every 1.5 hours) they have access to it.

Of course the same is true of ground stations... I'm not actually sure how they do it, but they probably don't need as high-gain an antenna to reach them.


The ISS gets connectivity via a small number of ground stations and mostly satellites. Their connectivity is not uninterrupted; there are small regular time intervals at which none of their uplinks is in line of sight.


> how will stuff even remotely work when we are a interplanetary species

IPFS or similar. Basically, make all public content content-addressed (give me the article with SHA 0xabcdef) rather than connection oriented (give me the bytestream that comes from http://news.ycombinator.com/foo/bar)


Open connections take system resources. One way to DOS a website is to open a ton of connections and just sit on them. If the server allows extra-long timeouts as long as some bits come in occasionally, then the attacker can send bits occasionally. It's a tricky problem. It might work to allow long timeouts as long as you don't have an ongoing DOS attack, but that sort of thing is hard to configure and test.


After Slowloris and other attacks, this is pretty much a solved problem. Minimise per connection memory, limit connections per IP, drop connections which don't finish the request in X seconds, and separate your app server from your front proxy. And for the front proxy, don't block on reads - do minimal event loop until you can dispatch the full request.
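
For the "drop connections which don't finish the request in X seconds" part, here's a minimal sketch using Node's built-in HTTP server (the timeout values are arbitrary; per-IP connection limits would still live in the front proxy):

    import http from "node:http";

    const server = http.createServer((_req, res) => {
      res.end("ok");
    });

    // Slow clients that never finish sending headers or the body get dropped
    // instead of holding a connection (and its memory) open indefinitely.
    server.headersTimeout = 10_000;   // all request headers must arrive within 10s
    server.requestTimeout = 60_000;   // the whole request must arrive within 60s
    server.keepAliveTimeout = 5_000;  // idle keep-alive connections are closed quickly

    server.listen(8080);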


One person's "Solved Problem" is another person's "I just built an app in rails that accomplishes what I want it to, why did it stop working?"


I heard this somewhere on HN, but websites should fail like escalators, not like elevators. Too many people design sites like elevators though.


I love this visual, but elevators have way more safety mechanisms than poorly-designed web sites. Check out the source HTML of instagram.com for a great example. /-:


Well, my point isn't about safety, it's about usability. Escalators can still be used as stairs when they fail. Elevators can't be used at all.


I see. Good point.


I'm going to steal this comparison :)


>how will stuff even remotely work when we are a interplanetary

We won't be and aliens have better internet anyway :)


Read A Fire Upon the Deep.


He mentions packet loss of 10%. That's a different problem than a slow connection.


High packet loss and slow connections frequently go hand in hand (TCP over cellular modems).

This effect is exacerbated if the web site you're connecting to changes congestion control and TCP ramp up settings.


10% packet loss, though, is typically the threshold of "completely unusable" TCP connections. It depends, of course, but 5% I generally think of as "severely degraded" (e.g. ssh being almost unusable but still able to get some basic stuff done during a sev0) and 10% as "drive on-location because you aren't getting anything done" territory.


Exactly, but it's not a modern website problem.


How is it not a modern website problem, if that modern website is being viewed over wireless connections and follows Google's lead in disabling TCP slow start?


On the TV series Stargate SG-1, they envisioned using conventional EMR (like radio and TV signals) through wormhole connections.

Later plots faced invasion attempts through the stargates (permanent, direct-dial wormhole portals), so matter shielding was employed, and signals were used to authenticate who was on the other side of the connection before lowering the shield.


how will stuff even remotely work when we are a interplanetary species

I would not expect interactive anything when latency is 20min+. Usenet and listservs should work fine tho, maybe worth some tweaks to the underlying protocols if they're too chatty.


After spending a month in Mexico, including regions with spotty/inconsistent service from one minute to the next, I think the problem goes deeper.

Browsers are IMO terrible at mitigating intermittent and very slow connections. Nothing I browse seems to be effectively cached other than Hacker News. Browsers just give up when a connection disappears, rather than holding what they have and trying again in a little bit.

The only thing I used which kept working was DropBox. DropBox never gives up, it just keeps trying to sync and eventually it will succeed if there is any possibility of doing so.

I understand the assumptions of the web are different than an app like Dropbox, but I think it might be a good idea to reexamine those assumptions.


Back in my dialup days (90's), I used to use Opera since it had great tools for dealing with poor connections. E.g. IIRC you could have it only show images that were already cached, with a handy button to async load in new images that weren't already displayed.


Agreed, Dropbox is great in slow connections. Except when they auto update the client and you can't stop it. It tries to download 60 MB and you have to quit the client and restart it every time you need to upload or download a file until you can get the latest update.


Most of the web really sucks on fast internet connections too. Thanks to so many web developers thinking every dang thing needs to be a single page app using a heavy JavaScript framework. Add animation, badly optimized images and of course ads and it becomes really unbearable.

We keep repeating our same mistakes but just in a different way.


I saw a sarcastic comment a while back saying that webdevs should be forced to work on a Pentium II machine and they would cut their bullshit, I laughed, and moved on.

But after seeing many examples where sites were built on huge iMacs with no care for users running off a battery, a slower network connection, or an average 1366x768 display, I somewhat agree with the sentiment.


I tend to run web frontends in lynx (or links) to see if they can degrade well enough. If the core user flows don't/can't work then there's a big problem with the UI.


> The main table in this post is almost 50kB of HTML

Just for fun, I just took a screenshot of that table and made a PNG with indexed colors: 21243 bytes.


And converted to using single-character class names and reducing the CSS needed, it can be down to about 3KB, sans-compression.

(I manually minified the whole source; the original is 53313 bytes, 12438 gzipped, while my minified source is 25628, 10124 gzipped. Most of the bloat in the tables compresses really well, as is common with such things.)


Out of curiosity, how did you do it? I did as well, and I used Sublime's multi-cursor functionality plus some manual work to replace classes where needed. Mainly because I saw an interesting problem and I like (love) using the pseudo-automation tool that is Sublime Text 2.

Just curious how you went about it, if it was /all/ manual or some interesting technique.

My result was not too shabby: 7KB, I think, when I stopped because it was eating up too much time.


Mostly fairly manual, with a bunch of regular expressions and things like sorting the CSS block by background-color (Vim: `:sort /{/`). It was tempting to slurp it in Python, gargle it about a bit and spit it out neatly refactored, but I didn’t do it that way. A small quantity of Vimscript would also have been fairly straightforward. But no, I did it the hard way out of the wrong type of laziness. (Why did I do it at all? Who knows.)


Not related to the contents of the article, but please add a max-width styling to your paragraphs. 40em or so is good.


You're free to narrow your browser window — that's what I used to do, before all web sites decided that they know better than I do what width their text should be.


Agreed.

body{max-width:640px;margin:auto} The extra 33 bytes won't slow things down (unless you somehow hit the next ~1kb packet boundary)

line-height:1.5; would also make it more readable.


640px would be a problem — that's approximately 15em on my screen. I don't think that unit is supposed to be about hardware pixels, but...

Well, that's how it seems to be implemented anyway. Ems usually do better.


On hidpi screens they're emulated pixels, not hardware pixels (pretty much for this reason).


That's the idea. XFCE doesn't deal well with hidpi.


I don't disagree, but your web browser doesn't need to fill your entire screen.


It, um, only fills two-thirds.

That's a compromise. Too many sites don't deal well with thin browsers, but I need space for my terminals. This width usually works, although I sometimes have to do a bit of horizontal scrolling to get the article fully visible. (As opposed to the sadly inevitable sidebars.)

Margins are good, though.


My browser fills however much of my screen dwm tells it to fill.


Joey Hess (joeyh) has been writing about this for a long time (because he uses dial-up at his home). Here is a recent thread about a 2016 blog post on this:

https://news.ycombinator.com/item?id=13397282


> Please, please, if your site requires AJAX to work at all, then retry failed AJAX queries.

Anyone here have information on the most reliable heuristics to do retries?

Or information on the implementations used by say Gmail or Facebook?


I'd say that blanket advice to retry is not terrible, but not great either. First ask: is the AJAX call important enough to warrant a retry at all? If it is, did the response give any useful information about why there was a failure? If yes, did it tell you that something was wrong with the request? Then don't retry, as it's not going to work the second time, or the third time either. Did the request time out? Again, think about whether the request is important enough to hit a potentially already overloaded server.

Also, as a side note, any page that becomes unusable because an ajax request failed to return has some really broken design. Ajax retries are not a solution for that, go fix the design instead.
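
A minimal sketch along those lines (the constants are arbitrary): retry only on network failures and 5xx responses, never on 4xx, and back off exponentially between attempts.

    async function fetchWithRetry(url: string, attempts = 3): Promise<Response> {
      for (let i = 0; i < attempts; i++) {
        try {
          const res = await fetch(url);
          // Success, or a client error (4xx) that retrying won't fix: return it as-is.
          if (res.ok || (res.status >= 400 && res.status < 500)) return res;
          // 5xx: the server is having trouble; fall through and maybe retry.
        } catch {
          // Network failure or timeout: fall through and maybe retry.
        }
        if (i < attempts - 1) {
          await new Promise((r) => setTimeout(r, 1000 * 2 ** i)); // 1s, 2s, 4s, ...
        }
      }
      throw new Error(`Request to ${url} failed after ${attempts} attempts`);
    }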


I have seen https://www.wikiwand.com/en/Exponential_backoff used pretty regularly in many places.


I don't see how that is relevant.

1. There is extra connection information, or information can be sampled. E.g. query to see if anything is responding.

2. Our user just wants to get action ASAP. Not necessary to be a good citizen, our user just wants it to work.

3. Heuristics depend on what works in practice. HTTP/S is a complex, layered protocol, so it is hard to know what is right.

4. Connection conditions are extremely varied, mobile connection type, overseas location, ISP, IPv6, proxies, VPNs, etc all affect the connection parameters so finding a reasonable heuristic is hard.

5. Sampling connection information is difficult, because when it fails you also fail to log it.


By far the worst site I regularly use, from a page loading perspective, is my local newspaper.

It takes about 10 seconds before it loads to a usable state on a T1 connection.

If I pop open an inspector, requests go on for about 30 seconds before they die down. It's about 8MB.

http://www.telegraphherald.com/


My local paper is the same way, ublock origin blocks 50% of the requests coming from the site.

Builtwith.com shows 48 advertising libraries being used on the site.

The newspaper has a staff of less than 50, who even has time to look through and use all that advertising data?

The site is unusable.


When I opened that page and emptied my cache, I got a lot more than 8MB...

http://imgur.com/a/qt0Zd (59.9MB)

I don't think JavaScript is really to blame for this though, the problem here is they're dumping a whole bunch of full sized images when they could have used thumbnails.


If you wait 10 seconds to get the news you are more patient than me. I would quickly go elsewhere unless they deliver high quality news.


Most of my news I do get from elsewhere, but it's really the only local news source.


I might need a reality check here because this is feeling weird.

I'm currently building a web-based application to store JVM threaddumps. This includes a JS-based frontend to efficiently sort and filter sets of JVM threads (for example based on thread names, or classes included in thread traces). Or the ability to visualize locking structures with d3, so you can see that a specific class is a bottle neck because it has many locks and many threads are waiting for it.

I'm doing that in a Ruby/Vue application because those choices make the app easy. You can upload a threaddump via curl and share it with everyone via links. You can share sorted and filtered thread sets, and you can share visualizations with a mostly readable link. This is good because it's easy to collect and upload threaddumps automatically, and it's easy to collaborate on a problematic locking situation.

So, I'd call that a fairly heavy web-based application. I'm relying on JS, because JS makes my user experience better. JS can fetch a threaddump, cache it in the browser, and execute filters based on the cached data pretty much as fast as a native application would. Except you can share and link it easily, so it's better than visualvm or TDA.

But with all that heavyweight, fast-moving web bollocks... isn't it natural to think about web latency? To me it's the only sensible thing to webpack/gulp-concat/whatever my entire app so all that heavy JS is one big GET. It's the only sensible thing to fetch all information about a threaddump in one GET, just to cache it and have it available. It's the only right thing to do, or else network latency eats you alive.
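
Roughly, the webpack side of that is only a few lines of config; a sketch (the entry and output paths are just placeholders):

    // webpack.config.ts -- bundle everything into one versioned, forever-cacheable file.
    import path from "node:path";
    import type { Configuration } from "webpack";

    const config: Configuration = {
      mode: "production",                  // minify and tree-shake the single bundle
      entry: "./src/main.ts",              // pulls in Vue, d3 and the app code
      output: {
        path: path.resolve(__dirname, "public/assets"),
        filename: "app.[contenthash].js",  // versioned name => cache forever, one GET
      },
    };

    export default config;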

Am I that estranged by now, having worked on a low-latency, high-throughput application? To avoid confusion, the threaddump storage is neither low-latency nor high-throughput. I'm talking Java with 100k+ events/s and <1ms in-server latency there.


Kudos to the author for making the post readable using a 32kbps connection.

My apartment does not have a landline, not to mention any other form of wired communication, so my internet connection is relegated to a Wi-Fi router that's separated by two walls (friendly neighbour) and a GSM modem that, after using up the paltry 14GB of transfer it provides, falls back to a 32kbps connection.

Things that work in these circumstances:

- Mobile Facebook (can't say I'm not surprised here).

- Google Hangouts.

- HN (obviously).

- A few other videoconferencing solutions(naturally in audio only mode).

Things that don't work, or barely work:

- Gmail.

- Slack(ok, this one sort of works, but is not consistent).

- Most Android apps.

- Github.

EDIT: added newlines.


Have you tried the HTML version of Gmail?

https://mail.google.com/mail/h/

I've used this on my kindle keyboard while traveling but the free data speed might still have been faster than 32kbps.


> - Slack(ok, this one sort of works, but is not consistent).

Use the IRC bridge. It's a lot easier on your bandwidth/resources.


Gmail _could_ work: use the IMAP endpoint with an offline client. If you need to do mail on a 56k connection, it's either SquirrelMail or regular mail clients.


Can't browsers provide a service like

txt://example.com

that shows web content in plain text, no images, no javascript, nothing, something like readability but directly without loading the whole page first?

It would also be good for mobile connections.

* Wikipedia should be the first site to offer that txt: protocol, Google second.

* Btw, hacker news is the perfect example of a text only site.


I totally agree. I used to have a really bad mobile connection up until a few years ago (Spain), and still, when I use up all my mobile data it reverts to 2G.

So I know the pain and decided I wouldn't do the same to my users as a web developer. I created these projects from that:

- Picnic CSS: http://picnicss.com/

- Umbrella JS (right now website in maintenance): http://github.com/franciscop/umbrella

Also I wrote an article on the topic:

- https://medium.com/@fpresencia/understanding-gzip-size-836c7...

Finally, I also have the domain http://100kb.org/ and intended to do something about it, but then I moved out of the country and after returning things got much better and now I have decent internet so I lost interest. If you want to do anything with that domain like a small website competition just drop me a line and I'll give you access.


Where did you live in Spain? In Spain my mobile internet is far better than, let's say, in parts of the UK I work, let alone in China. And I live in the mountains down south, an hour away from the nearest city. Best so far have been Thailand + Cambodia. Just blazing fast, even in the rainforest with multiple laptops/phones tethered and cheap as chips. If I can have anywhere between 3G/4G stable, everything I need (including almost all heavy sites work fine); in the south of Spain I get enough to load heavy sites and Skype, download torrents, watch Netflix per device connection. In Cambodia I could do all that in the rainforest, away from everything, for a fraction of the price, with 4 devices tethered. I was impressed. The connection here in Hong Kong I'm on now is worse than that, and that's in the middle of the city.

But yes, developers (including me) not accounting for slow connections is a pet peeve of mine. As I often do it myself, I do understand the issue; it's client constraints, time/money constraints, and audience. But it does annoy me when often-used sites (notably airline sites and banking sites) are top-heavy and their apps time out, because yes, I do often have a bad connection.


In Valencia, but this was around 4 years ago and I had a data plan that was also 3-4 years old because I used wifi almost everywhere. As I was a student back then the only problem was the bus from my home to the university and back.

I was with Hacker Paradise for 3 months through SE Asia and I totally agree. I have screenshots yet-to-tweet comparing the great packages from Thailand with the prices in Spain and it's absolutely ridiculous.


Not only Spain; most of the EU.


A unit of measure I find appropriate is the "Doom"; here's a 2015 prediction:

https://twitter.com/xbs/status/626781529054834688


> In the U.S., AOL alone had over 2 million dialup users in 2015.

I've seen this figure a few times before, and I wonder every time who these users are. Specifically I'm curious what the breakdown is between people who

- Really don't have a better option available (infrastructure in this country is unbelievably bad in some places, so I wouldn't be surprised at a large size for this group)

- Are perfectly happy with the dialup experience so they don't switch to something better

- Don't know there are better options so they stay with dialup

- Don't even realize they never cancelled AOL and are still having it auto-debited every month

- Some other option I didn't think of


"Pretty much everything I consume online is plain text..."

Yes.

My kernel, userland, third party software and configuration choices, the entire way in which I use the computer, are optimized for consuming plain text.@1

As a consequence, the web is very fast for me compared to a user with a graphical browser. This is why every time some ad-supported company claims they are offering a means to "make the web faster" it makes them appear to me as even more dishonest. They are, at least indirectly, the ones who are responsible for slowing it down. They are promising to fix a problem they created, but will never really deliver on that promise. Conflict of interest.

@1 I find there is no better way to optimize for fast, plain text web consumption than to work with a slow connection. It is like when a batsman warms up with weights on the bat. When he takes the weights off, the bat feels weightless, and the velocity increases. When I spend a year or so on a slow connection and adjust everything I do to be as bandwidth-efficient as possible, then when I get on a "fast" connection, the speed is incredible.

I also use the same technique with hardware, working with a small, resource constrained computer. When I switch to a larger, more powerful one, such as a laptop, the experience is that I instantly have an enormous quantity of extra memory and screen space, for free. I do not need a HDD/SSD to work. My entire system and storage fits easily in memory.

Now if I do the opposite, if everyday I only worked on a large, powerful computer with GB's of RAM with a fast connection, then switching to anything less is going to be an adjustment that will require some time. I would spend significant time making necessary adjustments before I could get anything else done.


"Google’s AMP currently has > 100kB of blocking JavaScript that has to load before the page loads"

Wasn't Google claiming that by using AMP you can actually make web pages load faster, since it is a stripped-down form of HTML?[1]

From what I am hearing from the author (Dan), bare HTML with minimal JS and CSS should (in theory/reality?) load pages faster.

https://moz.com/blog/accelerated-mobile-pages-whiteboard-fri...


Looking at that first table, one question jumps out at me: what the heck is Jeff Atwood doing on pages at Coding Horror that makes them weigh 23MB?

I mean, I'm all for avoiding premature optimizations, but 23MB for one page is just... wow.

EDIT: As a sanity check, I just tried loading the CH home page from a cold cache myself. Total weight: 31.26MB. Yowch.


Just been to Atwood's homepage, https://blog.codinghorror.com/.

Looks like there's some lazy loading of later content, so I actually get accessible content very quickly; indeed, I thought something was up, as the page appeared to be only a couple hundred kB, which didn't match your description.

Scrolling down, I continue to see content loading (in the FF Network Monitor). Reviewing it, I see that YouTube is responsible for over 1MB of "base.js", which gets downloaded 7 times, and ¼MB of CSS files (again 7 times over). Now, Atwood may be partly to blame, but Google... shouldn't they at least do better? Coding Horror is also loading very large image files [1] (which get scaled down) for me, perhaps a "retina" handling issue.

[1] https://gtmetrix.com/reports/blog.codinghorror.com/d1x7zZBk


Appears to be mostly lack of image optimization (and he loves gifs). A common issue with blogs.


Images are meticulously optimized; the problem is, retina is expensive in file size.


Seems that images such as the superman image [0] or the pinball image [1] currently on the front page are much, much larger than they should be -- body max-width is 700px (70% of 1000px). Even for retina that's overkill. If you want to get really fancy, you could restrict all (served) image widths to under 700px and make a 1400px @2x version to use in a srcset (see the sketch after the links below).

[0] https://blog.codinghorror.com/content/images/2017/01/help-ke...

[1] https://blog.codinghorror.com/content/images/2016/11/pro-pin...
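
Purely as an illustration of that suggestion (hypothetical file names, not the actual blog assets), the srcset variant could look something like:

  <!-- hypothetical paths: 700px version for 1x screens, 1400px version for 2x "retina" screens -->
  <img src="/content/images/2017/01/pinball-700.jpg"
       srcset="/content/images/2017/01/pinball-700.jpg 1x,
               /content/images/2017/01/pinball-1400.jpg 2x"
       width="700" alt="Pro pinball machine">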


I'm working on a visually lossless optimization tool [1] and could reduce the images a bit further, from 20.8MB to 16.11MB (-22.6%). But you're right, hidpi images are the main cost factor; adding an srcset polyfill would be a good measure.

[1] http://getoptimage.com


One thing that isn't mentioned is webfonts. On 2G I can load the whole page, CSS, JS and some images, but can't read anything because the fonts aren't loaded yet. Here is a gallery of a couple of examples: https://imgur.com/gallery/wfjoT
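
One mitigation, sketched here with a hypothetical self-hosted font (and assuming the browser supports the font-display descriptor), is to let text render in a fallback font while the webfont downloads:

  <style>
    @font-face {
      font-family: "BodyFont";                            /* hypothetical name */
      src: url("/fonts/bodyfont.woff2") format("woff2");  /* hypothetical path */
      font-display: swap; /* render fallback text immediately, swap the webfont in when it arrives */
    }
    body { font-family: "BodyFont", Georgia, serif; }
  </style>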


My team has just started work on a new SaaS product. We are taking articles like this to heart and aiming to keep pages light and fast. We are using very little JavaScript.

Let's see if the market rewards us or punishes us for this approach...


There are more than enough other ways for the market to punish you. :-/

Your approach helps with reliability (fewer 3rd-party and browser needs) and accessibility (workable with lynx and screen readers) too. Latency makes people want to scream.

It's also important to remember that, to the customer, you are just another browser tab. The customer's computer is not dedicated to you. They could have 100 or more other tabs open. The customer may even have reason to open more than one copy of your site simultaneously, with the same or a different login, in the same or a different browser. Bogging down their computer makes them unhappy and resentful.


Not just the web; mobile apps also suck when you have a slow connection. For example, you can't open iTunes when you're on GPRS: it tries to connect to Apple Music and locks you into a screen with a big Apple logo. Same with Spotify. Just try your apps on GPRS :) I camp every weekend, so I noticed how much they suck a long time ago.


Did most of the web suck when we were on 28k or 56k modems? I'd argue that it didn't, and yet even with the light weight of pages back then, they loaded far more slowly than today's pages (even heavy ones) do over our much faster connections.

So really, I think what the author is observing is that having experienced high-speed reliable connections, it is very disappointing to move to a much slower connection. For the emerging tech markets, I can imagine the experience would not be great if the load was long enough to cause timeouts and connection failures, but at the same time, the 99% experience, as it probably was when the web was born, is "holy crap look at everything I have access to now!"

Yes, there are some really terribly optimized and redirect-happy sites out there and yes, you should do everything you can to make your page speedy. Everybody benefits when you do. I think, though, that this is more of a case of "let's be thankful for and aware of what we have," and "if you suddenly have a slower connection you might find yourself annoyed" more than "most sites suck on slow connections."


> Did most of the web suck when we were on 28k or 56k modems?

I'd argue that it did, just like having 32MB of RAM and Windows 95 did. But almost everyone was in the same place, including the people making content for websites we went to, so page load times were as good for their minimal experience as the technology would let them be. Even in 2002, my family was still on dial-up, and it sucked because I knew how much was out there that just wasn't feasible for me to access.


> Did most of the web suck when we were on 28k or 56k modems?

Yes, lots of the early web sucked over dialup.


While waiting for some JS-laden crapfest to load earlier, it occurred to me that I haven't heard the term "World Wide Wait" in many years. But here I am experiencing it all over again.


I think the average number of requests per site and the average page size in megabytes went up over time. They're much higher now, yet the content we browse is actually very similar: people read news like they did 20 years ago, but now their news site requires 2MB of data when it required 100kB 20 years ago.


> Did most of the web suck when we were on 28k or 56k modems?

Yes! Pages loaded in 10 to 20 seconds.


I think you've missed my point. Yes, that was slow, but if it was literally the best, then it was awesome. The same thing goes today for everybody out there with access to gigabit. There's no reason to complain about it even if downloading takes some time still, because it's not like there's anything better.


> Yes, that was slow, but if it was literally the best, then it was awesome

I remember people complaining that the web was too slow compared to gopher, even on pages without images.


Wasnt this "backwards" compatability the reason blizzard was always so succesfull? Using old but sturdy tech, that would work on the slowest of machines.

Actually one could make a whole slowMo WebStandard from this. No Pictures, just svgs, no constant elaborate javascript chatter, no advertising. No videos, no music, no gifs, just animated svgs. Actually, that would be something lovely. Necessity begets ingenubeauty.


I've been very tempted to start publishing content on the Gopher protocol; it's immune to the cancers of the modern web.


> Pages are often designed so that they’re hard or impossible to read if some dependency fails to load. On a slow connection, it’s quite common for at least one dependency to fail. After refreshing the page twice, the page loaded as it was supposed to and I was able to read the blog post, a fairly compelling post on eliminating dependencies.

slow clap

His data on steve-yegge.blogspot.com is particularly unfortunate: Steve's (excellent) posts are almost completely pure text, and there's no reason for them to fail to download or display, except that Google demands that one execute JavaScript in order to get a readable page.

> if you’re browsing from Mauritania, Madagascar, or Vanuatu, loading codinghorror once will cost you more than 10% of the daily per capita GNI.

Maybe the social-justice angle can convince some people to shed their megabytes of JavaScript and embrace clean, simple, static pages? There's probably some kid in rural Ethiopia who might have been inspired to create great things, if only he'd been able to read Steve Yegge's blog.

> The “ludicrously fast” guide fails to display properly on dialup or slow mobile connections because the images time out.

slow clap

> Since its publication, the “ludicrously fast” guide was updated with some javascript that only loads images if you scroll down far enough.

Incidentally, is there any way we can enforce the death penalty against people who load images with JavaScript? HTML already has a way to load images in a page: it's the <img> element. I shouldn't be required to hand code execution privileges over to any random site on the Internet in order to view text or images.
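
For reference, the no-JavaScript version being argued for here is just (hypothetical path and dimensions):

  <!-- no script required: the browser fetches and renders this on its own -->
  <img src="/images/diagram.png" width="640" height="360" alt="Architecture diagram">
  <!-- explicit width/height let the text lay out and stay readable
       even if the image is slow to arrive or fails entirely -->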


A lot of people here are talking about how 2G connections are "almost unusable" and how this should be optimized server-side and so on. I'd just like to point out that there are browser that cater to this specific demographic (slow connections).

Ever since the days of running Java applications on my old Sony Ericsson phone, Opera Mini has been my favorite. As far as the browser is concerned, the website can be as heavy as it wishes -- it will pass through Opera's proxy and be compressed according to user preferences. This could include not loading any images (nothing new), or loading all images at very low quality. You can also select whether you want things like external fonts and JS to load, or if you want to block those too. When I moved to a new country, my first SIM card had one of those "unlimited but incredibly slow" plans. Opera Mini was a life saver.

I guess my point is that we shouldn't get stuck in optimization paralysis if there is no sound and standardized server-side way to solve this issue (and there doesn't seem to be). It would be nice if browsers had a way to tell web servers that they're operating under low bandwidth, like the do-not-track flag, but AFAIK this does not exist.

Until that exists, and I don't mean to suggest we go back to the days of "Made for IE9" here, maybe some responsibility needs to be shifted to the client side. As long as you design your websites in a sane way, they will pass through these low-bandwidth proxies with flying colors. Maybe you don't need to spend hundreds or thousands of man-hours optimizing your page when you could insert a discreet indicator at the top of the screen, for anyone taking longer than X seconds to load, noting that there are many browsers available for low-bandwidth connections and that they might want to try one.


But HN almost never sucks, even on slow connections. That's why, when I'm on mobile, I only read the comments and not the articles :)

By the way, here's how we can collectively make the web faster, safer and more fun to use: [1]

[1] https://news.ycombinator.com/item?id=13584980


I was exasperated by his mobile example. Why? This is my life with Comcast (the faster of the two "choices"!) in Palo Alto. I also have Comcast in my ski house in the sticks and it's faster than Palo Alto. But my wired connection is so slow that I sometimes use my phone on LTE to read a page that hangs on Comcast.


Lately my Pixel has been achieving sub-KBps speeds on a very good WiFi connection (laptop in the same room: 100MBps), and it reminded me of the old days with dial-up on Win 98 -- but worse. The estimated download time for the LinkedIn app (70MB... gg) was a whopping 6 months! What a great way to get me to guzzle up my mobile data.


I really wonder how much time designers and developers actually spend on thoughtful testing vs. A/B or automated testing. Sometimes the problems on websites just seem so... clueless.

My current pet hate is news sites that float up a modal window asking me to turn off my ad blocker because bidness. OK, I turn off AdBlock Pro for that domain, turn off HTTP switchboard, and it still won't load. Why? I dunno, try again, still won't load. OK, guess I'm never coming back. Obviously it must be some other extension, but without any technical details how can I tell?

For that matter why did anyone think it was ever a good idea to float dialogs over web pages to get people to share (not submit) their email address? Has anyone ever looked at how poorly these display on mobile devices? Or how making it hard to close floating dialogs is a really good way to annoy people?


> if we just look at the three top 35 sites tested in this post, two send uncompressed javascript over the wire, two redirect the bare domain to the www subdomain, and two send a lot of extraneous information by not compressing images

So uncompressed javascript and images are bad, but I thought apex-domain-to-www-subdomain redirection was an optimisation, as the apex domain can often only point to a single server but the subdomain can point to a range of geographically well-distributed CDNs. So rather than going to North America for every request, the browser only needs to do it once and then the rest can come from a regional CDN. Am I misunderstanding something, or does this also break down on a slow connection?


The apex domain can only use A records, i.e. point directly to an IP address. It can have multiple A records, ebay.com does so:

  host ebay.com
  ebay.com has address 66.135.216.190
  ebay.com has address 66.211.162.12
  ebay.com has address 66.211.181.123
  ebay.com has address 66.211.185.25
  ebay.com has address 66.211.160.86
  ebay.com has address 66.135.209.52
Without a CNAME (alias) record, eBay needs to control the DNS resolution. Most people using a CDN don't, so they must use a subdomain.
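
For illustration only (placeholder names and a documentation IP, not anyone's real records), the subdomain pattern looks like this:

  host www.example.com
  www.example.com is an alias for customer123.cdn-provider.example.net.
  customer123.cdn-provider.example.net has address 192.0.2.10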


Ah, I was unaware that there could be multiple IPs on an A record, thanks for that. If I'm understanding this right though, the extra IPs would just be for redundancy and resilience and cannot be relied on for geographic routing? In this case ebay.com redirects to www.ebay.com.


It's not that there are multiple IPs in the A record; it's that there are multiple A records, each with an IP address.

For geographic routing, there is a clever trick that can be utilized using a technology called Anycast. Anycast is basically a way of assigning the same IP address to multiple machines, so that requests to that IP address result in connecting to the machine that's closest to you, route-wise.

Providers sometimes use Anycast DNS Name Servers and configure them to provide the different IP addresses depending on which name server people connect to.

So, if someone wants to determine the IP address of ebay, their DNS client connects to ns1.ebay.com and asks "hey, what's the IP addresses for the A records for ebay.com" and ns1.ebay.com replies with the list.

But ns1.ebay.com might be an Anycast DNS Name Server that's close to them and it provides the list of IP addresses closest to that name server. Someone on another continent might reach a name server with the same name and ip address, but it's a different machine in a different data center. It would provide a list of IP addresses on that continent.

I do something similar with one of my sites. I rent three VPS's from buyvm.net (who has Anycast setup) that have the same IP address and are located in Las Vegas, New Jersey, and Luxembourg. I pay less than $10 a month in total and run my DNS name servers there.

Clients that connect to the name server in Las Vegas get an IP pointing to a Digital Ocean load balancer in San Francisco proxying data from a few front-end VPS's.

Clients that connect to the name server in New Jersey get an IP pointing to an OVH Canada load balancer near Montreal.

Clients that connect to the name server in Luxembourg get an IP pointing to an OVH load balancer in the North of France.

The result is a responsive service that has amazingly low latency for the US and the EU. Gonna try to set up some infrastructure in Singapore soon to make things faster for Australia and Asia.


Could a lighter-weight website serve more users for the same dollar of bandwidth than a bloated website?

It seems to me there's a business strategy here where, rather than pushing for more ads, a website pushes for lighter weight and promises its few advertisers a wider audience.


I actually had a similar idea about radio stations.

Currently, FM radio stations are typically so clogged with commercials that I just switch back and forth whenever the music stops. The sole exception in my area is KZTQ "Bob FM", which has a neat policy: 60 minutes (ish) of nonstop music (aside from their normal station ID stuff), followed by at most two or three commercials, then repeat. I've found that the commercial breaks are short enough that I'm more willing to actually listen to them, since I know that the music will be back in less than a minute or so.

I reckon that has a significant value-add in terms of ad impressions, and thus could offset the normally-decreased ad revenue by charging more per ad.


As someone who had fiber internet and then had to spend a year and a half on 1.5Mbps DSL... (hell)... I can say I agree that it sucks...

I can also say that at no point did I feel entitled to have it work better for me. I don't understand this level of entitlement (I don't like your ads, I don't like your layout, I don't like your visual effects...)... just leave the site.

The modern web isn't simple static pages... it's not going to revert to that, either. We're developing actual applications in the browser now... those aren't easily translated to static, simple pages...

This is today's "grumpy old engineer" argument...


The other issue I have with web page bloat: memory-constrained mobile devices are able to cache far fewer pages than a desktop computer, and navigating among multiple tabs, etc. gets slowed down to internet connection speed.


I'm gonna read the article, I promise, but is the title really "If your internet is bad, the internet is bad"?


I'm not gonna lie, if my fat high-res site images make life a lil harder in Vanuatu but convert a bunch of black-turtleneck d-bags in San Francisco to customers, I know which side of the bread the butter is on.


I think that's probably part of the point of the article. I would read it less as a statement like "make your webpages smaller" and more as "be aware of the bloat of modern webpages".

If your target audience has great internet, then ignore optimising for size. But be aware that people travel, and your market may change, so what is OK in SF may become unusable if they go on holiday, move offices, or need to work off roaming data due to an outage.


Ah, I'm just joking around. Mostly. ;) I agree with, and mostly implement, the vast majority of Google's recommendations vis-à-vis site weight and speed (when I have time/budget to do so), because I regard making sites fast as a signifier of competent professionalism. Any hack can make a sucky, slow, heavy website. Making a website that really cooks is one of several things I use to justify my rate. ;)


I know I'm a dick, but I love all you guys and I thank you in advance for your forbearance. I need a sandwich.


Slow connection is okay, it's just slow. Now spotty connection, or high latency, that's the killer.

Webapps that make 50 requests to download all the JavaScript and CSS and talk to the API and get 3 images really really really don't behave well when 12 of those 50 requests fail or take 30 seconds to complete. Honestly, I'd rather have slow internet than packet lossy internet.

Still don't know why, but my Xfinity router routinely gets into a state where it drops the first 10 or so packets of any request. The first `ping 8.8.8.8` takes 3 seconds, the rest are the usual 0.1 second. Terrible.
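
On the application side, one partial defense (a rough sketch, not any particular framework's API) is to give every request an explicit timeout and a retry or two, so a single stalled asset fails fast instead of hanging the whole app:

  // Sketch: fetch with a timeout and a couple of retries.
  async function fetchWithRetry(url, { timeoutMs = 5000, retries = 2 } = {}) {
    for (let attempt = 0; attempt <= retries; attempt++) {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), timeoutMs);
      try {
        const response = await fetch(url, { signal: controller.signal });
        if (response.ok) return response;
      } catch (err) {
        // timed out or network error: loop around and retry
      } finally {
        clearTimeout(timer);
      }
    }
    throw new Error('Giving up on ' + url);
  }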


What do you mean by latency in this context?


Probably really means highly variable latency, i.e. jitter, where the RTT spikes horribly, which will cause packets to be assumed dropped.


Wow, really? Who knew overuse of JS and fancy graphical effects where they're not needed could negatively impact user experience? Could it be that all the web devs using 20 CDNs, cramming 900 frameworks, 100 externally provided analytics, advertisement providers and fancy layout eye-candy were wrong all along? What a surprise!

I'm already sick when I have to visit a webpage and it won't even load ANYTHING if I don't enable scripts on it. At least load the god damn text, I don't care if it'll look like trash, just don't show me a blank page...

The irony is that everyone calls for people to not use Flash, and then they go out of their way to recreate the abysmal experience without it, so really nothing changed as far as UX goes. Remember when pages didn't load at all unless you had flash installed? Well here's some nostalgia for you, won't load unless you run all the JS on the page and then you have to "enjoy" a bloated joke of a website, but Jesus does it have eye-candy!!!


Every time I get angry about this I'll open https://purecss.io/ or http://skytorrents.in and look at the source. It's a form of meditation to browse fast websites.




That uses Google Analytics


Thanks!! First I heard of purecss. Will use asap


JavaScript is not the enemy here; it's very possible and easy to make full SPAs ad-free, with judicious use of micro libs, lazy loading of images/assets, and non-blocking styles/fonts/code. The problem is just not caring or not knowing: misuse of the technology rather than the technology itself.
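
A minimal sketch of the judicious kind of image lazy loading (assuming data-src attributes and IntersectionObserver support; older browsers would need a fallback that simply loads everything):

  <img data-src="/images/photo-1.jpg" alt="Photo 1">
  <script>
    // Swap data-src into src only when an image approaches the viewport.
    var observer = new IntersectionObserver(function (entries, obs) {
      entries.forEach(function (entry) {
        if (entry.isIntersecting) {
          entry.target.src = entry.target.getAttribute('data-src');
          obs.unobserve(entry.target);
        }
      });
    }, { rootMargin: '200px' }); // start fetching a little before the image scrolls in
    document.querySelectorAll('img[data-src]').forEach(function (img) {
      observer.observe(img);
    });
  </script>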


Agreed. I really enjoy building static websites. With no databases, no server-side logic or caching, everything becomes extremely fast and simple. It does mean moving any logic to the client-side with Javascript though.


Lazy loading really sucks when you're left with a blurry image because it timed out while serving the full image (can happen even with an acceptable connection).

Unless it's really clever, I'd prefer they just let the browser do its thing with an img tag, because that worked fine in the 56K days.


As a JS hater....

I completely agree. I think the problem is that bad designers are misusing a good tool. It's like salt: If I add a little to my meal, it makes it better. If I add a LOT of it to my meal, it doesn't keep getting better.

Unfortunately this causes a knee-jerk reaction to anything JS. Although I don't think JS is to blame, I think the hatred of JS comes from a good reason.


Honestly, they don't even look that great. Lots of sites could be using lots of modern browser features to be fairly innovative from a visual design and user experience standpoint, but it's mostly just really inefficient (and inaccessible, with no compat fallbacks) implementations of the same old shit.


Yes, a lot of these web pages are the size of MP3 files. It's just fucked up.

We decided static HTML webpages are sooo 1999. Yet they remain the most secure and user-friendly medium for delivering value.

Bring back the 1999 frame sidebar. What was wrong with that?


> Bring back the 1999 frame sidebar. What was wrong with that?

That's funny. Back in the day, every Real web developer learned that frames are Evil and must be abolished, because they break the back button, and the poor user can't provide a link to the view he sees because the state of all the frames isn't encoded in the URL.

Then Web 2.0 happened and all the Cool web devs knew it was time to start abusing Ajax, using modal dialogs, breaking the back button, and turning simple websites serving text and images into complex, stateful web apps, and in doing so, ensuring that people don't have nice URLs that encode the view they see, for linking.

Hello??


There's a difference between making the web more janky and hard to use and making pages simpler. I think there is a balance here we can find.


I think AMP is that "bring back the 90s movement". No Javascript. Just super light and fast loading webpages like in the good ol' days.

God I miss the late 90s and the internet. Even looking through neocities gives me a pang of nostalgia.

Now everything has to be "Material Design" or "Flat".

¯\_(ツ)_/¯


Everyone thought it was silly when I did that for my site (minimal CSS, no JS, static pages with Hugo). But yeah, I agree, that's the right future for the web and design.


Many younger devs may have never lived outside of large, very well connected cities (NY, Chicago, the West Coast, etc.). They may assume that everyone has the connectivity and speeds that they do, or at the least they may not fully understand what 1.5 Mbps down really feels like.


>Wow, really? Who knew overuse of JS and fancy graphical effects where they're not needed could negatively impact user experience?

Clearly not enough people, because it keeps happening. I think it would also help if people kept in mind that the internet is global, it isn't just for developed nations.


It's not a matter of whether the internet is global, it's who your audience is... if your audience is primarily in developed nations, then the rest of the world isn't much of an issue.


If you're running a purely domestic web store, maybe, but people in developing nations are not that different from people in developed nations; they can be just as interested in a wide variety of topics. That's what makes the internet so appealing and wonderful.


Sure, we're humans and all, but jmcdiesel isn't saying that people in the developing world aren't interested in similar things, just that many businesses do have audiences which are limited to certain countries.

Much of the internet is a business, not a passion project. There are plenty of businesses that are completely OK with being inaccessible to users on 2g/3g in the developing world.


Even if you're running an international web store, shipping to developing countries plus the increased rate of fraud and different payment technologies generally mean that you're not keen to expand to those places anytime soon.


"Why hasn't the campaign hired anyone to make phone calls?"

"Bill, phone calls are so 1992, they hired a web dev to create online polls"


Even if your audience members are global, the West is going to be where most of your revenue streams are. If you are Facebook, it may be worth optimizing for rural India, because they have few places left to grow, but for most companies it is just not worth it - and companies exist to make money.


reminds me of node-noop


See also: The Website Obesity Crisis[0] by Maciej Ceglowski

[0] http://idlewords.com/talks/website_obesity.htm


It sucks if you have a fast connection too, because then your CPU and RAM suffer instead. And as you add addons to rectify the many offending web pages, their performance penalty quickly equals that of crappy JS. I was so happy with Xombrero as my browser, but it's stagnant and insecure now. I do like my Firefox, but with all the blocking addons it's slow, and without them it's slower (not that it's its fault).


I wonder if there is room for a product, a kind of browser-in-a-website, that would eat those big-ass webpages (server-side) and spit out just the text and (heavily compressed) jpegs. With a little layout to match the original website. Something like how streaming services adaptively subsample data, or like how NX tries to compress the X window protocol. Obviously this would be patchy, but it could be much better than "FAIL".


Instead of making sites that try to predict the unpredictable, I'd rather ask the question if TCP is still the right tool to use.

There shouldn't be a reason for a big page with many resources to not load - it should just be slower. Yet I can make the same observations as soon as my mobile signal drops to EDGE: the internet is essentially unusable as soon as there's packet loss involved and the round-trip times increase. Interestingly, mosh often still works beautifully in such scenarios. So instead of focusing on HTTP/2 or AMP (and other hacks) to make the net faster for the best-case scenario, I'd rather see improvements that make it work much more reliably in less-than-perfect conditions. Maybe it's time for a TCP2 with sane(r) defaults for our current needs.


Rather than a "TCP2" that, based on name alone, would be far too likely to aim for semi-backwards compatibility but tweak a few things to be slightly better in general but mostly just better for the specific use cases of the one or three top contributing companies, why not just push for the adoption of one of the existing alternative transport layer protocols?

For example, there's SCTP. From what little I've read about it, it seems as if it has most of the benefits of both TCP and UDP, with the main downside that some firewalls and routers may need to be upgraded. Being an existing protocol, however, there are already working implementations and some amount of network support. Maybe it's even fully usable as-is today!


SCTP can't go through NAT (there is an IETF draft in the works for that). But SCTP already exists in your favorite browser: Chrome and Firefox both use usrsctplib (https://github.com/sctplab/usrsctp) to provide SCTP over UDP for WebRTC.

But SCTP over UDP over DTLS (or just SCTP over DTLS) also needs to happen, as you can't use TLS with SCTP's unordered mode or multihoming.

SCTP is slowly gaining traction in userspace, beyond its use in mobile operator networks (LTE).


I'm out in the country enough that I have 3 meg area WiFi, with a wife who enjoys streaming and Facebook and two boys who enjoy online gaming and streaming. Not much left for me. At least it's all-you-can-eat, and it avoids satellite.

Oh, we find Amazon, IMDb, and Facebook are the biggest pigs on a slow connection.


It only sucks if you've experienced a fast connection.

We generally don't target hardware from '98, so why should we target bandwidth from '98? Current smartphones and computers are really powerful, and most applications are targeted towards those devices. Native apps don't have this insane requirement to support hardware from two decades ago.

The web is so much more than text in 2017. And before you whine about the ads and useless stuff, go read a tabloid and whine about the waste of paper, or try to watch TV and whine about the electricity and time you're wasting watching advertisements.

Media has and always will be like that.

The time spent on backwards compatibility and optimizations are usually not worth it anyway.

Do I think mostly text sites should be 5mb? Obviously not.


For you it's bandwidth from '98. If you go outside your obviously modern country, it's a 2017 problem, and it will stay this way for a long time. It's not so much about backwards compatibility... it's more like "keep it working" with slow bandwidth. Posting this from the Philippines, where I'm currently happy with a stable 750Kb/s connection.


> Let’s load some websites that programmers might frequent...

> All tests were run assuming a first page load...

Ehh, but is that really a good test for sites people "frequent"?

What happens to the heatmap when we're talking about subsequent page loads?


My main source of clients is people suffering from website bloat because they have no idea how to build a website. They jump on every shiny JavaScript library, load 8 different versions of Bootstrap, and then 5 fonts from various sources, all from CDNs. I wish I were exaggerating, but it's such a mess. In every single case, 90% was garbage, and all they really needed was a nice semantic CSS sheet. Unless you are developing a web app, or 100% need your Ajax calls, you don't need JavaScript. Is this the same for others, or am I just in a less technically inclined area?


Yeah, I take my 100Mbps connection for granted; I'm developing an image-oriented web app for the Philippines, and holy crap, the one guy was lucky to get 0.3Mbps.

So... I had to severely redo the code to pull 50px-wide images, blur them in, and only load the visible ones (depending on screen dimensions), then add a 2-second max refresh thingy (yeah, I'm just making this loader-interrupter thing). It's been a mess; I feel pretty stupid sometimes. Why can't I get this... JavaScript. Yep, I am lucky to have Google Fiber (and I have the cheaper plan too).


Quora is unusable on a slow connection. It literally shows a popup that obscures content if you lose high speed connectivity or drop packets.

However, the web is even worse if you have no connection at all. This is important because if we provide internet access at a municipal level, we can reach 100% adoption among our pluralistic educational system and progress to primary learning materials that are web based (CA 60119 for example prohibits any primary educational materials not available to all students both in the classroom AND AT HOME).


I have a different suggestion.

Build software that can work on a distributed architecture, so people in Ethiopia can run their stuff on intranets and mesh networks and only occasionally send stuff around the world.

What broadband has really caused is this assumption that the computer is "always online". Apps often break when not online. When in reality there shouldn't even be "online/offline" but rather "server reachable/unreachable". And you should be building offline first apps, with sync across instances.


I live in a rural area. There are three options for internet - satellite (limited data allowance - but decent speed), dial-up or a local ISP with a Motorola canopy system. I chose the last option. I get 100 KB/sec max download speed (on a good day). Divide by the 4-5 people in the house regularly using the Internet and it gets really slow, really quick. Many times I just give up and shut the computer off or I browse using Lynx.

And nope - no cell phone signal here either..


> The flaw in the “page weight doesn’t matter because average speed is fast” is that if you average the connection of someone in my apartment building (which is wired for 1Gbps internet) and someone on 56k dialup, you get an average speed of 500 Mbps. That doesn’t mean the person on dialup is actually going to be able to load a 5MB website.

As someone mentioned below too, the median value would make much more sense in this case (which it often does, it seems).


Median also does not make sense. Dialup is not as common as other connections anymore. You would get pretty high ADSL Mbps as the median.

Generally the distribution of bandwidth is multimodal, and similarly for latency.


YESSS!! Ever have your 4G connection drop to shit? Well, imagine that, but 24/7, on your wired connection; that's what many people live with today :(


I'm very impressed with Dan's methodology here, and it matches my own experiences with dialup.

One thing I wonder about: it seems many dialup ISPs these days provide some kind of "accelerator", probably a web proxy that avoids some of the issues with timeouts, perhaps compresses some content, etc. So it might be that many of the remaining dialup users don't experience quite as many problems as Dan found.


> A pure HTML minifier can’t change the class names because it doesn’t know that some external CSS or JS doesn’t depend on the class name.

After everything has been parsed, it would know (the browser knows).

Couldn't a proxy service produce super lightweight, compiled web pages? I seem to remember Opera used to offer something along those lines, but I may be wrong.

Would there be commercial value in building such a tool?


But like, if you don't fill your website with megabytes of useless bloat, you'll get called out, because "it's 2017".


I think about this a lot. And I think it's really easy for a page weight argument to fall into an "old man yells at cloud" tone. But I also want the industry to move towards simpler HTML and such, so, I've been thinking up an argument that companies will buy. I'm really bad at it though. Maybe the extra African market will open up new ad revenue?


Shameless plug but I did something similar in 2014 and used PhantomJS to analyze the content of the top 1000 Alexa sites: http://dangoldin.com/2014/03/09/examining-the-requests-made-...


I live in a very remote town in the North Cascades in Washington state and work remotely in development. I'm on a 1.5Mb DSL connection, and while it's slow, it's consistent, and I rarely have issues with Skype / Hangouts / Slack / Git / normal work. Downloading large data dumps is another story, but you learn to plan ahead.


I remember dial-up on a really slow modem back in the bbs days.

I was reminded of slow connections with T-Mobile 2 years ago while in the Philippines. They give you free data in 120 countries, but it's throttled.

This was my main motivation for rewriting my side project using highly optimized css and not a large framework that uses web fonts and bloated libraries.


Try it with a bad connection and a 1st-gen iPad. :-)

You basically need to disable JS altogether to have a chance to even view many websites. And some, well, just crash the browser regardless.

It's amazing how much the web evolved in the past few years...

There used to be a time when supporting 10+ year old browsers was a matter of course. No longer.


Website I'm currently working on has no JS and weighs an average of 15 KiB per page. Loads in <20ms.


There is a project called txti which provides free hosting for simple websites edited in Markdown: http://txti.es

The idea is to make the content available to all web users, as fast connections are not as common as we might think.


I will be blunt: you would be amazed at the sites that suck even when you have 1G. I always used to think "damn, my DSL is slow" until I was on 1G and some sites did not improve; you'd also be amazed at how many of the applications I have that can update are throttled.


I feel a good solution to this problem, or at least one that covers a fair number of users, is having your website work well with Safari Reader. Even on fast connections, I often find myself loading up a page in Reader instead.


Tangentially related:

As it affects web apps, some of this is a conscious choice by network designers. First, click on your profile on Hacker News and turn on Showdead. You can then read this thread and my comment in it:

https://news.ycombinator.com/item?id=13597673

While the poster wasn't a web engineer specifically (or didn't say so), much of the web architecture isn't built for front-loading payloads, but instead for eventually getting there, through the magic of TCP/IP and letting users wait a few dozen seconds as pages load.

I disagree with it and think these engineers are wrong and make the wrong decisions (optimize for the wrong things) and that this makes everyone poorer-off.

Thanks for listening. (Happy to discuss any replies here.)


Can confirm, am using horrible internet right now. Googleweblight is a lifesaver for reading articles; not sure why it hasn't been mentioned, but I recommend everyone facing speed issues try it.


I just realized this is why the growth of network speed is increasing at a lower rate than that of compute. Even though they both continue to grow in capacity, the accelerations are different.


The worst of all, I think, is NHL.com. It appears to me that they have been asked to be "responsive" in terms of viewability instead of functionality. Good luck using this site.



Why hasn't someone implemented a kind of low bandwidth accessibility option? (Or is there one?) I would imagine this would be akin to the multipart text only email.


It does. I'm on 2G, and HN and the article site are the only usable things I've encountered today [on T-Mobile's intl roaming thing].


> or one of the thirteen javascript requests timed out

There's the root cause. Why do I need to download executables just to read static content?


Posting from wifi on a plane over the UK: this is apparently not slow internet, I can read the usual bloated news and blog pages.


This applies server side too. Note what sort of sites do and don't go down when they make the HN or Reddit front page.


Regardless of connection speed, it also sucks if you try using LinkedIn's new website. Nothin' but progress bars.


And that's why we need to adopt offline-first.


This post can be seen as exceptional even just because of the fact that the page loads instantaneously. Nothing extra. Bravo.


Bloat: it's not just for operating systems.


Why not make a site that proxies other sites, but retransmits them as fast-loading? Isn't traffic=dollars?


Do you know how to make the web not suck on a slow connection?

Ssh into a shell account and use a text based browser :)


What is the per-page size one should not overstep?

I mean, yes, as small as possible. But are there some size budgets?

For 3G, 2G, etc.


We need more sites like this! Absolutely no bloat, so nice to use.


The web sucks, but it sucks less if you have a fast connection.


How much weight would a little CSS to make the text not full-width add to that page?


And the GFW (Great Firewall)...


This is exactly one of the problems IPFS will solve, by serving content from local peers.


You don't need to travel from Wisconsin to Washington to experience a slow internet connection.

Try any mainstream commute on South West Trains from Wimbledon to Waterloo (London) and you'll a) still get blackouts for about 1/4 of the 25-minute trip (this is one of the most densely populated areas in Europe - no excuses) and b) at 3 of the 4 stations you'll stop at, your vaunted 4G connection will drop to 1998 speeds due to contention. I generally curse the complex sites in these situations, because you'll easily be waiting 30-90 seconds (firmly in your heatmap's red zone) for a full load at least once per commute.

Incidentally, kudos on a perfectly communicative yet lightweight web page (50kB).


Agreed. It's not just "third world countries" that have slow connections. Low powered devices have slow connections. Places where there are lots of people with portable devices have slow connections. This is now not a long time ago, nor far far away. The more wearables and IoT becomes a "thing" you're going to find that attempting to get more interactions by saving on transmission and client CPU load is worth the investment.


This guys posts are insufferable for constantly namedropping where he works. Ugh.


reddit.com takes 7.5 seconds to load on FIOS? I must be reading this table wrong.


I'm torn here, in a way. On the one hand light page weights and other such optimizations make the internet better for everyone, on the other, there's a certain point where designing your product to target 3 decades ago (we forget 1990 was 27 years ago) gets a little absurd.

I think the greater tragedy is not that the web is bloated (an issue for sure), but that so much of America has internet worse than 3rd world mobile 2G.


The bloat affects us today. A page that's impossible to load on dialup is also going to make broadband viewing extremely sluggish and unresponsive. Each click feels like a risk, especially when even the "back" button comes with 200 ad scripts that have to spin up again.


No need to "target 3 decades ago", just be reasonable.

> Popular themes for many different kinds of blogging software and CMSs contain anti-optimizations so blatant that any programmer, even someone with no front-end experience, can find large gains by just pointing webpagetest at their site and looking at the output.


The 56k modem spec (V.90) appeared in 1998, so that's just two decades ago. And of course 3G networks appeared mostly in 2004 (in Europe), with (very theoretical) data rates up to 384kbps. Around that same era, I recall 256-512 kbps DSL subscriptions being very common. And I can promise that browsing the modern web on a <512k connection (possibly with quite a bit of latency) will not be very comfortable.


What do we care? The vast majority of our target audience lives in a city with fast internet.

(I'm not putting /s because there's actually people that think this is a reasonable opinion in the general case).


It seems totally reasonable; you're not operating a company in order to be altruistic.

* Pushing all the rendering to the client makes development easier, eases the transition to native apps, and uses fewer resources on the back end.

* The fancy site drives more conversions and makes the stakeholders happy.

* Not having fast internet is a crude filter for disposable income and losing those users probably goes unnoticed and might even increase the value of ad placements.


Well 3G is as low as you can get somewhere deep in the woods, not really a problem...


I take it you don't walk in the woods much...


Did you read it?



