The web sucks if you have a slow connection (danluu.com)
1269 points by philbo 289 days ago | 598 comments



>When I was at Google, someone told me a story about a time that “they” completed a big optimization push only to find that measured page load times increased. When they dug into the data, they found that the reason load times had increased was that they got a lot more traffic from Africa after doing the optimizations. The team’s product went from being unusable for people with slow connections to usable, which caused so many users with slow connections to start using the product that load times actually increased.


Hah! A Jevons Effect[1] in a web site's bandwidth!

[1] When an increase in the efficiency with which a resource is used causes total usage to increase. https://en.wikipedia.org/wiki/Jevons_paradox


Wow, uncanny resemblance to Giffen goods:

https://en.wikipedia.org/wiki/Giffen_good


From the Wikipedia article I gather that the only Giffen goods that were actually shown to exist are Veblen goods, and are thus disqualified as Giffen goods. It seems to me that Giffen goods are a theoretical thing that has never actually been shown in the real world (as the article states, all of the proposed examples were discarded).

The case of website cost going down and demand going up seems pretty standard.


You didn't prove that Giffen goods are equivalent to Veblen goods there.

If a package of ribeye steak normally sells for $2.99 and doesn't sell well, but then its price changes to $6.99 (with nothing else changing) and demand increases, that steak is a Giffen good. The ribeye steak is not conspicuous consumption (unless your definition of status is really loose and includes posting photos of your food to Instagram). What happened there is straightforward: there's elastic demand for ribeye steak, and people read the low price as a signal of low quality. When it increased to price parity with higher-end brands, people assumed quality parity as well.

Conversely, a mechanical watch is specifically optimized to be good at timekeeping in the most functional and cost-inefficient ways (e.g. assembled by hand in a white gold case with a hand-decorated guilloche dial and proprietary in-house movement mechanisms, etc.). That is precisely conspicuous consumption, and thus it's a Veblen good.

One good's demand increases because of quality signaling, the other good's demand increases due to status signaling. The point of there being two of these definitions is the nuance in why consumers would purchase luxury items. Theoretically, people don't buy at Trader Joe's just to brag to their upper class friends that they shop at Trader Joe's (this is not a good example but take away a specific brand and you get the gist).


The steak in your example might not be a veblen good, but it still isn't a giffen one.

The article states that a necessary condition is that "The goods in question must be so inferior that the income effect is greater than the substitution effect". The reason people are buying more when the price rises is not because of the income effect (which would be because they have less money since the price went up, so demand for inferior goods increases), but rather because they now have evidence/reason to believe that the good is quality.

That's not giffen according to the definition given, which includes a causal factor for the demand curve.


IANAEconomist (but I could play one on TV)

It is simply not true that a good must be "inferior" to be a Giffen good (unless you adopt a special meaning of inferior, which since it's not necessary to do, I won't agree to). The classic example (a thought experiment, without regard to whether it actually happened) is potatoes in poverty-stricken Ireland: a poor person's diet would be mostly potatoes (inexpensive Econ-Utility (compared to steak): calories, fills the belly) with some meat a few meals a week (expensive Econ-Utility: protein, iron, B vitamins, tasty, "not potato", even a touch Vebleny).

So, arbitrary budget example: let's say $20 at the grocery gets you $15 of potatoes (eaten every day) and $5 of steak one day a week. If the price of potatoes goes up, you need to reduce something, but you need to eat every day so you can't reduce potatoes, so you reduce your steak consumption, and now you have some extra money, which you spend on even more potatoes. Price of potatoes went up, consumption of potatoes went up. <-- there is already a theoretical problem there: you could reduce steak just enough to keep potatoes equal, so let's just say you can buy a steak or not buy a steak, no half steaks, OK? I'm just trying to make the point "what is a Giffen good", not trying to prove whether Giffen goods exist or not.
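To make that arithmetic concrete, here's a tiny sketch (the $20/$15/$5 figures come from the example above; the 10% price rise is my own illustrative assumption):

    // Toy version of the potato/steak budget above. The 10% potato price
    // rise is an assumed number, just to show the quantity going up.
    const budget = 20;       // weekly grocery budget ($)
    const steakPrice = 5;    // one steak a week, all or nothing

    function weeksOfPotatoes(potatoPricePerWeek: number, buySteak: boolean): number {
      const leftForPotatoes = budget - (buySteak ? steakPrice : 0);
      return leftForPotatoes / potatoPricePerWeek;
    }

    // Before: a week's worth of potatoes costs $15, steak still affordable.
    console.log(weeksOfPotatoes(15, true));    // 1.0 week's worth

    // After a 10% potato price rise, the steak gets dropped entirely and
    // potato consumption goes UP despite the higher price: the Giffen effect.
    console.log(weeksOfPotatoes(16.5, false)); // ~1.21 weeks' worth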

So, potatoes are not "inferior" to steaks; both are requirements for a balanced diet. I suppose a technical econ-definition of inferior could be designed to mean something along the lines of "inferior is defined to rule out your example, aaight".

In any case, while Giffen goods probably can't exist in a market for any length of time, the concept is completely understandable as a reasonable short-term thing that occurs: I go to the store with cash intending to buy an "assemble your own" burrito with guac, the price of beans went up, I don't have the cash at hand now to get the guac, but it's not a burrito at all without the beans, so I leave out the guac... but it turns out that by leaving out the guac, I can get a larger size burrito: consumption of beans just went up at the same time as the price. This effect happens for sure... does it happen enough to counteract the people who would leave out the beans and keep the guac? Can the "substitution of beans for guac" function always be seen to be continuous and differentiable? <-- perhaps not; burrito shops like to have overly expensive add-ons for 2nd-order price discrimination, so the price of guac might very well be "quantized" at an absurdly high level, and does that make beans not a Giffen good? ...

My point is, the way you guys are arguing this leaves too much out, it can't be answered, and Wikipedia at this level of analysis is too unreliable.


Inferior good just means that demand increases as income goes down. Potatoes in your hypothetical are inferior, since if you have less money you can't afford steak and so buy more potatoes instead.


So that's what I mean: that term-of-art definition is intertwined with dependent variables of the definition of Giffen goods, so it would be no wonder if the ideas get tied together even if, on their face, the concepts are not.

Terms of art annoy me (the legal profession and philosophy are full of them, overloaded (in the OOP sense) onto preexisting words) because IANALinguist but I could play one on TV without rehearsing. So my point is: if you want a word to have a narrow, precise meaning, don't recycle a word that has broad meanings; invent a new word that is precise, like econ-inferior. Then at least when a person doesn't understand what you say, they will think to themselves "maybe I should look up the definition" as opposed to actually believing you said something different than you did.

Nobody can live on potatoes alone, you'd die. Nor can anybody live on steak alone. Neither good can be said to be precisely econ-inferior to another, only econ-inferior over some delta range of prices and/or time (and assuming demand, etc). But the whole question of Giffen goods is also valid only over some delta, so as long as they are different deltas, the definitions would not be in conflict (and vice versa all the variations of that).


Fair enough, but this is a standard term taught in Econ 101 (at least, I was taught the econ meaning of inferior good in my first econ class).


I've studied econ at the graduate level at MIT after having taken it as an undergrad as well, and I have a degree in Finance, so I didn't mean to imply that I don't know what I'm talking about. But I know a lot of other topics as well, I've always objected to terms of art in one field being easily confused with terms from other areas, and hell if I can remember what an inferior good is 20 yrs later. My point was not that you didn't know what you were talking about; I joined in because, between the two of you I replied to, I didn't think your discussion was benefiting the rest of HN as much as it could, because many of those people have not taken any econ at all. I was trying to Econ 100 the discussion, without losing the flavor of what is interesting about Giffen goods. And I think that if researchers are going to "prove" that Giffen goods don't exist in aggregate (<-- not the Macro term of art), they need to also address the obvious short-term circumstances (as I tried to describe) where it's clear that the underlying principle is actually operating, whether it has an effect on market clearing or not, because people can go one extra week without meat, they just can't do it forever.

Not trying to argue, just trying to clarify what I came upon. Econ theory I think is sound, but it requires many simplifying assumptions to teach and learn, and then when we talk about whether Giffen goods actually exist or not it's easy to lose track of simplifying assumptions like "long term" or "substitution".

cheers.


> It seems to me that Giffen goods are a theoretical thing that has never been actually shown in real world

In practical terms (which may not fit the theoretical definition of a Giffen good), spare time in certain circumstances is quite obviously a Giffen good. Once your income increases (which means the opportunity cost of your spare time increases), you are willing to work less, i.e. consume more spare time. Of course, this is not a universal rule, but I think it is obvious that for _many_ people this is the case. If it were _not_ the case, there would be no way people in sweatshops would work longer hours than the Western middle class.


Giffen goods are likely to exist only in communities of extreme poverty, where the cheapest things you buy dominate your spending. That's why it was only found in an experiment performed on people living at subsistence level:

https://en.wikipedia.org/wiki/Giffen_good#cite_note-4

I bet you could find it in some video game economies.


How are you distinguishing between Giffen and Veblen goods? If you define Veblen in such a way that all Giffen goods are Veblen and then say that disqualifies them then of course you'll find there are no Giffen goods.


According to the article:

> To be a true Giffen good, the good's price must be the only thing that changes to produce a change in quantity demanded. A Giffen good should not be confused with products bought as status symbols or for conspicuous consumption (Veblen goods)

Veblen goods = goods for which demand rises with price because they are status symbols
Giffen goods = goods for which demand rises with price, minus the Veblen goods

However, there are no examples there that hold; to me this signifies that the only goods for which the law of demand does not apply are status symbols.


Isn't a Giffen good then just something where one infers quality from price? I've seen that happen many times with my own eyes, so hard to believe they've never been identified. It's possible to price something so cheap, people assume there's a catch.


No. The example of a Giffen good given is a high-calorie food that's exceptionally low status. Thus, when its price falls, people will demand less of it, as they can afford to replace some of their consumption of that food with more expensive, better food. Workers replacing some proportion of their bread or potato intake with meat as the price of that bread or those potatoes drops, say.

The idea is that the good is the lowest quality way of fulfilling some need - so people buy it because they can't afford anything else.


> Isn't a Giffen good then just something where one infers quality from price?

No, it's an inferior good (in the economic sense) for which the (negative) income effect of a price increase outweighs the substitution effect.

What you are describing is a good that has a positive elasticity of demand with respect to income (or, technically, two different goods, because the higher price represents a different good altogether - one with a higher status symbol).


How so? I don't really see the similarity


Both describe unexpected effects when trying to extrapolate from price and quantity involved in individual use cases.

Jevons: Quantity required per use goes down, so you might expect total demand to decrease. Instead total consumption goes up.

Giffen: Price per use goes up, so you might expect total demand to decrease. Instead, total consumption goes up.

Either could increase consumption by displacing available substitutes, though that's not necessarily the case with Jevons. They are indeed different phenomena, they just have some similarities.


And now with both of those things Baader-Meinhof is going to be triggering every hour for the next month at least


> And now with both of those things Baader-Meinhof is going to be triggering

You know that Andreas Baader and Ulrike Meinhof were the main founders of the terrorist organization RAF (Rote Armee Fraktion; Red Army Faction) in Germany. The RAF was also the reason that dragnet ("grid") investigation was used in the '70s (wrongfully accusing lots of innocent people) after a series of RAF terror attacks, during which some constitutional principles were curtailed. The German term for this period is "Deutscher Herbst" (German Autumn; https://en.wikipedia.org/wiki/German_Autumn).

These experiences led (indirectly) to the rise of a completely new party (Die Grünen; the Green Party) and are (besides the experiences with the two dictatorial regimes on German soil in the 20th century) one of the reasons why data privacy is taken very seriously in Germany.

Thus mentioning the RAF, Andreas Baader or Ulrike Meinhof to (in particular older) Germans is perhaps like mentioning Al-Qaeda, 9/11, Mohammed Atta etc. to US citizens.


So triggering^2


I'm not sure why this gets a special term. It sounds like basic supply and demand. If you decrease the price of something by increasing the efficiency of production, you will obviously capture more of the demand curve. What am I missing?


A lot of people naively assume that if you can use a resource more efficiently, then total use will go down, "because you don't need as much, right?"

See: the entire popular support for efficiency mandates.

(Edit: Also, this very example -- I certainly didn't expect that a faster site would allow that many more users: my model was more "either they want to see your site, or they don't", i.e. inelastic demand.)

The (common) error is to neglect the additional uses people will put a resource to when its cost of use goes down. ("Great news! We get free water now! Wha ... hey, why are you putting in an ultra-thirsty lawn??! You don't need that!")

Also, I wouldn't call it basic supply and demand; depending on the specifics (inelasticity of demand, mainly), total usage may not actually go up with efficiency.


This sounds like the reason widening roads doesn't usually ease congestion.

Which, really, can be summed up by my favorite Yogi Berra-ism "No one goes there nowadays, it’s too crowded."


> This sounds like the reason widening roads doesn't usually ease congestion.

It usually does actually.

What is happening there is that you have different demand levels at different congestion levels. If you alleviate some congestion by widening the road then demand goes up.

That is only a problem if the demand without congestion is higher than what even the wider road can handle. As long as the new road can handle the higher but still finite demand you get when there is no congestion, there is no problem.

In other words, as long as you make the road wide enough for the congestion-free demand level to begin with, that doesn't happen.


That's technically true, but it assumes away the core, ever-present problems:

- It may not be physically possible to add enough lanes to e.g. handle everyone who would ever want to commute into L.A.

- Even if that road were correctly sized, it still has to dump the traffic into the next road, through the next intersection point. If you've increased the capacity of the freeway but none of the smaller road networks that the traffic transitions to, you've just moved the bottleneck, not eliminated it. And that too may be physically impossible.

In any practical situation car transportation efficiency does not scale well enough that you can avoid addressing the demand side.


It isn't physically impossible to use eminent domain to seize all the property around the roads and then build 32 lane roads all over Los Angeles.

That is a separate question from how stupid that is in comparison to the alternative of building higher density residential housing closer to where people work and with better mass transit.

But if people don't want to do that either, you have to pick your poison.

And there really are many cases (Los Angeles notwithstanding) where adding one lane isn't enough but adding two is and where that genuinely is the most reasonable option.


Also, one thing that's often forgotten is that roads take up space. A lot of it. You make your roads bigger to accommodate more people, and all of your buildings wind up farther apart as a result. When buildings are farther apart you have to drive farther, meaning that everyone's journeys are longer, meaning more traffic... and on and on it goes.

14 percent of LA county (not just city!!) is parking. http://www.citylab.com/commute/2015/12/parking-los-angeles-m...

I'm trying to find a better source, but at one point supposedly 59 percent of the central business district was car infrastructure (parking, roads, etc.) http://www.autolife.umd.umich.edu/Environment/E_Casestudy/E_...

I mean, at what point do you just build a 400 square mile skid pad with nothing else there just to "alleviate traffic"? Hell, that's practically what Orange County is already.


Gosh, 3.3 parking spaces per car (CityLab article). That'd be some space to free up when they're self-driving.


Assuming "parking spaces" include one's home space (like garage or reserved spot), 3 should be expected, at least: home, work, and wherever you're visiting.


Now assume Uber et al own fleets of self-driving 9-passenger minivans. During peak commuting hours they're completely full because they pick up different passengers who have the same commute, and that way you eliminate the parking space both at home and at work.

The rest of the day they don't actually park anywhere, they just stay on the road operating, carrying one or two passengers at a time instead of eight or nine. Or half of them stay on the road operating and the other half go off and park in some huge lot out where land is cheaper until demand picks up again.

Then instead of 3.3 spaces per car you can have <1, and most of them can be in low land cost areas.

It's actually kind of like dynamically allocated mass transit.


The more likely scenario will be self-driving cars cooperating with each other to drive from start to finish without any stops. Whether on freeways or local streets, cooperation amongst vehicles will raise the average speed and the volume of vehicles you can process through a given area. Pools work to a certain degree if everyone is starting and ending at the same location. When they aren't, it actually takes longer than driving by yourself.

Final point: you can certainly build out fewer parking spaces, but pre-existing spaces won't go away without redevelopment.


They're already empty most of the time, for what it's worth.


You're right -- I should have said "feasibly" rather than "physically" above. It's certainly physically possible, but requires a tradeoff I don't think many people would actually sign off on: blowing 3 years of budget for multi-deck freeway tunnels and having twelve-lane streets for most of the city, a parking garage for every block, and 95% of the city allocated to roads.


> It isn't physically impossible to use eminent domain to seize all the property around the roads and then build 32 lane roads all over Los Angeles.

This might be an extreme example that won't work for other reasons, but generally adding more lanes will increase demand, so you still won't have enough lanes.


The Wired article posted below has a pretty good rebuttal on those ideas.


No, that is still only the short term new equilibrium. What happens is that roads with unused capacity (or at capacity, but acceptable congestion) get busier as activity increases around those roads, because of the excess capacity/low congestion. Of course it's more complex than just that; it heavily depends on the spatial relationship with job activity centers within commuting distance, social expectations, economic characteristics and many more, but the core tenet remains - adding roads is not a long term solution for congestion, spatial planning is.


Right, widening roads doesn't increase speeds for existing commuters, it serves more commuters at the same speed.


All the cars in this new lane are not on another road adding to the traffic, though. Maybe it eases the traffic elsewhere. There is a finite number of cars, after all.



Scott Aaronson has some examples of that in his post:

http://www.scottaaronson.com/blog/?p=418

For example:

> Why are even some affluent parts of the world running out of fresh water? Because if they weren’t, they’d keep watering their lawns until they were.


The supply and demand "law" refers to the observation that the price of a good settles at a point where the available supply (which increases as the price goes up) matches the demand (which decreases as the price goes up).

Jevons Paradox is only tangentially related. It is based on the observation that sometimes using a resource more efficiently results in higher overall consumption. For example, say 40 kg of lithium is needed for the batteries of an electric car. At some point, 4000 tonnes are produced annually, enough for 100,000 electric cars per year. Now a new battery comes on the market that needs only 20 kg of lithium. Should the lithium producers be worried that the lithium demand will drop, since only 2000 t will be needed for the 100,000 electric cars? Maybe. But if Jevons Paradox comes into play, the annual production of electric cars might triple as their cost drops due to lower lithium usage, and the new demand will then settle at 6000 tpa. So, paradoxically, reducing the amount of lithium in each battery could be good news for lithium producers.
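Just to restate that arithmetic in a couple of lines (the tripling of car production is the assumed Jevons-style demand response from the example, not a prediction):

    // Lithium numbers from the paragraph above.
    const tonnes = (cars: number, kgPerCar: number) => (cars * kgPerCar) / 1000;

    console.log(tonnes(100_000, 40)); // 4000 t/year before the better battery
    console.log(tonnes(100_000, 20)); // 2000 t/year if car demand stayed flat
    console.log(tonnes(300_000, 20)); // 6000 t/year if cheaper cars triple production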

Whether or not Jevons Paradox occurs depends on the elasticity of supply-demand curves, in this case the curves for lithium and for electric cars.


Not to mention that reducing the cost of batteries may lead to new classes of devices suddenly making sense as battery-powered (instead of corded or gasoline-powered), leading to increased demand for batteries.


Well, that's because it's often not the case. Take the two cases:

* Engines get more efficient (fewer litres per kilometre traveled). Does the total amount of petrol consumed go down or up?

* Flushes get more efficient (less water / successful flush). Does the total amount of water consumed go down or up?

Both of these have a more efficient use of a consumable quantity. Often, however, more efficient engines lead to more traveling and larger vehicles, whereas more efficient flushing usually leads to reduced total water consumption.

The fact that gains from efficiency can be outraced by the induced demand can be seemingly paradoxical. And "seemingly paradoxical" is the only thing that makes anything labelled "Paradox" interesting.


> What am I missing?

The paradox is that they tried to reduce demand to reduce consumption, but accidentally reduced price, so increased consumption.

The bit you're missing is that duality: an action intended to reduce demand could reduce price instead. Applying the rules of supply and demand happens as a step after categorising the action; the fact that the action was miscategorised led to a misprediction.


Because it's easier to say "some-name effect" than the half paragraph or so that describes it.

This is the basic reason for naming anything; after all, a car is just a "fossil-fuel internal-combustion kinetic-conversion wheeled people-and-goods pilotable transportation platform", but saying "a car" is just easier :)


Well, if you'd follow the link you'd see that Jevons identified this phenomenon in the mid 1800's with respect to coal usage. It probably wasn't so obvious then.


To put it another way, marginal efficiency increased but total efficiency went down, which is (to some) unintuitive. It's certainly rare to observe!


I saw a similar effect with really fast disks. If you make the kernel faster at passing requests down to the disk, a simple benchmark with one request at a time will be faster. However, with many requests at the same time and less time spent processing them, you now have more time to poll the disk for completed requests. Each time the kernel polls the disk, it will typically see fewer completed requests than before your optimizations, and overall this can actually result in decreased throughput.


That's like saying 'deadweight loss' "shouldn't be a term, because it's basic supply and demand".

There's a clear and identifiable trend or pattern resulting from the general model; there's no reason not to assign it a shorthand way of being referred to in discussion or study.


> I'm not sure why this gets a special term.

It gets a special term because it was coined in 1865, before most of modern economics was codified and this was a cutting edge finding. You may as well ask why Newton's laws get a special term, because they're all just obvious basic equations in physics that high school students are taught.


By giving it a special term, additional commentary/analysis can coalesce around it, such as that "governments and environmentalists generally assume that efficiency gains will lower resource consumption, ignoring the possibility of the paradox arising".


Is there a negative counterpart for that ?

I remember turning an algorithm upside down, making it so fast, it went from "number crunching wonder" to "users saw nothing, this software is meh".


Microsoft UX research found that with row adding in Excel. It was instant, so users weren't quite sure if it happened or if it happened correctly. Now it animates for that reason.


I believe that this is partly for ergonomic reasons: it's hard to track a grid changing instantaneously, and animation allows for "analogous" traceability.


Yep. I see the commonality as being "it's so fast it's hard to tell any work was done" - whether the user is trying to gauge whether their actions had any effect or whether something is worth paying for.


I've tried using Google products from Africa (Ethiopia... last time this January), and generally, it is outright unusable. JS-heavy apps like Gmail will never load properly at all.

This is while the connection in itself is not THAT bad. I usually use a 3G/4G mobile connection and it generally works excellently, with pretty quick load times, for everything other than JavaScript-heavy web apps.

I have a hard time understanding why this issue is not paid more attention. Ethiopia alone has some 99 million inhabitants, with smart phone usage growing by the hour. Some sources say "the country could have some 103 million mobile subscribers by 2020, as well as 56 million internet subscribers" [1].

[1] https://www.budde.com.au/Research/Ethiopia-Telecoms-Mobile-a...


In Ethiopia's case, it's not so much the connection speed in Addis. There's a great deal of interference from the national Deep Packet Inspection filters that leads to timed-out requests, reset TCP connections, etc.

JS-heavy apps make a lot of requests to background servers and should one of those requests fail, apps will hang. It's quite frustrating and I would often load pages with the console open to see which requests have failed so I'm not left wondering what happened.


I may get flamed for pointing this out either by people who are offended by the viewpoint, or by those who find it so bleeding obvious as to not be worth stating, but those page hangs (and I know exactly what you mean) are really down to poorly architected and implemented front-ends rather than an inherent flaw with JavaScript-heavy apps and pages.

Any time you do an XHR you can supply both a success and a failure callback and, if you care at all about your users, the failure callback can come in handy for error recovery, handing off to a retry mechanism, etc.
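For what it's worth, a minimal sketch of that failure-callback-plus-retry idea using fetch (the function name, retry count and backoff numbers are made up for illustration, not anyone's production code):

    // Retry a request a few times with a growing delay before giving up
    // and letting the caller surface the error to the user.
    async function fetchWithRetry(url: string, retries = 3, backoffMs = 1000): Promise<Response> {
      for (let attempt = 0; ; attempt++) {
        try {
          const res = await fetch(url);
          if (!res.ok) throw new Error(`HTTP ${res.status}`);
          return res;
        } catch (err) {
          if (attempt >= retries) throw err; // out of retries: let the UI handle it
          await new Promise(resolve => setTimeout(resolve, backoffMs * (attempt + 1)));
        }
      }
    }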

Modern web apps can be a lot more like fat client apps, just running in a browser. Even there, there's no inherent need for them to be unusable, even over relatively slow connections. A lot of it comes down to latency, and the number of requests going back and forth between client and server, often caused by the sheer quantity of assets many sites load (I'm looking at YOU, almost every media site on the Internet).

I seem to spend my life citing this paper, from 1996, but "It's the latency, stupid" is still relevant today: http://www.stuartcheshire.org/rants/latency.html.


Nothing controversial here, it's common sense. Most web stuff is built by total amateurs figuring things out as they go.


I'd like to complement: most _stuff_ is built by total amateurs winging it.


Nonsense, email is an impressively well designed and logical protocol ;)

From


There can be a whole lot of reasons for this and it kind of makes sense. What doesn't make sense is that, for such a big company and such a big product, that's the best Google/Gmail can do. I can understand if scaling the Gmail backend is tough. I can appreciate Gmail's feature set of spam filtering and tagging, but on the UI feature set I don't see anything so revolutionary that it should be (according to the Chrome task manager) the heaviest tab in my browser at ~500MB. I think that's to the point of shameful.

I think the standard of what's considered slow, bloated, and complex has become absurd. If the processor companies released processors that, say, improved single-thread performance 10 times in two years, the Gmails and Facebooks of the world would eat all that up with marginal improvement in functionality. I'm talking about the client side; on the server side, yeah, they may do 10 times more complex analysis, though most likely 80% of it will go to feeding us more accurate ads.


That's why fastmail is such a breath of fresh air. It has lots of features and is wicked fast.


Agreed. I'm actually really impressed at how fast and responsive the UI is. As a user, it's probably one of the most responsive and functional UX's I've used in years.


>and should one of those requests fail, apps will hang

That also happens on 'good' connections, when some crappy ISP router drops a packet without any ICMP. The request fails only after a TCP timeout, which is large enough to be noticed. I cannot understand why asynchronous JS requests do not involve smart, adaptive, human-oriented timeouts and why this problem is still not solved in general. TCP timeouts are simply insane nowadays.
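You can at least approximate this at the application layer; here's a minimal sketch of an explicit timeout far shorter than the OS-level TCP timeout (the 8-second figure is an arbitrary assumption, and AbortController support in browsers is still uneven):

    // Abort the request ourselves instead of waiting for the TCP stack to give up.
    async function fetchWithTimeout(url: string, timeoutMs = 8000): Promise<Response> {
      const ctrl = new AbortController();
      const timer = setTimeout(() => ctrl.abort(), timeoutMs);
      try {
        return await fetch(url, { signal: ctrl.signal });
      } finally {
        clearTimeout(timer);
      }
    }

An adaptive version could tune timeoutMs from recently observed response times instead of hard-coding it.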


That's good to know, Thanks!


Gmail has an HTML-only version that is much more lightweight and usable on slow / flaky connections.

It's worth it to memorize or bookmark the address, in case you ever need it:

http://mail.google.com/mail/h/

(on mobile browsers you need to "request desktop version" and then paste the address again, before you can see it)


Thank you for sharing this link. Here is a related HN discussion: https://news.ycombinator.com/item?id=7513388

My problem with accessing the low-bandwidth Google tools with archaic browsers (http://dplus-browser.sourceforge.net/, etc.) is that Google still requires the high-bandwidth login.

Are you aware of any alternative login URLs or authentication mechanisms?


I just tried the two-year old 3.4-dev copy of NetSurf I had buried on this computer, and was able to login to Gmail's Basic HTML.


It's not only about connection speed but also about infrastructure. If you look at this map https://cloud.google.com/about/locations/ you'll see that your packets have a looong way to reach their data center. AWS is no better than Google on this point. Guess it's not bankable


That only adds about 200-300ms RTT I'd guess. I live in India and use many websites which are hosted in the US, and they work fine.


(Hello from Kenya)

There are usually CDN nodes in India. CloudFront has edge nodes there, and so does Google's CDN.

There's also a Mumbai AWS datacentre.

When you get far away from the common edges, it gets real noticeable.


~$ ping imgur.com

PING imgur.com (151.101.40.193) 56(84) bytes of data.

64 bytes from 151.101.40.193 (151.101.40.193): icmp_seq=1 ttl=53 time=342 ms

imgur.com works fine

~$ ping python.org

PING python.org (23.253.135.79) 56(84) bytes of data.

64 bytes from 23.253.135.79 (23.253.135.79): icmp_seq=1 ttl=48 time=267 ms

python.org works fine

news.ycombinator.com and reddit.com also work fine even though I'm logged in (there's about a 300ms and 700ms delay in the Network tab of Chrome's devtools for news.ycombinator.com and reddit.com respectively).


PING imgur.com (151.101.12.193): 56 data bytes
64 bytes from 151.101.12.193: icmp_seq=3 ttl=51 time=499.633 ms
64 bytes from 151.101.12.193: icmp_seq=65 ttl=51 time=330.021 ms
64 bytes from 151.101.12.193: icmp_seq=66 ttl=51 time=557.491 ms
64 bytes from 151.101.12.193: icmp_seq=67 ttl=51 time=478.380 ms
64 bytes from 151.101.12.193: icmp_seq=68 ttl=51 time=400.365 ms
Request timeout for icmp_seq 69

PING python.org (23.253.135.79): 56 data bytes
64 bytes from 23.253.135.79: icmp_seq=0 ttl=44 time=615.871 ms
64 bytes from 23.253.135.79: icmp_seq=1 ttl=44 time=539.681 ms

I'm on an island lost in the middle of the Indian Ocean. But pings weren't that different (50 ms more or less) on the continent (I went to RSA and Namibia).


Gmail worked surprisingly well from Antarctica


> I have a hard time understanding why this issue is not paid more attention. Ethiopia alone has some 99 million inhabitants, with smart phone usage growing by the hour. Some sources say "the country could have some 103 million mobile subscribers by 2020, as well as 56 million internet subscribers" [1].

How much disposable income will they have though? Most web products like those you describe are produced by businesses looking to make money.


You may be surprised. Certainly the number of people with disposable income will be less than in many other places but those that have disposable income often pay more.

Tax on cars is upward of 200% and traffic is becoming a major issue. When it comes to services, another major challenge is that there are no widely accepted payment mechanisms besides cash and checks. Debit cards only work with ATMs and some very select retailers.


This is a perfect example of why "average" metrics for such values aren't that great and are often overused as vanity metrics.

A nice chart showing how many users are in each bucket of load time would be far more useful - one where you could easily change the bucket size from 0.1 ms to 1 second, so these types of 'digging' wouldn't even be a second thought.
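A minimal sketch of that kind of bucketing (loadTimesMs and bucketMs are placeholder names; the samples would come from whatever real-user monitoring you already collect):

    // Count how many load-time samples fall into each bucket of width bucketMs.
    function histogram(loadTimesMs: number[], bucketMs: number): Map<number, number> {
      const buckets = new Map<number, number>();
      for (const t of loadTimesMs) {
        const bucket = Math.floor(t / bucketMs) * bucketMs;
        buckets.set(bucket, (buckets.get(bucket) ?? 0) + 1);
      }
      return buckets;
    }

    // histogram(samples, 1000) shows how many users sit in each 1-second band;
    // the long tail of slow users is visible here but vanishes into a single
    // "average load time" number.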


>This is a perfect example of why "average" metrics for such values aren't that great and are often overused as vanity metrics.

The average human has, on average, one testicle and one ovary, but there aren't many humans who can actually fit this description.


Funny anecdote about the freeway system of southern California.

When they were initially planning the system in the 1930s and '40s, they planned to have it in use for the next 100 years. So they built oversized roads (like a 10-lane freeway, with no traffic lights to stop for, that goes THROUGH the center of a major city).

When the system proved so car-friendly, more and more people moved in and bought cars. Within a short period of time (much shorter than 100 years), the system was completely jammed.

Always look for unintended consequences...


The original designers of the interstates didn't want the roads to go through the downtown areas. The idea was for the high-speed roads to go near cities, and have spur roads (3-digit interstate numbers that start with an odd number) connect them - like the design of the original Autobahn.

But there was a coalition of mayors and municipal associations that pressured Congress to have the roads pass through their towns (jobs! progress!). President Eisenhower was not amused, but he found out too late to change the design.

A consequence of this was the bulldozing of historically black-owned property to make way for the new roads.


They didn't really NEED cars to move people around because the Los Angeles area already had a GREAT light rail system called the Red Line. The current walkway on Venice Beach is what's left of that line. Can you imagine? An above-ground light rail system running parallel to a beach in LA?

They RIPPED it out thanks to lobbying by car companies and tire companies. Yay to lobbyists.

Now it takes a billion dollars to build a few miles of a subway/light-rail system that practically goes nowhere...


The Red Line is the new Metro system (which was destroyed in 1997's Volcano). The electrified light-rail from the 1930's was the LA Railway.

https://en.wikipedia.org/wiki/Los_Angeles_Railway

GM, Firestone, and several other companies were indicted in 1949 for attempting to form a monopoly over local transit. The semi-urban legend part (it was never definitively proved there was a plot behind it all) was the ripping out of the streetcars, replacing them with GM-made bus networks.

https://en.wikipedia.org/wiki/General_Motors_streetcar_consp...


> A consequence of this was the bulldozing of historically black-owned property to make way for the new roads.

Indeed. A couple years ago, the city of St Paul actually formally apologized for exactly this, destroying the primarily black Rondo neighborhood with freeway I-94.

http://www.usatoday.com/story/news/local/2015/07/17/rondo-ap...


This is known as induced demand: https://en.wikipedia.org/wiki/Induced_demand


Reading through some early texts on the coal and oil industry. One notes that at then-current rates of consumption, the coal reserves of the United States would supply over one million years' consumption.[1]

The current North American coal proven reserve is less than 300 years of current utilisation rates: http://www.bp.com/en/global/corporate/energy-economics/stati...

It's amazing what a constant increase in growth rate can accomplish. Also the overwhelming tendency for lowered costs to induce an increased demand -- the Jevons paradox.

If you want people to use less of something, increase the cost, not the efficiency.

________________________________

Notes:

1. Henri Erni, Coal Oil and Petroleum: Their origin, history, geology, and chemistry, 1865. p. 15.

https://archive.org/stream/coaloilpetroleum00erni#page/14/mo...


One way to plan for the next 100 years when building roads but not cause people to buy cars is to have a 10-lane wide verge on one side of the road. Have a row of 30-storey buildings on one side of a 5 lanes each way road, but on the other side the buildings are all set back at least a 10-lane width which is used for car parking, 1-storey buildings, public spaces, etc.

If there's ever a need to widen the road, it can be done without demolishing any tall buildings. I see this in new road layouts in China all the time. Of course, under the road will be a new subway system -- another disincentive for people to buy cars.


Sorry for saying the same thing in two comments, but that (like the Google case) looks like a Jevons effect (where induced demand is a special case):

https://en.wikipedia.org/wiki/Jevons_paradox


Isn't that a good unintended consequence? They built out infrastructure which attracted lots of people & jobs. Today, SoCal is home to world-leading firms in entertainment & aerospace. They also have top-tier research and educational institutions.


Which is why, when discussing infrastructure upgrades in our Hackerspace, I keep reminding that infrastructure is an enabler - it should not be built to support current needs, it needs a healthy margin to enable people to do more. People always find interesting ways to use up extra capacity.


There is a limit to that. People point to all the growth that overbuilt freeways caused and argue we need to build more freeways. Nobody ever asks if the trend will continue. They want to bring that same growth to small middle-of-nowhere towns, but it isn't clear if that will happen.

Also lost in the conversation is opportunity cost: sure people drove more and that drove growth to those areas. However what if the roads had not been built - what would have happened instead? We don't know, but it is fun to speculate. (maybe railroads would still be the most common mode of transport?)


The unintended consequence I'm talking about is that they expected it would take 100 years for the system to be utilized at full capacity, but in reality it took just 20-30 years...



I always wonder about this. You have to reach saturation eventually, right?


This isn't an example of Braess' paradox.


I think this is the same anecdote: http://blog.chriszacharias.com/page-weight-matters


One wonders how a user that takes 2 minutes to load 98KB is actually able to watch a video.

Even by the most optimistic estimations, a video that is a few minutes long at 480p will weigh in at 10 megabytes, meaning it'll take them OVER 3 HOURS to download the entire thing.

You would probably be able to browse (slowly), read comments, but not actually do much else.


This was the baseline experience everywhere in the 90s: people would just do something else while they waited for things to download over dialup. Clients for things like email, Usenet, browsers, etc. commonly had batch modes where you could queue large downloads so you could basically see what's new, select a bunch of large things, and then let it download while you got a cup of coffee / dinner / slept.


There was a time when Netflix was young, and I had slow internet, that if you queued up a movie and then paused it, it would continue to load. So you'd pick a movie, queue it up (literally) and then go make snacks and get situated. When the bar looked long enough you'd start watching. Then if it stalled (which it would do like clockwork every evening around 8 pm) you'd take an intermission.

By the time I got stuck with slow internet again (boycotting Comcast), they had removed that feature and it really sucked. Now I have fiber and the only drama is around Netflix only allowing you to stream a couple movies at once.


Yes, back when things actually buffered properly on the internet. The way YouTube buffers now: not actually loading more than a few seconds of the video ahead of where I'm watching, even when it KNOWS my internet is spotty, really frustrates me.


You could install the YouTube Plus browser extension and check "disable DASH playback" in the settings. Then if you start playing a video for a second or two and pause it, it will buffer the entire thing. The only downside is I think it reduces the maximum video quality for all videos to 720p.


I have a 1920x1200 monitor, so yeah I don't think so lol.


So do I and it's what I do. I can't really see much difference between 720p and 1080p on YouTube.


The sense I get is that it's an elaborate way to soften people up for abandoning their ISPs and going with Google Fiber. There's a whole system of alerts and pages for basically saying 'LOL ur internetz sux', and it's plausible to me that they'd get that in place in markets they've not currently entered yet.


This drives me INSANE.

I'd rather let it buffer for 10 min than watch it in 360p because you guys can't figure out your buffering. The same goes for Netflix!


They try to match the resolution to your bandwidth. Sometimes that's the right solution, but not always. When the playback stalls right at the big reveal it can be painful, but that doesn't mean I want to watch the minecraft edition of the movie the whole time


And, as most things, I'd rather have the choice. If you want to default to auto-configured resolution, that's fine. But give me the choice to override if I want!


Of course, you could spring for the high capacity Netflix plan.


Alas, I bought a new TV just before 4K was a thing. It saved me the temptation of buying one while they were so expensive. The HD level is good enough for me.


Oh yes, I remember my teenager days of happily waking up in the morning knowing that our ISDN connection surely has finished the download of that album or live recording I started last night via Soulseek / DC++ / Audiogalaxy.


Not sure what your comment has to do with mine.

I used the Internet in the 90s and dial-up specifically as recently as 2004, and have quite a good memory of the experience. Internet at dial-up speed was an extremely valuable commodity, to the point of disabling images in the browser and only downloading the most essential things (which is about as far from a YouTube video as you can get), like documents and zipped installers (after researching that it was what you actually needed), and checking email.

Internet "videos" didn't really even exist as content before ~2005 and Youtube. The biggest player before them was Break.com, which posted a whopping 10-15 videos a day.

While someone may have spent several hours waiting for a key software installer to download, almost no one would do the same for a video of dubious quality and content, certainly not in the 90s.

________________________________________________________________________________________________________________________

To illustrate my point even further, the speed we're talking about isn't even dial-up speed. It's 6.5kbit/second, which makes it almost an order of magnitude slower than a 56k modem! 10 times slower!

And people are actually suggesting that someone would spend that valuable bandwidth to spend days to load a video...


> While someone may have spent several hours waiting for a key software installer to download, almost no one would do the same for a video of dubious quality and content, certainly not in the 90s.

I vividly remember waiting hours to download a video in the '90s. The Spirit of Christmas short that spawned South Park, and that news story about the exploding whale, were viral videos that predated modern video sites by many years. You'd download it off of an FTP server somewhere. At one point, the majority of my hard drive was devoted to half a dozen videos that I'd show to everyone who came over.

Basically, watching a video on your computer felt new and exciting and worth waiting for. I'd never do that today even if I were stuck on a slow pipe, but at the time it was oddly compelling.


Or that stupid dancing baby CGI that blew up in the late nineties for some reason.


It was a demo of Autodesk's Biped bone-animation software. It really took off when the producers of Ally McBeal licensed the baby to appear on their show.

That always reminds me: Earlier that decade, a major plot arc of Beverly Hills 90210 featured a nightclub called Peach Pit After Dark. The door of the club had a flying toaster, from the PC screensaver After Dark.


Hmm, either your experience, or memory, of the nineties / dial-up is different than mine and all of my (quickly polled) friends'. We all built a library of painfully-obtained 320x200 horribly overcompressed video... ahh, the memories of RealPlayer. Some video was down to 180p... heck, I still have videos that were one or two megabytes in size in my archives - that I darn tootin' well waited hours to download :)


Please don't mention RealPlayer again. I feel ill and queasy thinking of that piece of software, its million rewrites and the CPU hog they released for Linux (at least on my lame hardware).

I remember using "download accelerators" to try and grab files faster back in the day. Who knew they were just doing 4 simultaneous downloads of ranges of the same file eh?

Ah I feel old


As I recall, the download managers were primarily useful because they could resume files that didn't finish.

Incredibly useful for enormous files like visual basic 6, which I spent like...a month or so downloading in the late 90s.


TIL the RealPlayer brand still exists... http://www.real.com


Can confirm, I remember downloading anime clips at 6 or so hours apiece! Not even full episodes, clips! And it was amazing!

And before there was YouTube there was flash videos/animations on Newgrounds. The people I hung with back then were into animutations.


We drink ritalin!


> almost no one would do the same for a video of dubious quality and content, certainly not in the 90s.

Umm. Porn?


RealPlayer video rips of my favorite TV shows for me, all in wonderful 160p!


A year or two ago, I went through some old files that had somehow followed me all through highschool. Among them were a handful of music and video files I got from friends passing around burned CDs. I was quite amused to play some random episode of Dragonball Z and have it pop up a postage-stamp sized video on my relatively high-res modern screen!


Or to have your kid say "Dad, why are there video thumbnails in the archive directory and where are the actual videos?"


I spent like a week downloading individual music videos over 28K. People will do a lot of things


Two minor points: GPRS is 36-112 Kbps and EDGE is faster still, so the 90s modem comparison seems apt to me. Latency is terrible, so the player page loading slowly while the actual video plays fine is entirely plausible, since streaming is the best-case network traffic profile. Other technological improvements help, too: H.264 is much better than codecs like MPEG-1 used for things like the infamous exploding whale video.

The bigger point was simply that people will wait for things they want. Not being able to load instantly changed the style of interaction but not the desire to listen to or watch things and there's far more content available than there used to be. Tell teenagers that the cool music video is available and you'll find a lot of slow downloads over their entire day at school, work, etc.


IIRC in 1997 (was it 1996?) this thing called webcasting made lifecasting possible; Jennicam and Amandacam come to mind. Then there was stileproject and its treasure trove of videos, not to mention internet porn videos that people burned onto CDs to sell to people without internet.

Internet video had been a thing for a while before YouTube got released, hoping to capture enough of it by offering to host it for free, in the hope of being bought later by one of the big players.


> Not sure what your comment has to do with mine.

Someone doesn't agree with your comment, but rather than come back with a refutation they voted you down instead :)

And someone else doesn't agree with my comment... modded down -4.


Likely because your comment doesn't add anything to the conversation, and the guidelines ask not to comment on downvotes.

https://news.ycombinator.com/newsguidelines.html


If I cared about karma or being modded down, I wouldn't post on the Internet, at all.

If people chose to download porn over 28.8k modems, that's their choice. Just seems like a waste of time, that's all. Probably quicker just to take your dad's nudie mags rather than wait 6 hours for a 400x300 jpeg.


> it'll take them OVER THREE HOURS

I remember frequently spending three hours downloading 5MB files over dial-up in the late 90s. Mostly software, not videos+, but it really just felt like a regular thing back then.

+ Computers back then barely had the power to decode video, or even audio, in realtime, unless it was the entirely uncompressed kind. I recall ripping a CD to WAV and finding out halfway through that my 2GB hard drive was now 100% full.


I think your memory of the late 90's is actually from the early 90's ;)

I remember the first realtime mp3 player on Windows, Fraunhofer IIS' WinPlay3, which launched in '95. Then Winamp came out in '97 and blew our minds.

https://en.wikipedia.org/wiki/WinPlay3


You needed about a 100 MHz CPU to play back MP3s without skipping. That seemed to hold true both on my PPC Mac and my Pentium Windows machine.


I used a Pentium at 133 MHz at the time, and it struggled to play MP3. The tracks would stutter.


I played MP3s on a 486 DX4/100 MHz. There was a setting in Winamp I had to turn on (I think it was quality-related), otherwise it would stutter and be generally unlistenable. Even with that setting turned on it pegged the CPU at about 100%; the computer was unusable for anything else while the MP3 was playing. Trying to seek in the track would sometimes crash Winamp.

My Pentium 166, on the other hand, was much more capable; I could multitask while playing MP3s (I used to play MP3s and Quake at the same time).


I think somewhere in that timeframe, standard soundcards adopted support for hardware playback of mp3 files.


That would be interesting to hear more about.

AFAIK all sound drivers in Windows accepted standard PCM data.


I'm not sure about that, but 'mmx' did come to fruition with early pentiums.


I had a Soundblaster 16 in the 486


Wait a sec guys.. are you sure you had the turbo button on your machine active?


I used a Pentium 100 MHz and it would play just fine. Encoding them would take real-time (i.e. a 3 minute song would encode in 3 minutes), but playing back was fine. I even listened to music while doing other things. This was with Winamp.


My Intel box may have been a 166 MHz machine. Either way, this assumed you weren't trying to do anything else. The PPC Mac was definitely 100 MHz though - maybe Apple marketing wasn't lying about perf/clock back in those days.


Yes, I remember using MOD and XM files instead, as attempting to play a low-bitrate MP3 on a Pentium 100 (I think?) Elonex laptop meant 100% CPU.

Even tried it on mpg123 and mpg321 on RedHat on a 486 DX66 - I was poor. Didn't fare any better.


You would have much better chances with Opus and its integer decoder today.


I used a Pentium 150MHz to play mp3s and do other stuff at the same time. It worked just fine. This was with Winamp in around 1997 I believe.


Seems about right; I could play mp3s on my Libretto 30, but only with the Fraunhofer decoder, Winamp wasn't quite optimised enough.


Yes, my point was specifically about videos, and I've elaborated on it further in the post above.

I, too, remember quite vividly waiting 30-40 minutes to download a 5MB installer for WinAmp and ICQ.

I honestly wouldn't even know where to look for videos in the 90s internet. Most people probably didn't have the upload speed to even consider sharing them online.


I think shockwave.com hosted Flash animated videos. Also, I remember downloading Troops from TheForce.net; it seems like they had several other fan-made videos. I also remember watching the Star Wars Episode I trailer on StarWars.com (I believe it required the QuickTime plugin); so anyway, those are some sites hosting video content in the 90s.


I got some from Sony's BBS system where fan groups shared (links to) music videos.


> three hours downloading 5MB files over dial-up in the late 90s

Ah Napster.


I remember my family's pre-PPC Mac could play MP2, but not MP3.


A Pentium II can play back a DVD at 24 fps, no problem.


That's because the pentium 2 chip isn't decoding the video. Try it with a software decoder some time. It hardly works. I had to disable video scaling before my 366 would stop dropping frames.


I'm guessing you've never downloaded porn from usenet over a 2400 baud modem.


I have not. That was a little bit before my time. The first modem I've had the luxury of using was a 33.6K.


Perhaps you shouldn't be asserting yourself as an authority on what did or did not happen if you are too young to have experienced it in the first place.

I'm very young, only 27, and know that I missed a full decade of early internet culture and can't speak to it. Even given that, I had a 14.4 and fondly remember downloading a music video for hours. (No porn on the modems personally, I think I was barely adolescent when we upgraded to cable)


Erotic literature?


Literature? alt.binaries.


ascii art.


They don't watch videos (speaking from experience). But you do not and have never needed to watch TV to be well educated. The same is true of the internet. The tragedy that the post points out is that text and diagrams are being artificially weighed down with video-like anchors for no good reason.


When I had a slow connection I loved tools like youtube-dl because they didn't expect to be used interactively. Network tools like browsers that just assume they can monopolize your time are probably the most frustrating things in these situations.


YouTube-dl helped a lot when I was living in a rural town a few years ago. I would download a bunch of tutorials over the weekend when I visited my parents in the city and watch them over the course of the week. It seems strange to say now, because this was the case only 3-4 years ago and only a few hundred km from where I currently live. Today I can't imagine watching in anything less than HD.


I save the links to download later somewhere else and view offline. Before that, I had ssh access to a server from a friend who copied what I downloaded (he also copies whole Debian repositories for me).


YouTube still has 240p mode for that very reason, they just do not advertise it in the UI.

Modern codecs make this super-compressed video and sound relatively watchable.


In my observation (of people who still use dial-up here in the US), they usually just do something else (like make dinner or watch TV) while the video loads.


I live in the first world and I rarely watch videos at 480p. I have 100GB of bandwidth a month, and typically view YouTube at 360p.


> Even by the most optimistic estimations, a video that is a few minutes long at 480p will weigh in at 10 megabytes, meaning it'll take them OVER 3 HOURS to download the entire thing.

Right there in the same link, it says that the video page was a megabyte. It's the sentence directly after the 'two minutes' bit. The sentence even has some 'all caps' words in it - even skimming, your eye is drawn to it. Why on earth did you stop reading halfway through the penultimate paragraph?


If you have that kind of speed, you watch YouTube in 144p!


I have done that many times and it's doable for most things that don't have hard-coded subs.


3 hours is so low in the grand scheme of things. It used to take 3 hours to download a jpg over dialup ...


I used to rely on PC Plus and cover CDs for all of my software needs. I remember needing Qt for Linux (it was all new to me) and spending hours downloading a 16MB file, only to find I'd grabbed the non-devel RPM.

We take file sizes for granted these days.


Why do you need 480p? 144p/240p was the standard on YouTube less than 10 years ago.


As someone who lives in Africa, hoorah! More of this please. For me the best feeling is visiting a web page that is almost entirely text-based. It loads in a few seconds, which is quite a rare thing these days.


Page Weight Matters by Chris Zacharias. http://blog.chriszacharias.com/page-weight-matters


This was in relation to YouTube's Project Feather - the YouTube site did not even load for those users before, and when it did, they started watching more videos even though pages took more than 20 seconds to load!


Read somewhere it was YouTube


Something I have had at the back of my mind for a long time: in 2017, what's the correct way to present optional resources that will improve the experience of users on fast/uncapped connections, but that user agents on slow/capped connections can safely ignore? Like hi-res hero images, or video backgrounds, etc.

Every time a similar question is posed on HN, someone says "If the assets aren't needed, don't serve them in the first place", but this is i) unrealistic, and ii) ignores the fact that while the typical HN user may like sparsely designed, text-orientated pages with few images, this is not at all true of users in different demographics. And in those demos, it's often not acceptable to degrade the experience of users on fast connections to accommodate users on slow connections.

So -- if I write a web page, and I want to include a large asset, but I want to indicate to user agents on slow/capped connections that they don't _need_ to download it, what approach should I take?


This seems like the thing that we'd want cooperation with the browser vendors rather than everyone hacking together some JS to make it happen. If browsers could expose the available bandwidth as a media query, it would be trivial to have different resources for different connections.

This would also handle the situation where the available bandwidth isn't indicative of whether the user wants the high-bandwidth experience. For example, if you're on a non-unlimited mobile plan, it doesn't take that long to load a 10mb image over 4G, but those 10mb chunks add up to overage charges pretty quickly, so the user may want to set his browser to report a lower bandwidth amount.
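
Purely as a sketch of what that could look like if such a media query existed - the feature below is hypothetical, not something any browser ships - a page could branch on it from JS via matchMedia:

    // Hypothetical media feature: no browser exposes '(max-bandwidth: ...)'
    // today. Shown only to illustrate how pages could branch on it.
    var lowBandwidth = window.matchMedia('(max-bandwidth: 1mbps)').matches;

    var hero = document.querySelector('.hero'); // hypothetical element
    if (!lowBandwidth && hero) {
        // Only attach the optional heavy background on the full experience.
        hero.style.backgroundImage = 'url(/img/hero-large.jpg)'; // hypothetical asset
    }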


Here in Greece, the internet is plenty fast (in bandwidth), but everything is far away, so there's lots of latency in opening every page. Going to the US on a trip, it's striking how much faster every website loads, there's no 300ms pause at the start anywhere.

Because I got frustrated at fat websites downloading megabytes of useless things, I decided to start an informational site about this very thing:

http://www.lightentheweb.com/

It's not ready yet, but I'm adding links to smaller alternatives to popular frameworks, links to articles about making various parts of a website (frontend and backend) faster, and will possibly add articles by people (and me) directly on the site. If anyone has any suggestions, please open an issue or MR (the site is open source):

https://gitlab.com/stavros/lighten-the-web/issues


Very interesting.

I would suggest swapping the current structural aesthetic of "come in and look around" for the somewhat more widespread approach of having one or more calls to action and making the homepage fully sketch out the points you want to make.

FWIW, I say this out of pragmatism. I don't mind the "welcome! browse!" approach myself, but it won't appeal to the demographic you're trying to reach: people who themselves are being paid to eat/sleep/dream modern web design.

Another thing I would recommend is using every single trick in the book to make the site fast. For example you could get ServiceWorkers caching everything for future visits (with maybe AppCache on top just because) and use the HTML5 history API so you can preload all the site text (say, in an XHR that fires after page load) and use that to make it feel like navigation is superhumanly fast.

TL;DR, use this as your playground to learn how to make sites load better. Voila, the site will be stupidly fast, and it will self-describe too, which is kind of cool. And you'll wind up with a bunch of knowledge you could use for consulting... and then you could use the site as the home base for that, which would be even cooler.

(I realize you just started this project, and that the above suggestions are in the "Rome wasn't built in a day" category)


It's funny that you mention that, because I just wanted to have a site I could optimize to hell, and it seemed apt to make an informational site about optimization for that. AppCache is obsolete and harmful now (yes, already), and I should link to the articles that talk about that, thanks for reminding me.

As for the "come browse" approach, you're definitely right, and I don't intend the finished site to look like this, but I'm also not sure how to structure the content. What do I send the user to first? Maybe I'll write a tutorial hitting all the bullet points with links, though (eg add caching to static media, bundle them, don't use heavy libraries, load js async if you can, etc etc).

Thank you very much for your feedback!


I've wanted to play around with some similar ideas for a while too, actually. I have a few loose high-level ideas - I know I want it to feel like an app, but I want to use plain JS; I want to leverage everything modern browsers can support, while remaining backward-compatible (!); I want to try odd things like using Lua inside nginx for everything (or even writing my own web server), or programmatically reorganizing my CSS and JS so runs of similar characters are grouped together and gzipping has the best effect. I also have a hazy idea of what I want the site to be about (not a content site, some sort of interactive thing) but I haven't resolved all the "but if you make it about X, it doesn't really nail all those bits about Y you wanted" stuff yet. Anyway.

Thanks for the note that AppCache is now out of the picture. I actually think I remember reading something vaguely about it being not the greatest, but I didn't know it was actively harmful. Do you mean in a security sense or it just being bad for performance?

I wasn't sure what to say about the content structure thing at first, but then I thought: take the pragmatic approach. Gather piles and piles and piles of actual content and dump it either on the site itself or your dev version. Notions about structure, presentation and content will likely occur in the process of accumulating (or writing) what's on the site.

As for what kind of content to put up, I would suggest focusing heavily on links to (and/or articles about) pragmatic, well-argued/well-reasoned arguments for lightening page load, and the various kinds of real-world metrics that are achieved when people make the investment to do that.

An obvious example: it's one thing to say "I recommend http://vanilla-js.com!", it's quite another to say "YouTube lightened their homepage weight from 1.5MB to 98KB and made it possible for brand new demographics in 3rd-world countries to experience video playback (http://blog.chriszacharias.com/page-weight-matters). Also, the reason the site feels so fast now when you click from one video to the next is that the platform only pulls in the new video URL, comments and description - the page itself never reloads."

Regarding where to start, I was thinking that a mashup/ripoff of halfway between https://developers.google.com/web/fundamentals/ and MDN might be an interesting target to aim for. I'm definitely not saying to [re-]do that much work (although I wouldn't be protesting if someone did... some of those Fundamentals tutorials are horribly out of date now), I'm just saying, the way that info is presented could do with cleanup and you can always run rings around them in various ways (layout, design, navigational hierarchy) because of bureaucracy blah blah... but you could do worse than aiming for something that feels like those sites do. Except you'd be focusing on making everything as lightweight as possible, and you would of course make the site your own as time went by. Maybe what I'm trying to get at here is that nobody's done a full-stack (as in, "bigger picture") top-to-bottom "here's how to do everything lightweight, and here are a bunch of real resources" sort of site yet, and I'm suggesting the lightweight-focused version of Google Web Fundamentals... :/

On a related note, I've come across a few websites that are nothing more than a bunch of links and a tiny bit of text describing some technical/development issue or whatever. They almost feel like spam sites, except they talk about legitimate issues and are clearly written by a person.

I'm sure these people mean well, but the low-text high-link format (or the "I'm going to rewrite what's in this link in my own words" approach) doesn't work for blog sites (possibly because of WordPress's 10000-clicks-to-get-a-high-level-overview browsing model...tsk) and similar - I'm trawling for actual text when I'm on a site like that, if you give me a link I'm not even on your website anymore.

You've probably seen sites like that too. (Note that I'm slightly griping here, I don't see your site as similar at all. I think I got a bit off track, I was trying to demonstrate the exact opposite of the direction I would suggest you go in. :P)

Also, I just thought of https://www.webpagetest.org and https://jsperf.com. Arguably microoptimization-focused, but I thought I'd mention them anyway.


> Do you mean in a security sense or it just being bad for performance?

It is bad for performance. I was going to link you to the article, but I figured I'll add it to the site :) http://www.lightentheweb.com/resources/

> Regarding where to start...

Ah, good idea, thank you. Yes, I'm not really in a position to rewrite content that exists (it would take too much time), but I would like to at least index it sensibly.

> nobody's done a full-stack (as in, "bigger picture") top-to-bottom "here's how to do everything lightweight

That's exactly what I'm aiming for, with clear steps and possibly a checklist (good idea!) on what to do.

> I've come across a few websites that are nothing more than a bunch of links

I think that's hard to avoid when making an informational site, but the links could possibly be embedded into article-style copy, making it not look as spammy. I'll keep that in mind, thank you.

> I was trying to demonstrate the exact opposite of the direction I would suggest you go in

Haha, yes, I know what you mean, and the links will be the "read more" material. I'd like to add some original content and non-time-sensitive guides to the fundamentals.

> Arguably microoptimization-focused, but I thought I'd mention them anyway.

Those are great, thank you!


Can I suggest you add caching to the css, javascript and logo?


I will, it's still early so I hadn't taken a look. To be honest, since this is hosted on Netlify I was kind of assuming they'd be sending caching headers for all static files, but I see that they aren't.

I'll look into it, thank you!

EDIT: Netlify would be setting everything properly if I hadn't turned that off... Luckily they have great support!


Whole thing's super fast now; loads in 55ms. I assume most of that is ping (I'm in Australia).


Fantastic, thanks!


Thanks for the links to lightweight CSS and JS libs; I actually need exactly that right now for a project.

Gitlab link is 500-ing, unfortunately.


Ah, oops :/ Seems to be okay now, let me know if there's something else you need! I'm looking for ideas on how to organize the docs/site at the moment.


>If browsers could expose the available bandwidth

I don't know why this seems like such an imposition, but I think I'd be uncomfortable with my browser exposing information about my actual network if it didn't have to. I have a feeling way more people would be using this to track me than to considerately send me less data.

That said, browser buy-in could be a huge help, if only to add a low-tech button saying, "request the low-fi version of everything if available." This would help mobile users too -- even if you have lots of bandwidth, maybe you want to conserve.


Indeed; as a user, I don't want the site to decide what quality to serve me based on probing my device. It'll only lead to the usual abuse. I want to specify whether I want the "lightweight" version or the "full experience" version, and have the page deliver the appropriate one on demand.


I remember when websites used to have "[fast internet]" or "[slow internet]" buttons that you could use to choose if you wanted flash or not. Even though I had a high-speed, I chose slow because the site would load faster.


It doesn't have to be your actual bandwidth. The values could be (1) high quality/bandwidth, (2) low quality/bandwidth, (3) average. The browser can determine that automatically with an option to set it if you want to (e.g. for mobile connections).

That should solve most problems without giving away too much information. But an extra button would probably just confuse people.


Progressive resources would help a lot here. We have progressive JPEGs and (I might be wrong) PNGs; you could set your UA to a low-fi mode and it would only download the first layer of the JPEG.


I think if someone wants to track you, the bandwidth is not the first thing they'll be looking at.

It's just another signal, but there's already a few tens of them, so adding one more is not going to make a significant difference.


If you consider every identifying piece of information as a bit in a fingerprint, it makes more than a significant difference; it makes an exponential difference. Consider the difference between 7 bits (128 uniques) and 8 bits (256 uniques) and then 15 bits (32K uniques) and 16 bits (65K uniques). Every additional bit makes a difference when fingerprinting a browser.


This sounds just like the idea that the website should be able to know how much power you had left on your device, so it could serve a lighter version of the webpage.

I think the arguments against are pretty much the same.


You can get the network type or downlink speed in Firefox and Chrome using NetworkInformation interface: https://developer.mozilla.org/en-US/docs/Web/API/NetworkInfo...

Then you can lazy load your assets depending on that condition.
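
For what it's worth, a minimal sketch of that approach, assuming a browser that actually implements the API and a hypothetical high-res image to lazy-load:

    // navigator.connection is undefined in browsers without the
    // Network Information API, so default to the lightweight path.
    var conn = navigator.connection;
    var constrained = !conn ||
        conn.saveData === true ||
        conn.effectiveType === 'slow-2g' ||
        conn.effectiveType === '2g';

    if (!constrained) {
        var img = new Image();
        img.src = '/img/hero-large.jpg'; // hypothetical asset
        img.onload = function () {
            document.body.style.backgroundImage = 'url(' + img.src + ')';
        };
    }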


One hacky way to do it would be to load it via JavaScript. For example, see this Stack Overflow answer [0]. Obviously not a great solution, but it works if you're dying for something.

I bet people w/ slow connections are much more likely to disable javascript, though.

    // Both timestamps are epoch milliseconds; measure how long the DOM took.
    let loadTime = window.performance.timing.domContentLoadedEventEnd - window.performance.timing.navigationStart;
    if (loadTime > someArbitraryNumber) {
        // Disable loading heavy things
    }
[0] http://stackoverflow.com/questions/14341156/calculating-page...


It's too late to disable loading heavy things at that point - the loading is already started.

Do the opposite, start loading heavy things if the page loaded quickly.

A clean way would be to set one of two classes, connection-slow or connection-fast, on the body element. Then you could use those classes in CSS to choose the correct assets for background images, fonts and so on.
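
A minimal sketch of that, with an arbitrary 2-second threshold (you'd want to tune it) and a hypothetical .hero rule in the stylesheet:

    document.addEventListener('DOMContentLoaded', function () {
        // Both values are epoch milliseconds, so the difference is elapsed time.
        var elapsed = Date.now() - window.performance.timing.navigationStart;
        var cls = elapsed > 2000 ? 'connection-slow' : 'connection-fast';
        document.body.className += ' ' + cls;
    });

    /* Then in CSS, heavy assets only apply on the fast class, e.g.
       body.connection-fast .hero { background-image: url('hero-large.jpg'); } */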


Well, I meant to leave the heavy things out of the HTML you send over to the browser and then inject them only if they can be loaded.

So, yeah totally agree with you. Should have been clearer.


>start loading heavy things if the page loaded quickly

...and not loaded from cache. You need a way to determine this reliably. AFAIK there's no way to determine this for the main page itself.


There is a proposed API for that.

https://wicg.github.io/netinfo/

And like most such APIs, it has been kicked around for a long time and it has only been adopted by Chromium on Android, ChromeOS and iOS. It'd be great if it were more widely adopted...


Yay, more browser fingerprinting data points!


Well, to be fair, as the spec notes, you can already fingerprint on speed by timing how long an AJAX call takes.

Also, "on a shit home DSL connection" doesn't really distinguish me from millions of other UK residents.


Yes it does, when used in combination with the other data points.

That said, I always get "unique" on those fingerprinting tests. You can't be "extra unique," so I guess I don't mind it.


> Every time a similar question is posed on HN, someone says "If the assets aren't needed, don't serve them in the first place", but this is i) unrealistic, and ii) ignores the fact that while the typical HN user may like sparsely designed, text-orientated pages with few images, this is not at all true of users in different demographics. And in those demos, it's often not acceptable to degrade the experience of users on fast connections to accommodate users on slow connections.

This is prejudice. People use Craigslist, for example. If the thing is useful, people will use it. If there's a product being sold, and if it's useful to the potential clientele, they'll buy it. Without regard to the UI.

In the past ten years while my connection speed increased, the speed at which I can browse decreased. As my bandwidth increased, all the major websites madly inflated.

> So -- if I write a web page, and I want to include a large asset, but I want to indicate to user agents on slow/capped connections that they don't _need_ to download it, what approach should I take?

Put a link to it with (optionally) a thumbnail.


> People use Craigslist, for example. If the thing is useful, people will use it. If there's a product being sold, and if it's useful to the potential clientele, they'll buy it. Without regard to the UI.

Craigslist achieved critical mass in the 90s, so it's not a good example. Many useful products disappear because they can't attract enough users to become sustainable. A nice UI can affect users' credibility judgments and increase the chance that they'll stick around or buy things.

[1] http://dl.acm.org/citation.cfm?id=1315064


Random idea: Get the current time in a JS block in the head, before you load any CSS and JS, and compare it to the time when the dom ready event fires. If there's no real difference, load hi-res backgrounds and so on. If there is a real time difference, don't.


Wouldn't that be measuring latency more so than bandwidth? You'd run the danger of confusing a satellite internet connection (high(ish) bandwidth, high latency) with a third-world, low bandwidth connection.


Satellite ISPs have low data caps and/or charge a lot per GB of transfer. Avoiding unnecessary downloads seems like the correct behavior in this case.

I think the best solution would be an optional http header. That way, the server could choose to send a different initial response to low-bandwidth users. If connection speed is solely available via JavaScript API or media query, then only subsequent assets can be adapted for users on slow connections.
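
Something in this spirit already exists as the Save-Data client hint, which some browsers send when the user enables a data-saving mode. A hedged Node-style sketch of branching on it server-side (the header is real; the markup strings are obviously placeholders):

    const http = require('http');

    http.createServer((req, res) => {
        // Save-Data is only sent by some browsers, and only when the user has
        // opted in, so its absence means "unknown", not "fast connection".
        const saveData = (req.headers['save-data'] || '').toLowerCase() === 'on';
        res.setHeader('Vary', 'Save-Data'); // keep shared caches from mixing versions
        res.setHeader('Content-Type', 'text/html');
        res.end(saveData
            ? '<p>Lightweight page, no hero image or webfonts.</p>'  // placeholder
            : '<p>Full page with hero image and webfonts.</p>');     // placeholder
    }).listen(8080);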


In the ideal future, FLIF [0] would become a standard, universally supported image and animation format. Almost any subset of a FLIF file is a valid, lower-resolution FLIF file. This would allow the browser - or the user - to determine how much data could be downloaded, and to display the best-quality images possible with that data. If more bandwidth or time became available, more of the image could be downloaded. The server would only have one asset per image. Nice and simple.

[0] http://flif.info/


We outsource this to CloudFlare and their Mirage service: https://support.cloudflare.com/hc/en-us/articles/200403554-W...


I think this is an important question.

Like another reply to your comment, I thought about having a very small JS script in the header putting `Date.now()` in a global, then on page load, having another script check the amount of time that had passed to see if it was worth downloading the "extra" at all. But then again, where do you put the threshold? Has anyone tried this with some degree of success?


Design your UX so that any large assets can be requested at will by the user, and indicate the file size? That way it's the user's choice if they want to load that large video over their slow network, etc.
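
A hedged sketch of that pattern, assuming hypothetical placeholder buttons that carry the real media URL in a data attribute:

    // Markup (hypothetical): <button class="load-video"
    //     data-src="/media/talk.mp4">Load video (48 MB)</button>
    document.querySelectorAll('.load-video').forEach(function (btn) {
        btn.addEventListener('click', function () {
            var video = document.createElement('video');
            video.src = btn.dataset.src;
            video.controls = true;
            btn.replaceWith(video); // nothing is downloaded until the user asks
        });
    });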


Most users on fast connections are not going to enjoy explicitly clicking to download every background image, font, etc. For videos it might make more sense, but there are many more optional assets to deal with.


Background images and fonts are examples of things probably not needed at all. I already have fonts on my computer, I don't need yours.


I have to agree with the original comment here:

> Every time a similar question is posed on HN, someone says "If the assets aren't needed, don't serve them in the first place", but this is i) unrealistic, and ii) ignores the fact that while the typical HN user may like sparsely designed, text-orientated pages with few images, this is not at all true of users in different demographics.


I think you're underestimating how perceptibly slow the internet has become for a lot of people. They don't realise they're downloading 2MB of JavaScript; they don't realise what JavaScript or CSS are. They'll say things like "I think my computer's getting old" or "I think I have a virus". More often than not this is just because their favourite news site has become so slow, and they can't articulate it any better than that. All they want to do is read their text-oriented news sites with a few images.


I don't think there is an easy way to tell the browser not to download something because the connection is slow. Progressive enhancement can work well for giving users a basic page that loads quickly with minimal assets while also downloading heavier content in the background that renders later. That's still different than putting a timer on a request to download the content (which would require JS to detect the slow connection).

If you make a page well it should render quickly under any network condition, slow or fast. As an example, you could try serving pictures quickly by providing a placeholder picture which uses lossy compression to be as small as possible. It could be base64-encoded so it's served immediately even over a slow connection. Then after the page is rendered, a request could go out to download the 0.5MB image and a CSS transition could fade the image in over the placeholder. People on fast connections wouldn't notice a change because it would load right away, while people on a 40kbit 2G connection would be OK with your page too.

The requests to download larger content will still go out over a slow connection but the user won't suffer having to sit through seconds of rendering. Maybe similar to how people have done mobile-first responsive design, people could try doing slow-first design. Get everything out of the critical rendering path and progressively enhance the page later.
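
A rough sketch of that placeholder-then-upgrade idea, assuming a tiny base64 thumbnail inlined in the img tag and a hypothetical data attribute holding the real URL:

    // <img class="lazy" src="data:image/jpeg;base64,..." data-full="/img/photo.jpg">
    document.querySelectorAll('img.lazy').forEach(function (img) {
        var full = new Image();
        full.onload = function () {
            img.src = full.src;          // swap in the real asset
            img.classList.add('loaded'); // a CSS transition can fade it in
        };
        full.src = img.dataset.full;     // kick off the full-size request
                                         // (run this script deferred or on load)
    });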


I think srcsets are a reasonable proxy for this. Serve extremely optimized images to mobile devices, and the full images to desktops.

It isn't perfect - you'll get some mobile devices on wifi that could have consumed the hero images and some desktop devices still on dial-up, but it's still a lot better than doing nothing.
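
For reference, a sketch of the attributes involved (hypothetical filenames and breakpoints); in real markup you'd put these straight on the img tag so the browser's preload scanner sees them, rather than setting them from JS:

    var img = document.querySelector('.hero-img'); // hypothetical element
    // The browser picks the smallest candidate that covers the layout width
    // times the device pixel ratio, so small screens never fetch the big file.
    img.srcset = 'hero-480.jpg 480w, hero-960.jpg 960w, hero-1920.jpg 1920w';
    img.sizes = '(max-width: 600px) 100vw, 50vw';
    img.src = 'hero-960.jpg'; // fallback for browsers without srcset support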


Server-side logs and sessions. You should be able to tell what device/browser users are on and what bandwidth they operate at, then calculate the average speed they get. You could then create a tiered service where media quality and JS features are adjusted accordingly. You would periodically process the logs to make sure the grouping is still correct. As an additional feature, users could choose in their settings which tier they want to use.

On the client side, you can achieve some of this with heavy use of media queries. https://msdn.microsoft.com/en-us/library/windows/apps/hh4535... You can basically vary the resolution of assets, or disable them, based on screen quality. This is under the assumption that someone with a retina screen will have decent internet.


There are two ways:

1. Serve the AMP version of your page (https://www.ampproject.org) which uses lazy loading and other optimizations

2. Use the Network Information API (https://developer.mozilla.org/en-US/docs/Web/API/NetworkInfo...) to load heavy assets only on high bandwidth connections.


Here's a proxy: buy a 1st-gen iPad, turn JS off, and then use it to browse the site.

If it crashes the device, you're way off.

If it's appreciably slow or clunky, find ways to improve it.

Iterate until it's fast and usable.


Which demographic likes large images that are not necessary for the task at hand?


Simple. Design your web pages to load only the functional, lightweight, essential stuff by default. Then use JavaScript to add in all the large assets you want. Users with slow connections can browse with JavaScript turned off.


I like this solution.

Unfortunately there is little consistency in how well browsers accommodate JS-less browsing. Firefox removed the ability to switch off JS from its standard settings panel. Brave lets you switch JS on/off for each domain from the prominent lionface button panel.


I found out this the hard way.

T-Mobile used to offer 2G internet speeds internationally in 100+ countries, included in Simple Choice subscriptions. 2G is limited to 50 kbit/s, which is slower than a 56K modem.

While this is absolutely fine for background processes (e.g. notifications) and even checking your email, most websites never loaded at these speeds. Resources would time out, and the adverts alone could easily exceed a few megabytes. I even had a few websites block me because of my "ad blocker", because the adverts didn't load quickly enough.

Makes me feel for people in, say, rural India or other places still at 2G or similar speeds. It is great for some things, but no longer really usable for general-purpose web browsing.

PS - T-Mobile now offers 3G speeds internationally; this was just the freebie at the time.


Disable JavaScript. You’ll be surprised at how most of the web still works and is much faster. Longer battery life on mobile, too.


I use NoScript. The web is much less annoying by default, and I can still enable scripts for those sites where I think it might be useful.

There are a good number of sites now which have entirely given up on progressive enhancement and simply don't show you anything without JS... but I generally find I just don't care, and just close the tab and look at the next thing instead.


I found that, usually, the ones that don't show anything when JavaScript is disabled are the ones loading scripts from ajax.google.com... it might appear to be so because Google is so much larger (or maybe Google did that on purpose).


I started using "Image On/Off" and "Quick Javascript Switcher" plugins to easily toggle images and js while traveling South America in 2014 to increase speed and save costly bandwidth.

Still using them for the side effects. It's nice to be able to start reading an article immediately without waiting for the jumping around of content to stop, and to actually read until the end without having modal dialogs shoved down my throat.


> You’ll be surprised at how most of the web still works and is much faster.

And you'll be more secure, and you'll retain more of your privacy.

I find 'this site requires JavaScript' to be another way of saying, 'the authors of this site don't care about you, your security or your privacy, and will gladly sell all three to the highest bidder.'


Well, that's quite unfair. JavaScript is also used for creating interactive web applications - not just tracking users. Really, your attitude comes off as unnecessarily aggressive.


There are exceptions where JS is needed. They are exceptions, though. The vast majority of the sites I see now are web pages that think they need to be SPAs. Sorry, sucks to be them, but if they didn't mis-design, I wouldn't misinterpret their intentions.


Obviously things like gdocs need JavaScript, but blogs and news sites and forums sure don't.


I think it depends on what the JavaScript is used for. I agree that blogs and news sites should be static, but forums - and in general, sites with a high degree of user interactivity - can see significant UX improvements with some JavaScript, for things like asynchronous loading, changing the UI without reloading the page, and even nice animations (although many of those can be done in CSS these days). However, graceful degradation is very important - disabling JavaScript on these sites shouldn't break them, merely impact the UX.

[Edit] "blogs and news sites should be static" -> this should read "blogs and news sites don't need JavaScript"


Agreed, enhancements are good (and often nice on a modern devices with all the bells and whistles enabled), so long as it degrades nicely.


How can one know that the first time viewing a web page? I just enable some particular websites to run JS because I know that they are indeed interactive web applications that I want to (read: have been constrained to) use. Also, most interactive web applications could stay just as interactive even if they reduced the amount of JS and CSS libs, fonts, images, icons, videos and other stuff that they thoughtlessly pull in. I disabled font loading on web pages and all the search boxes are now an "fl" ligature for me, though many times I find that's no more cryptic than before I disabled fonts, because the weird icons some people invent are just as meaningless to me as random letters. I've gotten used to the fact that an identity sign means a menu, but every other day someone invents another one, so now I can't click anything without fear and uncertainty, as most of the time no-one bothers to put a tooltip or a little label.


Is it more aggressive than the uses to which JS is being put these days?


Practically speaking, I think it's much more appropriate to just assume the admins are lazy.


I'll respectfully disagree with you.

It takes more work to have a bloated JS mess of a site than to have a small, simple, clean site. If they were lazy, they wouldn't have gotten to that spot in the first place.


Not really. It's very easy to get bloat if you integrate ad-networks, analytics tools, social media tools etc. willy-nilly without looking at all the resources they fetch.

The lazy approach WILL lead to bloat.

No news agency is running a plain jane HTML website.


You're kind of proving my point. If you add these things in, it's more work. If you make a plain HTML site, which is what these sites should be doing, then you aren't going to add that stuff in, which means less work.


I think you're forgetting content creators that aren't developers - although to be fair I don't see why you can't create an interface for the user that spits out / retroactively updates old pages/links/images.

There is definitely a trade-off between ease-of-use and cost-of-use and I feel this gap is bridged by the content created by those who could not publish bare bones.


> I think you're forgetting content creators that aren't developers

I don't understand what you mean by that. Content creators don't need to be developers for us to use simple, reliable systems.

> There is definitely a trade-off between ease-of-use and cost-of-use and I feel this gap is bridged by the content created by those who could not publish bare bones.

Yes, but I personally find the "ease of use" to be worse on heavy, slow, bulky sites. If content is "easier to use", then why are people constantly angry at slow, non-responsive interfaces? I see and feel this all the time, yet it's somehow "easier to use"? I don't see people complain when sites are fast, responsive and simple. Everyone's top complaint is that their computer/phone is "soooo slooow". Why is this, when we have extremely fast computers?


If you're hand-coding it either way, maybe, but at least in my personal experience it's much faster/less effort to drop Bootstrap and jQuery on the page and get to something acceptable-looking than to hand code just the 50 lines of js/CSS I actually need. Obviously there are many benefits to the latter approach, especially in the long run, but it's definitely not the lazier approach.


> hand code just the 50 lines of js/CSS I actually need

That's the problem. If you do legitimately need it, then yeah, it might be, but my experience says you probably don't need that.


Quite the opposite, actually: people don't know how to set up a website, let alone make a simple, static one. Many websites are created on services like Squarespace, WordPress (used by many as a CMS), other CMSes, Blogger, etc. And even for those who know how to edit text files, it's easy to start with a tutorial and end up with a 1MB+ hello-world website.


I can agree with this. I was talking about people who know how to code, but you make a good point.


Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

Antoine de Saint-Exupery


Yeah, but it takes diligence to know that.


Or they assumed everyone nowadays has a browser with JavaScript, and don't care about (or even know about) the people who won't accept using JS.


Cannot help but chuckle at the irony when I forget to allow goog/gstatic when using gmaps and get the non-blank-page stating "without js all that's left is a blank page".


This is the main reason why Brave is my default browser: you can set the default to disable JavaScript and enable it only on sites which actually do something useful with it. My data usage dropped something like 1GB the first month I switched.


Chrome on Android has this exact same setting. I currently have 26 sites allowed to run Javascript.


Hacker News even works without JavaScript; the whole page just reloads every time you upvote something.


Well, that's just the default way interactivity in websites works - submitting forms.


You could have used an iframe for each button instead of a normal form to prevent the reload of the page. Using an iframe with data: should take no longer to load than a normal form.


If only legacy didn't exist, and I can't think of a way to toggle iframe and form without js =\

I guess we're almost all on evergreen browsers now anyway...


FWIW, I just tried this in Firefox (set javascript.enabled=false) and went to my bank's website to see how it would fare. Firefox crashed. Tried again with no other tabs open, still crashed. Crash report sent.

OTOH, in Chrome the website actually works fine and feels more snappy with JS disabled. So, thanks for the tip!


Can you share the crash report IDs from your Firefox's about:crashes page? Can you share a link to your bank's crashing web page? I'd like to try to reproduce the crash. Thanks!


Sent by mail. Thanks for looking into it!


This crash is Firefox bug 1328861: https://bugzilla.mozilla.org/show_bug.cgi?id=1328861


Note though that disabling JavaScript can also slow down many sites. One good use of JavaScript is to detect the speed of the user's connection and then load in smaller and lower-quality assets. Disabling JavaScript can result in the default assets being loaded, which rapidly offsets the benefit of not loading that JS. Other sites will load in portions of the content first and use JS to load in extra chunks as requested, but load in the entirety of available content if JS is disabled, slowing down initial pageload enormously.


I can't think of a single website I know that uses JS to intelligently load things via connection-speed sniffing. It's a nice thought, but it doesn't happen. There used to be JS polyfills for responsive imagery - they were never connection-speed based, but viewport based - but this is all browser-native these days. Some things might provide simpler assets via CDN-based UA sniffing.


Yeah when I used to run over my T-Mobile data allotment (in the US) and they dropped me to whatever speed they throttle you to when your "high speed" data is gone, Google Maps wouldn't load, Facebook wouldn't load, YouTube wouldn't load. I remember using all of those things back in the days when a 3G connection was a luxury, back when Windows was the best smartphone platform. What happened between then and now that suddenly nothing works?


High paying customers are going to have high speed connections. No one will talk about it, but it's discrimination. If you are on a slow connection, they don't want your kind on their site. If you try, they will mock you for not knowing your place.


No one will talk about it, but it's discrimination.

This will make a few segments of people cringe because, much like the topic of racism, there's a school of thought that it only counts when things are exhibited in severe forms like water cannons, attack dogs or restrictive housing covenants, or otherwise people being directly told 'no' because of superficial attributes like race, gender or sexual orientation.

But as a tech guy who's slowly pivoting towards law, I've long held the belief that technology will become the next battleground for civil rights - and has the potential to even change (in the sense of expanding the definition of) how we talk about civil rights. Think along the lines of people being left behind when it comes to accessing information they need to request public resources as more and more cities move towards online-only forms, or even the use of "entitlement programs" to pay for internet access (http://www.usnews.com/news/articles/2016-03-31/fcc-expands-o...).

Now it may not be active discrimination in the sense that one will be outright told 'no', but disparate impact deserves to be at the table of discussing this sort of thing.


It seems more plausible to me that they just don't want to take the trouble to support low-bandwidth connections than that they're actively pumping up the space to keep out poor people.


> What happened between then and now that suddenly nothing works?

The average expectation changed. Back in the day, everyone was on $SLOW_SPEED, so pages were designed for it. Nowadays they can, and do, design pages for higher speeds.


Yes, and the average page size has skyrocketed, despite there being no more actual content (i.e. text). Instead we have animations of images or text fading in as we scroll down a page, and lots of JavaScript doing things I don't know about.

Kind of makes me miss the old plain HTML days - much less CPU intensive too.


But you should still make sure low speed works. Try convincing some privileged 20-something developer of that, though.


It's really not about a privileged youngster, it's about business priorities. Most of the time the cost benefit ratio doesn't justify the effort to optimise for high latency high packet-loss connections. Let's say 1% of your potential users use such connections. It only makes sense to support them if your total userbase is a large enough number. For Google, it's a no-brainer. For other sites, it's something to consider.

At work we did something similar a few years ago with our Android app. We dropped support for Android 2.3 users because we only had a couple hundred of them and it didn't justify the developer cost to maintain it. WhatsApp only dropped support a month ago. I don't think that was because they were somehow less privileged than us.

The casual ageism in your comment is unbecoming. You could reconsider it.


> Let's say 1% of your potential users use such connections. It only makes sense to support them if your total userbase is a large enough number. For Google, it's a no-brainer. For other sites, it's something to consider.

You're making it usable for that 1%, but you're also making it better for the other 99%.


You aren't necessarily, though. Efforts spent optimizing the existing functionality are not being spent adding new features.


But you're optimizing things people use frequently, not adding things they probably won't use. Adding features usually has diminishing returns as well.


Even if you just want to polish or optimize existing functionality, bandwidth usage may not be the biggest bottleneck for all or most users.


I could, but I've worked with too many examples that only care about writing new code in whatever is the latest hotness and moving on. They[0] don't want to fix their bugs. They don't care about anything but "works on my machine." They certainly don't care about using bandwidth.

[0] The ones I've worked with


Bring it up! They're newer to development than you are; they're newer to life. They're far more likely to have always had high-speed internet growing up, and not to have had that visceral experience. The initial reaction will probably be negative, but an initial negative reaction to a perceived increase in scope/work is basically a universal human trait; it's surmountable.


What is an ageism? People are always inventing new ways to get offended...


I've heard about ageism (like racism, but for people of different ages) since around 2000-2001. Really around the time baby boomers started getting close to retirement age and some companies decided it was a better deal to fire them or lay them off than pay the pensions they had earned, and also with the DotCom boom where startups would only hire 20-somethings. It's not really a new term.


> What happened between then and now that suddenly nothing works?

Single Page Applications with dozens of MB of Javascript, Google AMP (which has a JS runtime taking several minutes to load on 2G), and so on.


> which has a JS runtime taking several minutes to load on 2G

source? the entire goal of AMP is to load pages quickly


Minutes might be slightly overselling it but AMP has a bit over 100KB of render-blocking JavaScript alone before you get the actual content.

Here's the current top story when I hit news.google.com in a mobile browser:

https://news.google.com/news/amp?caurl=https%3A%2F%2Fwww.was...

Loading that in a simulated 2G connection takes about 80 seconds and at least 30 seconds of that is waiting to display anything you care about. Looking at the content breakdown shows why: ~200KB of webfonts, 1.2MB of JavaScript, 275KB of HTML, etc.

https://www.webpagetest.org/result/170208_DV_R2R1/2/details/...

https://www.webpagetest.org/result/170208_5Y_R404/1/details

Loading the same page without JavaScript pulls the content render time down into a couple seconds, still over 2G:

https://www.webpagetest.org/result/170208_5Y_R404/1/details/...


On my throttled 2G connection, it’s ~2½ minutes, and because I rarely visit pages with AMP, it’s never cached.


I live in a major city, have an iPhone 6S with good LTE coverage according to benchmarks, etc. and still routinely have AMP take 15+ seconds to render after the HTML has been received. I don't know if that's Mobile Safari applying strict cache limits or an issue in Google's side but the sales pitch isn't delivering.


I have all of those as well, and AMP takes < 1s for me to load pages.

Sounds like either a configuration issue on your end or maybe your wireless carrier.


Note that I did not say it always happens — when everything is cached, it performs as well as any other mobile-optimized site — or that it's specific to my device/carrier – it also happens on WiFi, Android, etc.

The problem is simply a brittle design which depends on a ton of render-blocking resources. The assumption is that those will be cached but my experience is simply that fairly regularly I'll click on a link, see the page title load (indicating the HTML response has started), and then have to wait a long time for the content to display. Many news sites also load a ton of stuff but since fewer of them block content display waiting for JavaScript, the experience under real-world wireless conditions is better in the worst case and no worse in average conditions.


Well, it would have to be downloaded once. It'd stay cached after that though so it's not really a concern. They use version numbers for cache busting.


On mobile cache sizes are very limited, and with the size of modern web pages it has to get reclaimed regularly. You can't rely on caching to solve poor performance.


I was in rural China with an EDGE connection on Google Fi last month.

Hacker News was pretty much the only site I visit that could reliably load quickly. m.facebook.com had a slight wait but was still bearable. I had to leave my phone for 10 or 15 minutes to get Google News.

WeChat and email worked well.

Everything else was horrible, especially ad networks that would ping pong several requests or load large images.

Opera has a compression proxy mode that helped a bit when it worked but it was still painful.

For search results, Stack Overflow, and YouTube, it was easier to ssh into an AWS node and use elinks/youtube-dl.

Using SSH as a SOCKS proxy/compression layer was insanely slow due to something with the Great Firewall.


> PS - T-Mobile now offers 3G speeds internationally; this was just the freebie at the time.

I don't think this has changed, at least not in general. The included roaming package is still free international 2G roaming everywhere except Mexico and Canada (which get free 4G), with "high-speed data pass" upgrades available for a daily or weekly fee if you want faster. They did have a promotion for the 2nd half of 2016 (initially for the summer, then extended through the end of the year), where international 3G, and in a few areas 4G/LTE, was free without buying the upgrade passes for most of Europe and South America [1]. But that's now over, and I believe it's back to free 2G internationally now.

[1] https://newsroom.t-mobile.com/news-and-blogs/t-mobiles-endle...


On the new "One Plan" it's now 128Kbps, and 256Kbps if you pay for the One Plus International plan ($25/mo).


I use T-Mobile as my ISP because the only landline choice in my apartment building is AT&T and I absolutely refuse to do business with them. I regularly hit the monthly bandwidth cap on my plan and get booted down to 2G.

I live in California -- this is not just something people internationally are dealing with.

Annoyingly, T-Mobile's own website doesn't work properly when you're throttled to 2G speed. Found that out the hard way when I ran out of minutes on Thanksgiving and couldn't talk to my family, and couldn't load their website to add more minutes.


I mainly used it for things like slack, skype and emails, and mapping.

With iOS9+ content blockers and things like Google AMP, I think the web is a lot more usable.

Apps tend to be less bloated in terms of bandwidth as well, since they usually don't load as many assets on request.


You have just discovered why apps are so good, they can download content in small amounts.


My 35Mbit cable got shaped down to 0.25 Mbit/s yesterday because we went over our download limit. It was like having no connection. I just gave up using it.

I hate the all-or-nothing approach to shaping. At least give me 5Mbit or something!


5mbps is a perfectly fine connection, they might as well not throttle you at all then. If they want to give you barely-usable internet, about 500kbps might be reasonable. 250kbps is quite slow indeed.


I wouldn't call 5mbps "perfectly fine", but I could do basic web browsing and email etc. And that's my point. I don't want to be shaped down to a barely-usable connection. Why do they need to shape at all? The only argument is congestion. And if there's congestion, they should shape us down to a reasonable level like 5mbps. No reason it should be all or nothing, 35Mbit or zero.


Having used both, I'll take the 2G mobile over the 56k modem every time.


I just looked up EDGE. It's crazy to think that the first iPhone topped out at double the speed of a 56k modem. And that I actually used my iPhone on that network sometimes, when 3G wasn't available.


I had a blackberry a bit before the first iPhone. I remember getting an update wirelessly that was something like 3MB and just thought "Good, it should only be about 10 minutes this time."


FYI, I had the same connection and I'm pretty sure T-Mobile simulates 2G by switching 3G on and off to get the correct speed on average. Breaks a lot of stuff. Almost unusable!


It's what makes me wish designers and developers would work with artificial constraints. Sure, it's easy to design and develop without really thinking about bandwidth constraints, but the reality is you will always be a better developer and designer if you set artificial bandwidth constraints in your mind and choices.

Seeking out or thinking as though you have bandwidth constraints can push you to find better solutions and thereby make your services better. The West, and the tech centers in particular, are really rather blinded by the gluttonous bandwidth use that keeps eating up greater and greater amounts of data with only marginal improvements in outcome or user experience.


I say this so much that I should probably just copy/paste it in the future but...

I used to work at a place that had a <1mbps modem and a ~7-year-old desktop. If their software didn't work on that, it needed to be optimized. I wish more places would test this way. Your site may work fine in downtown SF, but that doesn't mean it's going to work well anywhere else.


Databases too. Hosting the database on a fast machine with a lot of RAM and an SSD will hide performance problems that should be immediately apparent.


With games, it's way easier to see problems in the profiler on the minimum spec PC than it is on your dev machine. Everything is magnified.


Chrome's Developer Tools has throttling options immediately available in the Network tab.


UX guy here. I've always kept performance in mind. One of my pet phrases is that speed is part of design.

I've gotten a lot of blank stares.

That's why more designers don't bother: decision makers usually respond only to look/flashiness/branding.
