Ask HN: Internet magically gets faster when opening speedtest?
685 points by halgir on April 17, 2022 | 346 comments
I want to start by saying this is anecdotal, and I feel paranoid for even thinking it. But often my internet will feel very slow, so I'll open speedtest to check if something's wrong. When I do, all of my stalled tabs suddenly spring into action and finish loading.

The tinfoil hat wearer inside of me speculates that my internet provider is overloaded and throttling my bandwidth, but immediately prioritizes me when it senses that I'm trying to check if I'm getting what I pay for.

Has anyone else noticed this pattern? Is there a way I can test this more scientifically?


Yes, this is true. This is why Netflix runs fast.com: it serves Netflix content from Netflix servers, so if ISPs want to prioritize fast.com content they'll have to prioritize Netflix content as well. Of course, that doesn't help non-Netflix sites...


Google does the same: if you search “speed test”, they have one that runs right in the search results page.

Cloudflare runs one too. https://speed.cloudflare.com/

Those are all harder to game than speedtest.net


The cloudflare one is the most accurate I've ever found. People often accuse it of "reading low" but after much experimentation I think it actually reads accurately and most of the others read fast.

So much of the web is behind cloudflare that most ISPs are unwilling to prioritize traffic to it. It also loads a selection of dummy files of varying sizes which is a more realistic portrayal of normal internet usage than a single stream of data.


I just tried the cloudflare one and saw something interesting: while it was gathering data, the bandwidth doubled from below my advertised rate to above it, which kind of suggests my ISP recognized it was a speed test and increased the bandwidth...


I just tried the cloudflare Speedtest and it doesn’t read low, in fact it read higher than what’s possible.

Monitoring the NIC rx is the most accurate in my experience.
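
(For example, a quick sketch sampling the kernel's byte counters directly; the interface name is an assumption:)

    # rough one-second rx throughput from the kernel's counters (eth0 assumed)
    a=$(cat /sys/class/net/eth0/statistics/rx_bytes); sleep 1
    b=$(cat /sys/class/net/eth0/statistics/rx_bytes)
    echo "$(( (b - a) * 8 / 1000000 )) Mbit/s"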


> I just tried the cloudflare Speedtest and it doesn’t read low, in fact it read higher than what’s possible.

Perhaps there is confusion between megabytes and megabits.


Unfortunately, no. I intentionally constrained an ethernet link to 10Mb (in the PHY layer, so any bugs in the network stack are irrelevant) and both Fast.com and Cloudflare misread the upload speeds by up to 20 times (what should be 9.something Mbps became >200Mbps). Speedtest.net is immune to this, even using their browser client. I don't know what's causing this, but I've ruled out browser bugs. I haven't tested Google's though.


You mean you told the PHY to renegotiate at 10M with something like `ethtool -s <dev> speed 10`?
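
(I.e., forcing the rate and then verifying what actually negotiated; a sketch, eth0 assumed:)

    # force 10M full duplex, then confirm the negotiated link parameters
    sudo ethtool -s eth0 speed 10 duplex full autoneg off
    ethtool eth0 | grep -E 'Speed|Duplex'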

While stumbling around attempting to figure out `tc qdisc` a while back I found that the shaping it was applying was very synthetic, such that asking for low bitrate and high latency would mean the kernel would just wait a second or two then shunt several KB of data through at once. IIRC I was just playing with the default approaches you'd find bandied about on tutorial websites and such. (I'm still looking for a way to synthetically limit a link in ways that are physically accurate.)
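
(For reference, the tutorial-style recipe I mean is roughly this, following the usual netem examples; device and numbers are placeholders:)

    # netem delays packets at the root; a child tbf caps the rate. With a
    # generous buffer/limit the kernel can release several KB at once.
    sudo tc qdisc add dev eth0 root handle 1:0 netem delay 200ms
    sudo tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 1mbit buffer 1600 limit 3000
    sudo tc qdisc del dev eth0 root   # undo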

Remembering that experience got me thinking - without any idea what I'm talking about, I'm wondering if the PHY layer is doing something vaguely similarly stupid-simple that does technically limit the line rate to 10Mb at full blast, but still allows throughput to very briefly burst higher than that.

I think speedtest websites try to measure both the burst rate and the line rate, so perhaps something's gotten very tangled up on both the PHY and JS sides.

Now I'm curious what model PHY (well, NIC) you're using.

FWIW, Chrome's devtools has a network rate limiter built in (network tab, dropdown that says "No throttling", open that and hit "add"... aaaand remember that it's persistent (for just the tab) until you turn it back off :) lol)

Edit: Just found https://news.ycombinator.com/item?id=31063184 downthread describing seeing 100Mbps through a 10Mbit hub. I think you've either found a technical bug or a, uh, "the speedtest is good so my internet must be fine" "bug" (which would be very interesting).


Ah, apologies for not responding here. It's (shocks in disgust) a Realtek one, I forgot which specifically, but the "smart" switch (ugh, I hate naming everything "smart") definitely said that it's only a 10M Ethernet connection.

Using Chrome's limiter, capped at 5,000,000 bits/sec (5Mbps) in both directions:

Speedtest.net: 5.00Mbps in both directions (nice, are they including overhead in these calculations?)

Cloudflare: 4.80Mbps downlink, 4.81Mbps uplink (well, it's true now)

Fast.com: 5.0Mbps downlink, 58Mbps uplink (holy crap, that's not what I expected)

So I will re-test Cloudflare at the link-limited PHY, but Fast.com's test seems a little weird.


All good - realtek aren't that bad at the end of the day, I'm running my internet connection through a 100M ADMtek card from ~2004 :D

Well it probably was 10M then. The port LED on the switch might've changed color as well. (Smart reminds me of the consumer-lite answer to SDN.)

I just tried fast.com myself using 100kbps and it quite happily measured at line rate (6.8M+800k, yay D:). For some bizarre reason I found I needed to open a new tab, open the devtools, set the network speed, then load fast.com for it to work. My guess is that Chrome applies connection ratelimiting metadata as new connections are created, such that existing sockets still in keepalive aren't affected. I'd say that's at least a documentation bug.

I'm not really able/sure how to reproduce the other comment I linked (using an actual 10M switch and seeing 100M of throughput). That's really interesting.


If you're using a VPN, I've noticed that behavior when the VPN was compressing data in the background, since Speedtest data compressed well.


What VPNs do compression? And don't speed tests normally use TLS?


OpenVPN supports compression, and Private Internet Access used to enable it by default. I don't know if they do with Wireguard (now the default). I also don't know if most bandwidth testers use TLS; I would actually assume not, since it would add a lot of CPU on high-bandwidth tests for not a lot of benefit. At any rate, the time I noticed this was before https had become nearly as widespread.


Which is more likely: that these tools are all broken, or you didn't actually set the ethernet link speed successfully?


It's (shocks in disgust) a Realtek one, I forgot which specifically, but the "smart" switch (ugh, I hate naming everything "smart") definitely said that it's only a 10M Ethernet connection. So definitely double-checked that.


I usually use the DSL Reports test [0], downstream is similar (CF: 193, DSL: 205; I pay for 200, but speeds do vary from slightly more to slightly less over the day), but the upstream for CF is plain wrong, it gives 25 Mbit vs. the correct 50 (both what I pay for and reach normally) for DSL Reports

[0]: http://www.dslreports.com/speedtest


Anecdotally, for what it's worth, my personal preference to avoid gaming the system is to run iperf3 on a random VM that has a solid tier-1 network and many peering agreements. I start it on a non-standard port (non-standard for iperf3) and test bi-directionally with TCP and UDP. On each end I use the "watch" command to call a simple bash function that sorts the output of "netstat -s" to get TCP retransmits. I also watch the output of "tc -s -p qdisc" on each end to see how many packets had to be reordered or dropped. This can be useful for tuning or picking traffic shaping methods.

In reference to gaming the system, one can also run iperf3 on ports used by https, VoIP, common BitTorrent, and other services to see if the ISP is blindly traffic shaping by port, revealed as different behavior per port.
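
A sketch of the basic workflow above; the host and port are placeholders:

    # on the remote VM (5301 is an arbitrary non-default port)
    iperf3 -s -p 5301

    # on the client: TCP in both directions, then UDP at a target rate
    iperf3 -c vm.example.com -p 5301 -t 30
    iperf3 -c vm.example.com -p 5301 -t 30 -R
    iperf3 -c vm.example.com -p 5301 -u -b 100M

    # on each end, in separate terminals: retransmits and qdisc drops/reorders
    watch -n1 'netstat -s | grep -i retrans'
    watch -n1 'tc -s -p qdisc'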


Cloudflare's test could easily be gamed if ISPs did some passive SNI sniffing for that hostname and boosted speeds for 15 seconds. But maybe Cloudflare will make it a cdn-cgi endpoint and randomize the endpoint across random customers' sites at some point in the future, if they discover ISPs doing this.


On someone else's Cox connection and seeing ~450 with the Google search results test but only ~150 with the fast.com test. Interesting.


I worked at a cable company for a while, and the guy who sat behind me managed gigabit connection rollouts. He always told the techs in the field to use fast.com because it was the only one they could rely on. He knew the cable company prioritized connections to people who were checking their internet speed.


Doesn't Netflix also provide hardware with caches to ISPs? Thought I saw that somewhere. (Not saying you're wrong; just mentioning what might be another factor.)

EDIT: Here's what I was thinking of: https://openconnect.netflix.com/en/appliances/


Strangely, fast.com still reports faster connection speeds than my broadband is supposed to offer. No, my broadband is not faster than what I'm paying for; I can see that when monitoring actual transfers (including the transfers during the tests). They deliver about 95% of the promised speed, not the 110% that fast.com or speedtest.net measure.


In New Zealand it is somewhat well known that our local fibre providers actually provision each connection for slightly more than the agreed speed. E.g. 100mbit is usually provisioned at around 110mbit. So our speed tests usually come in just over 100mbit (TCP etc. overheads take some of the overprovisioned headroom). Wonder if that is what is happening to you.


This seems to fall off at gigabit, but I’d love to test with 2, 4 or 8gbps which Orcon (and maybe others) are offering.

My ONT only supports gigabit out, so I can’t test faster options. That said, fast.com has told me I’m getting 1.2gbps recently…


the common problem with speedtests is that they're all pretty brief. whether it's speedtest.net or fast.com, it's over in a few seconds, so they're not necessarily measuring the sustained speed your ISP is giving you, just the burst speed.

that being said, getting a higher burst speed still makes browsing the web a much more pleasant experience, so it's not like they're just gaming the speed tests.


You can't always saturate transfers to and from random servers whose bandwidth is not controlled by you.


This doesn't make sense to me as an explanation for OP's reported behavior: that opening a speed test made other content load faster than it was loading before the test was opened. (It doesn't make sense to me whether or not the other content was Netflix, and presumably it was not (all) Netflix.)


Time Warner used to have a “turbo” mode where connections would temporarily speed up when you started a download.

Pretty sure many ISPs do something like this - especially on mobile. If you have access to a prioritized phone, the side by side comparisons are pretty interesting.


This is not necessarily true. It's trivial for an ISP to exempt Speedtest.net traffic so it doesn't get throttled while the rest of your traffic does.

It’s unlikely they would do something more complex like lifting all traffic limits once a connection to Speedtest.net opens.


At least in the old days, they'd have to unthrottle everything to get decent speedtest numbers. The cable modems the ISP issued around here had FIFO network buffers capable of buffering 5-10 seconds of traffic.

So, any slow connection would stop the entire household from having decent internet.


I'm skeptical that what they're saying is true. If so, a simple curl to the site would be all that is needed, without running a full speed test.

It'd be great if they could be more methodical and track down what's really going on. I also wouldn't be surprised to see an ISP doing something like this, but at the same time, it's much more likely to be networking issues on the consumer's side than a malicious ISP imho.


I'm not. This would not be the first time ISPs have lied to consumers about their service. https://stopthecap.com/


I’m now imagining a consortium of software companies that all put most of their service requests behind a shared speed-test domain, to do the same thing as a common good

Maybe this could even be a standalone service


Does that mean that having Netflix streaming in a window would make my connection faster as well?


Probably not; they would likely prioritize your packets to Netflix addresses, but everything else would be subject to whatever traffic management they apply.


I mean, it's a bit of an odd choice because it's pretty easy to detect if a user has "opened fast.com in the past few minutes"


That's why we need DNS over HTTPS and Encrypted Client Hello to become widely used. Then there would be no way of knowing, as long as Netflix uses the same infrastructure to serve fast.com and netflix.com


"Need" is too strong. Without going into the arguments against DoH, we don't "need" it, because there is an alternative: run your own recursive DNS. That should be something that a consumer could switch on or off. A default install of Unbound, for example, seems to /just work/.

Downside: if you have any kind of home network, running your own recursive DNS probably implies running your own DHCP, which isn't normally a consumer alternative (you could do it using settings on a domestic router, but not if it's crippled or locked down). Also, DNS caching won't be as effective (but I haven't noticed that effect).

Upside: all kinds of problems resulting from relying on DNS servers controlled by others disappear. You're not relying on them any more.
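
As a sketch of how little it takes (Debian-style paths and the LAN subnet here are assumptions):

    sudo apt install unbound
    cat <<'EOF' | sudo tee /etc/unbound/unbound.conf.d/lan.conf
    server:
        interface: 0.0.0.0
        access-control: 192.168.1.0/24 allow
    EOF
    sudo systemctl restart unbound
    # then point your clients (or your DHCP server) at this box for DNS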


Recursive DNS is still unencrypted and would be visible to your ISP


Yes, they can observe my queries to authoritative servers. But I didn't claim that running your own recursive DNS guaranteed privacy; I only claimed that it dispelled all problems arising from relying on someone else's DNS recursors.

I happen to trust my ISP, BTW; I really just like running my own because I can observe what it's doing.


So your “alternative” is strictly worse / solving a different problem.


> I happen to trust my ISP, BTW

Unless you can prove otherwise you should assume they're selling you out.


I can't "prove" it. But my ISP is a nerd ISP; it's Andrews & Arnold, a rather unusual outfit. If they were caught selling us out, they'd lose all their customers in a flash.


What if I want privacy on a non-home network? Huge part of my traffic is over 5G/LTE.


There are some apps (like Intra) which allow you to connect to a built-in (cloudflare, google) or custom DoH server (your own or other providers which support DoH).

Nextdns have their own app for this as well.

For Android 9 or higher there should be an option in network settings.


Right, I specifically use DoH with Cloudflare on my phone. I'm talking about the "we don't need this" reply above, which is a suggestion for tinkerers, not a good recommendation for general population.


VPN if that isn't blocked or throttled by your carrier?

Though you need a trusted host to act as the other end, or you are just swapping one monitored link with another, and finding that could be a task in its own right depending on your threat model / paranoia level.


That wouldn't really help; the network activity signature of a speed test is noticeably different from browsing or video streaming, and even if it weren't, the connection for streaming lasts a lot longer than the test. You could defeat it with a timer.


Netflix can literally implement fast.com to do the exact same operation as when streaming and then measure performance of the stream.

Yes, you could do a timer, but that would mean e.g. the first 30s of all Netflix streams would be much faster and then slow down, just in case they were in fact speed tests. It would also be difficult to differentiate between running a speed test vs. a stream that gets its connection closed and reconnects.

All of this is an order of magnitude harder for ISPs to deal with than unthrottling traffic for specific domains. It doesn't have to be perfect, but making it prohibitively more expensive helps a lot.


Very different IPs for me.


1. Add an <img src="https://fast.com/pixel.png"> on netflix.com

2. ???

3. profit


If you're a web developer, you owe it to your users to add this to your site even if you have no connection with Netflix or Fast.com. Randomize the image name every time to avoid caching - it doesn't matter if the image isn't found.


Do you want tracking? Because that's how you get tracking


They would still only prioritize traffic from Netflix's servers, so it wouldn't help you, unless Netflix starts a hosting service...


I mean, it's just as trivial for Netflix to make periodic requests to fast.com from their streaming apps if ISPs were using that as a metric to throttle their traffic.


I’m not sure why, but I’m seeing 1mbps on fast.com, and 20mbps on speedtest.net. Weird how there’s such a huge difference


Well, that would support the creation of fast.com: your ISP is either lying to you (prioritizing only speedtest), or they're punishing Netflix. Or both.


T-Mobile and others offer “HD Video” as a premium. So they cap the connection to Netflix to force 720p video.


I'm glad that at least they'd offer 720p as "non-HD", it's not too bad compared to YouTube’s 360p default in Safari iOS. Heck 720p is the limit there.


I would do a little further investigation. 1080p on Netflix is something like 5 Mbps. If you have a router that can give you real-time feedback, or maybe GlassWire, that's probably how I would measure it.

Also make sure nothing else is consuming your total upstream bandwidth. With asymmetrical bandwidth, TCP acknowledgements that get choked out or buffered will limit your downstream bandwidth too.


Are you on mobile? Some mobile providers limit streaming quality and they do this by limiting bandwidth to streaming providers.

For example, Visible will never be faster than 2 Mbps to fast.com.

You'll get faster speeds on Speedtest.net because they don't throttle non-video-related sites.


I almost forgot Visible limits video streaming. My phone has an always on Wireguard VPN connection to my home which prevents Visible from limiting streaming. Just tested. From my current location, which doesn't have the best signal, with the VPN on I get 66Mbps to fast.com and with it off 2.1Mbps.


Ah this is probably the explanation. I was using my phone as a hotspot for my laptop.


That would explain why fast.com was slow and speedtest.net was fast. Wow, I thought the fast.com servers were just a bit overloaded when I checked recently after the tariff upgrade.


Is there a way to automate this speed test every few minutes to make sure the ISP prioritization stays up all the time?
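
(If the unthrottling really just keys off requests to the test site, even a dumb cron job might do; a sketch, with the URL and interval being guesses:)

    # crontab -e: poke fast.com every 5 minutes
    */5 * * * * curl -s -o /dev/null https://fast.com/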


It's probably per-traffic, not a blanket speed boost, lol


Can't ISPs just prioritize fast.com based on its URL? Or do fast.com and Netflix share the same IPs?


Yes, they could just check if fast.com was resolved recently.


And this, ladies and gentlemen, is why net neutrality is important.


It's also an example of how it's already been lost.


In the US we had it for a brief period of time. Then we got an FCC chair who not only took it away, but actively trolled people who were angry about it. (And that's before we even get into the issue of the public comment ballot box getting stuffed by copypasta spam in favor of killing net neutrality, and the wink-and-nudge blind eye turned to it.)

That's the double edged sword in the US of having executive level departments with wide latitude to set regulations with the weight of law behind them but not the same high barrier of changing them as it takes to change a law. Technically this allows agencies to be more nimble, or as much as possible in any lumbering bureaucracy. On the other hand it means that very important pieces of public policy are even further removed from election accountability and subject to the whims of different administrations.

Whether you're for or against it, the Keystone pipeline is a prime example of the insanity that results from this. It was approved & construction begun under one administration, a final portion of it was put on hold by another, then opened again by a third, and a fourth stopped it again, resulting in its backers cancelling it (seemingly) once and for all. Forget about the difficulties of generation-long projects; this makes anything longer than 4 years highly uncertain.


You can have plenty of projects get done that take longer than 4 years. You will have problems getting projects done that half the country hates if it takes longer than 4 years.

This seems like a feature not a bug.


It's a feature and a bug. A bug because it doesn't take anywhere near half the country to block something or push it through. A vocal minority or well-funded politically connected group is all it takes. Refer back to my net neutrality example. Polls generally showed a significant majority of the public was in favor of net neutrality, I've seen polls peg it anywhere from 57% to 83%.

Popular policies around the issue created by legislative statute instead of agency rules would have been a lot harder to repeal.


It's not about "half the country hates it", though. Plenty of projects have widespread support, but are only backed by one party (or no party). It's clearly a bug, as it causes undue preference for the status-quo above and beyond the popular will (and the common good).


One entire point of government is to work on multi-generational projects — like nuclear waste storage.


Maybe there would be tangible progress on waste storage/processing if government set standards and then got out of the way. That seems to be working in the area of space launches.


something half the country hates can be good for the whole country. think COVID vaccines and lockdowns


Like the lockdowns did jack. Just look at the numbers and the spread of the virus with lockdowns in place. Hell, look at the Chinese in lockdown still today. Lockdowns were all political and crushed a huge portion of this country that the one voting party didn't like.


That’s pretty defeatist. A cartel of corrupt politicians and large cable companies conspired to regulate against it. These regulations can just as easily be removed and replaced with proper net neutrality rules.


There is the cartel effect going on (lobbying against municipal broadband), but even in the absence of that, it's still tough to be an ISP. Think about how much money it costs to dig up a street and put cables under it, and then run them into each house along that street. People that want to use the Internet don't have enough money to do that, and that's why the business is tough.

My experience from working at fiber ISPs is that the infrastructure is the hard part. Google Fiber got a good deal in its partner cities to run the fiber next to existing power lines (which not every city has, they're all underground here). When they ran out of easy install opportunities, expansion stopped. My experience was the same at a NYC-based ISP. If your building was on a street that already had open access tunnels, then we could serve you. If not, you were out of luck.

If you figure out how to profitably get wires to people's houses, you solve the ISP's problems. I've seen proposal after proposal for trenching, microtrenching, nanotrenching, picotrenching, attotrenching... and none of these worked in the real world. (I might have made some of those up; I think "attotrenching" is what I called just running cables into people's windows and not burying them. Didn't try it, but I imagine it would irritate people with its intrinsic flakiness and ugliness.)

The rest is simple: you can set up a full-fledged fiber ISP for under $1000 in equipment! Figure out how to dig up the street profitably and you unlock a ton of wealth. (This, incidentally, is why people keep trying to sell wireless solutions. No streets to dig up. WiFi with pringles can antennas! 5G! A huge constellation of satellites! And honestly, it kind of works. But not so well that people are switching from their fiber connection to Starlink or whatever.)


> I think "attotrenching" is what I called just running cables into people's windows and not burying them.

Behold, MSU dorm LAN in early 2000s. https://medium.com/@pv.safronov/moscow-state-university-netw...

Partial “topology” shot: https://chronicles.igmsu.org/wp-content/uploads/2015/10/topo...

Home LANs and early ISPs were built with the same equipment by the same kind of enthusiasts. Typical commieblocks have enough space for cable shafts inside, though. Then the building roofs would get connected (with any kind of cable, of whatever properties, you could find) in the same grassroots manner. All that free libertarian enterprise, and now, after a number of tech upgrades, mergers, and so on, you can get gigabit for ten bucks.

I get that it's not that suitable for suburban installations, but the idea is that it all starts small and ugly. That's why your cable corporations are never ever going to allow even a tiny bit of alternative to happen, heh.


That's beautiful and I love it.


There are plenty of feasible wireless ISP technologies. They have a directional antenna on the subscriber side, pointed at a shared tower.

I've had mixed experiences with them. The wireless link itself tends to be 100% uptime with no bandwidth contention. The quality of the network between the tower and the internet backbone varies wildly depending on the ISP.

Anyway, a last mile of tens to hundreds of Mbit symmetric over multiple miles is a solved problem these days.

The remaining problems are mostly political. Sadly, that's been true since dial up modems became obsolete.


>> If you figure out how to profitably get wires to people's houses, you solve the ISP's problems.

Same as they did with electricity a century ago. If you want to provide service to the profitable urban area then you must run wires to the non-urban areas too.


Heh. Instead, you just need to sell the unprofitable parts of your network to Frontier.

Their business model seems to be using sales volume to make up for marginal losses.


>If you figure out how to profitably get wires to people's houses,

The same way we got power to the homes: government assistance, and the private companies not expecting it to be totally profitable within a few years.


FWIW, where I live, "grave digger" is a synonym for the lowest-paid profession.


I didn't say it couldn't be regained. But it will be an increasingly uphill battle. Regulators and politicians have been sold to the highest bidders.


Compliance can be forced by law in a country which is capable of self regulation.


It is quite obvious, at least from outside the USA, that “net neutrality” there is how a certain group of corporations has been trying to lobby for the codification of rules preferential to them, to use in battle with another group of corporations.

Anywhere in the world there are local and global popular services that generate high volumes of traffic. Providers naturally care about getting the shortest paths to them, plan in advance, join local exchanges, and so on, even if they don't have any direct agreements. For starters, it might lower your traffic costs. In some locations there are mid-tier ISPs offering just good connectivity with all locally popular networks; they offer a solution for low-tier ISPs for some extra coins. However, it is hard to imagine that some ISP would try to make a big source of traffic pay for what has already been paid for by customers.


> It is quite obvious, at least from outside the USA, that “net neutrality” there is how a certain group of corporations has been trying to lobby for the codification of rules preferential to them, to use in battle with another group of corporations.

Complete nonsense. "Net neutrality" means that there is no preferential treatment of packets based on origin or destination.

> However, it is hard to imagine that some ISP would try to make a big source of traffic pay for what has already been paid for by customers.

I think you need to work on your imagination. ISPs in the USA have already tried this (not sure if it is still happening).


I am not talking about media propaganda, I am talking about what is happening at the source. This is just another chapter of the peering wars, far from the first in the histories of Cogent and its enemies. The actual question here is whether ISPs see themselves as equals (so they can peer with each other and let traffic fill the links naturally) or as belonging to different levels of a virtual hierarchy (so one has to pay the other). Netflix, with its traffic volume and asymmetry, is used as public-attention-grabbing leverage here, and government decrees are supposed to make one side weaker than the other. Anyway, that has been going on for a decade, and lots of bullshit has been promoted by both sides to muddy the waters.

As for the latter, my point was exactly that in a “normal country” it's hard to imagine, but giant corporate clashes managed to turn this into some big argument about “freedom”.


> Complete nonsense. "Net neutrality" means that there is no preferential treatment of packets based on origin or destination.

Categorically false. There’s no “net neutrality” legislation enacted or proposed that doesn’t have giant and vague QoS and general network operations carve-outs.


The EFF was fairly happy with the FCC's Title II proposal back in 2015:

https://www.eff.org/deeplinks/2015/02/huge-win-open-internet...

In 2018 Markey and others proposed simply overturning Pai/Trump's overturning of Title II.

I don't know precisely what you're referring to ... the closest I could find was the EFF's objections to the FCC proposals from 2011 which had things like "vague QoS and general network operation carve outs":

https://www.eff.org/deeplinks/2011/02/pt-ii-eff-evaluates-fc...

The EFF seemed much happier with the FCC proposals 4 years later.

What am I missing?


What does the EFF's happiness have to do with anything?


I consider them one of the most on-point organizations in the fight for actual net neutrality.


> "Net neutrality" means that there is no preferential treatment of packets based on origin or destination.

Which is utter nonsense for any large service provider that requires traffic engineering to handle elephant flows.

This is the problem with blanket slogans: they constrain the network in ways that impact user experience dramatically, and it is why a LOT of internet pioneers oppose net neutrality.

Encouraging open access networks seems to strike more at the heart of the matter.


I have long worried that net neutrality rules mis-applied would prevent the default availability of quality fair queuing and AQM solutions along the edge of the network.

https://blog.cerowrt.org/post/net_neutrality_customers/

https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-...


Only if you use the internet primarily for streaming. If you use the internet primarily for something else, with "net neutrality", you will be paying for the video streaming traffic (which makes up 60% of downstream traffic) even though you don't use it for that. If Netflix is required to pay the ISPs, Netflix just increases their prices and it doesn't affect internet users who don't primarily stream video.


Damn, maybe we should get rid of "electricity neutrality" too. Would be great if we just charged extra to all these bitcoin miners and marijuana grow farms so I wouldn't have to subsidize their electricity use.


You pay for electricity by usage. We don't have "unlimited" electricity plans. This is why you do not subsidize Bitcoin miners.


That's not the full picture.

Electricity prices are dominated by supply vs. demand. There are many sources of electricity available, some of them being very cheap (hydro) some of them expensive (currently gas).

When demand for electricity goes up, the cheap power sources are going to be running at capacity, and utilities will need to buy more expensive power. This increases the marginal cost of electricity for everyone.

So just because Bitcoin miners pay for electricity, doesn't mean we aren't subsidising them. We are subsidising them by paying more for imported power even though our needs could easily be covered by cheap, local power sources.

The way to fix this would be to charge progressive prices depending on usage, i.e. every person gets to use 1000kWh at the cheap local hydro power price, then the price increases gradually, and for any electricity usage beyond 10000kWh/person/year you pay the full market price of electricity.


What you are describing is merely competition. Everyone pays the same marginal amount per kWh for a given amount of electricity—ignoring minor details like connection charges and time-of-day pricing. Yes, having more aggregate demand will tend to raise prices for everyone, but that doesn't amount to a subsidy.

Internet access is typically billed as "unlimited" so those who use it very little pay a high rate per bit transferred, while those who use it a lot pay a low rate. Some of this is due to the base cost of providing service (fixed last-mile connection costs incurred whether the line is used or not) but the rest is a subsidy from low users to high users. How much of each is debatable; the costs of home internet are largely concentrated in the last-mile connection costs rather than bandwidth and data centers, yet the bandwidth still isn't free.


> that doesn't amount to a subsidy

It's really the same thing as in the flat rate case. It's a bit less unfair since the cost rises linearly, but it's not enough to leave small consumers unaffected. People who consume small amounts of a utility still pay more because of people who consume a lot.


> People who consume small amounts of a utility still pay more because of people who consume a lot.

No, they really don't. People consuming small amounts and large amounts both pay more due to higher aggregate demand. If it weren't for the many small users the aggregate demand would be lower and the large users would also be paying less—do you consider that a subsidy in the opposite direction? No, of course not—because a price being bid up by competing consumers is not what is meant by the term "subsidy".

Now, if you banned large users from the system, or imposed discriminatory pricing as you previously suggested for the purpose of keeping prices low for the small users, that would be a subsidy: interference in the market to benefit one group at the expense of another.


Internet companies are free to offer internet plans other than "unlimited" for people who don't want to stream a lot of video.

On mobile most plans are already "limited", but that hasn't given us net neutrality on mobile.


How much does it cost (in energy or materials) to transmit and route 1GB of data?


With electricity you don’t care where your energy is coming from. It’s not like “Oh! I only charge my laptop with nuclear energy because it runs better”.

With internet, where the data comes from and goes to is the whole point.


I do though -- it costs money to produce electricity. So if more electricity is used overall my costs ultimately go up to accommodate. Perhaps fuel prices increase, or to offset the cost of additional power plants to support the higher demand, or any number of other reasons. Paying by your usage does offset this but it doesn't help if the total amount of demand increases.

It also costs more money to support higher bandwidth of network travel. But unlike with electricity, there is no cost (to network providers) to generate the data, since the data is usually paid for by the source. This means the average consumer doesn't get any of the benefit of the increased pricing -- it has not changed how the internet providers charge to consumers at all, only how they charge content makers. And customers who like the content get increased costs on top of that.

Therefore, I don't see any benefit by removing net neutrality -- it can only hurt consumers. If it only benefits network providers, why should I support it?


Industrial usage rates and rules are different in every country, I believe, if you use huge amounts. At least here in Spain I pay more in monthly fees because I tend to use more electricity for heating. So there's a baseline kWh-a-month consumption. There are also different contracts for when you want to use a lot of current at the same time.


This is already done to some extent in sensible parts of the world, where consumer electricity is priced differently from electricity for business purposes... but yes, more fine-grained control would lead to better prices for most users.


Honest question: is there any place in the US where electricity is subsidized or has discriminatory pricing?


California, sort of: they figure out what sort of place you own, pigeonhole it with other "similar" places, and charge you more per kilowatt-hour if you use more than they think you need. It's a Procrustean bed approach.


A lot of electricity in the US is charged differently during "off-peak hours". I know some EV owners who take advantage of this by scheduling when they charge their cars.


Thanks, that's a more relevant example of discriminatory pricing. I wonder though if people defending net neutrality would look at this and think "off-peak pricing is wrong, everyone should pay the same..."


> I wonder though if people defending net neutrality would look at this and think "off-peak pricing is wrong, everyone should pay the same..."

Probably not, as it has nothing to do with where the electricity is coming from or how it's being used. Having different rates for different times of the day which still apply to all users and all services is not generally perceived as a violation of neutrality.


Suppose there was a broadband provider charging more for data transmitted when the network is congested, don't you think it would be a violation of net neutrality?


No, not as long as they charged everyone equally for the same service during the congested times without regard for how the bits were being used.

The point of net neutrality is that ISPs should ignore the content, source, and destination of the packets they carry when it comes to prioritization and pricing. QoS is fine as long as it's under the subscriber's control (for example, via DSCP tags), but it can be an issue if it leads to the ISP prioritizing one subscriber's traffic over another's based on the services being accessed. Tiered services (paying for higher bandwidth, lower latency, etc.) are also fine so long as the tiers assigned to each class of traffic are chosen by the subscriber and not tied to specific protocols or peers.

Consider the USPS as a model of neutrality: they offer various grades of service from bulk package deliveries to Priority Mail but everyone paying for the same service gets (more or less) equal treatment.
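
To make the DSCP point concrete: subscriber-controlled tagging can be as simple as marking your own outbound packets. A sketch (iptables is one common way to do it; the port is illustrative):

    # mark outgoing SIP signalling (UDP 5060) as Expedited Forwarding
    sudo iptables -t mangle -A POSTROUTING -p udp --dport 5060 -j DSCP --set-dscp-class EF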


Actually, my point above was that if we are going to get rid of net neutrality, it makes logical sense to also get rid of neutrality for electricity since it's more likely to be beneficial there than for the internet.

I'm honestly surprised at the backlash against net neutrality on here.


But there isn't neutrality for electricity.


Probably not many, but you're probably not going to have much luck changing the minds of people on net neutrality. Judging from your comments, you and I probably agree on the issue, but I think technical folks in particular have pretty well-entrenched opinions on the topic.


Electricity is routinely subsidized for large companies building new factories such as the GigaFactory Tesla is building in Nevada.

You can also look up "name of area / state" and search for electrical tariffs.


Ok, but these subsidies are not based on a specific group. Some might think to be bad to subsidize Tesla, but would there be such a thing as subsidies for weed farmers?

The reason I am asking: unless there are such subsidies, it would mean that people pay proportional to their usage, so OP's joke about "electricity neutrality" would not only be wrong, it would be against net neutrality as well.


By way of example some states have discounted rates / special rates for cotton gins. Here is one. https://www.epelectric.com/files/html/Rates/TX_Tariff_Schedu...

Such things are complicated to figure out fully, the subsidized power might not be subsidized directly by the power generators / power distributors. In some cases two bills are generated for the duration of the power subsidy so the subsidized part only sees the subsidized rate.


Definitely in WA state - certain counties with hydro plants sell power locally at ~$0.02 and sell outside the county at ~$0.2.

They changed the definition of "local" after crypto miners flooded in, to limit exploitation and boom/bust speculation.


Isn't this why Internet service providers have multiple bandwidth tiers? If you don't want to use your Internet connection for bandwidth-intensive tasks, you can save money by buying a slower plan.


Through peering agreements, when one peer is handling a disproportionate amount of traffic, they already pay.


Reminder that Netflix was a big proponent for axing net neutrality laws.

Disagree with the premise though: Netflix has fast.com to make it transparent to customers when their Netflix traffic is being throttled, and hence providers naturally do not throttle Netflix.


>>> Reminder that Netflix was a big proponent for axing net neutrality

No, I remember the opposite.

Netflix argues for net neutrality because that forces last-mile internet providers to absorb the bandwidth costs of Netflix's business model.

https://www.theregreview.org/2021/11/10/kim-can-netflix-win-...

' In the original claim, Netflix v. SK Broadband, Netflix invoked a net neutrality argument—based on the idea that internet service providers should treat all content equally—and argued that SK Broadband would discriminate against Netflix by imposing high network usage fees. Netflix also argued that SK Broadband should cover the increased costs of network traffic since it has an obligation to provide internet services to its customers. '


*should be an option that you can book at your ISP.


Given that I only have one ISP available to me, how would that work?

This is a problem of their own making. They can't have their cake and eat it too. Comcast has a monopoly in my area on high-speed internet. Every other option is 25mbps (on a good day) or below. Comcast has shut down all potential competition.

So yea if they want to be a regional monopoly they might have some regulations placed on them they don't like.


Please elaborate


I was on the phone with a buddy trying to troubleshoot local file transfer speeds vs. Speedtest results.

Speedtest kept reporting 100Mb/s range speeds (the tier he was paying for). Local file transfers however were stuck in the 1MB/s range.

After some questions and troubleshooting he discovered that he had used a 10Mb/s hub instead of a 1Gb/s switch that was sitting right next to it. A simple mixup.

His internet connection was also plugged into that same 10Mb/s hub.

There is NO WAY his PC could have reported 100Mb/s to speedtest.

Do not trust them for accurate results.


> Do not trust them for accurate results.

Ookla speedtest properly transfers data and divides to find throughput -- both with the website, the GUI app, and their CLI program. The open source speedtest-cli that uses their infrastructure is hampered by using only one connection and can't seem to ever report higher than 300mbps.

Fast.com is ... a little optimistic about transients getting transfers started and sometimes reports numbers 5-10% too high.

Neither is grossly wrong by an order of magnitude. Maybe there was also wifi around and simultaneously connected, confounding the situation? It's easy to end up in a situation where all your internet traffic is going over wifi (because e.g. your dhcp client replaces the default route with one going over the wifi interface) and all your local traffic goes over ethernet.


I'm seeing gigabit(-ish) via speedtest-cli:

   Speedtest by Ookla

     Server: TELUS Mobility - Calgary, AB (id = 17555)
        ISP: TELUS
    Latency:     0.50 ms   (0.12 ms jitter)
   Download:   938.58 Mbps (data used: 422.8 MB)
     Upload:   919.90 Mbps (data used: 959.8 MB)
  Packet Loss: Not available.
FTTH, Ubiquiti EdgeSwitch -> UniFi Dream Machine Pro SE via 10G link -> Nokia ONT.

I _do_ get limited to around 300mbps if my Mikrotik is in the mix and using CPU for routing.

Yeah, my home network is overkill but it's a cheaper hobby than other ones might be, hah.


> I'm seeing gigabit(-ish) via speedtest-cli:

You're using their cli client, not the open source one. Below, first their "speedtest" and then the open source "speedtest-cli"

    mlyle@brazen:~ $ ./speedtest 
    
       Speedtest by Ookla
    
         Server: Cloudflare - San Jose, CA (id = 44932)
            ISP: Frontier Communications
        Latency:    12.10 ms   (1.29 ms jitter)
       Download:   953.32 Mbps (data used: 1.0 GB )                               
         Upload:   891.89 Mbps (data used: 1.1 GB )                               
    Packet Loss: Not available.

vs.

    mlyle@brazen:~ $ speedtest-cli
    Retrieving speedtest.net configuration...
    Testing from Frontier Communications (47.155.215.132)...
    Retrieving speedtest.net server list...
    Selecting best server based on ping...
    Hosted by Nitel (Los Angeles, CA) [446.50 km]: 49.103 ms
    Testing download speed................................................................................
    Download: 323.09 Mbit/s
    Upload: 300.49 Mbit/s
speedtest-cli is a python program, vs. the native speedtest:

    mlyle@brazen:~ $ file /home/mlyle/.local/bin/speedtest-cli
    /home/mlyle/.local/bin/speedtest-cli: Python script, ASCII text executable
    mlyle@brazen:~ $ file speedtest
    speedtest: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, stripped


Practically the same. Perhaps it's a server selection issue?

  Selecting best server based on ping...
  Hosted by iTel.com (iTel Networks Inc) (Calgary, AB) [3.20 km]: 2.275 ms
  Testing download speed................................................................................
  Download: 931.69 Mbit/s
  Testing upload speed................................................................................................
  Upload: 854.01 Mbit/s


Interesting. I consistently get garbage results with speedtest-cli (across multiple machines and network connections) and good results with speedtest's own.

On the pi, I suspect it's running out of CPU--

    real 0m9.877s
    user 0m2.737s
     sys 0m5.595s
But that doesn't explain the systemic problem.


The Pi is definitely limited in terms of CPU. This is a fairly old, but beefy Xeon chip on my unraid box. Might be native vs Python code on ARM?


Yup, native vs. Python would explain a lot on the Pi, but I've also seen this on my big Threadripper machine here and on my Hetzner server in Virginia.

I'd assumed it was just something intrinsic.

On the other hand: speedtest's own thing uses multiple parallel connections. So if you have low amounts of loss that prevent one TCP connection from going super fast, it'll do better on speedtest than speedtest-cli.

Thanks for the help/info. Something to spend more time understanding at some point in the future.

edit: Just noticed that the Hetzner server in VA geolocates wrong and chooses servers in Kansas.


> edit: Just noticed that the Hetzner server in VA geolocates wrong and chooses servers in Kansas.

This is a common failure mode of geolocation, especially as it pertains to cloud providers. The databases can guess which country the IP address corresponds to, but not anything more (and as such, they default to something in the middle of the country).

https://en.wikipedia.org/wiki/Geographic_center_of_the_Unite...

Geolocation data tends to be worse for cloud providers for whatever reason (I believe it's a combination of differing Internet footprint characteristics, as well as IP aggregation mismatch on behalf of the geolocation database).

(Also see https://en.wikipedia.org/wiki/Null_Island for another common geolocation placeholder)


No worries.. best of luck with that. As a total aside, fast.com is giving me 420mbps which should be impossible given my wifi setup. :/


fast.com tends to be 10-20% high for me versus what goes by at the router.


> speedtest-cli ... can't seem to ever report higher than 300mbps.

I routinely get 700-900Mbps reported using speedtest-cli from centurylink fiber. My bottleneck is the CPU on my router running openwrt.


Can you look at this, and tell me which of the 2 you're using:

https://news.ycombinator.com/item?id=31063868

From speedtest's official cli program, on my 2.5gbase-t connected machine I get 2200mbps up and down.. but from the open source speedtest-cli I get 300-350mbps.

(Here, I only got ~900mbps because I was running from a raspberry pi with a 1000BaseT link).


Ha! I'm currently down the rabbit hole of trying to understand why, when connected through NordVPN to an in-country VPN endpoint, I'm seeing ~5% faster download and ~10% lower ping latency to speedtest.net.

Does not compute. Compression? I can't explain the lower ping. Traffic shaping?


> I can't explain the lower ping, traffic shaping?

Better routes. Most latency on the internet is not due to c.


Classic story where this is related: https://www.ibiblio.org/harris/500milemail.html

tl;dr: the timeout was sufficiently low, and the user sufficiently skilled, to work out that his emails could only be delivered within 500 miles


My favourite aspect of this story is how the academics, being academics, took it upon themselves to characterise the problem in detail before telling the postmaster about it. Could just as easily have said "huh, that's weird" and thrown it at them on day one.


And their explanation was probably more baffling to the admin than a simple "it is broken" report too. It would definitely have been for me.


c being the speed of light


I get this same with AWS. The routes to my country in Northern Europe from us-* are terrible (eu-* is fine), to the point where a download from S3 during peak times can be tens of kB/s. If I connect to a VPN in another country it jumps to tens of MB/s.


When I use a VPN the measurable ping might go down, but the noticeable ping goes up. There's always a barely-perceptible delay when doing things like using voice typing.


Some ISPs provide speed tests that measure (only) last-mile speed between, e.g., the cable modem and the head unit. Mine does that and reports 1Gb/s just fine. Of course, that's only to the head unit, nowhere else in the universe.


speedtest.net has a CLI version that is much more reliable than the web version.


Edit: removed due to downvoting. Original comment pointed out that transfer speed is normally measured in Mbps not MB/s.


Can't get 100 Megabits on a 10 Megabit line was the point.


> Speedtest kept reporting 100Mb/s range speeds (the tier he was paying for). Local file transfers however were stuck in the 1MB/s range

I read it as: the speed test reported 100Mb/s and local files were transferring at 8Mb/s (1MB/s). Either interpretation doesn't change the intent much. But now I'm curious if the MB was intentional.

I don’t really care what the normal convention is, but the fact that Mbps and MB/s are sometimes used interchangeably does cause confusion, especially outside this environment.

Was the choice to use Mbps as a standard a marketing idea? Like they have 1 MB/s? Well we have 8 Mb/s.


Yes, I did intentionally mix the units up. I know it's not best practice, but I also know that most people who frequent this site are aware enough of the differences not to get fooled/confused. To be honest, it was just lazy writing on my part.


It's unlikely your suspicions are correct. Provisioning high QoS to the speedtest site is common practice (see fast.com for countermeasures), but changing the entire subscriber QoS because traffic to one particular destination is detected seems like too much work and doesn't really achieve anything useful.

If you are on a wireless network (either your upstream from a WISP, or internally on your own network), then I'd suspect some wireless air access control syndrome is the cause -- e.g. stations are usually randomly fighting for air, but when you pull down your big speed test file that ends up putting the network in a state where your station gets all the air time. Lo and behold all your traffic has better QoS. Basically your speedtest traffic, because it is a constant stream, shoved everyone else off the air and allowed your other traffic to get through at the same time as a result.


> but changing the entire subscriber QoS because traffic to one particular destination is detected seems like too much work and doesn't really achieve anything useful.

About seven years ago, when unencrypted HTTP was still commonplace, I observed exactly this behavior (subscriber-wide QoS based on speed test sensing) with my residential cable internet. My connection was nominally 100 Mbit/s, but I would often observe downloads, e.g. in Steam, running at only 10-20 Mbit/s. In those cases, I would open up a terminal and run

  while sleep 1; do curl http://<one-of-my-domains>/speedtest; done
Didn't matter that this URL would give a 404. Just because the URL contained "speedtest", the ISP machinery would adjust the QoS and the Steam download would immediately shoot up from 10 Mbit/s to nearly the full 100 Mbit/s.

By now, this has not worked in a long time. I'm guessing they tore down the respective machinery once the important speedtest websites moved to HTTPS.


> and doesn't really achieve anything useful

At my previous apartment, I would get a full month's rebate off my internet bill if I could document speeds considerably below average on a computer plugged in with ethernet. I could very well see an ISP with such a policy looking for nefarious ways to "fix" that money loss.


There's that, and also visiting a speed test site is an indicator that a particular user is more sensitive to speed than the general user base (and perhaps considering jumping to another ISP). It absolutely makes business sense to prioritize traffic to users who pay more attention.


No, as an isp these are the worst customers. It’s better to piss off an dump people who consume a lot of bandwidth an are obsessed with throughput if you’re “business obsessed”.


I'm assuming "an" is a misspelling of "and". If they're using lots of bandwidth, that's one thing, but I sometimes see 480p YouTube videos laggy and wonder what's going on. Maybe I'm in the minority, but there are at least some customers who aren't using a lot of bandwidth and still check their speed regularly.

Dumping bandwidth hogs makes business sense, and bandwidth hogging is probably correlated with checking bandwidth, but the correlation isn't 1.0.


In the US, most people are reportedly served by a single ISP and cannot really choose. An ISP that intentionally pisses off speed-sensitive customers would be sued in no time.


> changing the entire subscriber QoS because traffic to one particular destination is detected seems like too much work and doesn't really achieve anything useful

Unless something has changed recently, the Speedtest site just acts as an intermediary between you and one of the "volunteer" servers that actually perform the test for the client. Since these are hosted on many networks, it's conceivable that this is the easiest way to QoS the speed tests.


This should be easy to test. Find a big file somewhere served reasonably fast and see if you can replicate the results while pulling it down. Maybe some Linux distro ISO? I really doubt an ISP would optimize that.
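
(A sketch; the URL is a placeholder for any well-served large file:)

    # curl prints live and average transfer speed while downloading
    curl -o /dev/null https://mirror.example.org/some-distro.iso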


From my experience about 8-10 years ago, when the shaping hardware was not that flexible, a lot of ISPs analysed the traffic and changed shaping routes dynamically, which usually reset the shaping for the local ISP network segment (coaxial networks). This resulted in exactly what OP describes.

I was surprised by how much of the stuff in networking I had quickly dismissed at first was actually reality.


Lots of people here have already mentioned the Netflix fast.com test, which is indeed good.

But also worthy of recommendation IMHO is the Cloudflare one (speed.cloudflare.com). The main reason I say that is that it's one of the few that measures jitter (or at least openly exposes the measurement).


The CloudFlare one is giving me less than 1/3 of the download speed fast.com shows, and in fact, testing by downloading a Linux ISO [1], the latter seems to be the correct one. I don't know what CloudFlare's test is doing (their server is geographically nearby) but I obviously can't trust their measurement.

[1] https://mirrors.edge.kernel.org/archlinux/iso/2022.04.05/arc...


That's probably due to your ISP, imo; they're known to favor some routes/servers.

Or the cabling to the Cloudflare server is worse than to other servers.


My ISP is unlikely to be a problem. I literally get higher speeds downloading from servers across the world in Australia [1] than from CloudFlare's benchmark. Something around CF's side is funny, not mine.

I'm looking at the network tab; it seems like they send a ton of small requests instead of a large one... which might point in the direction of the problem.

[1] https://syd.mirror.rackspace.com/archlinux/iso/2022.04.05/ar...


The issue is likely not with Cloudflare, but with the connection between your ISP and Cloudflare. That's why this speed test is relevant - if your ISP can't manage to make good, settlement free* connections (=peering) to Cf it's not a good ISP.

Example: Rackspace Sydney (AS58683) has (likely paid) peering with AS7474, which has peering with Germany's Deutsche Telekom (AS3320). Peering with DT is significantly more expensive than with many others because they have a regional monopoly.

* Cloudflare is present at many regional Internet Exchanges and any good ISP connects to these, too. Some don't, however, like Deutsche Telekom.


When CloudFlare is the odd one out having trouble with people's ISPs (just read other people's comments right here about their experiences with that benchmark... I'm not exactly the only one getting weird results) and when my ISP isn't having the same trouble when connecting to other servers across the world, blaming my ISP is... weird. It's a little bit like driving on the other side of the highway and then blaming everyone else for driving the wrong way. But I mean, if you still prefer to blame everyone but CF, by all means feel free.


For me, download is correct but upload is 1/3...


There is also a test based on free software: https://librespeed.org/.


Interesting: while fast.com and speed.cloudflare.com got the same download speed, upload speed was much lower on Cloudflare, even though Cloudflare used a test server that is geographically closer.


TIL. I didn't know Cloudflare has one; I usually use fast.com.

Nice to see that the tests break down with more info when hovering or expanding them.


Fast doesn't do upload


> Fast doesn't do upload

It does but it's hidden, you have to hit "Show more info" (once the main test has finished).

A bit stupid, but I'm sure some committee thought it was an awesome idea to do it that way.


Makes sense to me. The point is that the speed test traffic pretends to be Netflix traffic, which is mainly download. The upload to other servers could be limited by the ISP while being unlocked for Netflix at low cost (because people don't upload a lot towards Netflix). So there is less signal in the fast.com upload speed measurement.


Where is this on mobile? I went to fast.com

Is it after the speed test is over?


> Is it after the speed test is over?

Yeah, you have to let it run through first, then the button appears.

(At least on desktop, I can't remember what it does on mobile)


It actually does -- you just have to click "Show more info".


I don't know if it's related, but some time ago when I was testing the Arch distro Garuda, I noticed that my Internet was very slow (Kbps) when updating. To test my speed I loaded fast.com and something strange happened: while the test lasted, the update speed was normal (at the top of my Internet plan). Once it finished loading, it was slow again. At first glance it seemed that my ISP (Tigo) gave priority to Netflix and released the 'normal' speed when loading the test (fast.com is Netflix's).

I made a script that called fast.com every time I updated my OS, and it worked super well. I even used the script for other downloads. After switching to ArcoLinux I stopped noticing that network behavior and stopped using the script.
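
The idea boils down to something like this (a minimal sketch, assuming the third-party fast-cli npm tool, which provides a `fast` command that pulls data from Netflix's servers; my script was messier):

  # hypothetical sketch: keep fast.com-style traffic flowing during the update
  ( while true; do fast > /dev/null 2>&1; done ) &
  LOOP_PID=$!
  sudo pacman -Syu    # Garuda is Arch-based, hence pacman; adjust for your distro
  kill "$LOOP_PID"    # the in-flight test finishes on its own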


> Is there a way I can test this more scientifically?

How about the following:

1. Choose a dozen or so public files on hosts known to have high bandwidth (Google or Microsoft perhaps). Maybe choose a couple of public legal torrents too.

2. Write a script to download all the chosen files (either sequentially or concurrently), and emit timing information (see the sketch after this list).

3. Run that script a couple of times a day at random times. Try to cover weekdays and weekends, days and nights.

4. Repeat 3 but running a speed test first each time. See if there is any statistically significant difference.
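
A minimal sketch of step 2 (URLs are placeholders; schedule it from cron at varied times for step 3):

  #!/bin/sh
  # append a UTC timestamp, the URL, and curl's average download speed per file
  for url in https://example.com/file1.bin https://example.org/file2.iso; do
    curl -sL -o /dev/null -w "$(date -u +%FT%TZ) $url %{speed_download} B/s\n" "$url"
  done >> "$HOME/bandwidth.log"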


Yes, I do this as part of my job.

I help manage a system that does something like this for a large network operator. We have test phones that run TWAMP, RFC 6349 and iperf (and a few other tests) on a routine schedule against special test servers, generating data which you will never ever see.

But you could do something similar if you had the desire. I have yet to see anyone do that, though.
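
The DIY version is roughly this (a sketch; the hostname is a placeholder and the server end needs to be a box you control):

  # server side (once):  iperf3 -s
  # client side: 30-second download-direction (-R) TCP test
  iperf3 -c test.example.com -t 30 -R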

All network providers have various throttling systems set up to limit your throughput; that is pretty universal, and most such systems are intentionally bypassed for speed tests. They know the 5-tuples for all the speed test servers and bypass throttling for those tests.

Yes, speed tests are rigged; think Volkswagen diesel. It's an inside joke pretty widely known in the industry.


Just have ping running in the background. You can observe the difference yourself.
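
For example (host choice is arbitrary; -D is the timestamp flag on Linux iputils ping):

  # timestamped pings; a sudden change in RTT when the test starts is the tell
  ping -D 1.1.1.1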


Ping measures latency, not bandwidth.


A lot of network providers treat ICMP traffic differently.


Ping tests the possibility of connectivity and can provide a hint at latency. That is not a speed test.


Yes, and the joke in the industry is that network speed tests are to network performance what the Volkswagen diesel was to emission tests :).

All carriers implement various types of throttling or rate limiting, for a lot of reasons (anti-ddos, cost control, etc.)

But, universally, they bypass these rate controls for speed tests. Examples include Cisco PGW ADC (Application Detection and Control) and LTE eNB ABR for wireless, and many other similar features that limit the rate the user actually gets. In almost all cases, these systems are designed to allow speed tests to bypass the rate limiting.

Commercial speed tests are in no way a valid or reliable way of understanding the quality of a network service. I am a little surprised that this is breaking news on HN, however. I would think this would be pretty well understood by this community?

BTW, fast.com is not always immune to this "Volkswagen diesel" effect. The only way to know for sure is to have a VM or bare-metal server someplace and run an iperf to it.


Even if it were well understood, it is not common knowledge; and if it were common knowledge, it apparently isn't a common enough annoyance. The last part is definitely not correct.

Isn't throttling and selective benchmark fakery illegal for the same reasons VW got into trouble? One would assume the FTC has a far narrower and well-fanged position from which to punish the perpetrators. What prevents this? (I think monopoly power + no designation as a utility despite being one.)


It's not illegal to make a speed test that says you got some big value that is BS.

It's illegal to put arsenic in a bottle and label it Vitamin C.

It's not illegal to call your TV channel "something news" and then just make things up.

Maybe it should be illegal to call what you are saying news when it is made up? It used to be (google "fairness doctrine").

Maybe there should be some regulated, valid speed test, but there isn't.

There are a lot of things that are legal that are unethical.

There are a lot of things that are illegal that are only marginally unethical.

But when you elect congresspeople who are not very technical, that's probably not going to happen. Just saying.

I don't have a good answer for you, just a lot of interesting observations.


Same, expected better from this community. I assumed it was common knowledge that most ISPs run traffic shapers and whatnot that take into account what you are connecting to, what sort of data you send, how intensively you are using your connection over whatever time window, etc.

And IMO secure DNS is not a solution. Either this ISP behavior is made illegal, or it will remain an arms race (with the ISP being the clear winner for most regular users).


Agree. It's funny how people I know who are "non-tech" intuitively understand that of course speed tests are rigged, but tech people somehow think that the same place that allows Facebook pages saying the Jews have space lasers is completely honest about speed tests. I am just pointing out that a fake world is a fake world. Just because it says Mbps does not exempt it from fakeness. Fake is as fake does.

It's just interesting that people are smart enough to understand the other fake things on the internet (that maybe some people not as smart as you don't understand), but when it is some ISP tricking you, all of a sudden everyone is up in arms about it.

It's interesting.


Well, I'm a techie, for sure when it comes to networking, and on neither our current ISP nor our previous one did speed tests ever report a different value than what file transfers/bandwidth monitoring showed.

For that matter, I've never been able to find (and I've checked) that any particular site or connection type was actually given priority and more bandwidth. When the connection is slow, it's equally slow everywhere. When fast, equally fast everywhere.

Also, I get the feeling that some people think the speedtest sites themselves are giving fake numbers. Why? What would they gain from that? And why have I never run across this, despite using many speedtest sites on multiple ISPs?


For what it's worth, a popular speed test site could gain kickbacks from ISPs who want their customers to see the "correct" number.

However, the deal is mostly not about speed test sites but about ISP behavior. An ISP may maintain a registry of known speed test sites and shape traffic so that you get great results on those.

Registries aside, your ISP may also be detecting and throttling BitTorrent and/or VPN traffic, shaping sustained traffic, etc., meaning you will tend to get good speed test scores but slow speeds when you attempt to use your connection in a real-life scenario.

And back to speed tests: they do tend to prefer servers located very favourably for the ISP but not for you, routing to a server on the same ISP's network in the same city by default (this may be overridable by a tech-savvy user). I would assume they have agreements with ISPs on hosting and maintaining those servers, making any sort of manipulation a no-brainer for the ISP.

All in all, I routinely get cases where a speed test reports great results but due to whatever reason I get really slow effective speeds.


You mention the same city. That could be part of the issue, if the ISP's connection to other cities is where the congestion is. I live in the middle of nowhere, relatively speaking, so all my Internet traffic is non-local, and the same goes at my friends' and neighbors' houses.


Yeah, same: in real life I never need my connection to download something local or have a video call locally. However, the speed test pre-selected a local server on the same ISP's network, so it reported an unbelievable and useless number.


I was cursing my ISP for a long time till I understood that I was on 100 Mbit wifi.

Then I added a wired connection to my server and instantly got 10x the speed :^]

Don't be like me; have wires set up wherever needed.


Wires are great, and I try to use them when possible, but you can saturate most home Internet with decent midrange WiFi hardware. I get over 1Gbps between local machines just using the stock Comcast/Xfinity WAP, and about 330Mbps to the Internet (capped by service plan). I splurged a little on a ~$120 4-antenna NIC but even my cheap older ones can saturate the WAN link.


Reminds me of buying a 144Hz monitor but not enabling 144Hz mode...


My local wifi is faster than my 1Gb/s ethernet.

This holds true in the same room and one room over.

I like this.


If you’re happy, great. But you might want to figure out what’s interfering with your wired bandwidth, because wired 1G ethernet should basically win every time compared to current WiFi standards. Maybe not ax under ideal conditions. But that’s a maybe.


1G ethernet is 125 megabytes (spelled out to avoid acronym ambiguity) per second.

802.11ax is much faster than that: my wifi router is good for 1.5G or so, the protocol allows for 3.5G or so, and my file transfer speeds are easily more than 125 megabytes per second.

I'm not going to run more wires either way.

So... you sure I should care?

It's great: fast speeds that scale to the file sizes I want to handle, which are pretty large for prosumer needs.


I won’t dispute your numbers, but have you measured latency and jitter? Some people who like to call themselves hard-core gamers care a lot about those metrics, and Wi-Fi has historically been pretty abysmal at latency. I don’t know if the latest standards have caught up with wired yet.


Not very scientific, but you can try something that is unlikely to be whitelisted.

E.g. http://134.209.196.181:8080, an instance of https://github.com/e7d/speedtest running on DigitalOcean in Amsterdam.


Interesting. Using Wave broadband in Seattle from this server I get

    38.69 Mbps Download
   154.43 Mbps Upload
While from speedtest.net (to a local server) I get

   178.34 Mbps Download
   192.23 Mbps Upload
I'd chalk it up to distance/noise, except that the upload speeds are way more similar than the download speeds.


I'm in Los Angeles. My downstream is half as fast using this tool compared to the one built into Google.

That's reasonable, and actually pretty impressive for talking to Amsterdam.

It's hard to really comprehend how amazing it is that I can talk to Europe at 200 Mbit/s over WiFi from my personal apartment on my smartphone while lying in bed in Los Angeles, and pay a flat rate that works out to about $2.25/day for the privilege.

It's just absurd. It's pretty easy to forget how much of a future world we live in.


Amsterdam is kind of a special case, as it's where major internet providers peer with each other via the AMS-IX.

https://www.ams-ix.net/ams

If possible, you should try connections to sites in other countries.


Or just run iperf on a personal server. I use that to diagnose differences between raw UDP/TCP throughput and HTTP.
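
E.g., a sketch (hostname is a placeholder; the server end runs `iperf3 -s`):

  iperf3 -c test.example.com -t 15              # TCP throughput
  iperf3 -c test.example.com -t 15 -u -b 500M   # UDP at a 500 Mbit/s offered rate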


Ah yes, the internet's worst kept secret. The reasons behind it however are less obvious.

Yes, to a certain extent your ISP is watching to see if you run a speed test and will open up more bandwidth to you in the event you do. However, if they were throttling you, which is not guaranteed, the most likely reason is that the network hub for your neighborhood was under heavy load, and they throttled everyone on your backbone hub to ensure that critical operations don't experience interruptions.

What you might consider critical operations might not agree with what your ISP considers critical, of course, but they're a for-profit business rather than a utility. So what you consider a valid reason for throttling you doesn't count for squat. The reason they sped you up is that, despite carefully crafting their service contract to say they can legally throttle you for any reason or no reason at all, it reduces the number of complaint calls they don't intend to do anything about.

It also gives them plausible deniability and, weirdly enough, a useful metric for identifying hardware problems. They can simply make you think that the problem was the individual web service you were connecting to, or maybe just all of Cloudflare. But more than that, they are able to track the number of times in a day that people visited a speedtest site, and whether they were able to get your speed up to the expected level. This functions as an early warning sign that they need to increase the total available bandwidth in an area if those numbers become what your ISP considers unacceptable.

Again, what your ISP considers unacceptable and what you consider unacceptable are not the same, but they're a for-profit business rather than a regulated non-profit utility.


Seems we should be making the internet a utility, then. The second things like court hearings or paying the bills for other utilities started happening over the internet was the second the internet became a utility.


This is real. I have had it confirmed to me by a Comcast employee. And really, this shouldn't surprise anyone.


How frequently do you need to ping speedtest to be prioritized?


Just because they can do it should not be a reason for doing it. You're broken in, defeatist, or conformist.


Not being surprised is not the same as being defeatist/conformist. There are things I would love to see changed that don’t surprise me, and there are things that surprise me that I see no reason to change. One does not imply the other.


This is true. In my country it's a well-known thing that the internet gets un-throttled when accessing a speed test, and someone ingenious made a script that keeps running speedtest-cli in the background to keep the internet working at full speed:

https://github.com/orz811017/boost_bandwidth_via_speedtest
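
The gist of such a script is something like this (a rough sketch using the Python speedtest-cli tool; the linked repo's details may differ):

  # re-run a download-only test forever so the line stays "under test"
  while true; do
    speedtest-cli --no-upload > /dev/null 2>&1
    sleep 30
  done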


Yes, that is common. Back when I worked for an ISP, all of our main competitors had whitelisted speedtest servers to sidestep their shaping, and sometimes even ran their own, just to look good on the results page. The fact that other sites instantly load might be just an implementation quirk; the QoS might get reset for a second when loading the speedtest page.

You can see what ISP traffic shaping actually does when you have a server on a reliable backbone link outside of the given ISP's network, by trying to download or upload a large file to it and measuring. Bandwidth changes usually aren't too smooth, and often you can see "stairs" on the measurement graph as the speed drops the more data you transmit.

This shows how the whole "speeds UP TO xyz Mbit" marketing trick really works.


Try one they probably don't have on their whitelist, like https://proof.ovh.net/
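
You can even skip the browser; for example (I believe OVH publishes test files there, though the exact path may have changed, so check the index page first):

  # curl's progress meter shows the average download speed as it runs
  curl -o /dev/null https://proof.ovh.net/files/1Gb.dat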


I had the same thing with my previous internet provider. I even set up an iperf3 test to confirm it, and it was clear when watching anything on YouTube. It was so bad that it could barely keep up with 144p, but while a speed test was running I could easily watch anything in 4K; immediately when the test finished, everything was back to snail speed. I ended up running a download-only speed test in a loop from the CLI version, and even with this extra traffic everything was much faster and with better latencies...


Agreed, iperf to a server is a good test, as long as the 5-tuple is rotated so they don't figure it out.
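
A sketch of what rotating it could look like (ports and hostname are placeholders):

  # vary the destination port each run so the flow's 5-tuple keeps changing
  # (the server needs a matching listener: iperf3 -s -p <port>)
  for port in 5201 5301 5401; do
    iperf3 -c test.example.com -p "$port" -t 15 -R
  done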


Impossible to know for sure, but the ROI on "accelerate packets that match an IP whitelist drawn nightly from the DNS records of well-traveled speed test sites" must be quite high.


Could speed test sites do something about this? Like obfuscate the traffic somehow to get more genuine results?


It's possible to build a speed test site out of media file fetches from otherwise public assets from third party sites, the quality of the measurement will just be a little noisier. In many ways this would actually be a better test than some well-placed IP address, since there is a chance to randomly sample many different routes from the ISP to real world destinations for practical object sizes (i.e., not just an idealized unidirectional stream with no request roundtrips in between).

For example, you could do something with the Image.onload event, but even better are the recent JS web performance APIs; they provide a huge amount of detail, including DNS lookup latency, TCP connect time, TTFB, etc.

Actually now I want to build this

edit: I'm probably not going to build this, but you should. For large object sources, there are Flickr full size images, HLS URLs for video on demand systems (each HLS segment is generally around 1-3MB), etc.

I'd want a giant list of these to pick from at random, and would probably want to bucket the results along (domain name, destination BGP ASN, destination GeoIP) axes as well as whatever the JS performance API offers. You could maybe present each axis as some kind of confidence interval, then take a geometric mean to produce an overall score, presented graphically somehow.

You could then collect up all these scores and publish stats bucketed by Internet provider. And that's the point where I realized I don't have the energy to build all this ;)


This sounds so much closer to a "real" test than our single-server tests.


This sounds awesome!


I believe this is the idea behind the https://fast.com speed test.

It just downloads data from Netflix servers, so the results you get are basically guaranteed to be the speeds you would get with Netflix.


> I believe this is the idea behind the https://fast.com speed test.

Or maybe the idea was that Netflix would get the speedtest treatment.


I think the bigger problem isn't so much the traffic type as the traffic destination. As long as there are only a few hosts or domains providing speed test services (and there always will be: upload speeds are expensive), ISPs will be able to play whack-a-mole with their whitelists.

I'd say that this needs some sort of regulation, but as long as ISPs are the gatekeepers, they can cheese $Government all day long too. This word gets overused, perhaps, but the closest I can come up with is a decentralized monitoring setup with random speed test hosts (especially hosted-@home style). Care would have to be taken to avoid how-are-these-still-legal data caps, though.


This is one of the biggest selling points of fast.com -- it's served from the same Netflix domain + servers that also stream video (nflxvideo.net).

If ISPs want to make fast.com fast, then they need to do so for all of Netflix.


This is one feature I like about my WiFi router. It does regular speed tests and tracks the results over time. I like to think it reminds my ISP that I want the speed I'm paying for.


Is that with dd-wrt or something similar? I highly doubt you'd get that feature on an ISP router :-)


Unifi routers do this, for sure.


If this theoretically saturates your bandwidth, doesn't it lead to intermittent poor performance for your connected devices?


Saturating download can prevent ACKs from getting through in some cases on some media. So throttling download a bit speeds everything else up.
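
This is the standard bufferbloat mitigation. A minimal sketch with the Linux cake qdisc (interface name and rate are placeholders; set the rate slightly below your real link speed):

  # shape egress to just below the link rate so queues stay short;
  # on a home router, doing this on the LAN-facing interface limits downloads
  sudo tc qdisc replace dev eth0 root cake bandwidth 95Mbit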


It wouldn't surprise me one bit if the ISP was doing something like this. It's an easy way to make their service look better and there are few consequences for getting caught. It's also hard to prove conclusively that it's happening.


Another weird one: Open fast.com and check your speed.

Now reload the page and run it again, but as soon as it starts, open another tab and do some browsing.

The second test gives me higher speeds consistently.


Of course, the pipes are unclogged


I was having issues with slow load times, high latency, dropped packets, the works. I had been profiling it and logging my bandwidth with (iirc) iperf on a raspi over the past 48h. When I felt like I had collected enough data, I called Spectrum to give 'em a piece of my mind. One of the first things they had me do was use their online speed test. Came up with 100Mbps... I had been getting 10 on iperf.

Suddenly, my traffic quality was mysteriously better.

No idea what was actually going on, but the whole thing was a bit sus.

I definitely recommend using your own server and profiling tools if you are suspicious of speedtest.


Speed tests often test only the last mile. That's only part of real-world performance. Some examples of other factors:

1. Your internet provider may have limited bandwidth where it connects to other parts of the internet. I used to have a problem where my ISP had one connection out of its network; that bandwidth was limited and caused real-world issues.

2. TCP can and often does involve multiple round trips. As web pages have grown in size, that has had an impact on performance at the network level.

Increased bandwidth in the last mile doesn't solve these types of issues.


You can often deduce pipeline sizes from a traceroute: if the next hop out of your ISP is LA-NYC-1G.carrier.net, you're probably never going to see more than 1G until they upgrade the equipment and leave the name the same.
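
E.g.:

  # one probe per hop keeps the output readable; reverse-DNS hop names
  # often encode link capacity and endpoints like the example above
  traceroute -q 1 example.com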


Well, I've had this experience as well, where I note the performance is so bad that I go to the speed test page of my internet provider and then it ends up being ok. However I have a pretty solid suspicion that they have not done anything to monitor usage and stop throttling when seeing a check because:

1. I used to consult (for two years) in the department of my internet provider that would have had to do the reporting to the people who would have controlled throttling and even worked on the speed test page at times and there was no project to do that.

2. There was evidently no known and talked about throttling project among the hundreds of techies working at the business.

3. This place would not have been able to pull off the coordination between departments required to do this and also keep it a secret, and I'm betting most Internet providers would be even more incapable of doing something so nefarious.

4. The department providing customer support would have had to be in league with whatever department providing throttling, however KPIs for the customer support would of course have been negatively impacted by the many people calling in to report negative bandwidth usage. Thus they would have had to be in league with another division of the company to make sure they underperformed and thus did not get good bonuses etc. That would be an enviable level of dedication I must say.


Study the traffic in Wireshark, and also sniff upstream of your router to compare. I'd bet it's your router before I'd bet it's your isp.
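
For the upstream capture, something like this on a machine between router and modem (interface name is a placeholder), then open the file in Wireshark:

  # capture everything except your own SSH session, for later comparison
  sudo tcpdump -i eth0 -w upstream.pcap 'not port 22'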

I gave up on COTS routers a while back, and life has been significantly improved. I wrote it up here: https://res.rag.pub/2020-11-1-an-home-router.html


As other commenters have pointed out: you are not insane :)

Another speedtester you can try: https://www.dslreports.com/speedtest


Unfortunately, the sad truth about dslreports, which for years remained an unbiased provider of a truly accurate speed test, is that their test no longer functions properly in modern browsers.

However, because of this, https://www.waveform.com/tools/bufferbloat was born. They mention this, along with their specific reasons, in the FAQ on the linked page.

So, update your bookmarks and plan to continue spreading the good word with this new provider in place.


It would be very expensive for ISPs to implement this kind of thing.

So Occam's razor makes me doubt it. ISPs do all kinds of tricks to make speed tests work well: co-locating, prioritising that path, etc. But I don't think they have complex profiling equipment that slows everything down, removes the slowdown when it sees DNS/SNI for a speed test, and then applies it again afterwards.

I've also been working for ISPs for 20+ years and never heard of it. The only exception may be cellular with their fancy PCRF boxes, but even there I think it's much more common to throttle one class and not another than to slow everything but switch the throttle on/off when you access a particular site.

If this were true, you could easily game it by running a speed test all the time. All you'd have to do is throttle bandwidth to the speed test servers yourself somehow (so the speed test goes slow and doesn't eat your available bandwidth). Then you'd be permanently in the "they're doing a speed test, give them full speed" category.
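
For what it's worth, the self-throttling half is doable on Linux; a rough sketch with HTB (interface, rates, and the speed test server's IP are placeholders, and this only shapes the egress side):

  # send traffic to the test server through a 100 kbit/s class;
  # everything else uses an effectively unshaped default class
  sudo tc qdisc add dev eth0 root handle 1: htb default 10
  sudo tc class add dev eth0 parent 1: classid 1:10 htb rate 1gbit
  sudo tc class add dev eth0 parent 1: classid 1:20 htb rate 100kbit
  sudo tc filter add dev eth0 parent 1: protocol ip u32 \
      match ip dst 203.0.113.5/32 flowid 1:20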


This functionality is built into Cisco IOS. Can you tell me what exactly is expensive about implementing it alongside the other configuration?

