Netflix (fast.com): 790 Mbps down / 950 Mbps up / Latency 8 ms unloaded, 12 ms loaded / Server: Ashburn via IPv6.
Ookla (speedtest.net): 928 Mbps down / 938 Mbps up / Ping 1 ms / Server: Raleigh via IPv4.
DSL Reports (http://www.dslreports.com/speedtest): 611 Mbps down / 929 Mbps up / Ping 16-41 ms / Servers: Houston, Dallas, Newcastle DE, Nashville TN, Dallas.
Location: Raleigh. Provider: AT&T fiber.
Test run using Safari, macOS 10.15.4, Thunderbolt Ethernet.
Edit: the small file sizes used for some of the tests seem to drag down the overall speed measurement quite a bit. It's biased against upload measurements too, since there are download file sizes of 25 MB and 100 MB whereas upload tests only go up to a 10 MB file size. But even there, something seems off: the upload measurements are much smaller for the same file sizes (e.g. 170 Mbps average down vs. 7 Mbps average up for a 10 kB file).
I question this methodology. I care most about my 1 Gbps when I'm downloading the latest version of Xcode or some other huge file. I guess the smaller sizes are meant to better emulate downloading web pages, but in that case latency is probably what matters more. Even with 1 Gbps at home, when I'm out in CA, sites typically feel faster.
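The small-file bias is easy to check for yourself. A minimal sketch, assuming curl is available and using Cloudflare's size-parameterised `__down` endpoint (seen elsewhere in this thread) purely as an example target:

```shell
# Compare measured throughput for a tiny vs. a large payload.
# The URL is just an example of a size-parameterised test file;
# substitute any server you like.
for bytes in 10000 10000000; do
  bps=$(curl -s -o /dev/null -w '%{speed_download}' \
        "https://speed.cloudflare.com/__down?bytes=${bytes}")
  # curl reports bytes/sec; convert to Mbps
  echo "${bytes} bytes: $(awk -v b="$bps" 'BEGIN {printf "%.1f", b*8/1e6}') Mbps"
done
```

On a fast link the small transfer should report a far lower rate, since it never leaves TCP slow start and is dominated by round-trip time.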
speedtest.googlefiber.net: 800-900 Mbps down / 800-900 Mbps up (multiple tests to servers in Charlotte, Raleigh, Atlanta, seems to bounce around each time I reload the page).
Speedtest (Ookla) with server manually set to Windstream in Ashburn, VA: 886 Mbps down, 900 Mbps up. Confirmed that my router is measuring the same amount, so Ookla isn't just making up these numbers.
A lot of people forget or don't understand the "net" part of "internet". The internet isn't a monolithic service you connect to. When your ISP offers you "1Gbps" internet, they're not guaranteeing that whatever your activity, you will get 1Gbps, they're just giving you a 1Gbps connection to their network.
Their network will then interconnect with other networks. Depending on the specifics of those interconnections, you will see different performance characteristics depending on what you're doing.
For example my ISP peers with a particular network that hosts servers ~600km away that I exchange a ton of data with. I get 1Gbps when I use my ISP. However when I use a different local ISP that doesn't peer with the remote network, I see significantly reduced speeds, because the route taken to exchange data isn't optimal.
I know the HN crowd is fairly technically competent but I've seen plenty of competent people who I think knew this in an abstract sense but it just hadn't clicked for them what the practical consequences were.
- Especially for users with a very fast Internet connection, speed.cloudflare.com reports upload speeds much lower than expected figures. We don't yet know what is causing this but will disable the upload part of the test until we know more.
- In general reported download speeds are little lower than figures coming from other speed tests. We will revisit our methodology to understand the discrepancy.
- Re: the speed test automatically starting: we appreciate the feedback and understand why some users may not want this as default behavior. We will disable the auto-start for now.
In the meantime, we appreciate any and all feedback, please keep it coming: you can reach me at achiel [at] cloudflare.com
For what it's worth, Cloudflare shows me at 10mbps down, and Speedtest shows me at 160mbps (much closer to my expected 200mbps). This is a large difference.
Personally I prefer the single TCP connection results as these tell me what I'm likely to see in a real-world situation such as web browsing with HTTP/2 or a large download.
I've confirmed the 500 Mbps speed I pay for in Montreal is accurate with my own iperf and iperf3 tests to physical servers I own in NYC, so it's not a "your ISP is colluding with the speedtest sites" thing. I've also confirmed I have a 1 ms RTT to 18.104.22.168, and a 10 ms RTT to 22.214.171.124 with ~2 ms jitter, by pinging them in a terminal.
These CF test results worry me somewhat because I host a bunch of traffic on servers in my closet over Cloudflare Argo tunnels. Does this mean those services can only push ~45 Mbps with 60 ms ping via Cloudflare? Or is this just an artifact of something weird going on with the test methodology?
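For anyone who wants to run the same sanity check against a box they control, a rough sketch (the hostname is a placeholder, not from the comment above):

```shell
# On your server: iperf3 -s
# On the client, measure TCP throughput both ways (10 s each):
iperf3 -c myserver.example.com -t 10        # upload (client -> server)
iperf3 -c myserver.example.com -t 10 -R     # reverse: download
# Average RTT and jitter (mdev) from plain ping:
ping -c 20 myserver.example.com \
  | awk -F'/' '/rtt|round-trip/ {print "avg", $5, "mdev", $7}'
```

The awk line reads ping's summary (`rtt min/avg/max/mdev = ...` on Linux, `round-trip ...` on macOS), where the average is the fifth `/`-separated field.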
fast.com likely has the exact same issue, as Netflix has content boxes at ISPs, although I can't say for sure.
Plus, even then, "speed" isn't absolute. It's all about agreements, routes, capacities and load. Test to specific targets of interest if you can, e.g. http://speedtest-nyc1.digitalocean.com/.
I get 860-910 Mbps down / 180-372 Mbps up to NYC[1-3] and TOR1, but I have to test using Chrome. Running the DO test under Safari pegs the CPU on a MacBook Pro 2.7 GHz i7. (The other speed tests run fine under Safari.)
I just wanted to ensure that we correctly discredited useless numbers and focused on the interesting ones instead.
I get download measurements anywhere from 400-800 Mbps, but mostly under 500 Mbps. I have yet to see an upload measurement above 100 Mbps.
This test just doesn't reflect real-world results for my Internet connection.
I'm not a low-latency gamer so none of the differences really matter unless they're near zero. I've been streaming video and fetching/pushing git repos that I use since having a 5Mbps connection.
I'm paying for 100 Mbps down / 100 Mbps up, and every speed test comes very close, upload usually a little higher. From experience, the actual maximum transfer speed is also roughly 12 to 13 MB/s, which lines up with 100 Mbps.
Cloudflare is reporting 667 Mbps down / 220 Mbps up.
More likely, Netflix has better infrastructure and better peering agreements than Cloudflare. Which is kind of surprising, since Netflix is supposedly a media company and Cloudflare is an "internet" company.
I think there's something about the methodology of Cloudflare's test.
A thought, though: Cloudflare reports your p90 time as "your speed". I don't know what the other sites report. Is it the same?
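I don't know Cloudflare's exact method, but one plausible reading of "p90 as your speed" can be sketched like this (the URL is only an example test file):

```shell
# Take ten throughput samples and report the 90th percentile.
URL='https://speed.cloudflare.com/__down?bytes=10000000'
for i in $(seq 1 10); do
  curl -s -o /dev/null -w '%{speed_download}\n' "$URL"   # bytes/sec
done | sort -n \
     | awk '{s[NR]=$1} END {printf "p90: %.1f Mbps\n", s[int(NR*0.9)]*8/1e6}'
```

A p90 discards the slowest samples, so it will generally read higher than a plain average; that alone could explain some of the spread between sites.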
44 Mbps down
12.5 ms jitter
What’s it like having internet that actually goes?
26.9 Mbps down 11.4ms latency 1.96ms jitter
(my sync speed is only 24.73 Mbps, so those numbers don't seem right...)
-- Tony Abbott, probably.
Paying A$89 for TPG 100Mbps plan
Server location: Sydney
Before students went back to school, upload bandwidth was throttled to 4-8 Mbps (due to the congestion caused by nationwide WFH); now speedtest / fast.com reports 30-40 Mbps...
Surprisingly, download bandwidth has not been affected, unlike Optus Cable before (I experienced 10 Mbps or less on a 100 Mbps/2 Mbps plan), facepalm.
M-Lab for me has the highest real-world accuracy, because it's designed to obfuscate the testing servers so that your ISP can't cheat or make itself look better by prioritizing M-Lab servers.
Ookla servers are also generally run out of the ISP's datacenter.
Cloudflare has servers running at a PoP and doesn’t get the same performance
I thought it was pretty well known at this point that ISPs optimize traffic for speed-test sites and throttle a lot of regular traffic, especially during heavy-traffic times like the after-dinner Netflix bump. Since this is a new tool, a plausible explanation for lower speeds is that not all of the ISPs' network engineers have had a chance to prioritize Cloudflare's speed test traffic on their routers along with the other speed test tools.
Part of me wonders if this is part of a ploy by Cloudflare to avoid throttling by ISPs. If they design it right, the ISP can't tell the difference between the speed test and regular CDN content, which would be extremely clever.
There could be capacity issues between an ISP and some content providers but that is different than throttling or optimizing traffic.
Error fetching https://speed.cloudflare.com/__down?measId=4466372954803167&...: TypeError: i is undefined
Then it pauses itself. When I resume it, it runs for a few seconds (printing more errors) and pauses again.
I am using Firefox 76.0.1 on Fedora 32 x86_64. None of the adblocker addons I'm using are blocking anything.
Edit: For the reference, I believe that the default should be at least as locked down as my thing (all third-party cookies blocked and so on).
It just blinks "Running..." for 2 seconds, zooms in the map and goes into "Paused" state without any other changes to the page. There's a stream of smaller GET requests towards speed.cloudflare.com/__down, all resulting in 200.
Somehow I think both are true, I'm just not sure how it's possible...
Then Google Fiber came out and ISPs started running fiber to the streets of most houses, but not directly to each house. This was a defense against Google Fiber, so they could switch people to fiber gigabit easily with little transition. Now there is DOCSIS 3.0, which does channel bonding, so a single subscriber can get the equivalent of up to 32 TV channels of data, up to 1.2 Gbps if unshared. Today the average home shares its bandwidth with between 0 and 3 other houses. The fiber is right outside. If you order gigabit cable internet they will run the fiber to your front yard, but not let you connect to it directly.
On Comcast, the lowest tier you can order where you actually get fiber is 2 Gbps, and because DOCSIS 3.1 now allows 10+ Gbps over cable, Comcast fiber may become a thing of the past. In a way, DOCSIS 3 has been a setback for consumers: sure, you can get multi-hundred-Mbps, even gigabit, internet, but it lets cable companies avoid running fiber that last 30 feet.
With fiber your ping time is lower, so your internet feels quite a bit faster. Fast internet today isn't about Mbps; it's about ping time.
Two thoughts about this:
1. On 100/10 coax, ping times to 126.96.36.199, 188.8.131.52, and 184.108.40.206 are all under 20 msec. What could a lower last-mile latency possibly get me? Even cutting it to zero would seemingly not make a noticeable difference for page load times and the like.
2. I really wish this speed test measured your real latency, but it does not. It tests for latency pre-load, not during the load. Because of bufferbloat, most users will find that when they're downloading a file (whether that's someone on Netflix, or fetching a 2 MB image for a news article) their latency goes up dramatically, possibly hitting in the hundreds of msecs. It's really regrettable that this test doesn't try to measure this in any way. Fast.com does, but it's the only free test I can find that does.
I managed to get my latency under load down to 50 msec without a significant drop in my maximum speeds, but it took a lot of tinkering with queue management on a BSD-based router, and I suspect most people will see worse results.
On fiber when I'm downloading at the full 126 MB/s my ping barely goes up, unlike on cable.
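For a crude latency-under-load check without a dedicated tool, you can start a big download and ping through it; a sketch, using the `__down` URL seen in this thread as the load generator (any large file works):

```shell
# Baseline first: ping -c 20 1.1.1.1
# Then ping again while a 100 MB download saturates the line.
curl -s -o /dev/null 'https://speed.cloudflare.com/__down?bytes=100000000' &
ping -c 20 1.1.1.1 \
  | awk -F'/' '/rtt|round-trip/ {print "avg rtt under load:", $5, "ms"}'
wait
```

If the loaded average is many times the baseline, that's bufferbloat; the usual fix is smart queue management (fq_codel or CAKE) on the router.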
They are running fiber lines, though, to be able to serve the increased bandwidth demands.
In a lot of places, they're also adding lots of equipment in the last mile to shrink the "node" sizes so that the number of households within any particular shared medium on Comcast is much smaller today than it was 10 years ago. It's not the same level of build out as bringing fiber drops to every home, but it's also not nothing.
My computer in 2000 could not handle a 100 Mbps connection, and only had 32 MB of RAM as I recall. The 100 Mbps may have been targeted towards businesses operating servers at a home address, which were and are relatively common in the Bay Area.
And we are still stuck with Gigabit Ethernet. I know 10 Gbps is too much to ask, but how about 5 Gbps?
Yes, the switches and NICs cost more than 1 Gbps gear, but that was also true of 1 Gbps when 100 Mbps was the norm.
And if reusing existing cabling is not a constraint, used fiber+SFP switches on eBay are well within the reach of a hobbyist.
You can get faster stuff, but it's all business-grade and it gets very expensive quickly. Until 8K video becomes a real thing (and thus far, 4K video isn't even much of a thing), I don't see any real need for faster network speeds. Do you?
And I don't like the idea of running into the limits of Gigabit Ethernet, which is fine in itself. I just prefer to have headroom.
Wireless is entirely different. WiFi, even 802.11ax, just isn't suited to multiple users with constant connections. People now prefer to use 4G/LTE for its speed and reliability. And that is why I would have liked to see LTE on unlicensed spectrum.
It is still not very common. The main use case for it is backhaul for access points in commercial settings.
I seem to be getting pretty accurate results unlike others here, not sure if it's been fixed or my peerings are better:
Cloudflare: 78, 73, 73, 73
Netflix: 74, 78, 71, 69
Google: 70, 70, 70, 65
Speedtest.net: 68, 67, 70, 71
I know this doesn't mean too much alone, but for me seemed to produce on average the highest speed.
The name is fairly descriptive; it does what's advertised without any subterfuge. And there's a Pause button.
This isn't bad behaviour, people just may not expect it based on what they are used to.
If I had, I would probably have brought at least one video conference to a halt.
On the online gaming end of things it can be much more important than other metrics, depending on the genre. But there is an unfortunate amount of misconceptions floating around out there, and people tend to default to "higher speed = better" speed test results, when bandwidth doesn't really matter beyond a certain baseline.
Cloudflare: 244 Mbps
Speedtest.net: 83 Mbps
Cloudflare seems to consistently report higher-than-cap speeds because the 100 kB and 1 MB files don't engage the traffic shaper, while the 10 MB files seem to be in line with the other speed tests, which appear to do one continuous download.
speed.cloudflare.com: 57.7Mbps (Server location: Madrid)
Not that you'd know it, browsing the web.
Netflix - 700/280
DslReports - 400/638
Speedtest (Ookla) - 602/621
All of these are on a Dell using a USB3 Ethernet adapter. The only thing that I currently have running gigabit not through an adapter is my AppleTV4 running the speedtest app.
AppleTV 4/SpeedTest (Ookla) - 925/751
Meanwhile in Australia (down under), I've waited for so long to get NBN (I can only get HFC at my place, which is like over 20 years old?). Even my hometown, Shanghai, had something better than that in 2001 (the GFW wasn't in shape back then). I remember it clearly because that's when I started my Linux journey by installing Mandrake 8.1 ;-)
I was quite excited when finally enjoying 100 Mbps down / 40 Mbps up (I was told that here in Oz, consumer internet upload is always capped at 40% of download bandwidth); before, the cable was 100 Mbps/2 Mbps, useless for the way I abuse the network (mainly transferring data to North America).
I'd been experiencing congestion and degraded upload bandwidth (only 4-8 Mbps) since the pandemic started, and it didn't recover until public school students went back to school (last week...). The infrastructure issue was finally fixed by the lifting of social restrictions, LMAO.
Kept getting the test paused automatically on Firefox Dev Edition. Looking at the console:
Error fetching https://speed.cloudflare.com/__down?measId=2826350041244361&bytes=1000: TypeError: can't access property "transferSize", i is undefined
You might be located in Delhi, but maybe your ISP backhauls all your traffic to their head office in Mumbai for example. It wouldn't make sense to then ship your traffic back to Delhi just to serve it from the geographically closest location when Mumbai is your "network closest."
The Delhi POP might also be overloaded, or down for router maintenance, or they can serve your traffic cheaper from Mumbai.
CF's score is probably more relevant, since a lot more sites are hosted there than on Netflix (which is just Netflix).
Fast.com: 620 Mbps Down, 650 Mbps Up, DFW/ATL
Dslreports: 889 Mbps down, 944 Mbps Up, LAX/IAH
Ookla: 919 Mbps down, 934 Mbps Up, DFW/Richardson
Speed.cloudflare.com: 118 Mbps down, DFW/Arlington (closest)
I then tried to run the speed test from cURL by sending a GET to the same URLs this test uses, but it seems that I can't ask for anything larger than a 100MB payload. This is unusual since other speed tests allow you to download/upload up to 1GB in each direction.
I'm wondering if this speed test should use larger files for fast connections.
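One way to work around the 100 MB-per-request cap from the command line is to chain several requests and time the whole run; a rough sketch:

```shell
# Fetch the 100 MB maximum five times back-to-back and compute
# the aggregate rate over the whole run (500 MB total).
start=$(date +%s)
for i in 1 2 3 4 5; do
  curl -s -o /dev/null 'https://speed.cloudflare.com/__down?bytes=100000000'
done
end=$(date +%s)
awk -v s="$start" -v e="$end" 'BEGIN {printf "%.0f Mbps over 500 MB\n", 500*8/(e-s)}'
```

Whole-second timing is coarse, but over 500 MB the error is small, and reusing one long run also smooths out the TCP slow-start ramp that penalises single short transfers.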
I wondered why map markers were always airports:
> Data center locations are tracked as airport codes and may not be 100% accurate.
The number that your ISP gives you is likely what they provision for the link between you and your ISP.
The number that a speed test gives you is the measured bandwidth between you and the server on the other end. This is expected to be different for different routes across the internet.
If you want to test just the pipe between you and your ISP, many ISPs provide their own speed test servers for this purpose.
I have Fios 1 Gbps service. From Cloudflare I see ~500 Mbps down and 90 Mbps up, while fast.com shows 600 Mbps down and 900 Mbps up.
It only shows a download test for me...
Yes, use iperf or netperf instead.
Netflix: ▼ 890 Mbps ▲ 160 Mbps Payload 720Mb Latency 13ms
Netflix Open Connect is inside my ISP network
And still a latency of 13ms? Here, Cloudflare, Netflix, Google, and Ookla Speedtest all have latencies of 4-6ms.
SpeedTest is an alternative to speedtest-cli for the Ookla speed test, written in C++.
"There are many speed test tools out there. Our mission is to help build a better Internet. To do so, we believe in giving users a choice of different services: you shouldn't be tied to one provider and you should be able to compare results across different tools."
This tool would be more useful when it allows people to "compare" their speed with others.
Average speed in their country
One network with another in the same country
Compare a country's average speed with another
Add the benchmarks, put a "share to social media" button on it, and hopefully it takes off.
One other thing: also show the result in megabytes per second (MB/s).
Regardless, it's showing me as connecting to the LA server, which is almost undoubtedly the closest PoP, so I'm not too concerned.
In my region it seems only Azure manages to pump full gigabit up and down.
Under the "About" section. That's a really weak argument for inventing a new tool.
"Why do we need a new programming language? Well, because we think our users deserve more choices."
That does not fly well when someone needs to fork out $$$ to fund the activity.
Back to the topic: what users really need is a way to run all the available tools (there are many speed test tools out there) and get a "full picture" in one shot. Sure, the test would take 3 minutes to run, but it would be more useful if usefulness is what you're after.
From a speed-test service, the difference is indeed in who you test against: location is a feature. No different from programming languages providing different features.
Without this service, you would not be able to speedtest against Cloudflare's infrastructure at all. You'd need it at the very least to build the "monster" test suite you suggest.
However, it is entirely impossible to provide "a full picture", simply because there is no such thing. At a given moment there is a given possible throughput and latency from a given point A to a given point B, and that number holds no value for any other points or any other time.
wifi: 70 :'(
- It starts as soon as you visit, other speed tests don't, which violates a user's consent.
- It collects and saves your data, also without asking.
- It gives wildly different results than other speedtests, with no explanation as to why.
Of course Cloudflare would screw up a speed test. They screw up everything else they do, so I'm 100% not surprised.
1) fast.com, 2) this feels like a reach, isn't "going to a speed test site" a pretty clear indicator of what the user intends to do?
> It gives wildly different results than other speedtests, with no explanation as to why.
I'm confused what your platonic ideal of a speed test is. Off the top of my head, I may care about:
* "speed to my ISP"
* "speed to something I care about that's 'nearby'" (fast.com to a Netflix box peered close to me)
* "speed to a CDN" (speed.cloudflare.com)
* "speed to a random data center somewhere" (speedof.me and friends)
* "speed to a box I own"
I'd be amazed if I didn't see significant deviation between all of those measures.
Trust me, if you have a gigabit fiber connection like mine, you would notice that their results are way, way off the mark. See: https://news.ycombinator.com/item?id=23314368
/me feels im/depressed.
edit: Latency varies but goes down to 6 to 5.5 ms; similar for jitter, but even lower, to just above 4 ms. Line almost maxed out, just missing 5 up / 2 down. On really ancient HW :-)
They appear to be putting me exactly where MaxMind's GeoIP database thinks I am.