Run an Internet Speed Test from the Command Line (putorius.net)
98 points by dragondax 18 days ago | 47 comments



Note that some ISPs prioritise traffic to known speedtest targets, turning off traffic-shaping rules that might otherwise slow bulk transfers. When this happens you are testing the likely maximum throughput of your connection, not necessarily the throughput you will see more generally.

This is why Netflix started fast.com: because it draws data from the same distribution points as their video streaming apps, you can't prioritise the speedtest without also prioritising the video traffic, or (more likely) you can't de-prioritise the video traffic without also getting bad scores in that particular speedtest. From Netflix's point of view it is an answer to people contacting support with "my speedtest results are fine, the problem must be your servers" when they are experiencing video lag/drops and other such problems, and the issue is actually ISP traffic shaping or the ISP simply not having enough backhaul bandwidth.

A more reliable test might be taking part in a busy public torrent: that way you are testing against arbitrary locations, so your ISP can't be setting different shaping rules for them. Just remember to throttle upstream when testing downstream and vice versa, or saturation in the other direction will slow control packets, which will in turn give you lower results for the direction you are testing. This may fall into another trap though: unless you limit the number of active streams it may be an unrealistic test, as most processes generally use a small number of streams (or just a single one), and if you limit the number of streams too much you might get a lower result because each swarm member you connect to may be fairly saturated and sharing its bandwidth amongst many connections.
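
For instance, with a Transmission client the throttling might look like this (a rough sketch; the 100 kB/s cap is an arbitrary placeholder):

  # cap upload while measuring download speed
  transmission-remote -u 100
  # list torrents with their current transfer rates
  transmission-remote -l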


A busy public torrent is a bad predictor for me because my main bottleneck for most traffic seems to be beyond my ISP (i.e. their uplink or their international connection, possibly due to shaping rather than a hard limit), not the "last mile", so if a torrent has a single peer that's local I can get several times my usual speed. Though with fiber, in most cases speed can be ignored, as it's simply "enough" or limited by the server capacity on the other side.


Another option would be to spin up a few VMs on various pay-per-use cloud providers, perhaps VMs in a couple of Azure regions, and test by pulling data from those. The variety of regions will rule out local traffic problems at the other end, and the pay-by-hour/minute/second nature of such VMs makes it a cheap test to run, especially if you can leave the VMs configured but stopped between tests at no cost in that state, so you pay nothing most of the time but don't have to wait for provisioning & configuration each time.

Or if you have external servers for other uses, and/or have friends who do, you could use those instead of needing to spin up anything new.
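
For the VM approach, a rough sketch assuming the Azure CLI (the resource group, VM name, and URL are placeholders, and the VM would need to serve a test file):

  # start the pre-configured VM, pull a test file, then deallocate to stop billing
  az vm start --resource-group speedtest --name probe-eu
  wget -O /dev/null http://probe-eu.example.com/1gb.test
  az vm deallocate --resource-group speedtest --name probe-eu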


My free Google Fiber connection (5 Mbps down) always scored ~100 Mbps at fast.com, but 5 Mbps on speedtest.net.

https://twitter.com/mholt6/status/999425756198387713


I've seen similar on a 100 Mbit line that was actually a faster line throttled to that speed; the effect was particularly noticeable upstream rather than down. Essentially some content (or, more likely, content from some locations) is allowed to bypass the throttle.

This might be deliberate but not malicious: through Netflix's Open Connect program, ISPs can host a local cache of Netflix content to reduce their peering costs while still offering full service to many concurrent users. If the main throttling effect you are experiencing is applied topologically close to their peering lines, such that it would apply to the Open Connect equipment too (rather than the Open Connect kit sitting between the throttle point and the peering lines), then, to keep the cache useful for new or otherwise not-recently-accessed data, the traffic-shaping rules would need to let Netflix traffic (including your speedtests) through relatively unhindered. The path between the Open Connect equipment and your line is unlikely to need throttling, because that is an internal matter with very different cost dynamics than external peering.


So Google Fiber is throttling your connection, as you paid for, but the network connection to Netflix is unthrottled. Netflix is also deploying OCA (Open Connect Appliance) servers directly inside most big ISPs: https://openconnect.netflix.com/en/#sample-architectures


That said, I can and do get 200 Mbps+ on fast.com (my preferred/go-to speedtest), and still experience constant buffering and the "sorry we can't play that video right now" message that kicks me out, but then just clicking play again works.

I'm not saying it's their servers or network, but I haven't found a reason for it based on my fast.com results.


> but then just clicking pay again works

freudian slip much?


Good catch!


It may be that your ISP, or another organisation that your traffic flows through, has implemented an efficient and accurate-enough way to identify speed-test traffic from amongst the actual video traffic. Possibly just by assuming that after a few minutes of transfer or a certain amount of traffic, your current use is not likely to be just for speed testing.

I assume this is why the tester does not run an upstream test by default: otherwise the lack of a block of upstream traffic would be a good signal that video playing is going on rather than throughput testing.


Possibly. I wouldn't put anything past Optimum after they started injecting advertisements for new channels and packages into my non-HTTPS traffic.


I wrote my own console-based client and server to get around this problem. It uses its own protocol on its own ports and runs on servers that you control.

https://github.com/chrissnell/sparkyfish


IME the best way to do it is to configure a WebPageTest (WPT) instance with geo-distributed clients, and trigger test runs from your local terminal.


Surprised no one has mentioned `iperf` on Linux [1][2], for what that is worth.

It lets you roll your own speed-testing infrastructure, basically: run it as a server on the end you want to test, then run it as a client against that server's IP/hostname on the other end.

It is very handy for measuring throughput between two boxes in a network (or anywhere for that matter).
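
A minimal sketch (hostnames are placeholders; iperf3 syntax shown):

  # on the remote box: listen for test connections
  iperf3 -s
  # on the local box: measure upload to the server
  iperf3 -c server.example.com
  # and download from it (-R reverses the direction)
  iperf3 -c server.example.com -R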

Several big ISPs have endpoints to test against as well, for a speedtest-like experience [1].

[1] - https://iperf.fr/iperf-doc.php

[2] - https://manpages.ubuntu.com/manpages/bionic/man1/iperf.1.htm...


Iperf is really useful for testing your own LAN. Didn't know you could use it to test your ISP speed (in some cases).


Well, you need an endpoint outside your LAN to test against. There are some public servers [0], but not that many, so it's hard to get a good average speed unless you are willing to pay for some EC2 bandwidth.

[0] https://iperf.fr/iperf-servers.php



Was just looking through the thread for this comment.

Because of this issue, when I upgraded to 1 Gbit I spent an obscene amount of time trying to diagnose a network problem that didn't exist. Hopefully others can avoid the same mistake.


Closing the issues with an explanation of some sort would be one thing. But closing and locking them without any explanation is very strange. It makes it very difficult for anyone with the same problem to find any sort of explanation.


Counterpoint: it's accurate for me on a 1 Gbps symmetric connection. It actually gives me a better result than the web speed test, which I assume is limited by JavaScript performance.


He locked all the issues above.


This project is the first time I've ever seen this kind of thing in problem reports. I've already seen enough to stay far away from that software.


There is a standard package available in the Debian repository:

  apt install speedtest-cli

Description: Command line interface for testing internet bandwidth using speedtest.net. Speedtest.net is a web service that allows you to test your broadband connection by downloading a file from one of many Speedtest.net servers around the world. This utility allows you to use the Speedtest.net service from the command line.

Note: this tool accesses speedtest.net over HTTP, while the web-based client uses websockets. This tool has been shown to become increasingly inaccurate with high-speed connections. For more information, see the readme at: https://github.com/sivel/speedtest-cli
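
Basic usage is along these lines:

  # one-line human-readable results
  speedtest-cli --simple
  # machine-readable output for logging
  speedtest-cli --json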


Might concern some users:

I had an issue with speedtest-cli on my 250/250 Mbps line: it showed the correct download speed, but only 4 Mbps upload.

Googling suggests it could be an issue with slow CPU/low RAM.

My (now old) server was by no means a beast, as it had a throttled i3-3220T with 8 GB RAM, but it should be plenty fast for that task.

The solution I came across on reddit/stack was to use this [0] instead, which solved the discrepancy for me:

[0] https://github.com/taganaka/SpeedTest


Indeed, my own Raspberry Pi was returning weird results with the Python-based one; I switched to this one and it is more consistent. I had to add an "--output json" option so I didn't have to redo all my tooling around tracking the speedtest results, but forgot to make a pull request to the repo. I've just done that: same JSON format as the Python-based version.


Thank you! You're awesome!

I used to use speedtest-cli to monitor my Internet speed regularly (in fact, every 5 minutes) while also logging DOCSIS signal levels when debugging connection issues.

It didn't give accurate results, and that caused problems for me.

I wanted to switch to the C++ version, but needed the JSON output to be able to log the data and analyze it.
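
For reference, that kind of logging can be a cron one-liner (a sketch assuming the Python version's --json output, which includes timestamp, download and upload fields with the rates in bits per second; the log path is arbitrary):

  # append one JSON record per run, e.g. every 5 minutes from cron
  speedtest-cli --json >> /var/log/speedtest.jsonl
  # later, extract the figures with jq
  jq '{time: .timestamp, down: .download, up: .upload}' /var/log/speedtest.jsonl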


A Raspberry Pi 3/3B+ or whatever (not the v4) should by no means be used for any sort of speed test: not only is the Wi-Fi weak/slow, but the wired Ethernet interface is also hanging off a USB 2.0 bus.


> A Raspberry Pi 3/3B+ or whatever (not the v4) should by no means be used for any sort of speed test

You make a valid point, though you could be testing the speed of the Raspberry Pi in question. As long as you take the caveats you mentioned into account, that's an OK context; the missing context is the problem with the GP's post to begin with.


My Pi is connected directly to the router via Ethernet, and the bandwidth available to my home connection isn't enough to saturate it, "unfortunately" (~95/20), so this isn't massively a problem for me. YMMV.


With the Ubuntu packaged version 2.0.0:

  Testing download speed...
  Download: 917.52 Mbit/s
  Testing upload speed...
  Upload: 4.16 Mbit/s
Yeah, there's something wrong with that. There doesn't seem to be high CPU use.

Version 0.3.4, which is the default from Apt on a test server, seems to be OK:

  Download: 2239.51 Mbit/s
  Upload: 882.54 Mbit/s
(Maximum would be 20Gbit/s both ways, but I suspect achieving that to a single server would require some network parameter tweaking.)


Maybe it's not as accurate, but on a lot of systems where wget is already installed one can do, with a lot less hassle:

  wget -O /dev/null http://speedtest-ams2.digitalocean.com/100mb.test

Nota bene:

- Change the datacenter to a closer one when appropriate.

- The unit is MB/s, rather than the Mb/s more common for this kind of test (multiply by 8 to compare).


To get a list of all the other locations, go to the DO page (strip the filename): http://speedtest-ams2.digitalocean.com/

It's not HTTPS, presumably for performance? (My download/upload: 33 Mb/s | 8 Mb/s)


I've been using the wget method for a long time also.

I think the only downside is that you need to know what you're fetching and from where.

Edit: And for latency, there is /bin/ping


Can't use wget for upload speeds...


While strictly true, if you can download at link capacity it usually means the upstream is okay too.
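
That said, curl can produce a crude upload figure if you have an endpoint that accepts uploads (a sketch; the URL is a placeholder):

  # stream 100 MB of zeroes and print the average upload rate in bytes/s
  dd if=/dev/zero bs=1M count=100 | curl -s -T - -o /dev/null -w '%{speed_upload}\n' http://your-server.example.com/upload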


A few suggestions:

1. look into what /usr/local/{,s}bin/ and ${HOME}/.local/bin/ are for (and how to add them to your ${PATH}, if necessary)

2. look into why you shouldn't recommend things like "sudo wget ..." (at the least, use "wget" followed by a "sudo mv").
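
i.e. something along these lines (the URL is a placeholder):

  # fetch as an unprivileged user, then install with elevated rights
  wget https://example.com/speedtest-cli
  chmod +x speedtest-cli
  sudo mv speedtest-cli /usr/local/bin/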


For those who are more familiar with npm, there are two packages for this by Sindre Sorhus. You can run them without preinstalling them with npx, if you wish:

    npx speed-test
    npx fast-cli
The second one uses fast.com.


speedtest-net is also a great one; it has a programmatic API too.


Users in Sweden might be better served by this:

http://www.bredbandskollen.se/en/bredbandskollen-cli/


Should work fine for most of Europe, or at least the north; all the servers are in Sweden.


If you need to do this, you may also need to get your external IP address from the CLI. Two ways I like are:

  dig +short myip.opendns.com

  curl icanhazip.com


A little bit easier to remember: curl ifconfig.me


Run speed test with ssh and pv:

  $ yes | pv | ssh your_server "cat > /dev/null"


You can also

  pv /dev/zero | ssh your_server "cat > /dev/null"
Both yours and this one check upload speed.

To check download speed:

  ssh your_server 'cat /dev/zero' | pv > /dev/null
or based on your suggestion:

  ssh your_server yes | pv > /dev/null
And if you don't have a server to login to, you can find a big file on the web and use that to check download speed:

  curl -s http://some-place.com/big-file | pv > /dev/null
Though curl also reports speed, so I guess you may as well just:

  curl http://some-place.com/big-file > /dev/null
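
Or, for just the number, curl's --write-out option prints the average download rate in bytes per second:

  curl -s -o /dev/null -w '%{speed_download}\n' http://some-place.com/big-file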


Hmm, it reports an upload speed about 10 times lower, although my subscription is 50/50 symmetrical. I can confirm that I usually do get the correct speeds (~50 Mbit/s) when downloading things off my server... Where could this mismatch come from?


Maybe MB/s rather than Mb/s; the ratio is 8, so 50 Mbit/s shows up as 6.25 MB/s, which reads as roughly ten times lower.


Quite obviously the server you are testing against.



