
Manually Throttle the Bandwidth of a Linux Network Interface - sndean
http://mark.koli.ch/slowdown-throttle-bandwidth-linux-network-interface
======
choocroot
« Note the download isn’t going to hover exactly at 1.0K/sec — the actual
download speed as reported by wget is an average over time. In short, you’ll
see numbers closer to an even 1.0K/sec the longer the transfer. In this
example, I didn’t wait to download an entire 4.2GB file, so the 10.5K/s you
see above is just wget averaging the transfer speed over the short time I left
wget running. »

Wrong: you see 10KB/s download speed because you are not throttling the
incoming packets but the outgoing packets!

So, what you are really doing is rate limiting outgoing ACK packets to 1KB/s,
which delays your outgoing ACKs, hence preventing the remote server from
sending you data at full throttle.

You can verify this with a tool like "bwm-ng" that shows Rx/Tx speeds: you'll
see exactly 1KB/s on Tx, and something variable on Rx.

The incoming rate will fluctuate depending on the TCP window setting used by
the server, which translates to how many unacknowledged packets the server is
allowed to have in flight.
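As a sketch of the effect being described (the interface name eth0 and the use
of a tbf qdisc are illustrative assumptions; requires root):

```shell
# A root qdisc only ever sees *outgoing* packets, so this throttles
# the upload side (the ACK stream during a download), not the download.
tc qdisc add dev eth0 root tbf rate 8kbit latency 50ms burst 1600

# Watch both directions while a download runs: Tx pins at the
# configured rate while Rx fluctuates with the server's TCP window.
bwm-ng -I eth0
```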

~~~
gmazza
> Wrong: you see 10KB/s download speed because you are not throttling the
> incoming packets but the outgoing packets!

Yep. tc's default is to police _outgoing_ traffic, which in OP's example is
essentially a bunch of TCP ACKs. Instead, they should be using the _ingress_
keyword, something like described here:

[http://blog.stevedoria.net/20050906/ingress-policing-with-
li...](http://blog.stevedoria.net/20050906/ingress-policing-with-linux-and-tc)

Caveat emptor: _ingress_ rate-limiting is hard. Long story short, it all boils
down to what you do with non-conforming packets. There are two alternatives,
and both are rather sub-optimal. You can either buffer/delay packets in kernel
space (the default, which leads to bufferbloat and memory waste), or drop them
(which the author linked above opted for, and which leads to excessive
retransmits and bandwidth waste).
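Following the linked approach, an ingress-policing sketch might look like this
(the interface name and rate are placeholders; requires root):

```shell
# Attach the special ingress qdisc, then police all incoming IPv4
# traffic, dropping packets that exceed the configured rate.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0 0 \
    police rate 1mbit burst 100k drop flowid :1
```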

~~~
wtallis
The drop vs buffer decision is no harder for incoming packets than for
outgoing ones. In either case: if you're trying to simulate a different kind
of network, do what
that network does. If you're just trying to get good QoS on your gateway
router, then use a smart AQM that will buffer only to the extent that is
reasonable, and then drop or ECN mark when buffering threatens to add too much
latency.

------
gmazza
If you don't need hierarchical classes[1], just use _tbf_ (token bucket
filter) instead of _htb_ (hierarchy token bucket) - it's more efficient,
more compact, and gives you access to delay in the same discipline as well.
Compare:

    
    
      # htb
      tc qdisc add dev eth0 handle 1: root htb default 11
      tc class add dev eth0 parent 1: classid 1:1 htb rate 1kbps
      tc class add dev eth0 parent 1:1 classid 1:11 htb rate 1kbps
    

vs.

    
    
      # tbf
      tc qdisc add dev eth0 root tbf rate 1kbit latency 42ms burst 2k
    

"man htb" and "man tbf" are quite usable too.

[1] i.e. stuff like "I would like to have tcp/80 limited to 10 mbit/s, tcp/443
limited to 15 mbit/s, while sum of above should never exceed 20mbit/s, and
tcp/80 should get priority when competing for that shared 20mbit/s"

EDIT: changed to " _burst 2k_ ". Having burst lower than interface MTU will
delay large packets essentially forever.
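For the hierarchical scenario in [1], a hedged htb sketch (eth0, the guaranteed
rates, and the port-based filters are all assumptions; matching on dport fits a
client, a server would match sport instead):

```shell
# 20mbit shared ceiling; tcp/80 capped at 10mbit with higher priority,
# tcp/443 capped at 15mbit; unmatched traffic falls into 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1:  classid 1:1  htb rate 20mbit ceil 20mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit ceil 10mbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 5mbit ceil 15mbit prio 1
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 443 0xffff flowid 1:20
```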

------
secure
[https://github.com/urbenlegend/netimpair](https://github.com/urbenlegend/netimpair)
is a tool which implements the techniques that are described in the article,
and more. In particular, jitter (variance) is often forgotten about when
testing.
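For jitter specifically, netem takes a variance argument after the base delay;
a minimal sketch (eth0 assumed, requires root):

```shell
# 100ms base delay, +/-20ms random jitter, with 25% correlation
# between successive delays so the jitter isn't pure white noise
tc qdisc add dev eth0 root netem delay 100ms 20ms 25%
```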

~~~
walrus01
Huge variations in jitter that occur randomly are a hard thing to reliably
simulate. For example, simulating a US/48-states consumer-grade Ku- or Ka-band
VSAT service ($85-115/mo), which is a highly oversubscribed TDMA network.

When capacity in your particular spot beam is good, latency could be 550 to
600 ms end to end. When it's bad it could be 1300 or 1700ms and will jitter
around randomly anywhere in between those two figures.

~~~
wtallis
I think your standard for huge variation in latency could use some updating. A
factor of 2-4 increase over baseline latency would be a huge _improvement_ for
many terrestrial WiFi systems. And services like Gogo in-flight WiFi have been
shown to degrade to over 10 seconds of latency when congested.

------
tyingq
If you're doing this for web related testing, Chrome has a very handy
bandwidth throttle built into devtools:
[https://developers.google.com/web/tools/chrome-
devtools/netw...](https://developers.google.com/web/tools/chrome-
devtools/network-performance/imgs/throttle-selection.png)

~~~
amelius
Does that work also for websocket connections? I'm asking because last time I
checked it, I strongly got the impression this feature only works for simple
http requests, but I could be wrong.

~~~
tyingq
It does not:
[https://bugs.chromium.org/p/chromium/issues/detail?id=423246](https://bugs.chromium.org/p/chromium/issues/detail?id=423246)

------
JelteF
The tc command is great except for the weird command structure. I really like
the comcast tool as a wrapper for tc:
[https://github.com/tylertreat/comcast](https://github.com/tylertreat/comcast)

It makes it much easier to throttle the way you want.

~~~
wtallis
The comcast wrapper is very easy to use because its capabilities are so
limited that it is essentially useless. It can accomplish some throttling, but
it cannot produce a usefully accurate simulation of a Comcast connection or
any other commonly congested bottleneck.

tc is complicated because it does more things.

------
FabHK
BTW, I've no idea of the technicalities of this, but I travel frequently to
places with _terrible_ internet (and of course billions of people live in such
places), and many web experiences degrade considerably.

If some supercool app or videos don't work, sure, no problem, but if reading
some programming documentation (with maybe 30KB of actual text) or a bank
statement (with maybe 2KB of actual information) or even getting a restaurant
address/phone number doesn't work because of monstrously huge sites with much
back and forth (what's the technical term here...), that's frustrating.

So please please please do try to make your sites usable over bad connections
:-)

~~~
walrus01
When working in locations with really terrible net connections, one of the
things I have resorted to is a VNC session that is a 1920x1200 desktop (256
color) tunneled inside SSH. In this setup the workstation's VNC client
connects to a port on localhost that is SSH forwarded to the remote host. With
the right SSH settings for timeout and keepalive it can be surprisingly
usable. Then open up whatever application you have that is terrible on high-
jitter/high-latency connections inside Chrome or Firefox, or as a native
desktop app on the machine that is hosting the VNC session.
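A sketch of that setup (host name, user, and the VNC port are placeholders):

```shell
# Tunnel the remote VNC display (assumed on port 5901) over SSH,
# with keepalives so the session survives a flaky link.
ssh -L 5901:localhost:5901 \
    -o ServerAliveInterval=15 \
    -o ServerAliveCountMax=8 \
    user@remotehost

# Then point the VNC viewer at the local end of the tunnel:
vncviewer localhost:5901
```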

~~~
kovek
How would you get to having the right SSH settings for timeout and keep alive?

What I've done when using a slow connection is run emacs with w3m on a remote
server and get the rendered text through SSH. I'm realizing more and more that
a lot of our interfaces are more visual so I might try your method.

~~~
walrus01
Either by pushing the settings from the ssh client ([http://www.gsp.com/cgi-
bin/man.cgi?topic=ssh_config](http://www.gsp.com/cgi-
bin/man.cgi?topic=ssh_config)) or by changing the settings on the
/etc/ssh/sshd_config file of the machine that is both hosting the VNC server
and hosting the ssh daemon.

[https://www.google.com/search?q=ssh+timeout+keepalive+sshd_c...](https://www.google.com/search?q=ssh+timeout+keepalive+sshd_config&ie=utf-8&oe=utf-8)
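The relevant knobs are ServerAliveInterval/ServerAliveCountMax on the client
and ClientAliveInterval/ClientAliveCountMax in sshd_config; a client-side
sketch (the host alias and the values are just examples):

```
# ~/.ssh/config
Host remotehost
    ServerAliveInterval 15    # probe the server every 15 seconds
    ServerAliveCountMax 8     # give up after ~2 minutes of silence
    TCPKeepAlive yes
```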

------
wtallis
I never see tools or articles like this discussing emulating bufferbloat, only
static high latency. In the real world, satellite connections are relatively
uncommon, and 500ms latency is usually instead due to excess buffering on a
link that can deliver low latency when not saturated.

Also, don't the instructions in this article only apply to outbound traffic?

If you're trying to simulate poor network conditions, you need to have a
better understanding than this of what causes poor network performance, and
how to properly emulate it.

~~~
betaby
There is a way to do ingress traffic manipulation: you have to divert it via a
virtual device. For years, IMQ
([https://github.com/imq/linuximq/wiki/WhatIs](https://github.com/imq/linuximq/wiki/WhatIs))
was the most common way to do it.

~~~
wtallis
Yes, and since 2006 there's been IFB in the upstream kernel to do most of the
same work. My point was more that the one-way nature of qdiscs is a really
huge thing to overlook.
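An IFB-based sketch (eth0/ifb0 and the rate are assumptions; requires root):
redirect ingress onto the virtual device, then shape it there as if it were
egress:

```shell
# Load the IFB module and bring the virtual device up
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Mirror all ingress from eth0 onto ifb0...
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# ...and shape on ifb0 with any normal (egress) qdisc
tc qdisc add dev ifb0 root tbf rate 1mbit latency 50ms burst 32k
```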

------
betaby
Also for more details [http://lartc.org/howto/](http://lartc.org/howto/)

------
signa11
i generally found this:
[http://wiki.linuxwall.info/doku.php/en:ressources:dossiers:n...](http://wiki.linuxwall.info/doku.php/en:ressources:dossiers:networking:traffic_control)
to be pretty useful overall.

~~~
wtallis
It's a decent introduction, but a few years out of date. The discussion of
bufferbloat needs to be updated to account for the BQL mechanism to
automatically manage driver buffering, and the sections on more recent AQMs
need to be fleshed out. In particular, fq_codel or cake should be recommended
over the Rube Goldberg HTB+SFQ+PFIFO_FAST setup described.
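For comparison, the modern single-qdisc setup is a one-liner (eth0 and the
bandwidth figure are placeholders):

```shell
# cake: shaping + flow isolation + AQM in one qdisc
tc qdisc add dev eth0 root cake bandwidth 20mbit

# or, on kernels without cake:
tc qdisc add dev eth0 root fq_codel
```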

------
FabHK
Julia Evans (@b0rk) _just_ had an article on that:

[http://jvns.ca/blog/2017/04/01/slow-down-your-internet-
with-...](http://jvns.ca/blog/2017/04/01/slow-down-your-internet-with-tc/)

------
basemi
Maybe netem is more complete
[[https://wiki.linuxfoundation.org/networking/netem](https://wiki.linuxfoundation.org/networking/netem)]

It has delay, loss, duplication, corruption, re-ordering, rate control.
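All of those impairments can be combined in a single netem invocation; a
sketch (eth0 and the figures are assumptions):

```shell
# Delay with jitter; loss, duplication, and corruption; reordering
# (which requires delay to be set); and rate control, in one qdisc:
tc qdisc add dev eth0 root netem \
    delay 100ms 10ms \
    loss 0.3% \
    duplicate 0.1% \
    corrupt 0.05% \
    reorder 1% \
    rate 1mbit
```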

And take a look at this SO thread:
[[http://stackoverflow.com/questions/130354/how-do-i-
simulate-...](http://stackoverflow.com/questions/130354/how-do-i-simulate-a-
low-bandwidth-high-latency-environment)]

------
saagarjha
For those on iOS and macOS, Apple provides Network Link Conditioner for this
purpose.

~~~
feld
Ooh I wonder if this is based on Dummynet

------
daemonna
if you want something complicated to study, here is my old DaemonShape,
dynamic shaper..
[https://github.com/daemonna/bashtools/blob/master/daemon_sha...](https://github.com/daemonna/bashtools/blob/master/daemon_shape.sh)
..should be self-explanatory when you run with help.

------
grizzles
I know it's easy with a server, but is there a codeless way to tell a client
app which network interface you want it to use?

------
networktesting
It's more fun to simulate latency and packet loss for VOIP connections... it's
been a while since I did that.

------
throwayedidqo
This does a bad job of simulating a slow network. First, as others mentioned,
it only throttles outbound. It also doesn't simulate buffer bloat.

The Linux network stack isn't designed for this. The easiest tool to use is
BSD's Dummynet pipes.

~~~
wtallis
The Linux network stack is perfectly capable of simulating bufferbloat and
other network degradation, in either direction. This article simply fails to
mention the relevant modules.
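One hedged sketch of emulating a bloated link with netem alone (eth0, the
rate, and the queue depth are assumptions):

```shell
# A 5mbit link behind a deep tail-drop buffer (limit is in packets).
# Saturate it with a bulk transfer and ping RTTs climb into the
# seconds as the queue fills -- classic bufferbloat.
tc qdisc add dev eth0 root netem rate 5mbit limit 2000
```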

~~~
pjmlp
Which are....?

