
Hellcat: netcat that takes unfair advantage of traffic shaping systems - luu
https://github.com/matildah/hellcat
======
nisa
This should work pretty well against tc/iptables shaping setups that count the
bytes in a connection and move it to a lower class.

But I guess for real downloads something like aria2[1] usually works better -
if shaping is applied per TCP stream, multiple connections should also defeat
it, or at least speed the download up enough that the rate limit doesn't
matter.

On the server side it's probably easy to stop this - nginx seems to have all
you need[2],[3]. Just set a unique cookie for the download and deny access
otherwise. I'm not sure what the shared hosters are doing, but likely something
similar (a mandatory waiting time before the cookie or URL hash is set,
limiting access based on connections per hash):

    
    limit_conn_zone $uid_got zone=cookie:10m;
    
    server {
        userid on;                    # issue/read the uid cookie ($uid_got)
    
        location / {
            mp4;                      # ngx_http_mp4_module pseudo-streaming
            limit_conn cookie 1;      # one concurrent download per cookie
            limit_rate_after 10m;     # first 10 MB at full speed
            limit_rate       512k;    # then throttle to 512 KB/s
        }
    }
    

On the other hand it probably screws users for mostly no reason most of the
time.

1: [https://aria2.github.io/](https://aria2.github.io/)

2:
[http://nginx.org/en/docs/http/ngx_http_limit_conn_module.htm...](http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_zone)

3:
[http://nginx.org/en/docs/http/ngx_http_userid_module.html](http://nginx.org/en/docs/http/ngx_http_userid_module.html)

~~~
sleepychu
Can you use aria2 to abuse the network shaping for something like SFTP? Or
does the handshake dwarf the performance benefit?

~~~
nisa
Good question. I think the only thing aria2 does is use multiple connections
- that should work over SFTP too.

~~~
sleepychu
I could TIAS later but does it somehow carry your login between connections?

------
sleepychu
The commit messages for this are gold.
[https://github.com/matildah/hellcat/commits/master](https://github.com/matildah/hellcat/commits/master)

~~~
Etheryte
"mkdir src, put stuff in it."

"we need 4 argz"

"i fail at operators"

------
kpcyrd
I've seen other scripts use curl's --speed-limit and --continue-at options for
HTTP: restart the download after the throttling kicks in, then resume it on a
new connection. Really nice!

------
BrandiATMuhkuh
A while ago I wrote a shell script using curl to do something similar. But
many sites stop high-speed downloads after about 100 MB, not just after a
couple of KB, so this approach is actually quite nice.
[https://github.com/BrandiATMuhkuh/downloadAccelerator](https://github.com/BrandiATMuhkuh/downloadAccelerator)

------
rosstex
Could anyone with a custom OS make their own TCP stack that violates all rate
limiting and always sends data as fast as possible / reports a humongous
window size?

~~~
superuser2
>violates all rate limiting

Rate limiting is performed upstream; the network isn't just asking nicely.
Your packets will be either queued or dropped if you send them faster then
allowed.

Some ISPs allow high speeds for the first N megabytes (often 20 or so), then
throttle the shit out of the rest. This trick presents an entire stream as a
bunch of little streams, so that you get the "first 20MB" treatment for the
whole thing.
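The "bunch of little streams" idea can be sketched with plain curl and HTTP range requests (this is an illustration, not hellcat's actual code, and it assumes the server honors Range headers; the 20 MB chunk size mirrors the hypothetical allowance above):

```shell
#!/bin/sh
# Fetch a file as a series of HTTP range requests, one short-lived TCP
# connection per chunk, so every chunk gets the shaper's fast
# "first N megabytes" treatment. CHUNK should match the unthrottled
# allowance.
CHUNK=${CHUNK:-$((20 * 1024 * 1024))}

fetch_chunked() {
    url=$1; out=$2; size=$3
    off=0
    : > "$out"                          # truncate the output file
    while [ "$off" -lt "$size" ]; do
        end=$((off + CHUNK - 1))
        [ "$end" -ge "$size" ] && end=$((size - 1))
        # each curl invocation is a brand-new TCP connection
        curl --fail --range "$off-$end" "$url" >> "$out" || return 1
        off=$((end + 1))
    done
}
```

This needs the total size up front (e.g. from a HEAD request); hellcat works at the TCP level instead, but the shaping behavior it exploits is the same.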

Depending on circumstances, the "upstream" place where rate limiting is
performed may be your cable modem. People have been known to reflash their
cable modems with cracked firmware that doesn't honor the rate limit. This is
easy enough to detect and punish further upstream.

------
leggomylibro
Is it really faster to do a whole FIN / ACK / FIN / ACK / SYN / SYNACK / ACK
every N bytes? How much does this sort of traffic shaping typically throttle,
and how long does it typically give a connection?

Can this be easily parallelized? I'll bet it could.

~~~
ch4ch4
Comcast had a feature called "Powerboost"
([http://www.dslreports.com/faq/14520](http://www.dslreports.com/faq/14520))
that would let you burst up 100mbit or something for the first 20MB of a file.
I'm not sure if this is still being offered, but I bet it would be awesome
with Hellcat.

~~~
dylz
I think that might be a bit outdated since it refers to DOCSIS 1, and I've
never received a package offer from Comcast below 200 Mbps or so.

~~~
waspleg
In my area, I have the highest tier of Comcast service (residential) at 75
Mbps.

------
MoSal
Is this a problem where multiple connections wouldn't help?

I can add an option to saldl[1] to use a new connection with each chunk. But
I'm not sure there are real world examples where this would help.

[1] [https://github.com/saldl/saldl](https://github.com/saldl/saldl)

------
uudecode
The HN comments suggest there are other ways to do this. Is this another one?

[http://lftp.yar.ru/lftp-man.html](http://lftp.yar.ru/lftp-man.html)

See pget.

I really like the simplicity of hellcat, but IMO the original netcat's
brilliance is partly due to its portability and resistance to bit rot; it does
not use getaddrinfo, and I cannot think of any good reason one needs to.

