
The idea that Python is so slow that it's confusing TCP sounds wrong to me. I think it's more likely that your packet capture scheme is slow. It looks like you're using scapy, which I assume is in turn using libpcap... which may be buffering (in high-performance network monitoring, the packet capture interface goes out of its way to buffer). Which is something you can turn off.

About 13 years ago, I wrote my own programming language expressly for the purpose of implementing network stacks, and had a complete TCP in it; I didn't have this problem. But I do have this problem all the time when I write direct network code and forget about buffering.

Similarly: "Python is so slow that Google reset the network connection" seems a bit unlikely too. Google, and TCP in general, deals with slower, less reliable senders than you. :)

What's the time between your SYN and their SYN+ACK?
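For anyone curious, you can approximate that number from userland without any packet capture at all, by timing connect() — the call returns once the kernel has received the SYN+ACK and completed the handshake. A rough sketch (hypothetical helper, not from the post):

```python
import socket
import time

def handshake_rtt(host, port, timeout=5.0):
    """Approximate the SYN -> SYN+ACK round trip by timing connect(),
    which returns as soon as the kernel completes the TCP handshake."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start
```

If that comes back in tens of milliseconds while your captured packets appear seconds apart, the delay is in the capture path, not the network.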

My favorite thing about writing blog posts is comments like this. Thank you! I didn't consider that the packet capture interface might do buffering. That might explain a lot of the problems I was having :)

This is a great project. Keep playing with it! You might find that the serverside of TCP is more useful to have in Python than the clientside (having a userland IP/TCP serverside allows you to create fake hosts out of thin air).

This sounds super handy for situations like honeypots. Take a /24, assign the hosts you actually have, and then spoof the rest to another server which fakes open SMTP, HTTPS, etc. connections, or which replies to every connection attempt (slowly!) with a successful (if slow!) open.

There's an iptables module called tarpit[1], which takes advantage of some peculiarities of the TCP protocol to essentially prevent the remote host from closing the connection, which can force (conforming?) TCP clients to take 12-24 minutes to timeout every connection. It can make portscanning unpleasantly expensive and time-consuming.

[1] http://linux.die.net/man/8/iptables

This is in fact exactly how honeyd works: http://www.honeyd.org/

Thanks for mentioning Honeyd. Is there any other software like this?

Your "userland TCP/IP" comment got me excited and I started digging around. I wonder if there's some benefit in trying to port something like TCP Daytona [1,2] to a high-level language like Python. I'm curious if that will somehow advance the adoption of SDNs.

[1] https://github.com/jamesbw/tcp-daytona [2] http://nms.lcs.mit.edu/~kandula/data/daytona.pdf

Like writing your own emulator, writing a full TCP stack is a project that is intrinsically worth doing.

You can probably also come up with real-world applications for it (as a security tester, there are lots of applications for having full control over a TCP stack, regardless of how performant it is), but just having done it offers a huge learning return on a modest investment.

You probably don't fully grok what TCP even is until you've made congestion control work.

I just want to second this, I didn't fully understand networking until I wrote my own stack and threw packets on the wire. A simple stack with ARP/ICMP/IP/UDP can be written in a day and seeing your very own ping packets fly across the globe is pretty damn awesome.
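If you take a crack at it, about the only fiddly bit in that day-one stack is the ones'-complement checksum used by IP and ICMP (and, over a pseudo-header, by TCP/UDP). A minimal version, in case it helps anyone get started:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:  # fold any carry bits back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

Handy property for debugging: checksumming a received header *including* its checksum field should yield 0.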

> (as a security tester, there are lots of applications for having full control over a TCP stack, regardless of how performant it is)

And sometimes you even want your stack to be slow, eg in a slow loris attack.

Wrong. In these cases you want your stack to be as fast as possible, but you introduce artificial delay at selected key points. If your attack code is slow, then you're basically attacking yourself. The goal is to consume more resources (a lot more, if possible) on the target side than you're using on your own side.
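To illustrate: a slowloris sender just dribbles out a request whose header block never ends, so the server holds the connection open while the attacker's own loop stays cheap. A sketch (the host name and header names are made up for illustration):

```python
import itertools

def slowloris_fragments():
    """Yield pieces of an HTTP request that is never completed: the
    header block is never terminated with a blank line, so a server
    waiting for the full request keeps the connection open."""
    yield b"GET / HTTP/1.1\r\nHost: example.com\r\n"
    for i in itertools.count():
        yield b"X-keepalive-%d: 1\r\n" % i

# In a real attack, each fragment is written to many open sockets with
# a long sleep between writes; the pacing is deliberate, not a slow stack.
```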

Software Defined Networking is about moving the control plane out of switching/routing devices and into general purpose servers, so that they can make better forwarding decisions such as improved convergence time. In the sense that I understand SDN, I don't think doing TCP in userspace is part of it.

But if it can be done easily in a high-level language, then the control plane can be integrated better with a lot of applications written in the same language, no? I was thinking if that can serve as an impetus for SDN adoption.

You should have a look over here :)


Python is quite fine at doing TCP indeed; muXTCP is the proof.

Being implemented around Twisted, it actually allows you to fiddle with low-level TCP stuff, while e.g. offloading the SSL to the existing stack. It saved my bacon a few times when I wanted to reproduce complicated network breakage scenarios.

https://www.youtube.com/watch?v=BEAKtqiL0nM - Video about muXTCP from 22C3

https://github.com/enki/muXTCP - github repo with it.

I was surprised by this as well. The common quote is that Python is 10-100 times slower than C, yet computers have been doing TCP for ages, and Moore's law has meant that my phone is probably more powerful / faster than my laptop from ten years ago. It didn't quite add up.

Python may be slow, but it's only relatively slow. It's still fast enough for the vast majority of use cases.

As with most things in software, speed can't be measured only by the running program -- the time required to write the program should be included in nearly all cases.

Python: fast to write, slow to run.

C/C++: slow to write, fast to run.

To put it succinctly, you can write fast programs, and you can write programs fast, but you can't write fast programs fast.

I entirely agree, for most situations the trade off is in favor of high-level languages like Python. The scale tips even further in favor of CPython when you start using C extensions like Numpy. All of the benefits of Python, with speed approaching that of pure C (there's some overhead when calling from Python). For use-cases like web applications, the language is almost never the bottleneck anyways. Network latency and database queries usually eat up far more time than the glue code holding it all together. And this isn't even touching on the security benefits of memory safe languages.

> To put it succinctly, you can write fast programs, and you can write programs fast, but you can't write fast programs fast.

Haskell, Clojure, and Ocaml do pretty well at writing fast programs fast for what I consider appealing values of fast.

Now I'm genuinely interested in knowing which parts of Python are 'slowing down' the application. With the help of Cython[0], one can give hints to the compiler to improve performance in most cases.


The Google homepage is only about 20000 bytes... if we assume a maximum segment size of ~1400 bytes, then 14 or 15 packets is about right.
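The arithmetic checks out (both numbers here are the rough estimates from above, not measured values):

```python
import math

response_bytes = 20000  # rough size of the Google homepage
mss = 1400              # typical MSS after IP/TCP overhead on 1500-byte Ethernet
segments = math.ceil(response_bytes / mss)
print(segments)  # 15
```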

I wouldn't be surprised if Google is sending the packets all at once and ignoring the ACKs altogether.

Heck, there's even a 2010 paper from Google on the subject of sending a bunch of packets at the beginning of the connection: An Argument for Increasing TCP's Initial Congestion Window[0]
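The effect that paper argues for is easy to see with a toy slow-start model (idealized: the window doubles every round trip, no losses; numbers are illustrative only):

```python
def rtts_to_deliver(segments: int, initial_cwnd: int) -> int:
    """Round trips needed to send `segments` under idealized slow start:
    the congestion window doubles each RTT until everything is sent."""
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts
```

For a ~15-segment page, going from the traditional initial window of 3 segments to the 10 the paper proposes saves a full round trip (3 RTTs down to 2) in this toy model, which is exactly the kind of win Google was after.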


If a microcontroller and a Commodore 64 can do TCP/IP, Python on a modern PC can handle it.



Our TCP stack is written in Tcl. The window size is set to 1. It works fine.

As a heavy user of Tcl, I'm interested to know more. Is this related to hping3 or pktsrc or NS? Or are you F5?

It's pktsrc.

This seems likely, but also easy to work around by advertising a smaller window size.

When you send ACKs, not only do you send the acknowledgement number indicating which byte you expect next, but you also send a window size indicating how many bytes you're willing to receive before the remote end has to wait for another acknowledgement. Normally you want this to be somewhat large so you don't spend lots of idle time waiting around for ACKs. (But not so large that packets get dropped). This is the key to TCP flow control, which was kinda glossed over in the blog post in the interest of keeping things simple.

But perhaps by default, you're advertising a too-large window considering the circumstances. I bet you could make this a lot more reliable just by advertising something smaller.
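Concretely, the advertised window is just a 16-bit field in every segment you send, so in a userland stack you control it directly when you pack the header. A hand-rolled sketch (not the OP's code; the checksum is left at zero and would need to be computed over the pseudo-header before sending):

```python
import struct

def tcp_header(src_port, dst_port, seq, ack, flags, window):
    """Pack a bare 20-byte TCP header with no options.
    Checksum is left at 0 here; it must be filled in before sending."""
    offset_flags = (5 << 12) | flags  # data offset = 5 words, flag bits low
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, 0, 0)

# Advertise a small window so the peer keeps at most 1400 bytes in flight:
hdr = tcp_header(54321, 80, seq=1, ack=1, flags=0x10, window=1400)
```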

Good TCP implementations have overcome a lot more than some unwanted buffering.

If it's the buffering delay I'm thinking of, and your packet capture filter is tight, the delay we're talking about can be multiple seconds long. Changing window sizes won't fix it.

I haven't read your code yet but I was curious about which packet you are ACK'ing?

Is your ACK sequence number the sum of all received data lengths? I think that is how that works?
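Roughly, yes: the ACK number is the next sequence number you expect from the peer, i.e. their initial sequence number, plus one for the SYN, plus the total payload bytes received so far (plus one more for a FIN, which also consumes a sequence number). A sketch of the bookkeeping, with a hypothetical helper name:

```python
def next_ack(peer_isn: int, bytes_received: int,
             syn_seen: bool = True, fin_seen: bool = False) -> int:
    """Next expected sequence number: SYN and FIN each consume one
    sequence number, every payload byte consumes one. Wraps mod 2**32."""
    return (peer_isn + int(syn_seen) + bytes_received + int(fin_seen)) % 2**32
```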

Python being too slow for TCP/IP sounds silly. AFAIK the C64 can run a webserver. At what... 4 MHz? Less? I think you could probably emulate the C64's TCP/IP stack in Python in realtime+ (or -...) on a multi-GHz CPU...

1 MHz, and while I only implemented UDP in my stack (https://github.com/oliverschmidt/ip65), Jonno Downes took over and added TCP later. There's also Adam Dunkels' uIP and Doc Bacardi's HTTP-Load.

So yes, if a C64 can handle it Python should have plenty of power.

I am curious if the author has tried it against other servers, such as a vanilla Apache server.

Google's webservers, including the TCP stacks themselves, may be very aggressively tuned to make sure you get the response absolutely as fast as possible, at the expense of re-sending packets more quickly than specified.

scapy is the slowest thing on earth. It shouldn't be used.

I agree that scapy is bad for just general packet capture and injection. However, it's a good tool to use for packet dissection and construction.

Indeed. I can negotiate a connection to google.com:80 successfully on a 1000ms latency pipe just fine. Slowly, but fine.
