
On the face of it this is a nonsensical problem: a 10Gbps Ethernet connection is not going to need 10M concurrent TCP connections. Ethernet + TCP/IP protocol overhead is roughly 4% (62 bytes of overhead out of a 1542-byte maximum packet; no jumbo frames over the wider internet yet), and the average UK broadband connection is now over 12Mbps (http://media.ofcom.org.uk/2013/03/14/average-uk-broadband-sp...), which gives approximately 800 connections to fill the pipe. Even at a paltry 1% bandwidth usage per connection and 4x10Gbps adapters in the server, that is still only around 320,000 connections; I have done 150k connections on a high-end desktop Linux machine. Available memory (4kx2 socket buffers per connection; you do want connections to be as fast as possible, no?) and bandwidth will limit the number of connections long before you get to 10M. You are far better off buying multiple machines (redundancy) in more than one location (even more redundancy) before heading off to load yourself up with technical debt that could choke a zoo full of elephants.
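For what it's worth, here is the same back-of-the-envelope arithmetic as a runnable sketch (the 12Mbps, 1% utilisation and 4-adapter figures are simply the assumptions stated above):

```python
# Back-of-the-envelope numbers from the comment above (all figures are assumptions).
OVERHEAD_BYTES = 62          # Ethernet + TCP/IP header overhead per packet
MAX_PACKET_BYTES = 1542      # max on-the-wire frame size, no jumbo frames
PIPE_BPS = 10e9              # one 10Gbps adapter
AVG_CONN_BPS = 12e6          # average UK broadband connection, ~12Mbps

overhead_fraction = OVERHEAD_BYTES / MAX_PACKET_BYTES        # ~4%
conns_to_fill_pipe = PIPE_BPS / AVG_CONN_BPS                 # ~833 saturated connections

# Even at 1% utilisation per connection and 4 x 10Gbps adapters:
conns_at_1_percent = 4 * PIPE_BPS / (AVG_CONN_BPS * 0.01)    # ~333,000 connections

print(f"overhead ~{overhead_fraction:.1%}, "
      f"{conns_to_fill_pipe:.0f} saturated connections per 10Gbps, "
      f"{conns_at_1_percent:,.0f} connections at 1% utilisation")
```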

The Linux kernel has come a long way in the last few years, with improvements to SMP scaling of sockets, zero-copy I/O, and support for very large numbers of file handles; make use of it! The level of technical skill being applied to fix kernel problems upstream is probably more expensive than anything you can afford to hire yourself.
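As an aside, you still have to raise the default per-process limits before one process can hold hundreds of thousands of sockets; a minimal sketch of doing that from Python (the 200k target is purely illustrative, and the system-wide ceilings are tuned separately via sysctl):

```python
# Minimal sketch: raise this process's open-file limit so it can hold many sockets.
# The hard limit is capped by system-wide settings (fs.file-max, fs.nr_open), which
# are tuned separately; the target number below is just illustrative.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"current limits: soft={soft}, hard={hard}")

target = 200_000  # e.g. ~150k connections plus some headroom
resource.setrlimit(resource.RLIMIT_NOFILE, (min(target, hard), hard))
```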




If you look at real network traffic flowing through e.g. a mobile operator's core network, you'll find that for 1Gbps of traffic there will be anywhere from 200k to 500k open TCP connections (depending a bit on the traffic mix, and on what idle-connection thresholds you use for the analysis). So if you're building TCP-aware network appliances targeting a 10Gbps traffic rate, it's not a bad idea to plan for 10M simultaneous connections. You might get away with a factor of 2 or 5 less, but probably not with a factor of 10. 150k connections would be a joke.
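Scaling that per-Gbps observation up is just multiplication; a quick sketch using the figures above:

```python
# Scale the observed 200k-500k connections per 1Gbps up to a 10Gbps appliance.
conns_per_gbps = (200_000, 500_000)   # observed range; depends on traffic mix
target_gbps = 10

low, high = (c * target_gbps for c in conns_per_gbps)
print(f"expect roughly {low:,} to {high:,} concurrent connections at {target_gbps}Gbps")
# -> 2,000,000 to 5,000,000; sizing for 10M leaves a 2-5x margin on top of that.
```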


There are use cases that need minimal to no bandwidth but a lot of connections; IMAP IDLE comes to mind as a premier example. Essentially anything that requires "push" capability these days relies on open connections; other examples are WebSockets and instant messaging. The number of concurrent users is also on the rise because of the always-on nature of cellphones. While 10M sounds a bit far-fetched today, I think that using available bandwidth as a measure for connection count is misguided.
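To illustrate the idle-but-open pattern, here is a toy sketch of a push-style server where every client costs a file descriptor and some buffer memory but almost no bandwidth (the port, interval and payload are made up):

```python
# Toy sketch of the "push" pattern: clients connect and then mostly sit idle,
# consuming a file descriptor and some buffer memory but almost no bandwidth.
import asyncio

async def handle_client(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    try:
        while True:
            await asyncio.sleep(60)          # nothing to do most of the time
            writer.write(b"ping\n")          # occasional tiny push when there is news
            await writer.drain()
    except ConnectionError:
        writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```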


UDP, or better still SCTP, would be a far better protocol for this use case, though there are legacy issues with NAT and with protocol support (e.g. IMAP, WebSockets). Having said that, multi-homing is coming down the pipeline; the real pity is that the devices that could most benefit from it, i.e. mobile devices running iOS and Android, largely refuse to support it. Maybe pushing the Firefox guys to support SCTP in their OS / WebSockets implementation would force the other parties to the table... Maybe a user-space implementation on top of UDP would be the way to go.


I think the problem with UDP is all the people who think that NAT is a perfectly valid firewall/use case and not a hack.


Agreed, a serious "real world" problem; having said that, people are working on it. I had a search around earlier and found there is a draft RFC for the UDP encapsulation of SCTP (http://tools.ietf.org/html/draft-ietf-tsvwg-sctp-udp-encaps-...). That, combined with a soul-destroying zero-data-payload keepalive to fend off NAT stupidity, and maybe a server-side abuse of port 53 to keep "carrier grade" NAT quiet, might do the trick. All of this should work on mobile platforms.
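The keepalive half of that is at least simple on the client side; a minimal sketch, where the server address, port and interval are illustrative and a real SCTP-over-UDP stack would sit on top of it:

```python
# Minimal sketch of a NAT keepalive: send a zero-payload UDP datagram every
# few seconds so the NAT/CGN mapping for this 5-tuple doesn't expire.
import socket
import time

SERVER = ("example.net", 53)      # illustrative; port 53 as suggested above
KEEPALIVE_INTERVAL = 15           # seconds; must be shorter than the NAT's UDP timeout

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(SERVER)              # fixes the local port so the mapping stays stable

while True:
    sock.send(b"")                # zero-byte datagram: enough to refresh the mapping
    time.sleep(KEEPALIVE_INTERVAL)
```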

In general doing this "properly" is an exercise in icky compromise.


>Available memory (4kx2 socket buffers; you do want connections to be as fast as possible no?)

When a 1U commodity x86 server built on the Intel C602 chipset can be packed with 768 GB of RAM...
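Indeed; multiplying out the buffer figure quoted above gives a total that fits comfortably in that much RAM (a sketch, using the thread's own numbers):

```python
# Socket buffer memory for 10M connections, using the 4KB x 2 figure from above.
connections = 10_000_000
per_conn_bytes = 2 * 4 * 1024          # one 4KB receive + one 4KB send buffer

total_gib = connections * per_conn_bytes / 2**30
print(f"~{total_gib:.0f} GiB of socket buffers")   # ~76 GiB, well under 768 GB of RAM
```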


Your numbers assume TCP will always be the dominant connection-oriented protocol.


Your math assumes all of your connections are saturated to the max (on the user's end).

If that's the case, then yeah, 10 million connections would require a 14.3 terabyte/sec connection, which doesn't exist for regular PCs as far as I know.
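For reference, that figure falls straight out of the per-connection rate mentioned upthread (a sketch; the 12Mbps assumption and the choice of binary vs decimal units are what move the number around):

```python
# 10 million connections, each saturating a ~12Mbps link.
connections = 10_000_000
per_conn_bps = 12 * 2**20          # 12Mbps in binary megabits

total_bytes_per_sec = connections * per_conn_bps / 8
print(f"{total_bytes_per_sec / 2**40:.1f} TiB/s")   # ~14.3 TiB/s
# With decimal units (12e6 bit/s, TB = 1e12 bytes) it comes out at 15 TB/s instead.
```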

There are many applications where most of your connections sit idle, checking something like memcache for status updates once a second.




