
I'm slightly surprised cloudflare isn't using a userspace tcp/ip stack already (faster - less context switches and copies). It's the type of company I'd expect to actually need one.





Nice, they know better. But it also makes me wonder, because they're saying "but what if you need to run another app". I'd expect that for things like load balancers, you'd only run one app per server on the data plane, with the user-space stack handling that, while the OS and services use a separate control-plane NIC with the kernel stack, so the boxes stay reachable even under link saturation, DDoS, etc.

It also makes me wonder, why is TCP/IP special? The kernel should expose a raw network device. I get physical or layer 2 configuration happening in the kernel, but if it is supposed to do IP, then why stop there, why not TLS as well? Why run a complex network protocol stack in the kernel when you can just expose a configured layer 2 device to a user space process? It sounds like a "that's just the way it's always been done" type of scenario.


AFAIK Cloudflare runs their whole stack on every machine. I guess that gives them flexibility and maybe better load balancing. They also seem to use only one NIC.

> why is tcp/ip special? The kernel should expose a raw network device. ... Why run a complex network protocol stack in the kernel when you can just expose a configured layer 2 device to a user space process?

Check out the MIT Exokernel project and Solarflare OpenOnload, which used this approach. It never really caught on because the old-school way is good enough for almost everyone.

> why stop there, why not TLS as well?

kTLS is a thing now (mostly used by Netflix). Back in the day we also had kernel-mode Web servers to save every cycle.


Was it Tux? I've only used it, a looong time ago, on load balancers.

https://en.wikipedia.org/wiki/TUX_web_server


You can do that if you want, but I think part of why TCP/IP is a useful layer of abstraction is that it allows more robust boundaries between applications that may be running on the same machine. If you're just at layer 2, you are basically acting on behalf of the whole box.

TCP/IP is, in theory (AFAIK all experiments related to this fizzled out a decade or two ago), a global resource once you start factoring in congestion control. TLS is less obviously something you would want kernel involvement in, give or take the idea of outsourcing crypto to the kernel, or some small efficiency gains for some workloads from skipping userspace handoffs, with more gains possible with NIC support.

Why can't it be global and user space? DNS resolution, for example, is done in user space, and it is global.

DNS isn't a shared resource that needs to be managed and distributed fairly among programs that don't trust or cooperate with each other.

DNS resolution is a shared resource. The DNS client is typically a user-space OS service that resolves and caches DNS requests. What is resolved by one application is cached and reused by another. But at the app level, there is no deconflicting happening like with transport-layer protocols. However, the same can be said about IP: IP addresses, like name servers, are configured system-wide and shared by all apps.
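As a sketch of what "user-space service that resolves and caches" means here, a minimal caching resolver could look like this (`CachingResolver` is a hypothetical name; real stub resolvers also honor TTLs and negative caching):

```python
import socket

class CachingResolver:
    """Minimal user-space DNS cache sketch: one service resolves,
    every caller reuses the result."""

    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def resolve(self, host):
        if host in self._cache:
            self.hits += 1
        else:
            self.misses += 1
            # getaddrinfo is the normal user-space entry point to resolution
            infos = socket.getaddrinfo(host, None, proto=socket.IPPROTO_TCP)
            self._cache[host] = sorted({info[4][0] for info in infos})
        return self._cache[host]

resolver = CachingResolver()
first = resolver.resolve("localhost")   # miss: actually resolves
second = resolver.resolve("localhost")  # hit: served from the shared cache
print(resolver.misses, resolver.hits)   # 1 1
```

The point of the comment stands either way: nothing about this requires the kernel, only a process that callers agree to go through.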

It can be shared access to a cache, but this is an implementation detail for performance reasons. There is no problem with having different processes resolve DNS with different code. There is a problem if two processes want to control the same IP address, or manage the same TCP port.
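The "two processes want the same TCP port" conflict above is exactly what the kernel arbitrates today. A small demonstration, assuming a Linux-like stack, where the second `bind()` fails instead of silently sharing the port:

```python
import errno
import socket

# The kernel enforces exclusive TCP port ownership: binding the same
# address/port twice (without SO_REUSEPORT) fails with EADDRINUSE.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))          # port 0: kernel picks a free port
port = a.getsockname()[1]
a.listen()

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))   # conflicts with the socket above
except OSError as e:
    print(e.errno == errno.EADDRINUSE)  # True
finally:
    b.close()
    a.close()
```

A user-space stack would have to provide this same arbitration itself, which is the thread's whole question.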

Yeah, but there is still no reason why an "ip_stack" process can't ensure a conflicting IP isn't used, and a "gnu_tcp" or whatever process can't ensure TCP ports are assigned to only one calling process. An exclusive lock on the raw layer 2 device is what you're looking for, I think. I mean, right now applications can just open a raw socket and use a conflicting TCP port. I've done this to kill TCP connections matching some criteria, by sending the remote end an RST while pretending to be the real process (a legit use case). Which approach is more performant, secure, and resilient? That's what I'm asking here.
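For readers unfamiliar with the RST trick above, here is a sketch of the packet-building half, assuming IPv4. `build_rst` and `inet_checksum` are hypothetical helper names; actually injecting the segment also requires a raw socket (root) and the live connection's current sequence number:

```python
import socket
import struct

def inet_checksum(data: bytes) -> int:
    """Internet checksum (RFC 1071) over the given bytes."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_rst(src_ip, dst_ip, src_port, dst_port, seq):
    """Build a bare 20-byte TCP header with only the RST flag set."""
    offset_flags = (5 << 12) | 0x04  # data offset = 5 words, RST bit
    hdr = struct.pack("!HHIIHHHH",
                      src_port, dst_port, seq, 0,  # ports, seq, ack
                      offset_flags, 0, 0, 0)       # flags, window, csum, urg
    # The TCP checksum also covers a pseudo-header of the IP addresses
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, socket.IPPROTO_TCP, len(hdr)))
    csum = inet_checksum(pseudo + hdr)
    return hdr[:16] + struct.pack("!H", csum) + hdr[18:]

rst = build_rst("192.0.2.1", "192.0.2.2", 443, 51000, 123456)
print(len(rst))  # 20
```

That any root process can forge this on behalf of another process is the poster's point: the kernel's "exclusive" port ownership is already leaky at the raw-socket level.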

You do want to offload crypto to dedicated hardware otherwise your transport will get stuck at a paltry 40-50 Gb/s per core. However, you do not need more than block decryption; you can leave all of the crypto protocol management in userspace with no material performance impact.

> faster - less context switches and copies

Aren't both of those largely avoidable these days, with the async-style and zero-copy interfaces now available (like io_uring, where it's still handled by the kernel), along with the near non-existence of single-core processors in modern systems?
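As a concrete instance of the zero-copy interfaces mentioned, `os.sendfile()` (a thin wrapper over the Linux `sendfile(2)` syscall) moves bytes between file descriptors inside the kernel, with no userspace buffer in between. A minimal Linux-specific sketch:

```python
import os
import tempfile

# Zero-copy sketch: sendfile() asks the kernel to move bytes between
# two descriptors directly, replacing a read()/write() copy pair.
src = tempfile.TemporaryFile()
dst = tempfile.TemporaryFile()
payload = b"x" * 65536
src.write(payload)
src.flush()

sent = os.sendfile(dst.fileno(), src.fileno(), 0, len(payload))
dst.seek(0)
print(sent == len(payload), dst.read() == payload)  # True True
```

io_uring goes further by also batching the submissions themselves, but the copy-avoidance idea is the same.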


> > faster - less context switches and copies

This is very much a newbie way of thinking. How do you know? Did you profile it?

It turns out there is surprisingly little dumb zero-copy potential at CF. Most of the stuff is TLS, so it needs to go through userspace anyway (kTLS exists, but I failed to actually use it, and what about QUIC?).

Most of the cpu is burned on dumb things, like application logic. Turns out data copying and encryption and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization - but the majority of the cost was historically in much more obvious areas.


> This is very much a newbie way of thinking. How do you know? Did you profile it?

Does it matter? Fewer syscalls are better. Whatever is being done in kernel mode can be replicated (or improved upon) in a user-space stack. It is easier to add and manage APIs in user space than kernel APIs. You can debug, patch, etc. a user-space stack much more easily. You can have multiple processes for redundancy and ensure crashes don't take out the whole system. I've had situations where rebooting the system was the only solution to routing or ARP resolution issues (even after clearing caches). Same with netfilter/iptables "being stuck" or degrading in performance over time. If you're lucky a module reload can fix it; if it were a process, I could have just killed/restarted it with minimal disruption.
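The "fewer syscalls are better" claim is easy to sanity-check with a rough microbenchmark: write the same megabyte to `/dev/null` as one `write()` versus thousands of tiny ones. The payload is identical; the gap is almost entirely per-syscall overhead (timings are illustrative only and vary by machine):

```python
import os
import time

fd = os.open(os.devnull, os.O_WRONLY)
data = b"x" * (1 << 20)  # 1 MiB

# One syscall for the whole buffer
t0 = time.perf_counter()
os.write(fd, data)
one_call = time.perf_counter() - t0

# 65536 syscalls of 16 bytes each, same total payload
t0 = time.perf_counter()
for i in range(0, len(data), 16):
    os.write(fd, data[i:i + 16])
many_calls = time.perf_counter() - t0

os.close(fd)
print(many_calls > one_call)  # True
```

Which is also why the parent's point holds: whether this overhead matters depends entirely on how many syscalls per request your workload actually makes, which only profiling can tell you.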

> Most of the cpu is burned on dumb things, like application logic. Turns out data copying and encryption and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization - but the majority of the cost was historically in much more obvious areas.

I won't disagree with that, but one optimization does not preclude the other. If IP/TCP were in user space, engineers could optimize them better to fit their use cases. The type of load matters too: you can optimize your app well, but one corner case could tie up your app logic in CPU cycles, and if that happens to include a syscall, and there is no better way to handle it, those context-switch cycles might start to matter.

In general, I don't think it makes much difference, but I expected companies like CF that are performance- and outage-sensitive to wring every last drop of performance and reliability out of their systems.



