> The problem with packets is they go through the Unix kernel. The network stack is complicated and slow. The path of packets to your application needs to be more direct. Don’t let the OS handle the packets.
Why don't you focus on simplifying the network stack instead?
The stack is complicated and slow for a reason: it takes care of many things. I will believe that you can do better when you provide the same amount of functionality (QoS, firewall, probing, tracing). If you say that you can do without all these additional features, why don't you go and optimize the stack so that the most basic code path is smaller and faster?
If you have one specialized need (i.e. one specific path for packets to travel), then it is just as valid an approach to trash every other path.
To turn your own question around: Why bother trying to optimize a stack that contains a lot of stuff you don't strictly need? The talk doesn't deal with general-purpose servers that perform multiple roles; it says "if you have a singular role, here is how you hyper-optimize to support that role at a huge scale."
> If you have one specialized need (i.e. one specific path for packets to travel), then it is just as valid an approach to trash every other path.
You only have one specialized need at the very beginning. Then you start seeing the need for "just another little feature". And soon you will start replicating big parts of the network stack. Call it Greenspun's tenth rule for the network, if you want.
Also, most of the time, what you "just need" also happens to be the common need of many other users out there. Joining forces to fix a single code path is a much better investment than redoing things in userland for the sake of it.
Well, the whole point of the article is that for some needs even the *option* of extra functionality hurts: you want to remove every extra 'if' from the hot path, and every field in a data structure you don't need, so that what remains fits in cache better.