What most consumers and sysadmins think of as "firewalls" and what the presentation is talking about are two different things. Simple packet filters ("don't allow traffic on port 123 unless it's from IP a.b.c.d") will always be part of a defense-in-depth strategy, but stateful packet inspection appliances from big-name firewall vendors don't scale: the time budget for inspecting each packet keeps shrinking, and a chunk of that budget is already consumed by the basic I/O needed to get the packet through to its destination.
High-performance networking means the hardware has to move packets faster, so there's less time to do any processing on them.
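To make "less time" concrete, here's a back-of-the-envelope sketch of the per-packet time budget at various line rates. The function and the 64-byte minimum frame size are my own illustration, not from the comment; the 8-byte preamble and 12-byte inter-frame gap are standard Ethernet overhead:

```python
def per_packet_budget_ns(line_rate_gbps: float, frame_bytes: int) -> float:
    """Time available per frame, in nanoseconds, at the given line rate.

    On the wire, each Ethernet frame also carries an 8-byte preamble and
    a minimum 12-byte inter-frame gap, so include that overhead.
    """
    wire_bits = (frame_bytes + 8 + 12) * 8
    # 1 Gb/s is exactly 1 bit/ns, so bits / Gb-per-s gives nanoseconds.
    return wire_bits / line_rate_gbps

# Minimum-size (64 B) frames at increasing line rates: the budget shrinks
# from hundreds of nanoseconds to single digits.
for rate in (1, 10, 100):
    print(f"{rate:3d} Gb/s: {per_packet_budget_ns(rate, 64):7.2f} ns per packet")
```

At 100 Gb/s the box gets under 7 ns per minimum-size packet, which leaves essentially no room for deep inspection in the data path.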
> adding a tiny fixed latency is independent of total system bandwidth
Incorrect on both counts.
A) It's not a tiny latency, not compared to the overall system latency in many cases; this is explained in the article. The speed of light isn't getting any faster, whereas communication rates continue to increase, which means you have more data on the line at once. Which brings me to:
B) Most data flows are finite, and any reliable communication (such as, for instance, anything over TCP, and a good chunk of things over UDP as well) takes a certain number of round trips to come up to speed. Which brings me to:
As such, the faster the link, the more the overall bandwidth of a TCP connection (or ghetto TCP via any other means; pretty much any reliable protocol suffers from this) is limited by a fixed delay rather than by link speed.
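A sketch of why round trips dominate for finite flows, under idealized TCP slow start (window doubles each RTT, no loss, no receive-window limit; the 100 KB flow size and initial window of 10 segments are my own assumptions for illustration):

```python
import math

def completion_rtts(flow_bytes: int, mss: int = 1460, init_cwnd: int = 10) -> int:
    """Round trips to deliver a flow under idealized slow start:
    the congestion window doubles each RTT, with no loss and no
    receive-window limit (handshake RTTs not counted)."""
    segments = math.ceil(flow_bytes / mss)
    cwnd, sent, rtts = init_cwnd, 0, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

def goodput_mbps(flow_bytes: int, rtt_ms: float) -> float:
    """Effective transfer rate of the finite flow in Mb/s.
    Note: link speed doesn't appear anywhere; only the RTT matters."""
    secs = completion_rtts(flow_bytes) * rtt_ms / 1e3
    return flow_bytes * 8 / secs / 1e6

# A 100 KB flow completes in a handful of RTTs, so doubling the RTT
# (say, via an inspection middlebox) halves the effective throughput
# no matter how fast the underlying link is.
print(goodput_mbps(100_000, rtt_ms=1.0))
print(goodput_mbps(100_000, rtt_ms=2.0))
```

The flow finishes before the window ever fills the pipe, so the transfer is bounded by round trips, and any fixed delay added in the path cuts throughput directly.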