My understanding is that this is just a QSFP28 transceiver using a new optical format. There have been 100GbE server NICs using QSFP28 for almost two years (Mellanox ConnectX-4, using the mlx5 driver).
We've been using them with great success at Netflix in our flash storage appliances. We are able to serve well over 90Gb/s from single machines with these NICs using our tuned/enhanced FreeBSD and nginx.
We're pretty much the opposite of DPDK. Rather than moving stuff out of the kernel, we move stuff into it -- for example, TLS encryption. See our kernel TLS papers at AsiaBSDCon.
We do all of our stack "traditionally", in the kernel, whereas DPDK moves things into userspace. By using a traditional stack with async sendfile in the kernel, we benefit from the VM page cache and reduce I/O requirements at peak capacity. There are no memory-to-memory copies, very little kernel/user boundary crossing, and no AIO. Using a single-socket Intel Xeon E5-2697A v4, we serve 90Gb/s at roughly 35-50% CPU (which will increase as more and more clients adopt HTTPS streaming).
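For anyone curious what that data path looks like from the application side, here's a minimal sketch (my own illustration, not Netflix's code -- serve_file, file_fd, and client_fd are just placeholders) of pushing a static file down a connected socket with FreeBSD's sendfile(2). The file's pages go from the VM page cache straight into the socket; user space never touches the data:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/stat.h>
    #include <sys/uio.h>
    #include <err.h>
    #include <errno.h>

    /* Hypothetical helper: send one open file down one connected socket. */
    static void
    serve_file(int file_fd, int client_fd)
    {
        struct stat st;
        off_t off = 0, sent;

        if (fstat(file_fd, &st) == -1)
            err(1, "fstat");

        while (off < st.st_size) {
            sent = 0;
            /* nbytes = 0 means "send everything up to EOF" */
            if (sendfile(file_fd, client_fd, off, 0, NULL, &sent, 0) == -1 &&
                errno != EAGAIN)
                err(1, "sendfile");
            /* On EAGAIN (non-blocking socket), sent still reports progress;
               a real server would wait on kqueue/poll before retrying. */
            off += sent;
        }
    }

With the async sendfile work that landed in FreeBSD 11, a call like this doesn't stall waiting on disk either; the kernel completes the send as the backing I/O finishes.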
There is no question that FreeBSD is lacking in a number of areas. For example, device support is a constant struggle.
Just wondering, what kind of things do you think FreeBSD is lacking in?
Regardless of those things, the OpenConnect boxes are doing a pretty small subset of possible server tasks: basically serving static content from disk, and updating that static content occasionally. This is a task that FreeBSD has been excelling at since basically forever. FreeBSD is a pretty stable target to tweak on as well. Netflix moved TLS bulk encryption into sendfile(), which helps avoid the transitions from kernel to user space -- by putting more stuff in kernel space, rather than the DPDK method of putting more stuff in user space. They've continued to tweak sendfile, which I imagine helped them get up to nearly 100Gbps out.
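For contrast, here's a rough sketch (again my own illustration, nobody's actual code) of the copy-heavy path that doing TLS in the kernel avoids: with userspace TLS, every chunk of the file is read() into a user buffer and then handed to SSL_write(), so the data crosses the kernel/user boundary twice per chunk.

    #include <openssl/ssl.h>
    #include <unistd.h>

    /*
     * Userspace TLS serving loop: read() copies file data out of the kernel
     * into buf, then SSL_write() encrypts it and copies it back into the
     * kernel's socket buffers.  In-kernel TLS plus sendfile() removes both
     * copies.  "ssl" and "file_fd" are assumed to be set up elsewhere.
     */
    static void
    serve_file_userspace_tls(SSL *ssl, int file_fd)
    {
        char buf[64 * 1024];
        ssize_t n;

        while ((n = read(file_fd, buf, sizeof(buf))) > 0) {
            if (SSL_write(ssl, buf, (int)n) <= 0)
                break;  /* a real server would check SSL_get_error() here */
        }
    }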
I haven't had the pleasure of running a 100G network, but I had to do very little tuning to saturate 10G with FreeBSD on an http download site, and TLS CPU was the primary thing holding me back when moving it to https. Bulk download was moved away from my team before I got new servers with 2x10G and fancier processors, so I was never able to see if I could saturate that too :(