
Ixy – a userspace network driver in 1000 lines of code - ttflee
https://github.com/emmericp/ixy
======
iamtew
There was a talk about this at 34C3 earlier today.

Demystifying Network Cards

- https://events.ccc.de/congress/2017/Fahrplan/events/9159.html

- https://streaming.media.ccc.de/34c3/relive/9159

~~~
bogomipz
Thanks for posting these links. On page 20 of the slide deck the author
enumerates problems with existing user mode frameworks and one bullet point
states:

• Limited support for interrupts

• Interrupts not considered useful at >= 0.1 Mpps

Does anyone have any insight into why interrupts aren't considered "useful" at
this rate? Is this a reference to NAPI and the threshold at which it's better
to poll?

~~~
emmericp
The core of the problem is a trade-off between saving power and optimizing
for latency.

Let's say you are receiving around 100k packets per second. You can fire 100k
interrupts per second, sure, no problem. But you'll probably be running at
100% CPU load. NAPI doesn't really help here: you'll still see quite a high
CPU load with NAPI (if you measure it correctly; CONFIG_IRQ_TIME_ACCOUNTING is
often disabled), just not the horrible pre-NAPI livelocks.

What will help is a driver that limits the interrupt rate (Intel drivers do
this by default, the option is called ITR). Now you've got a high latency
instead (for some definitions of high, you'll see 40-100 µs with ixgbe).

Note that the interrupts are now basically just a hardware timer. We don't
need to use a NIC for hardware timers.

This is of course only true at quite high packet rates (0.1 Mpps was maybe the
wrong figure, let's say 1 Mpps). Interrupts are great at low packet rates.
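
To make the trade-off concrete, here is a minimal C sketch (not taken from ixy
or the Intel driver) of what such rate limiting looks like from a userspace
driver: a single write to a per-queue throttle register on the memory-mapped
BAR. The register offset and granularity below are illustrative placeholders
loosely modeled on EITR-style registers, not datasheet values.

    /* Hedged sketch: bound the interrupt rate via a per-queue throttle
     * register. Offset and granularity are illustrative placeholders;
     * check the NIC's datasheet for the real register layout. */
    #include <stdint.h>

    #define ITR_REG_QUEUE0  0x00820  /* hypothetical register offset for RX queue 0 */
    #define ITR_GRANULARITY 2        /* hypothetical: interval counted in 2 us units */

    static void set_reg32(uint8_t *bar0, uint32_t offset, uint32_t value) {
        *(volatile uint32_t *) (bar0 + offset) = value;
    }

    /* Fire at most one interrupt every interval_us microseconds on queue 0.
     * E.g. interval_us = 100 bounds the interrupt rate to 10k/s regardless of
     * the packet rate, at the cost of up to ~100 us of extra latency. */
    void limit_interrupt_rate(uint8_t *bar0, uint32_t interval_us) {
        set_reg32(bar0, ITR_REG_QUEUE0, interval_us / ITR_GRANULARITY);
    }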

I looked into this in more detail in the Linux kernel a few years ago; see
this paper for the details:
https://www.net.in.tum.de/fileadmin/bibtex/publications/papers/SPECTS15NAPIoptimization.pdf

All the DPDK stuff about power saving is also quite interesting. Power saving
is a relevant topic and DPDK is still quite bad at it. I think the most
promising approach is dynamically adding and removing worker threads, as well
as controlling the CPU frequency, from the application (which knows how loaded
it actually is!). Unfortunately, most DPDK applications allocate threads and
cores statically at the moment.
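
As a rough illustration of the "application knows its own load" idea, here is
a sketch of an adaptive poll loop that backs off with growing sleeps while the
RX queue stays empty. rx_batch() is a hypothetical stand-in for whatever
receive call the framework exposes (ixy's ixy_rx_batch or DPDK's
rte_eth_rx_burst, for example), and the backoff policy is just one simple
possibility.

    /* Minimal sketch, not the ixy or DPDK API: poll the RX queue and back off
     * exponentially (capped at 1 ms) while it stays empty, trading a bounded
     * amount of latency for far less CPU time and power when idle. */
    #include <stdint.h>
    #include <unistd.h>

    #define BATCH_SIZE   32
    #define MAX_SLEEP_US 1000  /* cap the added latency at 1 ms */

    /* hypothetical placeholder: returns the number of packets put into bufs */
    extern uint32_t rx_batch(void *rx_queue, void *bufs[], uint32_t num_bufs);

    void poll_loop(void *rx_queue) {
        void *bufs[BATCH_SIZE];
        uint32_t sleep_us = 0;
        for (;;) {
            uint32_t n = rx_batch(rx_queue, bufs, BATCH_SIZE);
            if (n > 0) {
                sleep_us = 0;                /* busy: poll at full speed */
                /* ... process the n packets ... */
            } else {
                sleep_us = sleep_us ? 2 * sleep_us : 1;
                if (sleep_us > MAX_SLEEP_US)
                    sleep_us = MAX_SLEEP_US; /* idle: let the core sleep */
                usleep(sleep_us);
            }
        }
    }

The same idea extends to parking whole worker threads or lowering the CPU
frequency when the measured load stays low.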

~~~
bogomipz
Thank you for the detailed explanation. Those bullet points make sense to me
now. Cheers.

------
abecedarius
From the draft paper: "It should be noted that the Snabb project [16] has
similar design goals, ixy tries to be one order of magnitude simpler than
Snabb. For example, Snabb targets 10,000 lines of code [15], we target 1,000
lines of code and Snabb builds on Lua with LuaJIT instead of C limiting
accessibility."

Snabb aims at practicality while Ixy aims at education, as I understand it,
though I haven't read either one yet.

------
dvdplm
Is it possible to try this out in a VM? I.e., are there any VMs out there
that emulate the hardware well enough to be usable with Ixy?

~~~
en4bz
DPDK, which this library seems to be inspired by, has a virtio driver that
will work for VMs.

You can get into Intel 10GbE hardware for $100 USD [1].

[1] https://www.ebay.com/itm/Intel-Dell-X520-DA2-10Gb-10Gbe-10-Gigabit-Network-Adapter-NIC-Dual-E10G42BTDA/182583302702?epid=1272538247&hash=item2a82d01a2e:g:nxoAAOSwIQdZHwEX

~~~
voltagex_
How much is the cabling going to cost you? A switch?

~~~
en4bz
You can do physical loopback by connecting the two ports together. No switch
required. A DAC cable is ~$25.

~~~
signa11
> You can do physical loopback by connecting the two ports together.

this! And it's way better, because you can run all your experiments (perf
etc.) on the same machine. For example, you can run the client on one CPU core
and the server on another, and have packets looped over the physical
interface.

simplify man simplify :)
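
A quick sketch of that setup in C, assuming a dual-port NIC cabled
back-to-back: two threads pinned to different cores with
pthread_setaffinity_np(), one per port. The run_client/run_server bodies are
just placeholders for the actual traffic generator and echo/forwarding code.

    /* Minimal sketch: pin a "client" and a "server" thread to separate cores.
     * Compile with -pthread; the run_* functions are placeholders. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *run_client(void *port) { printf("client on %s\n", (char *) port); return NULL; }
    static void *run_server(void *port) { printf("server on %s\n", (char *) port); return NULL; }

    static pthread_t spawn_pinned(void *(*fn)(void *), void *arg, int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);              /* allow only the requested core */
        pthread_t t;
        pthread_create(&t, NULL, fn, arg);
        pthread_setaffinity_np(t, sizeof(set), &set);
        return t;
    }

    int main(void) {
        pthread_t client = spawn_pinned(run_client, "port 0", 1);  /* CPU core 1 */
        pthread_t server = spawn_pinned(run_server, "port 1", 2);  /* CPU core 2 */
        pthread_join(client, NULL);
        pthread_join(server, NULL);
        return 0;
    }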

------
userbinator
_Implement at least one other driver beside ixgbe_

Other NICs with documented and relatively simple interfaces include the
(rather old) Ne2000, 8254x, and some of the Realtek ones:

[http://wiki.osdev.org/Category:Network_Hardware](http://wiki.osdev.org/Category:Network_Hardware)

~~~
emmericp
The problem with these NICs is that they are PCI or even ISA. Also, they are
quite rare compared to, e.g., an igb/e1000e NIC. A driver for an igb/e1000e
NIC would be quite similar to the ixgbe driver.

However, since the main reason to use these NICs seems to be support in
virtual environments, just using virtio is better.

------
feelin_googley
I listened to this talk and enjoyed it.

Question for the author: this reminded me of something from a number of years
ago.

Did you ever consider this:

http://wiki.rumpkernel.org/Repo:-drv-netif-dpdk

