XDP: 1.5 years in production [pdf] (kernel.org)
58 points by eloycoto 60 days ago | 14 comments



Here's a slide deck that explains what XDP is: https://people.netfilter.org/hawk/presentations/driving-IT20...


If anyone has any questions about XDP or eBPF in production, I'd be happy to answer. I've been working on an XDP deployment with Rust as the control plane language, and have found it to be an incredible technology that makes high-performance networking extremely accessible.

Also, Katran is very cool. Facebook is doing really cutting edge work in the networking space.


Have you thought of adding an eBPF backend for Rust?

I'm curious where you work. I've been doing eBPF in production for a while as well, but our control plane is Go / C.



It's already done. LLVM IR can be compiled to eBPF, and Rust compiles to LLVM IR. It just took a little bit of effort to make it not totally hacky, but that code isn't open source.


Can you give an ELI5 of the benefits of XDP over current technologies?


eBPF programs are bytecode that runs on a virtual machine in the kernel. The programs are limited in the number of instructions and in computational complexity (they must halt). These programs can transfer data back to user space using maps, which are simple data structures for passing things like lists of IPs, for example. eBPF programs are attached to probes in the kernel, where they can gather information at the probe point. XDP is essentially a probe that fires before any of the networking stack has run. An XDP eBPF program can read and manipulate raw packet data, allowing you to process the packet as fast as your network card will allow, before that data is ever seen by the slow kernel networking code.
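
For a concrete feel, here's a minimal sketch of an XDP program (restricted C, built with clang's bpf target; the file, function, and interface names are just illustrative, not from the talk) that drops UDP packets and passes everything else before the normal stack ever runs:

    /* xdp_drop_udp.c -- sketch; build with: clang -O2 -target bpf -c xdp_drop_udp.c -o xdp_drop_udp.o */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <bpf/bpf_helpers.h>   /* SEC() macro, from libbpf */
    #include <bpf/bpf_endian.h>    /* bpf_htons() */

    SEC("xdp")
    int xdp_drop_udp(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* The in-kernel verifier rejects the program unless every access is bounds-checked. */
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;

        /* Drop UDP right at the driver, before the kernel networking stack sees it. */
        return ip->protocol == IPPROTO_UDP ? XDP_DROP : XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

Attaching it to an interface is then a one-liner with iproute2, e.g. ip link set dev eth0 xdp obj xdp_drop_udp.o sec xdp (eth0 being whatever NIC you care about).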

Unlike other options like DPDK, eBPF programs are tightly integrated into the kernel. They allow you to write kernel code that the kernel can safely run. They are simple to write, easy to install (it's just a syscall), and there are tools in Rust and other languages (check out BCC) that let you compile to eBPF bytecode from various languages. You're also not limited to networking: eBPF programs can hook into many places in the kernel, and there are already efforts to build new system performance tools using eBPF.
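
To make the "it's just a syscall" point concrete, here's a rough user-space sketch (assuming a recent libbpf, 1.0+; the object file, program, and interface names are the illustrative ones from above, and error handling is trimmed) that loads and attaches the XDP program; a control plane would then read maps through the same library:

    /* loader.c -- sketch; link with -lbpf. Under the hood this is all bpf() and netlink calls. */
    #include <bpf/libbpf.h>
    #include <net/if.h>
    #include <stdio.h>

    int main(void)
    {
        struct bpf_object *obj = bpf_object__open_file("xdp_drop_udp.o", NULL);
        if (!obj || bpf_object__load(obj)) {
            fprintf(stderr, "failed to open/load BPF object\n");
            return 1;
        }

        struct bpf_program *prog = bpf_object__find_program_by_name(obj, "xdp_drop_udp");
        unsigned int ifindex = if_nametoindex("eth0");   /* interface name is illustrative */
        if (!prog || !ifindex ||
            bpf_xdp_attach(ifindex, bpf_program__fd(prog), 0, NULL) < 0) {
            fprintf(stderr, "failed to attach XDP program\n");
            return 1;
        }

        /* A control plane would typically poll policy/counter maps here, e.g. via
           bpf_object__find_map_fd_by_name() and bpf_map_lookup_elem(). */
        return 0;
    }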


What are the downsides of eBPF (XDP) programs compared to the previous options (DPDK etc.)?

Do you consider eBPF (XDP) to be the clear next step in this space?


https://t.co/UjQE8Z6HlD

Check this out, it spells out why DPDK isn't a great solution.

The downsides to XDP are that it requires newer kernel releases, and that XDP eBPF programs are limited in size and don't provide chaining out of the box (they can be chained via tail calls, you just have to do it yourself).
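
To illustrate the "do it yourself" chaining, the usual trick is a program-array map plus tail calls. A hedged sketch (map and program names are made up; BTF-style map definitions from libbpf's bpf_helpers.h assumed):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Slots are filled (by user space, or by the program itself) with the fds of
       the follow-on XDP programs to jump to. */
    struct {
        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
        __uint(max_entries, 8);
        __type(key, __u32);
        __type(value, __u32);
    } jump_table SEC(".maps");

    SEC("xdp")
    int xdp_stage0(struct xdp_md *ctx)
    {
        /* ...parse headers, decide which stage should run next... */
        bpf_tail_call(ctx, &jump_table, 1);
        /* Only reached if the tail call fails (e.g. the slot is empty). */
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

Each tail-called program gets its own instruction budget, which is the usual way around the size limit.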


Maybe I missed something, but the argument stated in that talk, "DPDK is not Linux," is not very clear to me. How is "not Linux" a bad thing?

I'm not saying DPDK is perfect. As a user, here are some of the most annoying things about DPDK:

- 100% CPU usage, even when idle. There has been work done recently on power management for DPDK, but it is still quite limited.

- Debugging is difficult. Valgrind doesn't even work out of the box.

- Very limited tool set compared to Linux. For example, no tcpdump.

- Setup is cumbersome. Hugepages must be allocated soon after reboot, NICs must be bound to uio, ...

- Ad-hoc Layer 4-7.

That said, some of the packet processing libraries that come with DPDK are awesome. Once you get through the first few hurdles, the dev experience is actually quite nice. I think combining DPDK with XDP is very promising.


There is actually a patch to enable AF_XDP for DPDK: http://mails.dpdk.org/archives/dev/2018-August/109792.html

There's also a recent talk on how AF_XDP can be optimised further to get closer to DPDK speed (http://vger.kernel.org/lpc-networking2018.html#session-11), which concludes "DPDK still faster for 'notouch', but AF XDP on par when data is touched". And the latter is what matters.

I think combining the two definitely has huge potential:

- All the Linux tooling can be used again, since the in-kernel drivers are used.

- No 100% busy polling needed; it's definitely not a must for every workload.

- Easy setup, since the kernel drivers are simply used as-is.

- Vendors only have to maintain their kernel drivers, not separate DPDK ones, so less cost.

- Users can simply switch from one NIC to another without any hassle.

- DPDK library for application development can be fully reused.

Sounds like a big win to me for both worlds.


If you like XDP, you may want to check out AF_XDP! It's like PCAP but extremely fast!

It requires kernel 4.18+, and you can use my guide to build a test application very quickly: https://github.com/pavel-odintsov/fastnetmon/wiki/af_xdp-tes...
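
For the curious, the core of AF_XDP is literally a new socket family. A heavily trimmed sketch of opening and binding one (not functional on its own: the real setup in the guide also registers a UMEM buffer area and fill/RX rings via setsockopt() and mmap(), and the interface name here is just an example):

    #include <linux/if_xdp.h>    /* struct sockaddr_xdp, XDP_UMEM_REG, ... */
    #include <net/if.h>
    #include <sys/socket.h>
    #include <stdio.h>

    #ifndef AF_XDP
    #define AF_XDP 44            /* older libc headers may not define it yet */
    #endif

    int main(void)
    {
        int fd = socket(AF_XDP, SOCK_RAW, 0);    /* needs kernel 4.18+ */
        if (fd < 0) { perror("socket(AF_XDP)"); return 1; }

        struct sockaddr_xdp addr = {
            .sxdp_family   = AF_XDP,
            .sxdp_ifindex  = if_nametoindex("eth0"),
            .sxdp_queue_id = 0,
        };

        /* In a real program the UMEM and at least the fill + RX rings must be
           registered (setsockopt(fd, SOL_XDP, XDP_UMEM_REG, ...) etc.) and
           mmap()ed before this bind, otherwise the kernel rejects it. */
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("bind(AF_XDP)");

        return 0;
    }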


These slides seem to be rather incomprehensible without the accompanying talk.

Edit: Oh boy. That's so obviously not a criticism of the talk.


No video yet, but here's the paper:

http://vger.kernel.org/lpc-networking2018.html#session-10

Cilium.io also wrote a good summary of it: https://cilium.io/blog/2018/11/20/fb-bpf-firewall



