Give it a test and let me know if you encounter any issues. The exception is Chrome/Chromium static binaries, which ship with BoringSSL inside. The entire SSL/TLS code flow there is spaghetti, built to provide acceleration and fast page loads. They even offload some TLS parts to the system OpenSSL lib, and even with debug symbols (no, Google doesn't include them in the repo) it is a headache to trace.
I think (not 100% sure) Cilium [0][1] kind of already does this. This loophole is good for packet processing/routing, and even for introducing XDP-based ACLs that bypass iptables/nftables entirely and get that almost-wire-speed benefit. I use Cilium with these features for custom-built k8s clusters on Talos OS, without any kube-proxy.
Cilium is definitely the gold standard if you’re working with Kubernetes clusters and need a full CNI, but if you want to extend CNI functionality without replacing it, then this approach is the only option.
It works quite well because Cilium (and every CNI I'm aware of) doesn't use XDP the way the blog post mentions; it uses netkit instead, an alternative to veth designed for netfilter-like use cases.
This means XDP can work alongside Cilium (with enough tweaking), which is what we wanted to be able to do.
If you're using plain containers and no CNI, then of course this provides a significant speed-up, even beyond netkit devices.
It is nice to see people thinking about and working on low-level networking stuff that everyone will benefit from. I think even single-node clusters/container hosts will benefit a lot from the XDP loophole. I'll keep an eye on it.
The code has the infrastructure for XDP hardware offload:
- XDP_MODE_OFFLOAD enum exists in bpf_loader.h:61
- XDP_FLAGS_HW_MODE flag mapping in bpf_loader.c:789
But it's not usable in practice because:
1. No CLI option – There's no way to enable offload mode; it defaults to native with SKB fallback
2. BPF program isn't offload-compatible – The XDP program uses:
- Complex BPF maps (LRU hash, ring buffers)
- Helper functions not supported by most SmartNIC JITs
- The flow_cookie_map shared with sock_ops (can't be offloaded)
3. SmartNIC limitations – Hardware offload typically only supports simple packet filtering/forwarding, not the stateful flow tracking spliff does
What would be needed for SmartNIC support:
- Split XDP program into offloadable (simple classification) and non-offloadable (stateful) parts
- Use SmartNIC-specific toolchains (Memory-1, Netronome SDK, etc.)
- Me having a device with a SmartNIC and full driver support to play with. I've done all my testing on Fedora 43 on my own device
For now this could be a future roadmap item, but the current "Golden Thread" correlation architecture fundamentally requires userspace + kernel cooperation that can't be fully offloaded.
Here is a sample debug output when you run spliff -d and it tries to detect all your NICs:
---
[DEBUG] Loaded BPF program from build-release/spliff.bpf.o
[XDP] Found program: xdp_flow_tracker
[XDP] Found required maps: flow_states, session_registry, xdp_events
[XDP] Found optional map: cookie_to_ssl
[XDP] Found map: flow_cookie_map (for cookie caching)
[XDP] Found optional map: xdp_stats_map
[XDP] Initialization complete
[XDP] Discovered interface: enp0s20f0u2u4u2 (idx=2, mtu=1500, UP, physical)
[XDP] Discovered interface: wlp0s20f3 (idx=4, mtu=1500, UP, physical)
[XDP] Discovered interface: enp0s31f6 (idx=3, mtu=1500, UP, physical)
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on enp0s20f0u2u4u2, falling back to SKB mode
[XDP] Attached to enp0s20f0u2u4u2 (idx=2) in skb mode
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on wlp0s20f3, falling back to SKB mode
[XDP] Attached to wlp0s20f3 (idx=4) in skb mode
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on enp0s31f6, falling back to SKB mode
[XDP] Attached to enp0s31f6 (idx=3) in skb mode
[XDP] Attached to 3 of 3 discovered interfaces
XDP attached to 3 interfaces
[SOCKOPS] Using cgroup: /sys/fs/cgroup
[SOCKOPS] Attached socket cookie caching program
sock_ops attached for cookie caching
[XDP] Warm-up: Seeded 5 existing TCP connections
[DEBUG] Warmed up 5 existing connections
---
I built Spliff, a high-performance L7 sniffing and correlation engine in pure C23. The goal is a fully working, Linux-native EDR that isn't a resource-hogging black box.
The core innovation – "Golden Thread" correlation:
Most eBPF sniffers capture SSL data OR packets. Spliff correlates both.
Linux-only – Requires kernel 5.x+ with BTF, XDP, libbpf.
---
The project is GPL-3.0 and we're inviting anyone interested to contribute—whether it's code, architecture feedback, security research, or ideas for EDR features that actually matter (not compliance theater).