NeVerMore: Exploiting RDMA Mistakes in NVMe-oF Storage Applications (2022) (arxiv.org)
20 points by transpute on May 30, 2024 | hide | past | favorite | 3 comments


It looks like RDMA is kind of like IPv4, in the sense that it wasn't originally designed with security in mind. Was this vulnerability a big deal when the paper was submitted in 2022, or more a case of doing cool research on a protocol vulnerability? The attack scenario looks pretty limited:

"We consider an adversary that is on one of the endpoints of the victim connection (i.e., it is co-located with either the NVMe-oF target or client). The attacker is an unprivileged user and is assumed to have obtained access to the machines using legitimate means. We assume that the attacker shares the same physical RNIC as the NVMe-oF entity and both can use it for communication. We assume that the attacker and the NVMe-oF entity are not separated through RNIC virtualization. The TLU model is prevalent in private clusters that use RDMA and NVMe-oF to accelerate their workloads."

An attacker is pretty deep into your infrastructure if they can even get a whiff of your storage fabric like this.
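The access premise in that threat model is easy to picture in code. A minimal sketch, assuming a Linux host with rdma-core/libibverbs installed and at least one RNIC visible (device index 0 is an assumption, as is the printed message): any process that can open the uverbs character device can talk to the same physical RNIC the NVMe-oF stack uses and allocate its own verbs resources, no root required.

```c
/* Sketch only: shows that an unprivileged process can open the shared
 * RNIC through libibverbs. Error handling is trimmed for brevity. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices visible\n");
        return 1;
    }
    /* Open the first device -- the same physical RNIC the
     * co-located NVMe-oF target or client is using. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (ctx) {
        /* Unprivileged resource creation: a protection domain. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        printf("opened %s as an unprivileged user\n",
               ibv_get_device_name(devs[0]));
        if (pd) ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```

From here the paper's attacks come down to what such a co-located process can do with queue pairs on the shared RNIC; the sketch only illustrates the starting position the authors assume.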


Remote, fast, secure. Choose two.


https://docs.nvidia.com/datacenter/cloud-native/gpu-operator...

> GPUDirect RDMA is a technology in NVIDIA GPUs that enables direct data exchange between GPUs and a third-party peer device using PCI Express. The third-party devices could be network interfaces such as NVIDIA ConnectX SmartNICs or BlueField DPUs, or video acquisition adapters.

> GPUDirect Storage (GDS) enables a direct data path between local or remote storage, like NFS server or NVMe/NVMe over Fabric (NVMe-oF), and GPU memory. GDS leverages direct memory access (DMA) transfers between GPU memory and storage, which avoids a bounce buffer through the CPU.
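For concreteness, the "no bounce buffer" path in GDS goes through NVIDIA's cuFile API: you register a file handle and then read straight into a device pointer. A rough sketch, assuming libcufile and a GDS-capable filesystem; the path is made up and error handling is trimmed:

```c
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    const size_t len = 1 << 20;          /* 1 MiB */
    void *dev_buf;
    cudaMalloc(&dev_buf, len);           /* destination lives in GPU memory */

    cuFileDriverOpen();                  /* bring up the GDS driver */

    /* hypothetical NVMe-oF-backed mount */
    int fd = open("/mnt/nvmeof/data.bin", O_RDONLY | O_DIRECT);
    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    /* DMA straight from storage into GPU memory: the CPU-side
     * bounce buffer a normal read()+cudaMemcpy would need is gone. */
    cuFileRead(fh, dev_buf, len, /*file_offset=*/0, /*buf_offset=*/0);

    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    cudaFree(dev_buf);
    return 0;
}
```

Which is also why the paper is relevant to this stack: the storage-to-GPU path rides on the same RDMA plumbing the attack targets.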



