
DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding the data "lukewarm" in DRAM rather than on NVMe storage is obviously faster.


Yes.

In general, systems usually ship with a PCIe version whose bandwidth exceeds that system's RAM bandwidth.

For example, a system with DDR4 (~27 GB/s) usually has at least PCIe 4.0 (~32 GB/s at x16).

But you can flip the bottleneck by pairing a DDR5 system (~40 GB/s) with a PCIe 4.0 card.
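A quick back-of-envelope sketch of the point above: the slowest link on the data path limits the whole transfer. The bandwidth figures are the approximate peak numbers from this thread (single-channel RAM, in GB/s), not measurements.

```python
# Rough peak bandwidths in GB/s; illustrative assumptions, not benchmarks.
links = {
    "DDR4": 25.6,   # ~DDR4-3200, single channel
    "DDR5": 38.4,   # ~DDR5-4800, single channel
    "PCIe 4.0 x16": 32.0,
}

def bottleneck(path):
    """The slowest link on a data path caps the whole transfer."""
    return min(path, key=lambda name: links[name])

# DDR4 system with a PCIe 4.0 GPU: RAM is the limit.
print(bottleneck(["DDR4", "PCIe 4.0 x16"]))   # -> DDR4
# DDR5 system with the same PCIe 4.0 card: the bus becomes the limit.
print(bottleneck(["DDR5", "PCIe 4.0 x16"]))   # -> PCIe 4.0 x16
```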


My impression is that it's limited to assets and really needs to fit into the DirectX framework. From what I can tell, gpu-nvme-direct is mostly similar to https://github.com/enfiskutensykkel/ssd-gpu-dma and https://github.com/ZaidQureshi/bam


Actually, this idea was fueled by those projects: I went to check whether anything close to what I wanted to achieve already existed, and they turned out to be pretty useful.


nvmlib/ssd-gpu-dma and BaM (built on the same code base) are pretty cool: they let you initiate disk reads/writes directly from a CUDA kernel, so you're not only reading/writing directly to GPU memory but also letting the GPU initiate I/O on its own. This is sometimes called GPU-initiated I/O or accelerator-initiated I/O.
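A toy model of why the direct path matters: a staged copy (NVMe → host DRAM → GPU over PCIe) pays for two transfers, while a direct DMA (NVMe → GPU memory) pays for one, with the SSD as the only limit. The bandwidth figures below are rough illustrative assumptions, not measured numbers from these projects.

```python
GB = 1.0          # transfer size in gigabytes
nvme_gbs = 7.0    # assumed PCIe 4.0 x4 NVMe sequential read, GB/s
pcie_gbs = 32.0   # assumed PCIe 4.0 x16 host-to-GPU bandwidth, GB/s

# Staged path, worst case: read into DRAM, then copy to the GPU serially.
staged_s = GB / nvme_gbs + GB / pcie_gbs

# Direct path: the NVMe drive DMAs straight into GPU memory.
direct_s = GB / nvme_gbs

print(f"staged: {staged_s * 1000:.1f} ms, direct: {direct_s * 1000:.1f} ms")
```

In practice the staged path can pipeline the two hops, so the real gap is smaller than this serialized worst case; the bigger win of GPU-initiated I/O is usually avoiding the CPU round-trip per request.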

