
The mention of DMA-BUF makes me suspect that it involves rendered video being copied from the GPU to main RAM and back to the GPU, which wastes energy. Does anyone have details on how the integration works?


It's the opposite. The point of using DMA-BUFs is that only the handles are passed around; the data never leaves GPU memory. The handles can be imported directly into EGL/Vulkan and used as render sources/targets.
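As a rough sketch of the EGL side (not from the post): with the EGL_EXT_image_dma_buf_import extension, a dma-buf fd can be wrapped into an EGLImage and sampled as a texture without copying the pixels. The fd, width, height, stride and fourcc below are placeholder values, and eglCreateImageKHR would normally be loaded via eglGetProcAddress.

    /* Sketch: import a dma-buf fd as an EGLImage / GL texture.
     * Assumes EGL_EXT_image_dma_buf_import and GL_OES_EGL_image are
     * available; fd, width, height, stride, tex are placeholders. */
    EGLint attribs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_ARGB8888,
        EGL_DMA_BUF_PLANE0_FD_EXT,     fd,      /* the dma-buf handle */
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
        EGL_NONE
    };

    /* Only the fd crosses the process boundary; the buffer stays put. */
    EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                          EGL_LINUX_DMA_BUF_EXT,
                                          NULL, attribs);

    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image); /* sample it like any texture */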


This is quite unclear to me. Is DMA-BUF so generic that the memory it refers to can be in different types of RAM, including GPU RAM and CPU RAM?


That is the whole point; one of the original use cases was passing buffers between different GPUs on dual-graphics laptops (PRIME).


Do you happen to know how the data actually gets transferred when frames rendered by GPU 1 (e.g. the fast GPU) need to be sent to GPU 2 (e.g. the slow GPU doing the compositing and outputting to the monitor)? I can imagine the following possibilities:

1) GPU 1 writes to CPU RAM, GPU 2 reads from CPU RAM

2) GPU 1 writes to GPU 2 RAM via PCI Express (DMA between devices)

3) GPU 2 reads from GPU 1 RAM via PCI Express (DMA between devices)


AFAIK, GPU 1 writes directly to GPU 2 via PCIe.

Since GPU 2 is most often an Intel iGPU, its memory happens to also be CPU RAM.


It's supposed to refer to an object that lives in GPU RAM. You can map it into CPU address space to access it, but that is obviously slow. The userspace function you normally call to do this is gbm_bo_map (see the sketch below).
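A minimal sketch of that mapping path, assuming you already have a struct gbm_bo *bo from gbm_bo_create or gbm_bo_import:

    /* Sketch: map a GBM buffer object into CPU address space for reading.
     * This round-trip through CPU memory is exactly the slow path that the
     * DMA-BUF handoff is meant to avoid. */
    uint32_t stride;
    void *map_data = NULL;
    void *ptr = gbm_bo_map(bo, 0, 0,
                           gbm_bo_get_width(bo), gbm_bo_get_height(bo),
                           GBM_BO_TRANSFER_READ, &stride, &map_data);
    if (ptr) {
        /* read pixels through ptr; each row is `stride` bytes long */
        gbm_bo_unmap(bo, map_data);
    }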


Yes.


The linked blog post makes it reasonably clear that this isn’t happening.

https://mastransky.wordpress.com/2020/03/03/webgl-and-fgx-ac...



