
Yes. Probably the single most effective use of the GPU for audio is convolutional reverb, for which a number of plugins exist. The problem, however, is that GPUs are optimized for throughput rather than latency, and it gets worse when multiple applications (including the display compositor) contend for the same GPU: it's not uncommon for a dispatch to wait multiple milliseconds just to be scheduled.
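
For anyone unfamiliar, here's a minimal CPU-side sketch of what convolution reverb actually computes (NumPy/SciPy; the names are purely illustrative, and a real-time plugin would use partitioned convolution over fixed-size buffers rather than one pass over the whole signal):

    # Minimal convolution reverb sketch (offline, not real-time).
    # `dry` and `impulse_response` are assumed to be mono float arrays
    # at the same sample rate; all names here are illustrative.
    import numpy as np
    from scipy.signal import fftconvolve

    def convolution_reverb(dry, impulse_response, wet_mix=0.3):
        """Convolve a dry signal with a room impulse response and blend."""
        wet = fftconvolve(dry, impulse_response)    # FFT-based convolution
        wet = wet[:len(dry)]                        # trim the reverb tail
        wet /= np.max(np.abs(wet)) + 1e-12          # normalize to avoid clipping
        return (1.0 - wet_mix) * dry + wet_mix * wet

    # Synthetic example: one second of noise through a decaying "room".
    sr = 48_000
    dry = np.random.randn(sr).astype(np.float32)
    ir = (np.random.randn(sr // 2) * np.exp(-np.linspace(0.0, 8.0, sr // 2))).astype(np.float32)
    out = convolution_reverb(dry, ir)

The big multiply-adds against a long impulse response are exactly the kind of parallel work a GPU is good at; the hard part, as above, is getting the result back within a buffer deadline of a millisecond or two.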

I think there's potential for interesting things in the future. There's nothing inherently preventing a more latency-optimized GPU implementation, and I personally would love to see that for a number of reasons. That would unlock vastly more computational power for audio applications.

There's also machine learning, of course. You'll definitely be seeing (and hearing) more of that in the future.

Already on it! https://github.com/webonnx/wonnx - runs ONNX neural networks on top of WebGPU (in the browser, or using wgpu on top of Vulkan/DX12/..)
