From "PyTorch for WebGPU" (2023) https://news.ycombinator.com/item?id=36009478 :

> FWIW, it looks like the llama.cpp tensor type (ggml_tensor) comes from ggml, which has CUDA and OpenCL backend implementations (but not yet ROCm, or a WebGPU shim for use with Emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h
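
For context on that header, here is a minimal sketch (not from the linked thread, and hedged: ggml's API has shifted between versions, so check the ggml.h in your checkout) of allocating tensors and running a small compute graph with ggml's C API:

    // Minimal ggml example: allocate tensors, build a graph, run it on CPU.
    // Illustrative only; function names/signatures vary across ggml versions.
    #include <stdio.h>
    #include "ggml.h"

    int main(void) {
        struct ggml_init_params params = {
            .mem_size   = 16 * 1024 * 1024,  // arena for tensor data + graph metadata
            .mem_buffer = NULL,
            .no_alloc   = false,
        };
        struct ggml_context * ctx = ggml_init(params);

        // 2x2 matrix `a` and length-2 vector `b`, both f32
        struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 2, 2);
        struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 2);
        float * ad = (float *) a->data;
        float * bd = (float *) b->data;
        for (int i = 0; i < 4; i++) ad[i] = 1.0f;
        for (int i = 0; i < 2; i++) bd[i] = 2.0f;

        // y[m] = dot(row m of a, b); evaluated lazily when the graph runs
        struct ggml_tensor * y = ggml_mul_mat(ctx, a, b);

        struct ggml_cgraph * gf = ggml_new_graph(ctx);
        ggml_build_forward_expand(gf, y);
        ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/ 1);

        printf("y = [%f, %f]\n",
               ((float *) y->data)[0], ((float *) y->data)[1]);

        ggml_free(ctx);
        return 0;
    }

The graph-level indirection is what makes the backend story above tractable: the same compute graph can be dispatched to CPU, CUDA, or OpenCL kernels, which is also where a ROCm port or a WebGPU shim would plug in.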



