
Updated to v5.0.0: now outperforming CuPy by 50% on Transformer blocks. Benchmarks show a 1.5x speedup over CuPy on Transformer blocks and up to 1800x on small-batch MatMul. No CUDA, no vendor lock-in, just pure SPIR-V efficiency.

The PyPI package (pip install adamah) is still the legacy version; install the current release from GitHub instead: pip install git+https://github.com/krokodil-byte/ADAMAH.git


