Writing high-performance matrix multiplication kernels for Blackwell (jax.dev)
63 points by lairv 35 days ago | 6 comments


This is essentially the same as https://siboehm.com/articles/22/CUDA-MMM

I wonder if the same person wrote it.


The interesting part is that this is done in Pallas!

Seems like the Pallas of old has been completely upgraded.


Pallas has a couple of backends; this is the new-ish Mosaic GPU one. AIUI it provides a bunch of low-level APIs for interacting directly with NVIDIA-specific and new Blackwell features like SMEM, TMEM, collective MMA, etc.
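
For anyone who hasn't seen Pallas code: here's a minimal tiled-matmul sketch using the generic pallas_call API, not the Mosaic GPU-specific primitives from the article. The tile sizes, keeping the whole K dimension in one block, and the BlockSpec argument order (which has changed across JAX versions) are all assumptions, so treat it as a sketch rather than a Blackwell kernel:

    import jax
    import jax.numpy as jnp
    from jax.experimental import pallas as pl

    def matmul_kernel(a_ref, b_ref, o_ref):
        # Refs point at on-chip tiles; each grid step computes one output tile.
        o_ref[...] = a_ref[...] @ b_ref[...]

    def matmul(a, b, bm=128, bn=128):
        m, k = a.shape
        _, n = b.shape
        return pl.pallas_call(
            matmul_kernel,
            grid=(m // bm, n // bn),  # one program instance per (bm, bn) output tile
            in_specs=[
                pl.BlockSpec((bm, k), lambda i, j: (i, 0)),  # row strip of A
                pl.BlockSpec((k, bn), lambda i, j: (0, j)),  # column strip of B
            ],
            out_specs=pl.BlockSpec((bm, bn), lambda i, j: (i, j)),
            out_shape=jax.ShapeDtypeStruct((m, n), a.dtype),
        )(a, b)

    x = jnp.ones((512, 512), jnp.float32)
    y = jnp.ones((512, 512), jnp.float32)
    assert jnp.allclose(matmul(x, y), x @ y)

The Mosaic GPU backend exposes much more than this (explicit memory spaces, MMA primitives, pipelining), which is what the article is actually about; this just shows the block/grid programming model these DSLs share.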

What's interesting is that the MGPU team has achieved SOTA Blackwell GEMM performance before Triton (which IIUC is trying to bring up Gluon to reach the same level). All the big players are coming up with their own block-based low-level-ish DSLs for CUDA: OpenAI, NVIDIA, and now Google.


So OpenAI has Triton and Google has Pallas. What's the NVIDIA counterpart?


Tilus/CUTLASS, I assume.


Interesting: https://github.com/NVIDIA/tilus. Thanks for the pointer!



