Pallas has a couple of backends; this is the new-ish Mosaic GPU one. AIUI it provides a bunch of low-level APIs for interacting directly with NVIDIA-specific features (SMEM) and new Blackwell ones (TMEM, collective MMA, etc.).
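For context, the high-level entry point is still the generic Pallas API; below is a minimal sketch of pl.pallas_call (the Blackwell-specific primitives like TMEM and collective MMA live in jax.experimental.pallas.mosaic_gpu and aren't shown here, and which backend it lowers through can depend on your JAX version):

    import jax
    import jax.numpy as jnp
    from jax.experimental import pallas as pl

    # Each program instance reads its block of x and y and writes the sum.
    def add_kernel(x_ref, y_ref, o_ref):
        o_ref[...] = x_ref[...] + y_ref[...]

    @jax.jit
    def add(x, y):
        # pallas_call stages the kernel; on NVIDIA GPUs this can lower
        # through the Mosaic GPU backend.
        return pl.pallas_call(
            add_kernel,
            out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        )(x, y)

    print(add(jnp.arange(8, dtype=jnp.float32), jnp.ones(8, dtype=jnp.float32)))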
What's interesting is that the MGPU team reached SOTA Blackwell GEMM performance before Triton (which, IIUC, is trying to bring Gluon up to the same level). All the big players are coming up with their own block-based, low-level-ish DSLs for CUDA: OpenAI, NVIDIA, and now Google.
I wonder if the same person wrote it.