ThunderKittens: Simple, fast, and adorable AI kernels (stanford.edu)





This is super cool! Especially matrix multiplication getting similar or better perf than cuBLAS! If anyone is interested in other kernels like SwiGLU, GeGLU, and RMS LayerNorm, I coded some at https://github.com/unslothai/unsloth/tree/main/unsloth/kerne...
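For anyone curious what these kernels look like: SwiGLU is just an elementwise op, silu(gate) * up, so the forward pass is tiny. A minimal CUDA sketch for illustration (the names and launch configuration are mine, not taken from the Unsloth repo):

    #include <cuda_runtime.h>

    // Elementwise SwiGLU forward: out[i] = silu(gate[i]) * up[i],
    // where silu(x) = x * sigmoid(x). One thread per element.
    __global__ void swiglu_forward(const float* __restrict__ gate,
                                   const float* __restrict__ up,
                                   float* __restrict__ out,
                                   int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float g = gate[i];
            float silu = g / (1.0f + expf(-g));   // x * sigmoid(x)
            out[i] = silu * up[i];
        }
    }

    // Launch helper: 256 threads per block, enough blocks to cover n elements.
    void launch_swiglu(const float* gate, const float* up, float* out, int n) {
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        swiglu_forward<<<blocks, threads>>>(gate, up, out, n);
    }

Most of the practical win comes from fusing ops like this with their neighbours so the intermediates don't round-trip through global memory.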

Neat! Are these compatible with torch.compile?

CUDA + ThunderKittens 4.5 hour tutorial

https://www.youtube.com/watch?v=xcpEl0cGCC4


How easy is it to run on older GPUs (think 1080 Tis)? I ask because torch.compile refuses to support them, and that alone makes things much slower.

The other issue is that Pascal cards don't have tensor cores, so they're much slower than cards that do. You could try Unsloth for 2x faster Llama fine-tuning - someone made P40s and P100s work. Although I would suggest upgrading to at least an RTX 20-series card.

The project is very much focused on maxing out tensor cores, and since older GPUs don't have them, it's not where the project shines.

> torch.compile

torch.compile is a PyTorch 2.0 feature and has nothing to do with handwritten CUDA kernels.

> How easy is it to run on older GPUs

This is a torch C++ extension:

https://github.com/HazyResearch/ThunderKittens/blob/8daffc9c...

so you're going to have the same exact issue (whatever issue you're having)
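To make that concrete: a torch C++ extension is just a compiled module that wraps handwritten kernels behind Python-callable functions, so whether it runs on a 1080 Ti depends on what the kernel code itself requires (compute capability, tensor-core instructions), not on torch.compile. A stripped-down sketch of what such a binding looks like, using a toy kernel of my own rather than the actual ThunderKittens binding:

    #include <torch/extension.h>

    // Handwritten CUDA kernel: elementwise scale, one thread per element.
    __global__ void scale_kernel(const float* __restrict__ in,
                                 float* __restrict__ out,
                                 float alpha, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = alpha * in[i];
    }

    // C++ wrapper that PyTorch calls; checks inputs and launches the kernel.
    torch::Tensor scale(torch::Tensor x, double alpha) {
        TORCH_CHECK(x.is_cuda(), "x must be a CUDA tensor");
        TORCH_CHECK(x.scalar_type() == torch::kFloat32, "x must be float32");
        auto x_contig = x.contiguous();
        auto out = torch::empty_like(x_contig);
        int n = static_cast<int>(x_contig.numel());
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale_kernel<<<blocks, threads>>>(x_contig.data_ptr<float>(),
                                          out.data_ptr<float>(),
                                          static_cast<float>(alpha), n);
        return out;
    }

    // Expose the wrapper as a Python-callable module entry point.
    PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
        m.def("scale", &scale, "Elementwise scale (CUDA)");
    }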


I’m working on an inference platform that allows tokens to be appended to the context after some tokens have already been generated. If there are other sequences in the batch, it means they’ll have to be padded. Currently this means I can’t use FlashAttention, because it doesn’t support arbitrary masks/padding masks… can ThunderKittens help me?
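For context on what "arbitrary masks" means here: with a plain (non-fused) attention implementation you handle padded batches by setting the score of every padded key position to -inf before the softmax. A rough CUDA sketch of that masking step, purely illustrative and not ThunderKittens or FlashAttention code:

    #include <cuda_runtime.h>
    #include <math_constants.h>

    // scores: [batch, q_len, k_len] attention logits, contiguous.
    // valid_len[b]: number of real (non-padding) keys in sequence b.
    // Padded key positions get -inf so they vanish after softmax.
    __global__ void apply_padding_mask(float* scores, const int* valid_len,
                                       int q_len, int k_len) {
        int b = blockIdx.z;                               // one z-block per batch item
        int q = blockIdx.y * blockDim.y + threadIdx.y;    // query position
        int k = blockIdx.x * blockDim.x + threadIdx.x;    // key position
        if (q < q_len && k < k_len && k >= valid_len[b]) {
            scores[((long)b * q_len + q) * k_len + k] = -CUDART_INF_F;
        }
    }

The hard part, as you say, is getting a fused kernel to accept a per-sequence mask like this without giving up its memory savings.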

So, these are hand-optimized primitives for specific models of Nvidia GPUs? Do you still have to make launch/scheduling decisions to maximize occupancy? How does this approach scale to other target devices with specialized instruction sets and different architectures?

"Coming soon -- ThunderKittens on AMD hardware!"

Any update on this?


Hi! We're the devs - we're planning the livestream for 1pm, and we'll post the link here, on Twitter, and in the Discord tonight.

I hate to be that guy, but Metal support?

coming!

I'm late to the party, but also wondering about Metal, and I have a question for you all. Do you happen to know how energy use relates to utilization? If I run a super fast ThunderKittens kernel on an iPhone (pending Metal support), would I expect it to also cost less battery? My naive guess is yes, but I don't have the logic to back it up.

It would be wild if some of these time-efficiency boosts you're getting with TK turned out to be energy-efficiency boosts too!


I don't want to use the Platform Formerly Known as Twitter, but does anyone have a way to get the link to their livestream tomorrow?

Simran Arora: "Join us for a livestream this Thursday, Halloween/Diwali, and join our channel on the GPU Mode Discord server to hang out with us/get involved:"

https://discord.com/login?redirect_to=%2Fchannels%2F11894982...


Livestream link: https://youtube.com/live/IAwLzkldxUk?feature=share. Come ask questions!

Thanks!


