How easy is it to run on older GPUs (think 1080 Tis)? The reason I ask is that torch.compile refuses to support them, and that alone makes things much slower.
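For anyone hitting the same wall: a common workaround (a minimal sketch, not something from this thread) is to gate torch.compile on compute capability and fall back to eager mode, since its Triton backend needs >= 7.0 and Pascal cards like the 1080 Ti are 6.1:

```python
import torch

def maybe_compile(model):
    # torch.compile's default Triton backend requires compute
    # capability >= 7.0 (Volta+); Pascal (e.g. 1080 Ti) is 6.1.
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability()
        if (major, minor) >= (7, 0):
            return torch.compile(model)
    return model  # fall back to eager mode on older GPUs
```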
The other issue is that Pascal cards don't have tensor cores, so they're much slower than cards that do. You could try Unsloth for 2x faster Llama fine-tuning - someone got P40s and P100s working. That said, I'd suggest upgrading to at least the RTX 20-series.
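For reference, an Unsloth fine-tuning setup looks roughly like this (a sketch based on Unsloth's README; the model name and LoRA hyperparameters are placeholders, not whatever the P40/P100 folks actually ran):

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized Llama checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
# From here, train with a standard trainer (e.g. trl's SFTTrainer).
```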
I’m working on an inference platform that allows tokens to be appended to the context after some tokens have already been generated. If there are other sequences in the batch, that means they’ll have to be padded. Currently this means I can’t use FlashAttention, because it doesn’t support arbitrary masks/padding masks… can ThunderKittens help me?
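Not an answer about ThunderKittens itself, but for context: the standard fallback when FlashAttention rejects your mask is PyTorch's scaled_dot_product_attention with an explicit attn_mask, which routes to a backend that accepts arbitrary masks. A minimal sketch with made-up shapes:

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim) -- illustrative sizes
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

# Boolean mask: True = attend, False = masked. Pretend sequence 1
# only has 100 real tokens, so its padded tail is masked out.
mask = torch.ones(2, 1, 128, 128, dtype=torch.bool, device="cuda")
mask[1, :, :, 100:] = False

# An arbitrary attn_mask disables the FlashAttention backend and
# falls back to one that supports it (memory-efficient or math).
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```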
so, these are hand-optimized primitives for specific models of nvidia gpus? do you still have to make launch/scheduling decisions to maximize occupancy? how does this approach scale to other target devices with specialized instruction sets and different architectures?
I'm late to the party, but also wondering about Metal, and I have a question for you all. Do you happen to know how energy use relates to utilization? If I run a super duper duper fast ThunderKittens kernel on an iPhone (pending Metal, blah blah), would I expect it to also use less battery? My naive guess is yes, but I have no logic to back it up.
It would be wild if some of these time-efficiency boosts you're getting with TK turned out to be energy-efficiency boosts too!
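On NVIDIA hardware you can sanity-check the time-vs-energy question by integrating sampled board power over a kernel's runtime (on an iPhone you'd reach for powermetrics/Instruments instead). A rough sketch using pynvml; the sampling is crude but fine for relative comparisons:

```python
import threading
import time

import pynvml
import torch

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def energy_joules(fn, interval=0.01):
    """Estimate energy for fn() by sampling board power in a
    background thread and integrating over wall-clock time."""
    samples, done = [], threading.Event()

    def sampler():
        while not done.is_set():
            samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)  # W
            time.sleep(interval)

    t = threading.Thread(target=sampler)
    t.start()
    start = time.time()
    fn()
    torch.cuda.synchronize()  # make sure the GPU work actually finished
    elapsed = time.time() - start
    done.set()
    t.join()
    avg_watts = sum(samples) / max(len(samples), 1)
    return avg_watts * elapsed  # joules = average watts x seconds
```

Intuition says a faster kernel usually does win on energy, since it holds the chip at high power for less time; but higher utilization also raises instantaneous power, so it's worth actually measuring.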
Simran Arora: "Join us for a livestream this Thursday, Halloween/Diwali, and join our channel on the GPU Mode Discord server to hang out with us/get involved:"