Hacker News
55x Speedup of Andrej Karpathy's Minbpe LLM Tokenizer with PyTorch/CUDA (github.com/kuprel)
19 points by kuprel 3 months ago | 9 comments



This adds PyTorch/CUDA training support to Andrej Karpathy's minbpe. It takes 2min 28sec (148 seconds) on an RTX 4090 to train the BasicTokenizer with a vocab_size of 512 on 307MB of Enron emails. The original code takes about 2hr 15min (8076 seconds) on an M2 Air with Python 3.11 to do the same. That is a 55x speedup.
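For context, the hot loop being accelerated is BPE training: count all adjacent token pairs, merge the most frequent pair into a new token, repeat. Below is a minimal sketch of how that loop can be vectorized with PyTorch so it runs on a GPU. This is not the repo's actual code; the function name, the integer pair-encoding trick, and the tie-breaking behavior are my assumptions.

```python
import torch

def train_bpe(data: bytes, vocab_size: int, device: str = "cpu"):
    """Learn (vocab_size - 256) BPE merges over a byte sequence."""
    ids = torch.tensor(list(data), dtype=torch.long, device=device)
    merges = {}
    for new_id in range(256, vocab_size):
        # Vectorized pair counting: encode each adjacent pair (a, b)
        # as the single integer a * vocab_size + b, then bucket-count.
        codes, counts = torch.unique(ids[:-1] * vocab_size + ids[1:],
                                     return_counts=True)
        top = codes[counts.argmax()].item()
        a, b = divmod(top, vocab_size)
        merges[(a, b)] = new_id
        # Positions where the winning pair occurs.
        pos = ((ids[:-1] == a) & (ids[1:] == b)).nonzero().flatten()
        # Greedy left-to-right: inside a run of consecutive matches
        # (possible only when a == b), keep every other occurrence.
        idx = torch.arange(pos.numel(), device=ids.device)
        is_start = torch.cat([
            torch.ones(1, dtype=torch.bool, device=ids.device),
            pos[1:] - pos[:-1] > 1])
        start = torch.where(is_start, idx,
                            torch.zeros_like(idx)).cummax(0).values
        pos = pos[(idx - start) % 2 == 0]
        # Write the new token and drop the second element of each pair.
        ids[pos] = new_id
        drop = torch.zeros_like(ids, dtype=torch.bool)
        drop[pos + 1] = True
        ids = ids[~drop]
    return merges, ids
```

With `device="cuda"` every step in the loop body is a single batched kernel over the whole corpus, which is where the speedup over a per-pair Python loop comes from.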


Am I reading this right? A 55x improvement while also going from an M2 Air to an RTX 4090?

If so, this doesn't seem like a fair comparison, and the 55x claim likely wouldn't hold on the same hardware.


Why is it surprising? A CPU-only M2 probably has under 1 teraop while an RTX 4090 has 77. The M2's GPU was not used, but even it provides only around 4 teraops, so it would still have been ~20x slower than the 4090.


The M2 Air was actually much faster than whatever CPU was on the cloud RTX 4090 machine I rented. I chose the stronger baseline to compare against.


Using int16 and an H100, the speedup is actually 108x over the M2 Air.


> 307MB of Enron emails

Wait what?

Is that some sort of inside joke?


Nope!

See for example: https://www.cs.cmu.edu/~./enron/


> This data is valuable; to my knowledge it is the only substantial collection of "real" email that is public.

Interesting. Something good came out of Enron after all.


Now someone needs to do a Mojo version, and write up the blog post.



