Just added Llama-3.1 support! Unsloth
https://github.com/unslothai/unsloth makes finetuning Llama, Mistral, Gemma and Phi 2x faster, using 50 to 70% less VRAM with no accuracy degradation.
There's a custom backprop engine which reduces actual FLOPs, and all kernels are written in OpenAI's Triton language to reduce data movement.
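The "reduce data movement" part comes largely from kernel fusion: instead of materializing each intermediate tensor in GPU memory, a fused Triton kernel does several steps in one pass. A rough NumPy illustration of the idea (not Unsloth's actual code; the functions and names here are hypothetical) using RMSNorm, where the fused form computes the same result without the explicit intermediate buffer:

```python
import numpy as np

def rmsnorm_unfused(x, w, eps=1e-6):
    # Step-by-step version: each line reads/writes a full tensor in memory.
    sq = x * x                              # intermediate buffer written out
    ms = sq.mean(axis=-1, keepdims=True)    # second pass over the data
    return x / np.sqrt(ms + eps) * w        # third pass: normalize and scale

def rmsnorm_fused(x, w, eps=1e-6):
    # "Fused" version: a real Triton kernel would keep x in registers/SRAM
    # and emit the output directly, skipping the intermediate writes.
    ms = np.mean(x * x, axis=-1, keepdims=True)
    return x * w / np.sqrt(ms + eps)

# Both produce identical results; fusion changes memory traffic, not math.
x = np.random.randn(4, 8).astype(np.float32)
w = np.ones(8, dtype=np.float32)
assert np.allclose(rmsnorm_unfused(x, w), rmsnorm_fused(x, w), atol=1e-5)
```

NumPy can't actually fuse these passes, so this only sketches the semantics; on a GPU, cutting the extra reads/writes is where the speed and VRAM savings come from.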
There's also a 2x faster inference-only notebook in a free Colab: https://colab.research.google.com/drive/1T-YBVfnphoVc8E2E854...