Show HN: Finetune Llama-3.1 2x faster in a Colab
by danielhanchen on Hacker News.
Just added Llama-3.1 support! Unsloth (https://ift.tt/X62eL5V) makes finetuning Llama, Mistral, Gemma & Phi 2x faster and uses 50 to 70% less VRAM with no accuracy degradation. There's a custom backprop engine that reduces actual FLOPs, and all kernels are written in OpenAI's Triton language to reduce data movement. There's also a 2x faster inference-only notebook in a free Colab: https://ift.tt/zHyn5Cq...
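
For anyone who wants a feel for the workflow before opening the Colab, here's a minimal sketch of what a LoRA finetune with Unsloth roughly looks like. The model id, hyperparameters, and the toy dataset are assumptions for illustration (and trl/transformers signatures vary a bit across versions); the notebook has the exact, tested values.

    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import Dataset

    # Load a 4-bit quantised Llama-3.1 base model. The model id and
    # max_seq_length here are assumptions, not the notebook's exact values.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Meta-Llama-3.1-8B",
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Attach LoRA adapters; only these small matrices get gradients,
    # which is where much of the VRAM saving comes from.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    # Toy placeholder data; the real notebook uses a prompt-templated dataset.
    dataset = Dataset.from_dict(
        {"text": ["### Question: 2+2?\n### Answer: 4"] * 64}
    )

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            max_steps=60,
            learning_rate=2e-4,
            output_dir="outputs",
        ),
    )
    trainer.train()

For the inference-only notebook, the same loaded model is switched into fast generation mode (via FastLanguageModel.for_inference(model)) before calling model.generate.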
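
To make the "reduce data movement" point concrete: fusing ops into one Triton kernel keeps intermediates in registers instead of round-tripping them through GPU memory. This toy kernel (not one of Unsloth's actual kernels, just an illustration of the idea) fuses a multiply and an add so the intermediate x * y is never written out:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def fused_mul_add_kernel(x_ptr, y_ptr, z_ptr, out_ptr, n_elements,
                             BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        z = tl.load(z_ptr + offsets, mask=mask)
        # Both ops happen in registers: one read per input, one write of the
        # result, and no intermediate x * y materialised in GPU memory.
        tl.store(out_ptr + offsets, x * y + z, mask=mask)

    def fused_mul_add(x, y, z):
        out = torch.empty_like(x)
        n = out.numel()
        grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
        fused_mul_add_kernel[grid](x, y, z, out, n, BLOCK_SIZE=1024)
        return out

Unsloth applies the same idea to the transformer's heavier blocks, just at a much larger scale.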