Show HN: Open-source fine-tuning in a Colab notebook
2 by danielhanchen | 0 comments on Hacker News.
Posted this before, but wanted to share it again: if you want an open-source alternative to OpenAI fine-tuning, give Unsloth a try! Phi 3.5 was just released and is distilled from GPT-4. Unsloth makes fine-tuning 2x faster and uses 70% less VRAM, with no accuracy degradation. We rewrote all the backprop steps to reduce FLOPs, implementing everything in Triton (a JIT compiler for low-level CUDA kernels). If you want to own your weights after fine-tuning, give Unsloth a spin! Free Colab and Kaggle notebooks are available at https://ift.tt/UwHn9Mu