Ruby Jha · project-deep-dives
LoRA Hit 96% of Full Fine-Tuning. The Default Learning Rate Almost Killed It.
I fine-tuned all-MiniLM-L6-v2 on dating profiles, flipped the Spearman correlation from -0.22 to +0.85, and found that LoRA reached 96.2% of that score while training just 0.32% of the parameters.
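The 0.32% figure is consistent with a common LoRA setup, though the post's exact config isn't stated here. As a hedged sketch: if rank-8 adapters are attached to the query and value projections of each of MiniLM-L6's 6 layers (an assumption, not confirmed above), the trainable fraction works out to roughly that number:

```python
# Back-of-the-envelope check of the 0.32% trainable-parameter figure.
# Assumptions (not confirmed by the post): LoRA rank r = 8, applied to the
# query and value projections of each transformer layer.
HIDDEN = 384           # all-MiniLM-L6-v2 hidden size
LAYERS = 6             # number of transformer layers
RANK = 8               # assumed LoRA rank
ADAPTED_PER_LAYER = 2  # assumed targets: query and value projections

# Each adapted d x d weight gets two low-rank factors: A (r x d) and B (d x r),
# so it contributes 2 * r * d trainable parameters.
lora_params = LAYERS * ADAPTED_PER_LAYER * 2 * RANK * HIDDEN
total_params = 22_700_000  # ~22.7M parameters in all-MiniLM-L6-v2

print(lora_params)                          # 73728
print(f"{lora_params / total_params:.2%}")  # 0.32%
```

The arithmetic is only meant to show why such a tiny fraction is plausible; the actual rank and target modules could differ.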