GeoLoRA: Geometric integration for parameter efficient fine-tuning
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: 24.10.2024
Summary: Low-Rank Adaptation (LoRA) has become a widely used method for parameter-efficient fine-tuning of large-scale, pre-trained neural networks. However, LoRA and its extensions face several challenges, including the need for rank adaptivity, robustness, and computational efficiency during the fine-tuning process. We introduce GeoLoRA, a novel approach that addresses these limitations by leveraging dynamical low-rank approximation theory. GeoLoRA requires only a single backpropagation pass over the small-rank adapters, significantly reducing computational cost compared to similar dynamical low-rank training methods and making it faster than popular baselines such as AdaLoRA. This allows GeoLoRA to efficiently adapt the allocated parameter budget across the model, achieving smaller low-rank adapters than heuristic methods such as AdaLoRA and LoRA, while maintaining theoretical guarantees on convergence, descent, and error bounds. The resulting method is not only more efficient but also more robust to varying hyperparameter settings. We demonstrate the effectiveness of GeoLoRA on several state-of-the-art benchmarks, showing that it outperforms existing methods in both accuracy and computational efficiency.
DOI: 10.48550/arxiv.2410.18720
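
For orientation, the sketch below illustrates the kind of object the abstract refers to: a frozen pre-trained layer augmented by a trainable low-rank adapter W = W0 + U S V^T, together with a simple singular-value criterion for shrinking the adapter rank. This is a minimal, hypothetical illustration and not the GeoLoRA algorithm itself; the class and method names (LowRankAdapter, truncate_rank) and the tolerance parameter are assumptions made for the example.

```python
# Illustrative sketch only (not the GeoLoRA algorithm from the paper):
# a LoRA-style adapter W = W0 + U S V^T with a singular-value-based
# criterion for shrinking the adapter rank.
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank correction."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # only the small-rank factors are trained
        out_f, in_f = base.weight.shape
        self.U = nn.Parameter(torch.randn(out_f, rank) / out_f ** 0.5)
        self.S = nn.Parameter(1e-3 * torch.eye(rank))
        self.V = nn.Parameter(torch.randn(in_f, rank) / in_f ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W0^T + x (U S V^T)^T, evaluated without forming the full matrix
        return self.base(x) + ((x @ self.V) @ self.S.T) @ self.U.T

    @torch.no_grad()
    def truncate_rank(self, tol: float = 1e-2) -> None:
        """Drop adapter directions whose singular values fall below a
        relative tolerance, adapting the rank of this adapter."""
        P, sigma, Qt = torch.linalg.svd(self.S)
        r_new = max(int((sigma > tol * sigma.sum()).sum()), 1)
        self.U = nn.Parameter(self.U @ P[:, :r_new])
        self.S = nn.Parameter(torch.diag(sigma[:r_new]))
        self.V = nn.Parameter(self.V @ Qt.T[:, :r_new])


# Usage: wrap a pre-trained layer, fine-tune only U, S, V, then prune the rank.
layer = LowRankAdapter(nn.Linear(128, 64), rank=8)
y = layer(torch.randn(4, 128))
layer.truncate_rank(tol=1e-2)
```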