Hydra: Multi-head low-rank adaptation for parameter efficient fine-tuning

The recent surge in large-scale foundation models has spurred the development of efficient methods for adapting these models to various downstream tasks. Low-rank adaptation methods, such as LoRA, have gained significant attention because of their outstanding parameter efficiency and the fact that they introduce no additional inference latency. This paper investigates a more general form of adapter module, motivated by the analysis that parallel and sequential adaptation branches learn novel and general features, respectively, during fine-tuning. The proposed method, named Hydra, combines the parallel and sequential branches to integrate both capabilities; it is more expressive than existing single-branch methods and enables the exploration of a broader range of optimal points during fine-tuning. In addition, the proposed method explicitly leverages the pre-trained weights by performing a linear combination of the pre-trained features, which allows the learned features to generalize better across diverse downstream tasks. Furthermore, we provide a comprehensive, empirically supported analysis of the characteristics of each adaptation branch. Through an extensive range of experiments, we substantiate the efficiency of Hydra and demonstrate its superior performance, underscoring its potential impact and effectiveness in a variety of applications. The source code of this work is publicly available at https://github.com/extremebird/Hydra.
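
The abstract describes Hydra as the combination of a parallel low-rank branch applied to the layer input (as in LoRA) and a sequential low-rank branch that forms a linear combination of the frozen layer's pre-trained output features. The following is a minimal PyTorch sketch of that structure, assuming adaptation of a frozen linear layer; the class name HydraLinear, the rank, and the initialization are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

import torch
import torch.nn as nn

class HydraLinear(nn.Module):
    """Sketch of a combined parallel + sequential low-rank adapter
    around a frozen pre-trained linear layer (illustrative only)."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen

        d_in, d_out = base.in_features, base.out_features
        # Parallel branch: low-rank update applied to the input (LoRA-style).
        self.A_par = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B_par = nn.Parameter(torch.zeros(d_out, rank))
        # Sequential branch: low-rank linear combination of the
        # pre-trained features, i.e. it acts on the frozen layer's output.
        self.A_seq = nn.Parameter(torch.randn(rank, d_out) * 0.01)
        self.B_seq = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        h = self.base(x)                               # pre-trained features W0 x
        parallel = x @ self.A_par.T @ self.B_par.T     # B_par A_par x
        sequential = h @ self.A_seq.T @ self.B_seq.T   # B_seq A_seq (W0 x)
        return h + parallel + sequential

# Usage: wrap a frozen projection and train only the adapter parameters.
layer = HydraLinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(4, 768))

Initializing B_par and B_seq to zero keeps the adapted layer identical to the pre-trained one at the start of fine-tuning, so only the low-rank branches deviate from it as training proceeds.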
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
Issue Date
2024-10
Language
English
Article Type
Article
Citation

NEURAL NETWORKS, v.178

ISSN
0893-6080
DOI
10.1016/j.neunet.2024.106414
URI
http://hdl.handle.net/10203/323170
Appears in Collection
MA-Journal Papers (저널논문, "Journal Papers")
Files in This Item
There are no files associated with this item.
