Post-LoRA Restoration: Utilizing Transferability of Low-Rank Adapter in Quantized Foundation Models
Abstract
In this study, we consider the transferability of LoRA adapters in quantized foundation models. Specifically, we investigate whether LoRA adapters trained on a low-bit-width foundation model can still function effectively when merged into a higher-bit-width foundation model. By leveraging this transferability, it becomes possible to construct models whose performance is comparable to conventional LoRA from QLoRA adapters trained under resource-constrained conditions. Our method can be used not only to improve the performance of trained QLoRA models without additional training but also to accelerate the construction of LoRA models.
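A minimal sketch of the idea described above, assuming a Hugging Face Transformers + PEFT workflow; the model name and adapter path are hypothetical, and this is an illustration of merging a QLoRA-trained adapter into a higher-bit-width base model, not the authors' released code.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# The adapter is assumed to have been trained with QLoRA,
# i.e., on a low-bit-width (e.g., 4-bit) quantized base model.
adapter_path = "path/to/qlora-adapter"      # hypothetical path

# At merge time, load a higher-bit-width base model (e.g., bf16)
# instead of the quantized one the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # hypothetical base model
    torch_dtype=torch.bfloat16,
)

# Attach the QLoRA-trained adapter to the full-precision base and merge it,
# relying on the adapter's transferability across bit widths;
# no additional training is performed.
model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()
```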
Citation
Yuto Kanda, Kenji Hatano, Post-LoRA Restoration: Utilizing Transferability of Low-Rank Adapter in Quantized Foundation Models, ICLR 2025 Workshop on Sparsity in LLMs (SLLM), 2025-04-27.