🌟 New adapter setup
Model description
MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning
LoRA has been widely used for parameter-efficient fine-tuning (PEFT), especially in domain adaptation. However, in multi-task learning (MTL) scenarios, LoRA struggles with task interference due to the projection of high-dimensional task-specific features into the same low-dimensional space. This results in suboptimal performance.
To address this challenge, the authors propose MTL-LoRA, an enhancement to LoRA that preserves its efficiency while improving multi-task adaptation. MTL-LoRA introduces task-adaptive parameters to better differentiate between tasks while still capturing shared knowledge, leading to improved performance across various MTL benchmarks with a comparable or even smaller number of trainable parameters.
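For intuition, here is a minimal sketch of what such a layer could look like, assuming the formulation described in the MTL-LoRA paper: a shared low-rank down-projection, one task-specific transform per task, and several shared up-projections mixed with task-specific softmax weights. The class and parameter names are illustrative and do not reflect the adapters codebase.

```python
import torch
import torch.nn as nn


class MTLLoRALinearSketch(nn.Module):
    """Sketch of an MTL-LoRA update on top of a frozen linear layer (illustrative only)."""

    def __init__(self, base: nn.Linear, r: int = 8, num_tasks: int = 3,
                 num_up: int = 2, scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # pretrained weight stays frozen
            p.requires_grad = False

        d_in, d_out = base.in_features, base.out_features
        self.scaling = scaling
        # Shared low-rank down-projection A, as in vanilla LoRA.
        self.lora_A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        # One task-specific r x r transform per task to separate task information
        # inside the low-rank space (assumed full matrix here for simplicity).
        self.task_transform = nn.Parameter(
            torch.stack([torch.eye(r) for _ in range(num_tasks)]))      # (num_tasks, r, r)
        # Several shared up-projections B_i, zero-initialised like LoRA's B.
        self.lora_B = nn.Parameter(torch.zeros(num_up, d_out, r))       # (num_up, d_out, r)
        # Task-specific logits for mixing the up-projections.
        self.mix_logits = nn.Parameter(torch.zeros(num_tasks, num_up))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        out = self.base(x)                                   # frozen base output
        z = x @ self.lora_A.T                                # shared down-projection, (..., r)
        z = z @ self.task_transform[task_id].T               # task-specific transform, (..., r)
        weights = torch.softmax(self.mix_logits[task_id], dim=-1)   # (num_up,)
        delta = sum(w * (z @ B.T) for w, B in zip(weights, self.lora_B))
        return out + self.scaling * delta
```

Because the up-projections are zero-initialised, the module initially reproduces the frozen base layer, mirroring LoRA's initialization; only the small shared and task-specific matrices are trainable.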
Open source status
Tasks
- `ForwardContext`: manage the `task_ids` forward parameter, as discussed in Support for Passing `task_ids` to `forward_context` for Multi-Task Learning #783 and Support for Passing `task_ids` to `forward_context` for Multi-Task Learning #784 (a sketch of the idea follows below)
- `MTLLoRA` Adapter: Add MTL-LoRA Adapter #792
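The sketch below only illustrates the general idea behind carrying `task_ids` through a forward context: the caller sets the ids once per forward pass and MTL-LoRA modules read them without every intermediate layer having to thread the argument through. The names (`task_context`, `current_task_ids`) are hypothetical and are not the adapters library API.

```python
import contextvars
from contextlib import contextmanager

# Hypothetical context holder for per-batch task ids.
_task_ids_var = contextvars.ContextVar("task_ids", default=None)


@contextmanager
def task_context(task_ids):
    """Expose task_ids to all modules executed inside this context."""
    token = _task_ids_var.set(task_ids)
    try:
        yield
    finally:
        _task_ids_var.reset(token)


def current_task_ids():
    """Read the task ids set for the ongoing forward pass (None if unset)."""
    return _task_ids_var.get()


# Illustrative usage in a training step, where `model` is any module whose
# MTL-LoRA layers call current_task_ids() in their forward pass:
# with task_context(batch["task_ids"]):
#     outputs = model(**batch)
```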