
MTL-LoRA #791

Open
FrLdy opened this issue Feb 8, 2025 · 0 comments
Labels
enhancement New feature or request

Comments

FrLdy (Contributor) commented Feb 8, 2025

🌟 New adapter setup

Model description

MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning

LoRA is widely used for parameter-efficient fine-tuning (PEFT), especially for domain adaptation. In multi-task learning (MTL) scenarios, however, LoRA suffers from task interference because it projects high-dimensional features from different tasks into the same low-dimensional subspace, which leads to suboptimal performance.

To address this challenge, the authors propose MTL-LoRA, an enhancement of LoRA that retains its parameter efficiency while improving multi-task adaptation. MTL-LoRA introduces task-adaptive parameters that better differentiate between tasks while still capturing shared knowledge, improving performance across various MTL benchmarks with a comparable or even smaller number of trainable parameters.
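For reference, here is a minimal PyTorch sketch of the update rule as I understand the paper's formulation: a shared down-projection A, a task-specific r×r transform Λ_t inserted between the projections, and several shared up-projections B_i combined with task-specific softmax weights. This is not an existing PEFT API; the class name, argument names, and hyperparameter defaults are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MTLLoRALinear(nn.Module):
    """Hypothetical MTL-LoRA layer (sketch, not the paper's reference code)."""

    def __init__(self, in_features, out_features, num_tasks, r=8, num_b=3, temperature=0.5):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        # shared down-projection A, as in vanilla LoRA
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        # one r x r transform per task to disentangle task-specific information
        self.lora_lambda = nn.Parameter(torch.stack([torch.eye(r) for _ in range(num_tasks)]))
        # multiple shared up-projections B_i, zero-initialised as in vanilla LoRA
        self.lora_B = nn.Parameter(torch.zeros(num_b, out_features, r))
        # per-task logits for weighting the up-projections
        self.weights = nn.Parameter(torch.zeros(num_tasks, num_b))
        self.temperature = temperature

    def forward(self, x, task_id):
        h = self.base(x)
        z = F.linear(x, self.lora_A)                # (..., r) shared projection
        z = F.linear(z, self.lora_lambda[task_id])  # task-specific r x r transform
        w = F.softmax(self.weights[task_id] / self.temperature, dim=-1)
        # weighted combination of the up-projections captures shared knowledge
        delta = sum(w[i] * F.linear(z, self.lora_B[i]) for i in range(self.lora_B.shape[0]))
        return h + delta


# usage sketch: same layer shared across tasks, routed by task_id
layer = MTLLoRALinear(768, 768, num_tasks=4)
y = layer(torch.randn(2, 16, 768), task_id=0)
```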

Open source status

Tasks
