LoRA module splicing #274
mlora/model/modules/lora.py

It seems that the multi-LoRA modules are placed in a list, rather than concatenating lora_a1 and lora_a2 into a single new lora_A, and that each group's lora_a and lora_b are extracted separately during training.

Also, I noticed that task1 and task2 use the same PID while running. Is this understanding correct? Thank you for your reply! @yezhengmao1
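Roughly, the structure I am referring to looks like this (a simplified sketch, not the actual mLoRA code; the class name `MultiLoraLinear` and the `splits` argument are my own, only for illustration):

```python
import torch
import torch.nn as nn


class MultiLoraLinear(nn.Module):
    """Sketch: one shared base weight, each adapter's lora_a / lora_b kept in a list
    instead of being merged into a single large lora_A / lora_B matrix."""

    def __init__(self, in_features: int, out_features: int, ranks: list[int]):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        # A is initialized with small random values, B with zeros (standard LoRA init)
        self.lora_a = nn.ParameterList(
            nn.Parameter(torch.randn(r, in_features) * 0.01) for r in ranks
        )
        self.lora_b = nn.ParameterList(
            nn.Parameter(torch.zeros(out_features, r)) for r in ranks
        )

    def forward(self, x: torch.Tensor, splits: list[int]) -> torch.Tensor:
        # splits[i] = number of rows in the batch that belong to adapter i
        base_out = self.base(x)
        pieces = []
        start = 0
        for a, b, n in zip(self.lora_a, self.lora_b, splits):
            chunk = x[start:start + n]          # this adapter's rows only
            delta = chunk @ a.T @ b.T           # low-rank update for this adapter
            pieces.append(base_out[start:start + n] + delta)
            start += n
        return torch.cat(pieces, dim=0)
```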
If you have one GPU and two tasks (task1 and task2), and set the relevant option, mLoRA will concatenate task1's and task2's inputs into one large batch and train them simultaneously (in only one process, so they share the same PID). Also, if you set ...
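For illustration only (this is not the actual mLoRA dispatcher code; it reuses the hypothetical `MultiLoraLinear` sketch from the question above), concatenating the two tasks' inputs into one large batch could look like this:

```python
import torch

# Two tasks' micro-batches (sizes chosen arbitrarily for the example)
task1_input = torch.randn(4, 768)   # 4 samples routed to adapter 0
task2_input = torch.randn(6, 768)   # 6 samples routed to adapter 1

# Stack into one large batch and record how many rows belong to each adapter
big_batch = torch.cat([task1_input, task2_input], dim=0)   # shape (10, 768)
splits = [task1_input.size(0), task2_input.size(0)]

layer = MultiLoraLinear(768, 768, ranks=[8, 8])
output = layer(big_batch, splits)   # both adapters handled in a single forward pass
print(output.shape)                 # torch.Size([10, 768])
```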
Can you point out the code that concatenates the two tasks' inputs into a large batch?
If you have any questions, you can contact me on WeChat: 18280486636
Your work is outstanding. I would like to ask which key code modules handle the splicing of the LoRA modules. Thank you for your reply. @yezhengmao1