
Lora module splicing #274

Open
FortuneBush opened this issue Nov 29, 2024 · 6 comments

Comments

@FortuneBush
Contributor

Your work is outstanding. I would like to ask which key code modules are used to splice the LoRA modules together.
Thank you for your reply. @yezhengmao1

@yezhengmao1
Contributor

mlora/model/modules/lora.py
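For orientation, here is a minimal, hedged sketch of what a LoRA adapter module of this kind typically contains; the class name, arguments, and initialization below are illustrative assumptions, not the exact contents of mlora/model/modules/lora.py:

```python
# Minimal illustrative LoRA adapter (an assumption, not mLoRA's actual class).
import math

import torch
import torch.nn as nn


class Lora(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, rank: int, alpha: int, dropout: float = 0.0):
        super().__init__()
        # Low-rank factors: the effective weight update is (lora_b @ lora_a) * alpha / rank.
        self.lora_a = nn.Parameter(torch.empty(rank, in_dim))
        self.lora_b = nn.Parameter(torch.zeros(out_dim, rank))
        nn.init.kaiming_uniform_(self.lora_a, a=math.sqrt(5))
        self.scaling = alpha / rank
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Return only the low-rank delta; the caller adds it to the frozen
        # base layer's output.
        return self.dropout(x) @ self.lora_a.t() @ self.lora_b.t() * self.scaling
```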

@FortuneBush
Contributor Author

FortuneBush commented Dec 4, 2024

It seems that the multiple LoRA modules are kept in a list, without concatenating lora_a1 and lora_a2 into a new lora_A, and that each group's lora_a and lora_b are extracted separately during each training step.
May I ask if this understanding is correct? Could you please explain the technical code details of LoRA splicing and unloading again?
Thank you for your reply!!! @yezhengmao1
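As a hedged illustration of the understanding described above (the class and argument names are assumptions, not mLoRA's real code), the adapters can stay in a list while each task's slice of the combined batch goes through only its own lora_a / lora_b pair:

```python
# Illustrative multi-LoRA linear layer (an assumption, not mLoRA's real module).
import torch
import torch.nn as nn


class MultiLoraLinear(nn.Module):
    def __init__(self, base: nn.Linear, adapters: list[nn.Module]):
        super().__init__()
        self.base = base                         # frozen pretrained projection
        self.adapters = nn.ModuleList(adapters)  # one LoRA adapter per task, kept in a list

    def forward(self, x: torch.Tensor, lengths: list[int]) -> torch.Tensor:
        # x holds all tasks' inputs concatenated along the batch dimension;
        # `lengths` records how many rows belong to each task.
        base_out = self.base(x)
        chunks, start = [], 0
        for adapter, n in zip(self.adapters, lengths):
            sl = slice(start, start + n)
            # Each task's slice is routed through its own lora_a / lora_b pair.
            chunks.append(base_out[sl] + adapter(x[sl]))
            start += n
        return torch.cat(chunks, dim=0)
```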

@FortuneBush
Contributor Author

FortuneBush commented Dec 4, 2024

[Screenshot: the process list shows task1 and task2 running with the same pid.]

Also, I find that task1 and task2 use the same pid while running.
It seems the two LoRA training tasks are not trained in parallel but serially.
Within each layer in an iteration of the network, it seems task A is trained first, followed by task B, and training only moves on to the next layer after both tasks A and B have finished.

May I ask if this understanding is correct?

Thank you for your reply!!! @yezhengmao1

@yezhengmao1
Contributor

If you have one GPU, two tasks (task1 and task2), and set concurrency_num: 2:

mLoRA will concatenate task1's and task2's inputs into one large batch and train them simultaneously (in only one process, so they have the same pid).

Also, if you set concurrency_num: 1, mLoRA will train task1 and task2 serially.
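A rough, hedged illustration of that data flow (the tensor shapes and variable names below are assumptions, not the scheduler's actual code):

```python
# With concurrency_num: 2, both tasks' token batches end up in one forward pass.
import torch

task1_tokens = torch.randint(0, 32000, (4, 512))  # batch of 4 sequences for task1
task2_tokens = torch.randint(0, 32000, (4, 512))  # batch of 4 sequences for task2

# One combined batch, handled by a single process (hence the identical pid);
# inside each layer, the multi-LoRA module routes every slice to its own adapter.
tokens = torch.cat([task1_tokens, task2_tokens], dim=0)  # shape (8, 512)
lengths = [task1_tokens.size(0), task2_tokens.size(0)]   # per-task row counts
```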

@FortuneBush
Contributor Author

FortuneBush commented Dec 4, 2024

Can you point out the code that combines the two tasks' inputs into one large batch?
The code seems to store the two LoRA modules in a list, rather than forming a larger batch.

@yezhengmao1
Contributor

tokens = torch.tensor(
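For context, a hedged sketch of how one token tensor can be built from several tasks' tokenized batches (illustrative only; the helper name and padding value are assumptions, not the code behind the line referenced above):

```python
import torch


def build_batch(task_batches: list[list[list[int]]], pad_id: int = 0) -> torch.Tensor:
    # Flatten every task's examples into one list, pad to a common length,
    # and build a single tensor that one forward pass can process.
    all_seqs = [seq for batch in task_batches for seq in batch]
    max_len = max(len(seq) for seq in all_seqs)
    padded = [seq + [pad_id] * (max_len - len(seq)) for seq in all_seqs]
    return torch.tensor(padded, dtype=torch.long)
```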

For any questions, you can contact me on WeChat: 18280486636
