[Feature]: Support Internlm2 Lora loading #4160
Comments
@newportchen Your approach seems promising. I'm not familiar with InternLM2, so I'm not sure about the reason for loading wqkv.
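For context on `wqkv`: in InternLM2's public modeling code, the q/k/v projections are fused into a single `wqkv` weight whose output rows are interleaved per KV-head group (each group is laid out as `[q_1 .. q_n, k, v]`), rather than concatenated as `[all q][all k][all v]` like Llama's packed `qkv_proj`. Here is a minimal sketch of that layout and how to unpack it; the sizes below are illustrative assumptions, and this is not vLLM code:

```python
import torch

# Illustrative sizes, not taken from any particular InternLM2 config.
num_kv_heads, num_q_per_kv, head_dim, hidden = 8, 4, 128, 4096
rows_per_group = num_q_per_kv + 2  # per KV group: q heads, then one k and one v block

wqkv = torch.randn(num_kv_heads * rows_per_group * head_dim, hidden)

# Undo the per-group interleaving to recover separate q/k/v weights.
w = wqkv.view(num_kv_heads, rows_per_group, head_dim, hidden)
wq = w[:, :num_q_per_kv].reshape(-1, hidden)  # all query heads
wk = w[:, -2].reshape(-1, hidden)             # one k block per KV group
wv = w[:, -1].reshape(-1, hidden)             # one v block per KV group
```

Because of this layout, a LoRA adapter targeting `wqkv` cannot be stacked the way Llama's separate q/k/v adapters are, which may be relevant to the incorrect results reported below.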
When will InternLM2ForCausalLM support LoRA loading? Eagerly waiting for that.
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!
I tried to modify the source code to support LoRA loading for the InternLM2 model. Loading the LoRA adapter works, but the inference results are not correct.
The specific modifications include:
1. Add the LoRA module attributes to `InternLM2ForCausalLM` (only `packed_modules_mapping` is shown here; a sketch of a matching `supported_lora_modules` list follows this list):
models/internlm2.py:
```python
class InternLM2ForCausalLM(nn.Module):
    packed_modules_mapping = {
        "wqkv": ["wqkv"],
        "gate_up_proj": [
            "w1",
            "w3",
        ],
    }
```
2. Register vocab size 92544 in bgmv_config.h, so the Punica BGMV kernels get instantiated for InternLM2's 92544-wide `lm_head`/embedding matrices (the kernels are only compiled for the sizes enumerated in that file):
bgmv_config.h:
```cpp
f(in_T, out_T, W_T, narrow, 92544) \
```
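As a sketch of the `supported_lora_modules` list referenced in step 1: the module names below follow InternLM2's layer naming (`wqkv`/`wo` attention, `w1`/`w3`/`w2` MLP) and mirror the pattern used by `LlamaForCausalLM` in vLLM, but they are assumptions, not a verified fix:

```python
import torch.nn as nn

class InternLM2ForCausalLM(nn.Module):
    # packed_modules_mapping as shown in step 1 (omitted here).

    # Assumed list, mirroring LlamaForCausalLM's supported_lora_modules;
    # the names are InternLM2's module names and have not been verified
    # against a working InternLM2 LoRA implementation.
    supported_lora_modules = [
        "wqkv",
        "wo",
        "gate_up_proj",
        "w2",
        "embed_tokens",
        "lm_head",
    ]
```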
I don't know where the problem is. Can someone help me?
Alternatives
No response
Additional context
No response