Question about the GPU memory requirements for training #20
Comments
It depends on how many layers you add and on the training setup. In my experience, 8x A100-40G can support pretraining with ctx-length=4096. I also tried raising the LoRA rank to 1024 so that LoRA's trainable parameter count was close to ours, and the GPU memory usage was roughly the same in that case.
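A rough back-of-the-envelope sketch of the comparison being made here. The dimensions are assumptions (a LLaMA-7B-like config with hidden size 4096, MLP intermediate size 11008, 32 layers, LoRA on all linear projections, 8 newly inserted blocks), not figures from this thread; the point is only that a rank-1024 LoRA and a handful of extra decoder blocks land in the same ballpark of trainable parameters.

```python
# Rough trainable-parameter comparison: rank-1024 LoRA vs. newly added
# decoder blocks. All dimensions below are hypothetical (LLaMA-7B-like).
hidden = 4096          # hidden size
inter = 11008          # MLP intermediate size
n_layers = 32          # decoder layers in the base model
rank = 1024            # LoRA rank mentioned in the comment above

# Per adapted linear of shape (d_in, d_out), LoRA adds rank * (d_in + d_out)
# parameters. Here we adapt q/k/v/o (hidden -> hidden) and the gated MLP
# projections (gate/up: hidden -> inter, down: inter -> hidden).
lora_attn = 4 * rank * (hidden + hidden)
lora_mlp = 2 * rank * (hidden + inter) + rank * (inter + hidden)
lora_total = n_layers * (lora_attn + lora_mlp)

# One full decoder block: four attention projections plus the gated MLP
# (norms and biases are comparatively tiny and ignored).
block = 4 * hidden * hidden + 3 * hidden * inter
n_new_blocks = 8       # e.g. expanding 32 -> 40 layers

print(f"LoRA rank={rank}: {lora_total / 1e9:.2f}B trainable params")
print(f"{n_new_blocks} new blocks: {n_new_blocks * block / 1e9:.2f}B trainable params")
```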
Oh, my understanding is that llama-pro only tunes the newly added blocks during pretraining, so the GPU memory required should be far less than full-parameter training?
Yes, but if you add many new layers to train, that also brings a large memory footprint. Besides, during training the original model's parameters still need to be loaded, even though they are not fine-tuned.
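A minimal sketch of this point, assuming the Hugging Face `transformers` API; the checkpoint path and the set of new block indices are hypothetical placeholders. The full expanded model is loaded onto the GPU, so the frozen backbone still contributes to the memory footprint even though only the inserted blocks receive gradients and optimizer states.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the full expanded model; frozen weights still occupy GPU memory.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/expanded-llama-pro",   # hypothetical checkpoint with extra blocks
    torch_dtype=torch.bfloat16,
)

# Hypothetical indices of the newly inserted blocks.
new_block_indices = {8, 17, 26, 35}

for name, param in model.named_parameters():
    # Freeze everything except parameters belonging to the new blocks.
    parts = name.split(".")
    layer_id = None
    if "layers" in parts:
        layer_id = int(parts[parts.index("layers") + 1])
    param.requires_grad = layer_id in new_block_indices

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable / 1e9:.2f}B / total: {total / 1e9:.2f}B "
      f"(frozen weights alone: ~{(total - trainable) * 2 / 1e9:.1f} GB in bf16)")
```

In other words, the savings relative to full-parameter training come from the optimizer states and gradients of the frozen layers, not from the weights themselves, which are all resident in memory.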
Oh, I see. Thanks for the reply!
May I ask why a 14B model can be pretrained with LoRA on a machine with 2x L20 40G, but runs out of GPU memory with LLAMA_PRO on a machine with 3x A800 80G?
I'd like to ask how much GPU memory llama-pro training needs, and how much more than LoRA.