How to finetuning with lower memory fp16 version for p100 GPUs? #39
This is quite delicate and doesn't quite seem to work out of the box. I'm going to need more time to look into this.
The background to this problem is that we have a large number of P100 machines, but they cannot run the fp32 version. Thank you for working on this.
Is there some update on this matter? We have the same problem, unfortunately.
Yes, same here for us. Both huggingface and this repo hit the same OOM error when running on a free Google Colab GPU such as the P100. Any fix or workaround yet?
The problem still persists, unfortunately. Fine-tuning doesn't really work with Colab resources.
For fine-tuning with the lower-memory fp16 version (the fp32 version OOMs), how should I modify the training.py script?
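Not an official fix from this repo, but one common workaround is to wrap the training step in PyTorch automatic mixed precision (AMP), which runs most of the forward pass in fp16 and roughly halves activation memory. The sketch below uses a placeholder model and optimizer — none of these names come from the repo's training.py, so the actual edit would go wherever that script computes its loss and calls `backward()`:

```python
# Hedged sketch: a mixed-precision training step with torch.autocast
# and GradScaler. Model, data shapes, and hyperparameters are
# placeholders, not taken from this repo's training.py.
import torch
import torch.nn as nn

use_amp = torch.cuda.is_available()  # P100 supports fp16 compute
device = "cuda" if use_amp else "cpu"

model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# GradScaler is a no-op when enabled=False, so this also runs on CPU.
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

def train_step(x, y):
    optimizer.zero_grad()
    # Forward pass runs in fp16 where it is numerically safe;
    # loss and reductions stay in fp32.
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = nn.functional.mse_loss(model(x), y)
    # Scale the loss so small fp16 gradients don't underflow to zero.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

x = torch.randn(8, 16, device=device)
y = torch.randn(8, 4, device=device)
loss = train_step(x, y)
```

If AMP alone is not enough on a 16 GB P100, reducing the batch size with gradient accumulation, or enabling gradient checkpointing where the model supports it, are the usual next steps.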