
size issue with AnimateLCM I2V model #35

Open
dreamyou070 opened this issue Sep 3, 2024 · 2 comments

Comments

@dreamyou070

When running inference with AnimateLCM I2V, the recommended size is (768, 512).
However, I cannot run inference at that size on an A100 GPU.
In the paper, you trained on A800 GPUs.
Is there any way to reduce the size while preserving the quality?
(I cannot run it even once.)

@G-U-N
Owner

G-U-N commented Sep 3, 2024

Do you mean you are hitting a GPU out-of-memory error? That is not normal: in my testing, inference needs no more than 20 GB. If you use PyTorch < 2.0, make sure xformers is properly installed, which greatly reduces GPU memory usage. Additionally, you can enable enable_vae_slicing to reduce the GPU memory needed for VAE decoding.
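For readers hitting the same limit: VAE slicing decodes the latent frames a slice at a time instead of all at once, so peak activation memory scales with the slice size rather than the frame count (in diffusers-style pipelines this is typically toggled with `pipe.enable_vae_slicing()`). A minimal numpy sketch of the idea, where `decode` is a hypothetical stand-in for the real VAE decoder, not AnimateLCM code:

```python
import numpy as np

def decode(latents):
    # Stand-in for a VAE decoder: 8x nearest-neighbor upsample per frame.
    # A real decoder allocates large activations per frame, so peak memory
    # grows with the number of frames decoded in one call.
    return np.repeat(np.repeat(latents, 8, axis=-2), 8, axis=-1)

def decode_sliced(latents, slice_size=1):
    # "VAE slicing": decode a few frames at a time and concatenate,
    # keeping peak activation memory proportional to slice_size.
    outs = [decode(latents[i:i + slice_size])
            for i in range(0, latents.shape[0], slice_size)]
    return np.concatenate(outs, axis=0)

# 16 frames of 96x64 latents -> 768x512 frames after the 8x upsample.
latents = np.random.rand(16, 4, 64, 96)
full = decode(latents)
sliced = decode_sliced(latents, slice_size=2)
assert np.allclose(full, sliced)  # same output, lower peak memory
print(full.shape)  # (16, 4, 512, 768)
```

The trade-off is a modest slowdown from the extra decoder calls, which is usually acceptable when it is the difference between fitting on the GPU or not.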

@dreamyou070
Author

dreamyou070 commented Sep 3, 2024 via email
