When running inference with AnimateLCM I2V, the recommended size is (768, 512).
However, I cannot run inference at that size on an A100 GPU.
In the paper, you trained on A800 GPUs.
Is there any way to reduce the size while preserving quality?
(I cannot run it even once.)
Do you mean you are running out of GPU memory? That is not normal. In my testing, you won't need more than 20 GB for inference. If you use pytorch<2.0, please make sure that xformers is properly installed, which greatly reduces GPU memory usage. Additionally, you can call `enable_vae_slicing` to reduce the GPU memory needed for decoding.
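The two options above can be applied with one small helper. This is a hedged sketch: `apply_memory_savers` is a name I made up, but `enable_vae_slicing` and `enable_xformers_memory_efficient_attention` are real methods on diffusers pipelines; the `hasattr` guards keep it safe across pipeline classes and diffusers versions.

```python
def apply_memory_savers(pipe, use_xformers=False):
    """Enable common diffusers memory-reduction options, if the pipeline
    supports them. `pipe` is any diffusers-style pipeline object.

    - enable_vae_slicing: decodes latents in slices instead of all at once,
      cutting peak memory during VAE decoding.
    - enable_xformers_memory_efficient_attention: memory-efficient attention,
      mainly useful on pytorch<2.0 (pytorch>=2.0 has SDPA built in).
    """
    if hasattr(pipe, "enable_vae_slicing"):
        pipe.enable_vae_slicing()
    if use_xformers and hasattr(pipe, "enable_xformers_memory_efficient_attention"):
        pipe.enable_xformers_memory_efficient_attention()
    return pipe
```

Call it right after loading the pipeline and before moving it to the GPU, e.g. `apply_memory_savers(pipe, use_xformers=True)` when you are on pytorch<2.0 with xformers installed.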