make enable_sequential_cpu_offload more generic for third-party devices #4191
Merged: sayakpaul merged 2 commits into huggingface:main from ji-huazhong:refactor_enable_sequential_cpu_offload on Jul 21, 2023
Conversation
The documentation is not available anymore as the PR was closed or merged.
patrickvonplaten approved these changes on Jul 21, 2023
Works for me!
pcuenca approved these changes on Jul 21, 2023
Nice!
orpatashnik pushed a commit to orpatashnik/diffusers that referenced this pull request on Aug 1, 2023
…es (huggingface#4191) * make enable_sequential_cpu_offload more generic for third-party devices * make style
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request on Dec 25, 2023
…es (huggingface#4191) * make enable_sequential_cpu_offload more generic for third-party devices * make style
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request on Apr 26, 2024
…es (huggingface#4191) * make enable_sequential_cpu_offload more generic for third-party devices * make style
What does this PR do?
This PR makes enable_sequential_cpu_offload more generic for third-party devices. I noticed that in #4114, enable_sequential_cpu_offload was refactored to be more generic for other devices. But inside enable_sequential_cpu_offload we call torch.cuda.empty_cache to release all unoccupied cached memory, which has no effect on other devices (such as xpu):

diffusers/src/diffusers/pipelines/pipeline_utils.py
Lines 1128 to 1130 in 7a47df2
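For context, the referenced lines roughly do the following (a paraphrased sketch, not an exact quote of the linked snippet):

```python
# Inside enable_sequential_cpu_offload: move the pipeline off the accelerator
# first, then clear the CUDA caching allocator so the freed memory is visible.
if self.device.type != "cpu":
    self.to("cpu", silence_dtype_warnings=True)
    torch.cuda.empty_cache()  # has no effect on non-CUDA backends such as xpu
```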
We could swap out torch.cuda.empty_cache() from the outside by monkey-patching it, like

```python
torch.cuda.empty_cache = torch.xpu.empty_cache
device = torch.device("xpu")
pipeline.enable_sequential_cpu_offload(device=device)
```

but it looks a little weird: it mutates global state on the torch.cuda module and affects every other caller in the process.
I think a better way is to call the empty_cache method of whichever device backend is actually in use, so the cache-clearing step also works for third-party devices; a rough sketch of that idea follows.
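As an illustration of this kind of device-agnostic dispatch (a minimal sketch; the helper name and exact checks are mine, not necessarily the code merged in this PR):

```python
import torch

def empty_device_cache(device_type: str) -> None:
    # Look up the backend module that matches the device type, e.g. torch.cuda
    # for "cuda" or torch.xpu for "xpu", and clear its cache if it exposes one.
    device_mod = getattr(torch, device_type, None)
    if device_mod is not None and hasattr(device_mod, "empty_cache") and device_mod.is_available():
        device_mod.empty_cache()
```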
Now, we can use enable_sequential_cpu_offload more conveniently with xpu, like:
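A hypothetical usage sketch (the checkpoint name is only an example, and an XPU-capable PyTorch build, e.g. via intel_extension_for_pytorch, is assumed):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Stream weights between CPU and the accelerator one module at a time;
# with this change the cache-clearing step also respects non-CUDA devices.
pipe.enable_sequential_cpu_offload(device=torch.device("xpu"))
```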
Who can review?
@patrickvonplaten and @sayakpaul