Store cached models in location compatible with huggingface-cli
#1663
Yes, we could indeed change the cache here! This was a copy-paste back then from the …
I support the idea to get everything under … Also note that if some stuff is cached but not from the Hub (preprocessed data, community weights, ...), it would make sense to use …
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
cc @pcuenca would you like to tackle this one? Otherwise, I should be able to find some time.
Yeah, I'll take a look today.
Think this is still relevant.
@patrickvonplaten is there an elegant mechanism for loading models that have been cached before? For example, something happened with the Hub (https://twitter.com/huggingface/status/1655760648926642178) and the normal …
@alexcoca In general …
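The answer above is truncated, but for the "Hub is down" scenario the usual levers documented by huggingface_hub are the `HF_HUB_OFFLINE` environment variable and the `local_files_only` kwarg. A minimal sketch (assuming diffusers is installed and the model was fully downloaded beforehand):

```python
import os


def load_pipeline_offline(model_id, cache_dir=None):
    """Load a previously downloaded pipeline without any network calls.

    A sketch only: assumes `diffusers` is installed and that `model_id`
    was fully cached before the Hub became unreachable.
    """
    # HF_HUB_OFFLINE disables all Hub requests globally; local_files_only
    # does the same for this single call. Both come from huggingface_hub.
    os.environ["HF_HUB_OFFLINE"] = "1"
    from diffusers import DiffusionPipeline  # imported lazily on purpose

    return DiffusionPipeline.from_pretrained(
        model_id,
        cache_dir=cache_dir,
        local_files_only=True,
    )
```

With `local_files_only=True`, a missing cached file raises an error immediately instead of hanging on a network request.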
@Wauplin, this is a great idea and I had tried it, but passing the …
There should be an easy way to default to using an existing cache, and I imagine that is what … I can of course pass that cache path myself as the …
🤔
with …
like I can't see how this would work ... the code ends up in …
@Wauplin, I did some further stepping through the code, with some unexpected findings:
Why is this the case? Update: it turns out this is because of setting …
@alexcoca given the update, does it mean this is now solved on your side? Meaning …
@Wauplin, yes, worked for me because I had the cached files handy 👍 :)
Is it possible to choose the location of the cache?
@alexblattner Yes, you can configure it by setting …
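The variable name is cut off above; per the huggingface_hub documentation the cache location is controlled by environment variables such as `HF_HOME` and `HUGGINGFACE_HUB_CACHE` (newer versions also read `HF_HUB_CACHE`). A stdlib-only sketch approximating the documented resolution order (the real logic lives in `huggingface_hub.constants` and may differ between versions):

```python
import os


def default_hub_cache() -> str:
    """Approximate how huggingface_hub picks its cache directory.

    A sketch of the documented precedence, not the library's actual code:
    explicit hub-cache variables win, then HF_HOME, then the default
    ~/.cache/huggingface location.
    """
    for var in ("HF_HUB_CACHE", "HUGGINGFACE_HUB_CACHE"):
        if os.environ.get(var):
            return os.environ[var]
    hf_home = os.environ.get("HF_HOME") or os.path.join(
        os.path.expanduser("~"), ".cache", "huggingface"
    )
    return os.path.join(hf_home, "hub")
```

So, for example, exporting `HF_HOME=/mnt/big-disk/huggingface` before launching Python moves the hub cache (and other HF state) under that directory.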
@Wauplin is there a way to do that without setting the environment variable? I essentially want to be able to store some LoRAs on external disk A and some on external disk B. Changing the environment variable all the time seems wrong to me. Of course, these external disks are connected.
@alexblattner Then …
@Wauplin thanks a lot! Could you give me a very basic diffusers example that uses that? Thanks in advance!
@alexblattner Something like this:

```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", cache_dir="path/to/cache"
)
```

should work :)
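For the two-disks use case asked about above, `cache_dir` can be passed per call, so each disk holds its own cache. A sketch only: the paths and the LoRA repo id are hypothetical placeholders, and passing `cache_dir` to `load_lora_weights` relies on it accepting the standard download kwargs, which its documentation lists.

```python
def load_with_split_caches():
    """Sketch: keep the base model on disk A and LoRA weights on disk B.

    Paths and repo ids below are hypothetical placeholders; assumes
    `diffusers` is installed.
    """
    from diffusers import DiffusionPipeline  # lazy import on purpose

    pipe = DiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        cache_dir="/mnt/disk_a/hf-cache",  # base model cached on disk A
    )
    # load_lora_weights accepts cache_dir among its download kwargs.
    pipe.load_lora_weights(
        "some-user/some-lora",             # hypothetical LoRA repo
        cache_dir="/mnt/disk_b/hf-cache",  # LoRA cached on disk B
    )
    return pipe
```

No environment variable is touched, so the two calls can target different disks within the same process.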
@Wauplin thanks a lot! |
Reference: huggingface/huggingface_hub#1259
`huggingface-cli scan-cache` doesn't see cached diffusers models. Are there any drawbacks to changing the cache folder?
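For completeness: `huggingface-cli scan-cache` only understands the hub-style cache layout, which is exactly why a separate diffusers cache was invisible to it. The same report is available programmatically via `huggingface_hub.scan_cache_dir`, the API behind that CLI command. A sketch, assuming huggingface_hub is installed and the directory follows the hub cache layout:

```python
def summarize_cache(cache_dir=None):
    """Map repo_id -> size on disk for a hub-style cache directory.

    Uses huggingface_hub.scan_cache_dir (the API behind
    `huggingface-cli scan-cache`); assumes huggingface_hub is installed.
    Passing cache_dir=None scans the default cache location.
    """
    from huggingface_hub import scan_cache_dir  # lazy import on purpose

    info = scan_cache_dir(cache_dir=cache_dir)
    return {repo.repo_id: repo.size_on_disk for repo in info.repos}
```

Once diffusers stores models in the shared hub cache (the change this issue asks for), both the CLI and this function see them.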