Error in loading AutoTokenizer when correct token passed #33897
Comments
cc @Wauplin 🤗
How do I start on this?
Hi @sanketsudake, I can't tell why you get this error.
Thanks, @Wauplin, for checking this out. It turned out I had a few permission issues on the NFS side: one of the cached files/commits was owned by a different user, which seems to have caused the issue. Thanks for the fix: huggingface/huggingface_hub@2a9efcc
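For anyone hitting the same symptom, a hedged sketch (the path and check are assumptions, not from this thread) for spotting cache files owned by a different user:

```python
import os
from pathlib import Path

# Default cache location; adjust if HF_HOME or HF_HUB_CACHE is set.
cache = Path.home() / ".cache" / "huggingface" / "hub"
uid = os.getuid()  # Unix-only

for path in cache.rglob("*"):
    st = path.lstat()  # don't follow the cache's internal symlinks
    if st.st_uid != uid:
        print(f"{path} is owned by uid {st.st_uid}, not {uid}")
```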
System Info
transformers 4.45.1
huggingface-hub 0.25.1
Python 3.10.14
I observed this while fine-tuning meta-llama/Llama-3.2-1B with autotrain.

Who can help?
text models: @ArthurZucker
autotrain: @abhishekkrthakur
Information

Tasks

An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
Without a token, loading works fine; I already have the model downloaded in the cache.
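The exact snippet isn't preserved in this copy of the issue; a minimal sketch of the working call, assuming the model id from the report above:

```python
from transformers import AutoTokenizer

# Loads from the local cache; no token is passed.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```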
When a valid HF token is passed, the same call fails.
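A hedged sketch of the failing variant (the token value is a placeholder, not from the report):

```python
from transformers import AutoTokenizer

# Per the report, passing a valid token triggers the error below,
# even though all files are already in the local cache.
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    token="hf_xxx",  # a valid Hugging Face access token
)
```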
A similar call is made here: https://github.com/huggingface/autotrain-advanced/blob/ed8e0c11e251531c319e47d01f5a47b2809d1ce2/src/autotrain/trainers/clm/utils.py#L588
This fails with the error below.
Both token=True and token="invalid_token" work fine for loading the tokenizer; I only get the above error when a valid token is passed.
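For contrast, a sketch of the variants the report says do not reproduce the error (same assumed model id):

```python
from transformers import AutoTokenizer

# Per the report, neither of these triggers the error; only a valid token does.
AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B", token=True)
AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B", token="invalid_token")
```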
Expected behavior
The code should work and give preference to the cached files.
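Not part of the original report, but as a workaround sketch: both mechanisms below force cache-only resolution. local_files_only is an existing from_pretrained argument, and HF_HUB_OFFLINE is an existing huggingface_hub environment variable.

```python
import os

# Must be set before huggingface_hub/transformers are imported,
# since huggingface_hub reads it at import time.
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoTokenizer

# local_files_only=True skips remote lookups for this call only.
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    local_files_only=True,
)
```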