
onnx_export.py error #3072

Open
BigFaceBoy opened this issue Nov 5, 2024 · 1 comment
Labels: question (Further information is requested)

Comments

BigFaceBoy commented Nov 5, 2024

Model downloaded from https://hf-mirror.com/ using: ./hfd.sh IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1 --tool wget -x 6
Converting it with MNN/transformers/diffusion/export/onnx_export.py fails with the following error:

[root@localhost export]$ python onnx_export.py --model_path ../../../../huggingface/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/ --output_path ./
Loading pipeline components...:  14%|██████████████████▌                                                                                                               | 1/7 [00:00<00:00, 46.34it/s]
Traceback (most recent call last):
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/transformers/modeling_utils.py", line 586, in load_state_dict
    return torch.load(
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/torch/serialization.py", line 1369, in load
    raise RuntimeError(
RuntimeError: mmap can only be used with files saved with `torch.save(../../../../huggingface/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/safety_checker/pytorch_model.bin, _use_new_zipfile_serialization=True), please torch.save your checkpoint with this option in order to use mmap.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/transformers/modeling_utils.py", line 595, in load_state_dict
    if f.read(7) == "version":
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x86 in position 196: invalid start byte

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/home/root/Code/MNN/transformers/diffusion/export/onnx_export.py", line 212, in <module>
    convert_models(args.model_path, args.output_path, args.opset, args.fp16)
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/home/root/Code/MNN/transformers/diffusion/export/onnx_export.py", line 80, in convert_models
    pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device)
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 896, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 704, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/transformers/modeling_utils.py", line 4020, in from_pretrained
    state_dict = load_state_dict(resolved_archive_file, weights_only=weights_only)
  File "/data/home/root/mambaforge/envs/taiyi/lib/python3.9/site-packages/transformers/modeling_utils.py", line 607, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for '../../../../huggingface/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/safety_checker/pytorch_model.bin' at '../../../../huggingface/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/safety_checker/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

OS: CentOS Linux release 7.9.2009
GPU: NVIDIA L40 (NVIDIA-SMI 550.54.15)
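
As a quick sanity check (a minimal sketch, not part of the original report; it reuses the relative path from the traceback above), one can try reading the safety_checker checkpoint directly with torch.load. If this fails with the same error, the .bin file itself is damaged or incomplete rather than the export script:

```python
# Sketch: try to deserialize the safety_checker weights directly.
# The path below is copied from the traceback; adjust it to your layout.
import torch

ckpt = "../../../../huggingface/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/safety_checker/pytorch_model.bin"
try:
    state_dict = torch.load(ckpt, map_location="cpu")
    print(f"loaded {len(state_dict)} tensors; the file can be read")
except Exception as e:
    print(f"checkpoint could not be read: {e}")
```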

jxt1234 added the question (Further information is requested) label Nov 6, 2024
Collaborator

jxt1234 commented Nov 6, 2024

Could it be that the model was not fully downloaded?
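
One way to check this (a sketch added here, not from the thread; the repo id and filename come from the issue, and the mirror endpoint is only an assumption for users behind hf-mirror) is to compare the local size of safety_checker/pytorch_model.bin with the size recorded for the repo on the Hub:

```python
# Sketch: compare the local file size with the size published for the repo.
# Repo id and filename are taken from the issue; adjust the local path as needed.
import os
from huggingface_hub import HfApi

local = "../../../../huggingface/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/safety_checker/pytorch_model.bin"
print("local size:", os.path.getsize(local))

api = HfApi()  # e.g. HfApi(endpoint="https://hf-mirror.com") if the mirror must be used
info = api.model_info("IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1", files_metadata=True)
for sibling in info.siblings:
    if sibling.rfilename == "safety_checker/pytorch_model.bin":
        print("hub size:  ", sibling.size)
```

If the sizes differ, re-downloading that file (or the whole repo) should resolve the loading error.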
