AttributeError: 'Qwen2VLForConditionalGeneration' object has no attribute 'quantize'. Did you mean: 'dequantize'? #2039
Comments
fixed
I have updated ms-swift and transformers to the latest versions, but this error still exists: [INFO:swift] Start quantizing the model...
Please use ms-swift>=2.5.
$ pip list | grep ms-swift
I have installed the latest version from the source code.
Please use GPTQ; AWQ does not support Qwen2-VL.
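For reference, a GPTQ export with ms-swift might look like the command below. This is a sketch rather than a verified invocation: the flag names (--quant_method, --quant_bits, --dataset) and the model-type string are assumptions based on the ms-swift 2.x CLI, so check swift export --help before running it.

$ swift export --model_type qwen2-vl-7b-instruct --quant_method gptq --quant_bits 4 --dataset ms-bench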
[INFO:swift] Qwen2VLForConditionalGeneration: 8291.3756M Params (0.0000M Trainable [0.0000%]), 234.8828M Buffers.
[INFO:swift] system: You are a helpful assistant.
[INFO:swift] Quantization dataset: ['ms-bench']
[INFO:swift] Start quantizing the model...
Traceback (most recent call last):
File "/meta/cash/llm/swift/swift/cli/export.py", line 5, in
export_main()
File "/meta/cash/llm/swift/swift/utils/run_utils.py", line 32, in x_main
result = llm_x(args, **kwargs)
File "/meta/cash/llm/swift/swift/llm/export.py", line 252, in llm_export
awq_model_quantize(model, template.tokenizer, args.quant_batch_size)
File "/meta/cash/llm/swift/swift/llm/export.py", line 138, in awq_model_quantize
awq_model.quantize(tokenizer, quant_config=quant_config, n_parallel_calib_samples=batch_size)
File "/home/ahs/anaconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1729, in getattr
raise AttributeError(f"'{type(self).name}' object has no attribute '{name}'")
AttributeError: 'Qwen2VLForConditionalGeneration' object has no attribute 'quantize'. Did you mean: 'dequantize'?
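The AttributeError itself comes from the AWQ path: the .quantize() method is provided by AutoAWQ's model wrapper, not by transformers, and since Qwen2VLForConditionalGeneration is not a model class AutoAWQ supports, the raw transformers module is used and the lookup falls through to nn.Module.__getattr__, which raises. For comparison, the usual AutoAWQ flow for a supported text-only model looks roughly like the sketch below; the checkpoint name and quant_config values are illustrative examples taken from typical AutoAWQ usage, not from this issue.

# Sketch of the standard AutoAWQ quantization flow for a supported model.
# The model path, output path, and quant_config values are examples only.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2-7B-Instruct"   # a text-only model that AutoAWQ supports
quant_path = "Qwen2-7B-Instruct-awq"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# AutoAWQForCausalLM wraps the transformers model; the wrapper is what defines .quantize().
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run AWQ calibration and quantization, then save the quantized weights.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)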