Optimum profiling #70
PyTorch ONNX Conversion Error Report
Error message:
Traceback (most recent call last):
File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 933, in export
onnx.checker.check_model(onnx_program.model_proto, full_check=True)
File "/Users/justinc/Documents/GitHub/torch-onnx/venv/lib/python3.11/site-packages/onnx/checker.py", line 176, in check_model
raise ValueError(
ValueError: This protobuf of onnx model is too large (>2GB). Call check_model with model path instead.
Analysis
PyTorch ONNX Conversion Analysis
Model Information
The model has 636968960 parameters and 0 buffers (non-trainable parameters).
Number of parameters per dtype: defaultdict(<class 'int'>, {torch.float32: 636968960})
Number of buffers per dtype: defaultdict(<class 'int'>, {})
Inputs:
Outputs:
The FX graph has 2290 nodes in total. Number of FX nodes per op:
Of the call_function nodes, the counts of operators used are:
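For context, per-op statistics like these can be recomputed from any FX graph. A minimal sketch, where `gm` stands for an arbitrary `torch.fx.GraphModule` (an assumption, not the report's actual instrumentation):

```python
from collections import Counter

import torch

def fx_node_stats(gm: torch.fx.GraphModule) -> tuple[Counter, Counter]:
    # Count nodes per FX opcode (placeholder, call_function, output, ...).
    op_counts = Counter(node.op for node in gm.graph.nodes)
    # Count the operator targets of the call_function nodes.
    target_counts = Counter(
        str(node.target) for node in gm.graph.nodes if node.op == "call_function"
    )
    return op_counts, target_counts
```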
ONNX Conversion Information
All operators in the model have registered ONNX decompositions.
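The checker failure above is the documented 2GB protobuf limit, and the message already names the workaround: serialize the weights as external data and hand check_model a path instead of a proto. A minimal sketch, assuming `onnx_program` is the in-memory export from the traceback (file names are placeholders):

```python
import onnx

# Protos over 2GB cannot be checked in memory; write the weights out as
# external data and point the checker at the file instead.
onnx.save_model(
    onnx_program.model_proto,
    "model.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="model.onnx.data",  # placeholder file name
)
onnx.checker.check_model("model.onnx", full_check=True)
```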
Profiling result
[Attached figures: torch.onnx profiling result, dynamo profiling result, torch.onnx memory usage, dynamo memory usage]
There may be shared tensors: the safetensors checkpoint is 3GB, but the external data files are 6, 11, and 16GB?
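One way to test the shared-tensor theory is to hash every initializer in the exported graphs and look for identical payloads; if PyTorch-side weight sharing was flattened into duplicate initializers, the digests will collide. A minimal sketch (the decoder_model.onnx name is an assumption about optimum's output layout under whisper/):

```python
import hashlib
from collections import defaultdict

import onnx
from onnx import numpy_helper

# File name is an assumption about what optimum wrote into whisper/.
model = onnx.load("whisper/decoder_model.onnx")  # also loads external data

digests = defaultdict(list)
for init in model.graph.initializer:
    raw = init.raw_data or numpy_helper.to_array(init).tobytes()
    digests[hashlib.sha256(raw).hexdigest()].append(init.name)

for names in digests.values():
    if len(names) > 1:
        print("identical tensor data shared by:", names)
```

Duplicated payloads would explain the size gap: safetensors stores a shared storage once, while naive external-data serialization writes one copy per initializer.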
optimum-cli export onnx --model openai/whisper-large-v3 whisper/
mprof run optimum-cli export onnx --model openai/whisper-large-v3 whisper/
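(`mprof` is the command-line front end of the memory_profiler package; after a run, `mprof plot` renders the recorded memory curves like the ones attached above.)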