Impossible to import ONNX model #2592
I think if it seems ok, I could do a PR. Anyway, importing the ONNX model doesn't work correctly even after this change because of another node.

Hmm, that doesn't seem correct either.

We're always open to PRs!

We can keep the same issue open to represent your ONNX model. I know there are many things that are far from perfect with the current ONNX import approach, but IIRC @antimora was working on some improvements recently.

@laggui thank you for your reply!
Hi all!
I tried to use an .onnx model from Hugging Face with burn when I found this potential issue.
Describe the bug
One node (where …, to be precise) throws an error.

After debugging, I found that one of the preceding nodes (Unsqueeze) takes two Int64 input arguments but returns a Float32 tensor. This causes the error above and doesn't match the ONNX graph as shown in Netron.

To Reproduce
Expected behavior
Correct import from .onnx files.
Potential fix
It seems the output type here, which currently falls back to a default (Float32), should instead match the element type of the inputs.
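To illustrate the idea, here is a minimal, self-contained Rust sketch of what type inference for Unsqueeze should do. The types below (`ElementType`, `TensorType`, `unsqueeze_output`) are hypothetical stand-ins, not burn's actual ONNX-import API: the point is that Unsqueeze only inserts dimensions, so the output must inherit the element type of the data input rather than defaulting to Float32.

```rust
// Hypothetical minimal types; burn's real ONNX IR differs.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ElementType {
    Float32,
    Int64,
}

#[derive(Clone, Copy, Debug, PartialEq)]
struct TensorType {
    elem_type: ElementType,
    rank: usize,
}

/// Infer the output type of an Unsqueeze node.
/// `axes_len` is the number of axes being inserted.
fn unsqueeze_output(input: TensorType, axes_len: usize) -> TensorType {
    TensorType {
        // Propagate the input's element type instead of a Float32 default:
        // Unsqueeze changes rank, never dtype.
        elem_type: input.elem_type,
        rank: input.rank + axes_len,
    }
}

fn main() {
    // An Int64 input (as in the reported graph) must stay Int64 after Unsqueeze.
    let input = TensorType { elem_type: ElementType::Int64, rank: 1 };
    let out = unsqueeze_output(input, 1);
    assert_eq!(out.elem_type, ElementType::Int64);
    assert_eq!(out.rank, 2);
    println!("{:?}", out);
}
```

With a fallback like `elem_type: ElementType::Float32` in place of the propagation, the Int64 case above would produce a Float32 output and trip exactly the kind of mismatch reported here.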