
Faster&smaller shape inference #393

Merged
merged 6 commits into from
Oct 13, 2021

Conversation

maltanar
Collaborator

Use RandomNormal instead of Const for shape inference with custom ops, removing the need to allocate a dummy tensor filled with values. Allocating those tensors caused slow runtimes and ONNX out-of-memory errors for networks that use large activations.

See also Xilinx/finn-base#51

@maltanar maltanar merged commit f98cc21 into dev Oct 13, 2021
@maltanar maltanar deleted the feature/faster_shape_inf branch October 13, 2021 11:29