DNN: make MatMul support 3D or 4D with broadcast #22828
Conversation
Branch updated: e72deee → 05508ee; c731aaf → 34da3c0; 7b504b5 → c5b5a00.
Because the upstream has supported
Thanks for your contribution. LGTM! 👍
Use git rebase instead of merge commits to have clear changes. GitHub has issues with handling PRs which include merge commits.
I'm sorry I forgot to squash it. I will squash it later.
Branch updated: 70593a3 → 2a3853a; 2a3853a → 4891818.
👍
@@ -921,6 +921,7 @@ TEST_P(Test_ONNX_layers, MatMul_init)
    testONNXModels("matmul_4d_init");
    testONNXModels("matmul_init_2");
    testONNXModels("matmul_init_bcast");
There is a failing OpenCL FP16 test:
[ RUN ] Test_ONNX_layers.MatMul_init/1, where GetParam() = OCV/OCL_FP16
[ INFO:0@189.433] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_2d_init'. Number of nodes = 1, initializers = 1, inputs = 2, outputs = 1
[ INFO:0@189.433] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17
[ INFO:0@189.433] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!output) from domain='ai.onnx'
[ INFO:0@189.434] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_3d_init'. Number of nodes = 1, initializers = 1, inputs = 2, outputs = 1
[ INFO:0@189.434] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17
[ INFO:0@189.434] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!output) from domain='ai.onnx'
[ INFO:0@189.434] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_4d_init'. Number of nodes = 1, initializers = 1, inputs = 2, outputs = 1
[ INFO:0@189.434] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17
[ INFO:0@189.434] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!output) from domain='ai.onnx'
[ INFO:0@189.434] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_init_2'. Number of nodes = 2, initializers = 2, inputs = 3, outputs = 1
[ INFO:0@189.434] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17
[ INFO:0@189.434] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!outputY) from domain='ai.onnx'
[ INFO:0@189.434] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Add]:(onnx_node_output_0!output) from domain='ai.onnx'
[ INFO:0@189.435] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_init_bcast'. Number of nodes = 1, initializers = 1, inputs = 2, outputs = 1
[ INFO:0@189.435] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17
[ INFO:0@189.435] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!output) from domain='ai.onnx'
/build/precommit_opencl_linux/4.x/opencv/modules/dnn/test/test_common.impl.hpp:74: Failure
Expected: (normL1) <= (l1), actual: 1.22411 vs 0.004
|ref| = 6.9979562759399414
/build/precommit_opencl_linux/4.x/opencv/modules/dnn/test/test_common.impl.hpp:77: Failure
Expected: (normInf) <= (lInf), actual: 6.99796 vs 0.02
|ref| = 6.9979562759399414
[ INFO:0@189.435] global ts.cpp:850 testTearDown Memory_usage (OpenCL): 3960 (base=0 current=0)
[ FAILED ] Test_ONNX_layers.MatMul_init/1, where GetParam() = OCV/OCL_FP16 (2 ms)
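For context, here is a simplified sketch of the kind of check behind these normL1/normInf messages, assuming the usual DNN test convention of averaging the L1 norm over the number of elements. The thresholds 0.004 and 0.02 are the FP16 values from the failing run above; the exact helper in test_common.impl.hpp may differ.

```cpp
#include <opencv2/core.hpp>
#include <iostream>

// Compare a reference blob against a computed one the way the failure above reports it.
static bool checkOutputs(const cv::Mat& ref, const cv::Mat& out,
                         double l1 = 0.004, double lInf = 0.02)
{
    double normL1  = cv::norm(ref, out, cv::NORM_L1) / (double)ref.total(); // mean abs error
    double normInf = cv::norm(ref, out, cv::NORM_INF);                      // max abs error
    if (normL1 > l1 || normInf > lInf)
    {
        std::cerr << "normL1=" << normL1 << " normInf=" << normInf
                  << " |ref|=" << cv::norm(ref, cv::NORM_INF) << std::endl;
        return false;
    }
    return true;
}
```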
Merge with extra: opencv/opencv_extra#1018
This PR follows #22775.
The main purpose of this PR is to make MatMul support broadcasting when the second input has fewer dimensions than the first one, and to let the operation use SIMD and multi-threading. Because it does not support 1D Mat, it only supports MatMul that works like InnerProduct.
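As a minimal sketch of the broadcast rule described above (not the layer's actual implementation): with a 3D or 4D input A and a 2D constant B, the trailing two dimensions of A are multiplied by B and the leading dimensions are flattened into a batch. Names and the plain per-slice GEMM loop are illustrative only.

```cpp
#include <opencv2/core.hpp>
#include <vector>

// A: [..., rows, K] (3D or 4D, CV_32F, continuous), B: [K, M] -> out: [..., rows, M]
static cv::Mat matMulBroadcast2D(const cv::Mat& A, const cv::Mat& B)
{
    CV_Assert(A.dims >= 2 && B.dims == 2);
    CV_Assert(A.type() == CV_32F && B.type() == CV_32F && A.isContinuous());
    CV_Assert(A.size[A.dims - 1] == B.size[0]);

    const int rows = A.size[A.dims - 2], K = A.size[A.dims - 1], M = B.size[1];
    int batch = 1;
    for (int i = 0; i < A.dims - 2; ++i)
        batch *= A.size[i];                              // flatten the leading dims

    std::vector<int> outShape(A.size.p, A.size.p + A.dims);
    outShape.back() = M;
    cv::Mat out((int)outShape.size(), outShape.data(), CV_32F);

    // Each flattened batch slice is an independent 2D GEMM against the same B;
    // this loop is where SIMD-optimized GEMM and parallel_for_ can be applied.
    for (int b = 0; b < batch; ++b)
    {
        cv::Mat a2d(rows, K, CV_32F, (void*)(A.ptr<float>() + (size_t)b * rows * K));
        cv::Mat o2d(rows, M, CV_32F, out.ptr<float>() + (size_t)b * rows * M);
        cv::gemm(a2d, B, 1.0, cv::Mat(), 0.0, o2d);
    }
    return out;
}
```

For example, an A of shape [2, 3, 4, 5] multiplied by a B of shape [5, 6] yields an output of shape [2, 3, 4, 6].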
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.