
Yunet quantized model is slower #203

Open
MathieuLalaque opened this issue Jul 26, 2023 · 1 comment
Labels
question It is not an issue but rather a user question

Comments

@MathieuLalaque

Hello! Thanks for the great work.

I have been using the YuNet model and tried the quantized version to speed up inference, but I got slower results, both in my own code and with your demo. I checked the benchmarks, and the quantized model is listed as slower there too: is this expected behavior?

For context, I use the default backend on an Intel(R) Core(TM) i5-10300H CPU. To be clear, I loaded the int8 ONNX file from the GitHub repo and did not run the quantization script myself.
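For anyone wanting to reproduce this comparison on their own machine, a minimal timing sketch using OpenCV's `FaceDetectorYN` API is below. The model and image filenames are assumptions, not the zoo's actual filenames; adjust them to the files you downloaded.

```python
import statistics
import time

def bench(fn, warmup=5, runs=50):
    """Return the median wall-clock time (seconds) of fn() over several runs."""
    for _ in range(warmup):
        fn()                        # warm caches / trigger lazy initialization
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

def compare_yunet(image_path="sample.jpg"):
    """Time the fp32 vs int8 YuNet models on one image.

    Requires opencv-python and the two ONNX files; the filenames below are
    assumptions -- replace them with the files you actually downloaded.
    """
    import cv2
    img = cv2.imread(image_path)
    size = (img.shape[1], img.shape[0])
    for model in ("face_detection_yunet.onnx",        # fp32 model (assumed name)
                  "face_detection_yunet_int8.onnx"):  # int8 model (assumed name)
        det = cv2.FaceDetectorYN.create(model, "", size)
        ms = bench(lambda: det.detect(img)) * 1000
        print(f"{model}: {ms:.2f} ms / frame")
```

Calling `compare_yunet()` with both models in the working directory prints one median latency per model, which makes the fp32/int8 gap easy to see on a given CPU.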

@fengyuentau fengyuentau self-assigned this Jul 26, 2023
@fengyuentau fengyuentau added the question It is not an issue but rather a user question label Jul 26, 2023
@fengyuentau
Member

> is this an expected behavior ?

Yes, for now. Optimization of int8 inference on the default backend is still in progress; see opencv/opencv#23689.
