Accuracy difference with tflite quant8 model #191
-
I am using an AXIS M4317-PLR Panoramic Camera to run an object detection model. I trained a MobileNet SSD model, exported it to TFLite quant8, and can successfully run inference on the Axis camera. The issue is that I see a difference in detection accuracy when running the same model on the camera versus locally.
For inference on the Axis camera, I'm using:
The Axis camera uses tfserve to serve the model, and I'm guessing the issue might come from this. I have tried to mimic the pre-processing steps from …
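For comparison, here is a minimal sketch of how the same quant8 model could be run locally with the TFLite interpreter; the model filename, test image, and output handling below are assumptions based on a standard SSD MobileNet export, not the exact code used on the camera:

```python
# Minimal local-inference sketch for a quant8 SSD MobileNet TFLite model.
# Filenames and the assumption of a uint8 RGB input tensor are placeholders.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet_quant8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the test image to the model's input size and keep it RGB / uint8,
# matching what the quantized input tensor expects.
height, width = input_details[0]["shape"][1:3]
image = Image.open("test_frame.jpg").convert("RGB").resize((width, height))
input_data = np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0)

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# Print every output tensor (typically boxes, classes, scores, count for an
# SSD post-processed model) so the values can be compared with the camera.
for detail in output_details:
    print(detail["name"], interpreter.get_tensor(detail["index"]))
```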
-
Hi @Corallo, I have another question: which tracker would be best for tracking a person, and able to run on this camera?
-
Hi @akash4562800
From your code snippet I see you are converting BGR to RGB; unless I misunderstood your intent, that doesn't seem correct.
The VideoCaptureClient will return an RGB image.
Maybe that is the reason for the performance difference?
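For illustration, a small sketch of the point above, assuming the frame really is delivered in RGB order; the frame shape and the 300x300 input size are placeholders:

```python
import cv2
import numpy as np

# Stand-in for a frame returned by the VideoCaptureClient; per the note
# above it is already in RGB channel order.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

# An extra BGR->RGB conversion applied to an RGB frame swaps the R and B
# channels and can noticeably hurt detection accuracy, so it should be
# dropped:
# frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

# Feed the RGB frame as-is, only resizing to the model input size and
# keeping uint8 for the quant8 model.
resized = cv2.resize(frame, (300, 300))
input_data = np.expand_dims(resized, axis=0)
```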
If you want to dig more, take a look at this discussion: AxisCommunications/axis-model-zoo#50
It has some guidelines on how to debug a different model, but you might find them useful.
In particular, you can try running on a fixed image, both in TensorFlow and on the camera.
You can save the image as a binary file and use the larod-client to test the model in isolation.
Here is a guide that shows how to do it: https://developer.axis.com/computer-vision/computer-v…
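As one way to prepare that fixed input, a short sketch of dumping the pre-processed frame as raw bytes with NumPy, so the exact same data can be fed to the TFLite interpreter on the host and to the model on the camera; the 300x300 input size and filenames are assumptions to adjust for your network:

```python
import numpy as np
from PIL import Image

# Assumed model input size; adjust to match your network.
WIDTH, HEIGHT = 300, 300

# Write the exact bytes the model expects (RGB, uint8, HxWx3) so the same
# input can be used for local TFLite inference and for testing the model in
# isolation on the camera.
image = Image.open("test_frame.jpg").convert("RGB").resize((WIDTH, HEIGHT))
np.asarray(image, dtype=np.uint8).tofile("test_frame.bin")
```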