
Error running custom model on STM32MP257 #49

Closed
yoppy-tjhin opened this issue Oct 8, 2024 · 4 comments
@yoppy-tjhin

Hello,

Some progress since our issue #47.
We have finished training ssd_mobilenet_v2_fpn on a custom dataset according to the wiki guide, and have also quantized the model with the following parameters:

quantization:
  quantizer: TFlite_converter
  quantization_type: PTQ
  quantization_input_type: float
  quantization_output_type: int8
  granularity: per_tensor  # per_tensor
  optimize: True           # can be True if per_tensor
  export_dir: quantized_models

But when we run the model on the STM32MP257 eval kit, we get the following error:

Traceback (most recent call last):
  File "/usr/local/x-linux-ai/object-detection/stai_mpu_object_detection.py", line 396, in new_sample
    self.app.nn_result_locations, self.app.nn_result_classes, self.app.nn_result_scores = self.nn.get_results()
                                                                                          ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/x-linux-ai/object-detection/ssd_mobilenet_pp.py", line 113, in get_results
    anchors = self.stai_mpu_model.get_output(index=2)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/stai_mpu/stai_mpu/network.py", line 48, in get_output
    output_tensor: NDArray = self._exec.get_output(index)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: [TFLITE] Failed reading output: Unsupported output tensor type.

What could be the problem?

Thank you.

@abrSTM

abrSTM commented Oct 8, 2024

Hello,

First of all, I'd like to make sure I understand the operations you've performed:

  • You have trained a custom ssd_mobilenet_v2_fpn.
  • You have quantized it using the model zoo script.
  • Finally, you have a custom ssd_mobilenet_v2_fpn quantized per-tensor.
1. Is that correct?

To be able to run the model on our MP257 target using the object detection Python application, you need to modify the post-processing file to match the output of your model.
2. Did you modify the post-processing file?
3. In the "standard" ssd_mobilenet_v2_fpn, we have 3 outputs. Is it the same in your custom ssd_mobilenet_v2_fpn?

If the number of outputs is different, please modify the /usr/local/x-linux-ai/object-detection/ssd_mobilenet_pp.py file to get the correct number of outputs.
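As an illustration only (not the shipped code), get_results in ssd_mobilenet_pp.py reads each output by index through the stai_mpu wrapper, so the indices have to match the number and order of outputs in your model; the scores/boxes/anchors order below is an assumption to check against your own graph:

    # Illustrative sketch of the output-reading part of get_results().
    # The output order (scores / boxes / anchors) is an assumption:
    # check your model graph and adjust the indices accordingly.
    def get_results(self):
        scores  = self.stai_mpu_model.get_output(index=0)  # per-class scores
        boxes   = self.stai_mpu_model.get_output(index=1)  # raw box encodings
        anchors = self.stai_mpu_model.get_output(index=2)  # anchors (line 113 in your traceback)
        # ... decode boxes against anchors, apply score threshold / NMS,
        # then return locations, classes and scores as the application expects ...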

Regards,
ABR

@yoppy-tjhin
Author

Hi,

  1. Yes, correct. I trained and quantized the model using the model zoo script.
  2. I did not modify the post-processing file.

[Screenshots: input/output graphs]

Above are the input/output graphs of the .h5 model and of the quantized .tflite model.

The quantized model has 3 outputs too, right?

Please advise what modifications to the post-processing need to be done. Thank you.

@abrSTM

abrSTM commented Oct 9, 2024

Hello,

I managed to reproduce your issue: currently, Int8 outputs and/or inputs are not supported.
Can you please use UInt8 or Float32 instead of Int8 in the quantization parameters?
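For example, based on the quantization parameters you quoted, the change would look like this (keep the rest of your configuration unchanged):

    quantization:
      quantizer: TFlite_converter
      quantization_type: PTQ
      quantization_input_type: float
      quantization_output_type: float   # or uint8; int8 is the unsupported case
      granularity: per_tensor
      optimize: True
      export_dir: quantized_models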

Thanks,
ABR

@yoppy-tjhin
Author

Hi,

Thank you.
Yes, it works with float output.
