Does STM32MP2 support YOLOv5 (or newer) object detection? #50
Hello, We have already successfully run YOLO models on our NPU, including v4, v5, v8, and Tiny YOLO v2. To run a model on the NPU, you need to convert it to the NBG format; you will find more information in these wiki articles:
The generated model will then be able to run on the NPU. However, you will need to develop your own application for your specific use case, using our demo application as an example. Regards,
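For reference, here is a minimal sketch of the decoding step such an application would need, assuming the standard YOLOv5 export layout of (1, N, 5 + num_classes) raw predictions; the function name and threshold below are illustrative and not taken from the ST demo code:

```python
import numpy as np

def decode_yolov5_output(raw_output, conf_threshold=0.25):
    """Convert raw YOLOv5 predictions (1, N, 5 + num_classes) into
    candidate boxes in (x1, y1, x2, y2, score, class_id) form."""
    preds = raw_output[0]                      # drop the batch dimension -> (N, 5 + nc)
    obj_conf = preds[:, 4]                     # objectness score per candidate
    cls_scores = preds[:, 5:]                  # per-class scores
    cls_id = cls_scores.argmax(axis=1)
    score = obj_conf * cls_scores.max(axis=1)  # final confidence = objectness * class score

    keep = score >= conf_threshold
    preds, score, cls_id = preds[keep], score[keep], cls_id[keep]

    # YOLOv5 boxes are (center_x, center_y, width, height); convert to corners.
    cx, cy, w, h = preds[:, 0], preds[:, 1], preds[:, 2], preds[:, 3]
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    return np.concatenate([boxes, score[:, None], cls_id[:, None]], axis=1)
```

The candidate boxes produced here would then be filtered with NMS, as sketched further below in the thread.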
Hello, thank you for your prompt response. Thank you.
Hello, We have the Semantic Segmentation demo application, which is based on the YOLOv8-pose model. Currently, we do not have an example of a YOLO model for object detection, but you can look at the Semantic Segmentation post-processing file to reproduce the NMS function in an object detection use case. Regards,
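As a rough guide to what that post-processing involves, here is a minimal NumPy sketch of greedy NMS; it is a generic implementation with a common default IoU threshold, not the exact function used in the Semantic Segmentation demo:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy non-maximum suppression.
    boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,) confidences.
    Returns the indices of the boxes to keep."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]             # process highest scores first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only remaining boxes that overlap less than the threshold.
        order = order[1:][iou < iou_threshold]
    return keep
```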
Hello, I think that the problem is not related to the input/output type but to the neural network model format. To understand how to deploy a model on our hardware, please follow this wiki article: https://wiki.st.com/stm32mpu/wiki/How_to_deploy_your_NN_model_on_STM32MPU

To be able to run a neural network model with hardware acceleration on STM32MP2, you need to convert your TFLite or ONNX model to the Network Binary Graph (.nb) format. To convert it, please follow this wiki article: https://wiki.st.com/stm32mpu/wiki/ST_Edge_AI:_Guide_for_MPU

Once the .nb file is generated, you will be able to benchmark both the .nb model and the .tflite model with the x-linux-ai-benchmark tool to check whether the model is running on the CPU, GPU, or NPU and at what framerate. Please check this wiki article to benchmark your model: https://wiki.st.com/stm32mpu/wiki/How_to_benchmark_your_NN_model_on_STM32MPU

Finally, you will be able to use the .nb model in your application. Regards,
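As a quick sanity check alongside that workflow, a TFLite model can also be exercised directly with tflite_runtime on the board. The sketch below is only an assumption-laden illustration: the VX external delegate path (/usr/lib/libvx_delegate.so.2) and the model file name are placeholders, and the .nb workflow described in the wiki remains the reference.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

DELEGATE_PATH = "/usr/lib/libvx_delegate.so.2"   # assumed location of the VX delegate
MODEL_PATH = "yolov5s_quant.tflite"              # placeholder model name

try:
    delegates = [tflite.load_delegate(DELEGATE_PATH)]
except (OSError, ValueError):
    delegates = []                               # fall back to CPU if the delegate is absent

interpreter = tflite.Interpreter(model_path=MODEL_PATH, experimental_delegates=delegates)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print("input:", input_details[0]["shape"], input_details[0]["dtype"])

# Feed a dummy frame just to check that the model runs end to end.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
raw_output = interpreter.get_tensor(output_details[0]["index"])
print("output:", raw_output.shape)
```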
Hi,
Previously, we successfully ran our custom YOLOv5 model on the NPU of the Rockchip 3588 platform.
Now we want to port our platform to the STM32MP2. The wiki guide does not mention object detection with YOLOv5 or newer versions.
We would like our custom YOLOv5 model to run on the NPU of the STM32MP2.
If this is not supported yet, are there any plans to move in that direction?
We have trained the ssd_mobilenet_v2_fpn model on the same dataset we used for YOLOv5 (we are still trying to run that model, see #49).
However, the mAP of ssd_mobilenet_v2_fpn (as reported in the training logs) is much lower than that of YOLOv5.
Thank you.