- 1. Description
- 2. Current Support Platform
- 3. Pretrained Model
- 4. Convert to RKNN
- 5. Python Demo
- 6. Android Demo
- 7. Linux Demo
- 8. Expected Results
## 1. Description

This model is used for object detection. The model used in this example comes from the following open source project:

https://github.com/airockchip/yolov6
## 2. Current Support Platform

RK3566, RK3568, RK3588, RK3562, RK1808, RV1109, RV1126
## 3. Pretrained Model

Download link:

- ./yolov6n.onnx
- ./yolov6s.onnx
- ./yolov6m.onnx

Download with shell command:

```
cd model
./download_model.sh
```
Note: The model provided here is an optimized model, which is different from the official original model. Take yolov6n.onnx as an example to show the difference between them.

- The comparison of their output information is as follows. The left is the official original model, and the right is the optimized model. As shown in the figure, the single output of the original model is split into three groups of outputs. For example, in the group of outputs ([1,4,80,80], [1,80,80,80], [1,1,80,80]), [1,4,80,80] contains the box coordinates, [1,80,80,80] contains the per-class confidences of the box for the 80 categories, and [1,1,80,80] is the sum of the confidences over the 80 categories.
- Taking the group of outputs ([1,4,80,80], [1,80,80,80], [1,1,80,80]) as an example, we remove the subgraphs behind the two convolution nodes in the model (the framed part in the figure), keep the outputs of these two convolutions ([1,4,80,80], [1,80,80,80]), and add a ReduceSum+Clip branch that computes the sum of the confidences over the 80 categories ([1,1,80,80]), as illustrated by the sketch below.
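The relation between the added third output and the per-class confidence output can be reproduced numerically. The following is a minimal NumPy sketch (not the actual graph-edit script); the clip range [0, 1] is an assumption made for illustration:

```python
import numpy as np

# Toy stand-ins for the two convolution outputs of one detection head
# (batch 1, 80x80 grid). Shapes follow the README: [1,4,80,80] and [1,80,80,80].
box_xyxy   = np.random.rand(1, 4, 80, 80).astype(np.float32)    # box coordinates
cls_scores = np.random.rand(1, 80, 80, 80).astype(np.float32)   # 80 per-class confidences

# The added ReduceSum+Clip branch: sum the 80 class confidences for every
# grid cell, then clip. The clip range [0, 1] is an assumption in this sketch.
score_sum = np.clip(cls_scores.sum(axis=1, keepdims=True), 0.0, 1.0)

print(score_sum.shape)  # (1, 1, 80, 80) -- matches the third output [1,1,80,80]
```

In post-processing, this summed score lets the demo quickly skip grid cells that contain no object before decoding boxes.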
## 4. Convert to RKNN

Usage:

```
cd python
python convert.py <onnx_model> <TARGET_PLATFORM> <dtype(optional)> <output_rknn_path(optional)>

# such as:
python convert.py ../model/yolov6n.onnx rk3588
# output model will be saved as ../model/yolov6.rknn
```
Description:

- `<onnx_model>`: Specify the ONNX model path.
- `<TARGET_PLATFORM>`: Specify the NPU platform name. For supported platforms refer [here](#2-current-support-platform).
- `<dtype>` (optional): Specify as `i8`, `u8` or `fp`. `i8`/`u8` means doing quantization, `fp` means no quantization. Default is `i8`.
- `<output_rknn_path>` (optional): Specify the save path of the RKNN model. By default it is saved in the same directory as the ONNX model, with the name `yolov6.rknn`.
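For reference, a conversion script like `convert.py` is typically built on the rknn-toolkit2 Python API. The sketch below shows the rough shape of such a script; the mean/std values, quantization dataset path and other config values are assumptions for illustration and may differ from what the real script uses.

```python
from rknn.api import RKNN  # rknn-toolkit2

rknn = RKNN(verbose=False)

# Preprocessing config; the mean/std values here are assumptions for this sketch.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3588')

# Load the optimized ONNX model and build the RKNN model.
rknn.load_onnx(model='../model/yolov6n.onnx')
# do_quantization=True corresponds to the i8/u8 dtype option; the dataset
# file path is an assumption.
rknn.build(do_quantization=True, dataset='../model/dataset.txt')

# Export the RKNN model and release resources.
rknn.export_rknn('../model/yolov6.rknn')
rknn.release()
```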
## 5. Python Demo

Usage:

```
cd python
# Inference with PyTorch model or ONNX model
python yolov6.py --model_path <pt_model/onnx_model> --img_show

# Inference with RKNN model
python yolov6.py --model_path <rknn_model> --target <TARGET_PLATFORM> --img_show
```
Description:

- `<TARGET_PLATFORM>`: Specify the NPU platform name. For supported platforms refer [here](#2-current-support-platform).
- `<pt_model / onnx_model / rknn_model>`: Specify the model path.
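As a rough illustration of the RKNN inference path of `yolov6.py`, the sketch below runs the converted model on a connected board through the rknn-toolkit2 API. The letterbox preprocessing and the post-processing that decodes the output groups into boxes are omitted, and the 640x640 input size is an assumption.

```python
import cv2
from rknn.api import RKNN  # rknn-toolkit2

rknn = RKNN()
rknn.load_rknn('../model/yolov6.rknn')
# Run on a connected NPU device; the platform name is only an example.
rknn.init_runtime(target='rk3588')

# Minimal preprocessing: resize to the assumed 640x640 input and convert to RGB.
img = cv2.imread('../model/bus.jpg')
img = cv2.cvtColor(cv2.resize(img, (640, 640)), cv2.COLOR_BGR2RGB)

# Returns the output tensors described above, e.g. [1,4,80,80], [1,80,80,80],
# [1,1,80,80] for the 80x80 head (and likewise for the other heads).
outputs = rknn.inference(inputs=[img])
print([o.shape for o in outputs])

rknn.release()
```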
## 6. Android Demo

Note: RK1808, RV1109 and RV1126 do not support Android.

Please refer to the Compilation_Environment_Setup_Guide document to set up a cross-compilation environment and complete the compilation of the C/C++ demo.

Note: Please replace the model name with `yolov6`.
With the device connected via USB port, push the demo files to the device:

```
adb root
adb remount
adb push install/<TARGET_PLATFORM>_android_<ARCH>/rknn_yolov6_demo/ /data/
```

Run the demo:

```
adb shell
cd /data/rknn_yolov6_demo
export LD_LIBRARY_PATH=./lib
./rknn_yolov6_demo model/yolov6.rknn model/bus.jpg
```
- After running, the result is saved as `out.png`. To check the result on the host PC, pull it back with `adb pull /data/rknn_yolov6_demo/out.png`.
- For the output result, refer to [Expected Results](#8-expected-results).
## 7. Linux Demo

Please refer to the Compilation_Environment_Setup_Guide document to set up a cross-compilation environment and complete the compilation of the C/C++ demo.

Note: Please replace the model name with `yolov6`.
- If the device is connected via USB port, push the demo files to the device:

  ```
  adb push install/<TARGET_PLATFORM>_linux_<ARCH>/rknn_yolov6_demo/ /userdata/
  ```

- For other boards, use `scp` or other approaches to push all files under `install/<TARGET_PLATFORM>_linux_<ARCH>/rknn_yolov6_demo/` to `/userdata`.
Run the demo:

```
adb shell
cd /userdata/rknn_yolov6_demo
export LD_LIBRARY_PATH=./lib
./rknn_yolov6_demo model/yolov6.rknn model/bus.jpg
```
- After running, the result is saved as `out.png`. To check the result on the host PC, pull it back with `adb pull /userdata/rknn_yolov6_demo/out.png`.
- For the output result, refer to [Expected Results](#8-expected-results).
## 8. Expected Results

This example prints the labels and corresponding scores of the detection results on the test image, as follows:

```
bus @ (97 137 553 437) 0.949
person @ (109 236 222 535) 0.938
person @ (212 239 286 511) 0.934
person @ (479 230 561 522) 0.919
person @ (79 325 119 516) 0.456
stop sign @ (80 150 99 192) 0.357
tie @ (160 282 169 299) 0.258
```

- Note: Different platforms and different versions of tools and drivers may produce slightly different results.
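Each printed line follows the pattern `label @ (left top right bottom) score`. If you need to consume these results programmatically, a small parser such as the hypothetical sketch below is enough:

```python
import re

# Matches lines such as: "person @ (109 236 222 535) 0.938"
PATTERN = re.compile(r'^(.+?) @ \((\d+) (\d+) (\d+) (\d+)\) ([0-9.]+)$')

def parse_detection(line):
    """Return (label, (left, top, right, bottom), score) for one result line."""
    m = PATTERN.match(line.strip())
    if m is None:
        return None
    label = m.group(1)
    box = tuple(int(v) for v in m.group(2, 3, 4, 5))
    return label, box, float(m.group(6))

print(parse_detection('bus @ (97 137 553 437) 0.949'))
# ('bus', (97, 137, 553, 437), 0.949)
```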