## Get Started

MMDeploy provides some useful tools that make it easy to deploy OpenMMLab models to various platforms. You can convert models with our pre-defined pipelines or build a custom conversion pipeline yourself. This guide shows you how to convert a model with MMDeploy and how to integrate the MMDeploy SDK into your application!

### Prerequisites

```bash
python ${MMDEPLOY_DIR}/tools/deploy.py \
    ... \
    ${CHECKPOINT_DIR}/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    ${INPUT_IMG} \
    --work-dir ${WORK_DIR} \
    --device cuda:0 \
    --dump-info
```

`${MMDEPLOY_DIR}/tools/deploy.py` is a tool that does everything you need to convert a model. Read [how_to_convert_model](./tutorials/how_to_convert_model.md) for more details. The converted model and other meta information can be found in `${WORK_DIR}`. Together they make up the MMDeploy SDK Model, which can be fed to the MMDeploy SDK for model inference.

`detection_tensorrt_dynamic-320x320-1344x1344.py` is a config file that contains all the arguments you need to customize the conversion pipeline. The name is formed as

```bash
<task name>_<backend>-[backend options]_<dynamic support>.py
```
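As a quick illustration of this naming scheme, the parts of a config file name can be pulled apart with plain string handling. This is only a sketch of the convention, not an MMDeploy API:

```python
# Illustrative sketch only: decompose a deployment config file name following
# the <task name>_<backend>-[backend options]_<dynamic support>.py scheme.
name = "detection_tensorrt_dynamic-320x320-1344x1344.py"

stem = name[: -len(".py")]
task, backend, dynamic_part = stem.split("_", 2)

print(task)          # detection
print(backend)       # tensorrt (no extra backend options in this name)
print(dynamic_part)  # dynamic-320x320-1344x1344
```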

It is easy to find the deployment config you need by name. If you want to customize the conversion, you can edit the config file yourself. Here is a tutorial about [how to write config](./tutorials/how_to_write_config.md).
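For instance, the `dynamic-320x320-1344x1344` part of the name above maps to a range of allowed input shapes. The snippet below is a hand-written sketch of what such a backend section might encode; the field names and the `opt_shape` value are assumptions for illustration, not copied from the real config file:

```python
# Hypothetical sketch of the dynamic-shape information a TensorRT deployment
# config encodes: a minimum, an optimal and a maximum input shape (NCHW).
backend_config = dict(
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 320, 320],    # from "320x320" in the name
                    opt_shape=[1, 3, 800, 1344],   # assumed typical value
                    max_shape=[1, 3, 1344, 1344],  # from "1344x1344" in the name
                )))
    ])

shapes = backend_config["model_inputs"][0]["input_shapes"]["input"]
print(shapes["min_shape"], shapes["max_shape"])
```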

Read [how to evaluate a model](./tutorials/how_to_evaluate_a_model.md) for more details about how to use `tools/test.py`.

### Integrate MMDeploy SDK

Make sure to turn on `MMDEPLOY_BUILD_SDK` to build and install the SDK by following [build.md](./build.md).
After that, the installation folder is structured as follows:

```
install
├── example
├── include
│   ├── c
│   └── cpp
└── lib
```
where `include/c` and `include/cpp` correspond to the C and C++ APIs respectively.

**Caution: The C++ API is highly volatile and not recommended at the moment.**

In the `example` directory, there are several examples involving classification, object detection, image segmentation and so on.
You can refer to these examples to learn how to use the MMDeploy SDK's C API and how to link `${MMDeploy_LIBS}` to your application.

### A From-scratch Example

Here is an example of how to deploy the Faster R-CNN model from MMDetection and run inference on it, from scratch.

#### Create Virtual Environment and Install MMDetection

Please run the following commands in an Anaconda environment to [install MMDetection](https://mmdetection.readthedocs.io/en/latest/get_started.html#a-from-scratch-setup-script).

```bash
conda create -n openmmlab python=3.7 -y
conda activate openmmlab

conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y

# install the latest mmcv
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8.0/index.html

# install mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .
```

#### Download the Checkpoint of Faster R-CNN

Download the checkpoint from this [link](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth) and put it in `{MMDET_ROOT}/checkpoints`, where `{MMDET_ROOT}` is the root directory of your MMDetection codebase.

#### Install MMDeploy and ONNX Runtime

Please run the following commands in an Anaconda environment to [install MMDeploy](./build.md).

```bash
conda activate openmmlab

git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
pip install -e .
```

Once we have installed MMDeploy, we should select an inference engine for model inference. Here we take ONNX Runtime as an example. Run the following command to [install ONNX Runtime](./backends/onnxruntime.md):

```bash
pip install onnxruntime==1.8.1
```

Then download the ONNX Runtime library to build the MMDeploy plugins for ONNX Runtime:

```bash
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz

tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH

cd ${MMDEPLOY_DIR}  # to the MMDeploy root directory
mkdir -p build && cd build

# build ONNX Runtime custom ops
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc)

# build MMDeploy SDK
cmake -DMMDEPLOY_BUILD_SDK=ON \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
      -Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DMMDEPLOY_CODEBASES=mmdet ..
make -j$(nproc) && make install
```

#### Model Conversion

Once we have installed MMDetection, MMDeploy and ONNX Runtime, and built the plugins for ONNX Runtime, we can convert Faster R-CNN to an `.onnx` model file that ONNX Runtime can load. Run the following command to use our deploy tools:

```bash
# Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}.
# If you do not know where to find these paths, just type `pip show mmdeploy` and `pip show mmdet` in your console.

python ${MMDEPLOY_DIR}/tools/deploy.py \
    ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
    ${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    ${MMDET_DIR}/demo/demo.jpg \
    --work-dir work_dirs \
    --device cpu \
    --show \
    --dump-info
```

If the script runs successfully, two images will be displayed on the screen one after another. The first is the inference result of ONNX Runtime and the second is the result of PyTorch. At the same time, an ONNX model file `end2end.onnx` and three json files (SDK config files) will be generated in the working directory `work_dirs`.

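A small sanity check after conversion can verify that the working directory contains everything the SDK needs. The helper below is a hypothetical sketch, not an MMDeploy API; it only assumes the file names mentioned above (`end2end.onnx` plus the json config files):

```python
# Hypothetical helper: check that a converted SDK Model directory looks
# complete (the ONNX model file plus at least three json SDK config files).
from pathlib import Path

def sdk_model_ready(work_dir):
    d = Path(work_dir)
    has_onnx = (d / "end2end.onnx").is_file()
    has_configs = len(list(d.glob("*.json"))) >= 3
    return has_onnx and has_configs
```

For example, `sdk_model_ready("work_dirs")` should return `True` after the conversion above succeeds.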
#### Run MMDeploy SDK demo

After model conversion, the SDK Model is saved in the working directory `work_dirs`.
Here is a recipe for building and running the object detection demo:

```bash
cd build/install/example

# path to the onnxruntime **libraries**
export LD_LIBRARY_PATH=/path/to/onnxruntime/lib

mkdir -p build && cd build
cmake -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
      -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
make object_detection

# suppress verbose logs
export SPDLOG_LEVEL=warn

# run the object detection example
./object_detection cpu ${work_dirs} ${path/to/an/image}
```

If the demo runs successfully, an image named "output_detection.png" showing the detected objects is supposed to be generated.

### Add New Model Support?

If the models you want to deploy have not been supported yet in MMDeploy, you can try to support them by yourself. Here are some documents that may help you:

- Read [how_to_support_new_models](./tutorials/how_to_support_new_models.md) to learn more about the rewriter.

Finally, we welcome your PR!