Commit 02e033b

grimoire, SingleZombie, and lvhan028 authored Dec 24, 2021
[Docs] Add zh_cn get_started (open-mmlab#327)
* start up * zh-cn v0.1 * [Docs] Add a from-scratch example for "Get Started" (open-mmlab#326) * Add a from-scratch example * Fix typo * resolve comment * bachslash * Resolve comments * Refine commands * add cn docs * Correct commands * fixing... * update zn-cn docs * update en link * add sdk's get-started (open-mmlab#331) * add sdk's get-started * add SDK build command * fix chinglish * add sdk get start zh_cn * update zh_cn cite * fix command * add selfsup/razor readme * Fix command Co-authored-by: Yifan Zhou <singlezombie@163.com> Co-authored-by: lvhan028 <lvhan_028@163.com>
1 parent 8e19a08 commit 02e033b

File tree: 6 files changed, +371 −21 lines changed
 

‎README.md

+2

@@ -100,3 +100,5 @@ If you find this project useful in your research, please consider cite:
 - [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
 - [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab FewShot Learning Toolbox and Benchmark.
 - [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab Human Pose and Shape Estimation Toolbox and Benchmark.
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning Toolbox and Benchmark.
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab Model Compression Toolbox and Benchmark.

‎README_zh-CN.md

+3 −3

@@ -68,7 +68,7 @@ MMDeploy 是一个开源深度学习模型部署工具箱,它是 [OpenMMLab](h
 
 OpenVINO团队在MMDeploy中添加了OpenVINO部署后端,并开发了MMDetection在OpenVINO下的部署功能,为MMDeploy做出了重大贡献。对此我们表示衷心的感谢。
 
-## Citation
+## 引用
 
 如果你在研究中使用了本项目的代码或者性能基准,请参考如下 bibtex 引用 MMDeploy:
 
@@ -81,7 +81,6 @@ OpenVINO团队在MMDeploy中添加了OpenVINO部署后端,并开发了MMDetect
 }
 ```
 
-
 ## OpenMMLab 的其他项目
 
 - [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab 计算机视觉基础库
@@ -99,12 +98,13 @@ OpenVINO团队在MMDeploy中添加了OpenVINO部署后端,并开发了MMDetect
 - [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab 光流估计工具箱与测试基准
 - [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab 少样本学习工具箱与测试基准
 - [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 人体参数化模型工具箱与测试基准
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab 自监督学习工具箱与测试基准
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab 模型压缩工具箱与测试基准
 
 ## 欢迎加入 OpenMMLab 社区
 
 扫描下方的二维码可关注 OpenMMLab 团队的 [知乎官方账号](https://www.zhihu.com/people/openmmlab),加入 OpenMMLab 团队的 [官方交流 QQ 群](https://jq.qq.com/?_wv=1027&k=aCvMxdr3)
 
-
 <div align="center">
   <img src="https://raw.githubusercontent.com/open-mmlab/mmcv/master/docs/en/_static/zhihu_qrcode.jpg" height="400" />
   <img src="https://raw.githubusercontent.com/open-mmlab/mmcv/master/docs/en/_static/qq_group_qrcode.jpg" height="400" />

‎docs/en/get_started.md

+148 −17

@@ -1,6 +1,6 @@
 ## Get Started
 
-MMDeploy provides some useful tools. It is easy to deploy models in OpenMMLab to various platforms. You can convert models in our pre-defined pipeline or build a custom conversion pipeline by yourself. This guide will show you how to convert a model with MMDeploy!
+MMDeploy provides some useful tools. It is easy to deploy models in OpenMMLab to various platforms. You can convert models in our pre-defined pipeline or build a custom conversion pipeline by yourself. This guide will show you how to convert a model with MMDeploy and integrate MMDeploy's SDK into your application!
 
 ### Prerequisites
 
@@ -28,15 +28,16 @@ python ${MMDEPLOY_DIR}/tools/deploy.py \
     ${CHECKPOINT_DIR}/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
     ${INPUT_IMG} \
     --work-dir ${WORK_DIR} \
-    --device cuda:0
+    --device cuda:0 \
+    --dump-info
 ```
 
-`${MMDEPLOY_DIR}/tools/deploy.py` is a tool that does everything you need to convert a model. Read [how_to_convert_model](./tutorials/how_to_convert_model.md) for more details. The converted model and other meta-info will be found in `${WORK_DIR}`.
+`${MMDEPLOY_DIR}/tools/deploy.py` is a tool that does everything you need to convert a model. Read [how_to_convert_model](./tutorials/how_to_convert_model.md) for more details. The converted model and other meta-info can be found in `${WORK_DIR}`. Together they make up the MMDeploy SDK model, which can be fed to the MMDeploy SDK for inference.
 
-`two-stage_tensorrt_dynamic-320x320-1344x1344.py` is a config file that contains all arguments you need to customize the conversion pipeline. The name is formed as
+`detection_tensorrt_dynamic-320x320-1344x1344.py` is a config file that contains all arguments you need to customize the conversion pipeline. The name is formed as
 
 ```bash
-<task name>_<backend>_[backend options]_<dynamic support>.py
+<task name>_<backend>-[backend options]_<dynamic support>.py
 ```
 
 It is easy to find the deployment config you need by name. If you want to customize the conversion, you can edit the config file by yourself. Here is a tutorial about [how to write config](./tutorials/how_to_write_config.md).
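Aside (illustration only, not part of the diff above): the naming convention makes deployment configs easy to discover programmatically. Below is a minimal Python sketch; the repository path and the `configs/mmdet/detection` folder (the one used by the detection example later in this guide) are assumptions you may need to adjust.

```python
# Minimal sketch: list the MMDetection deployment configs shipped with MMDeploy and
# split their file names according to <task name>_<backend>-[backend options]_<dynamic support>.py.
# The repository path below is an assumption; point it at your ${MMDEPLOY_DIR}.
from pathlib import Path

mmdeploy_dir = Path("path/to/mmdeploy")

for cfg in sorted((mmdeploy_dir / "configs" / "mmdet" / "detection").glob("*.py")):
    task, _, backend_and_shape = cfg.stem.partition("_")
    print(f"{cfg.name}: task={task}, backend/options/shape={backend_and_shape}")
```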
@@ -68,19 +69,149 @@ python ${MMDEPLOY_DIR}/tools/test.py \
6869

6970
Read [how to evaluate a model](./tutorials/how_to_evaluate_a_model.md) for more details about how to use `tools/test.py`
7071

71-
### Add New Model Support?
72+
### Integrate MMDeploy SDK
7273

73-
If the model you want to deploy has not been supported yet in MMDeploy, you can try to support it with the `rewriter` by yourself. Rewriting the functions with control flow or unsupported ops is a good way to solve the problem.
74+
Make sure to turn on `MMDEPLOY_BUILD_SDK` to build and install SDK by following [build.md](./build.md).
75+
After that, the structure in the installation folder will show as follows,
7476

75-
```python
76-
@FUNCTION_REWRITER.register_rewriter(
77-
func_name='torch.Tensor.repeat', backend='tensorrt')
78-
def repeat_static(ctx, input, *size):
79-
origin_func = ctx.origin_func
80-
if input.dim() == 1 and len(size) == 1:
81-
return origin_func(input.unsqueeze(0), *([1] + list(size))).squeeze(0)
82-
else:
83-
return origin_func(input, *size)
8477
```
78+
install
79+
├── example
80+
├── include
81+
│   ├── c
82+
│   └── cpp
83+
└── lib
84+
```
85+
where `include/c` and `include/cpp` correspond to C and C++ API respectively.
86+
87+
**Caution: The C++ API is highly volatile and not recommended at the moment.**
88+
89+
In the example directory, there are several examples involving classification, object detection, image segmentation and so on.
90+
You can refer to these examples to learn how to use MMDeploy SDK's C API and how to link ${MMDeploy_LIBS} to your application.
91+
92+
### A From-scratch Example
93+
94+
Here is an example of how to deploy and inference Faster R-CNN model of MMDetection from scratch.
95+
96+
#### Create Virtual Environment and Install MMDetection.
97+
98+
Please run the following command in Anaconda environment to [install MMDetection](https://mmdetection.readthedocs.io/en/latest/get_started.html#a-from-scratch-setup-script).
99+
100+
```bash
101+
conda create -n openmmlab python=3.7 -y
102+
conda activate openmmlab
103+
104+
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y
105+
106+
# install the latest mmcv
107+
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8.0/index.html
108+
109+
# install mmdetection
110+
git clone https://github.com/open-mmlab/mmdetection.git
111+
cd mmdetection
112+
pip install -r requirements/build.txt
113+
pip install -v -e .
114+
```
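Aside (illustration only, not part of the diff): before moving on, it can help to confirm that the environment created above imports cleanly. A minimal sketch; the exact versions printed will differ on your machine.

```python
# Minimal sanity check for the freshly created "openmmlab" environment:
# confirm the core packages import and report their versions.
import torch
import mmcv
import mmdet

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmcv-full:", mmcv.__version__)
print("mmdet:", mmdet.__version__)
```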
+
+#### Download the Checkpoint of Faster R-CNN
+
+Download the checkpoint from this [link](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth) and put it in `{MMDET_ROOT}/checkpoints`, where `{MMDET_ROOT}` is the root directory of your MMDetection codebase.
+
+#### Install MMDeploy and ONNX Runtime
+
+Please run the following commands in the Anaconda environment to [install MMDeploy](./build.md).
+```bash
+conda activate openmmlab
+
+git clone https://github.com/open-mmlab/mmdeploy.git
+cd mmdeploy
+git submodule update --init --recursive
+pip install -e .
+```
+
+Once we have installed MMDeploy, we should select an inference engine for model inference. Here we take ONNX Runtime as an example. Run the following command to [install ONNX Runtime](./backends/onnxruntime.md):
+```bash
+pip install onnxruntime==1.8.1
+```
+
+Then download the ONNX Runtime library to build the MMDeploy plugin for ONNX Runtime:
+```bash
+wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
+
+tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
+cd onnxruntime-linux-x64-1.8.1
+export ONNXRUNTIME_DIR=$(pwd)
+export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
+
+cd ${MMDEPLOY_DIR} # To MMDeploy root directory
+mkdir -p build && cd build
+
+# build ONNX Runtime custom ops
+cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
+make -j$(nproc)
+
+# build MMDeploy SDK
+cmake -DMMDEPLOY_BUILD_SDK=ON \
+      -DCMAKE_CXX_COMPILER=g++-7 \
+      -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
+      -Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
+      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
+      -DMMDEPLOY_TARGET_BACKENDS=ort \
+      -DMMDEPLOY_CODEBASES=mmdet ..
+make -j$(nproc) && make install
+```
+
+#### Model Conversion
+
+Once we have installed MMDetection, MMDeploy, ONNX Runtime and built the plugin for ONNX Runtime, we can convert Faster R-CNN to a `.onnx` model file that can be loaded by ONNX Runtime. Run the following commands to use our deploy tools:
+
+```bash
+# Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}.
+# If you do not know where to find the paths, just type `pip show mmdeploy` and `pip show mmdet` in your console.
+
+python ${MMDEPLOY_DIR}/tools/deploy.py \
+    ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
+    ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
+    ${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
+    ${MMDET_DIR}/demo/demo.jpg \
+    --work-dir work_dirs \
+    --device cpu \
+    --show \
+    --dump-info
+```
+
+If the script runs successfully, two images will be displayed on the screen one by one. The first image is the inference result of ONNX Runtime and the second is the result of PyTorch. At the same time, an ONNX model file `end2end.onnx` and three JSON files (SDK config files) will be generated in the working directory `work_dirs`.
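Aside (illustration only, not part of the diff): a small Python sketch to peek at what the conversion left in `work_dirs`. No particular JSON file names are assumed; it simply lists whatever `tools/deploy.py --dump-info` wrote.

```python
# Minimal sketch: list the artifacts produced by tools/deploy.py with --dump-info.
import json
from pathlib import Path

work_dir = Path("work_dirs")

for onnx_file in sorted(work_dir.glob("*.onnx")):
    size_mb = onnx_file.stat().st_size / 1e6
    print(f"model: {onnx_file.name} ({size_mb:.1f} MB)")

for cfg_file in sorted(work_dir.glob("*.json")):
    with open(cfg_file) as f:
        cfg = json.load(f)
    summary = list(cfg) if isinstance(cfg, dict) else type(cfg).__name__
    print(f"SDK config: {cfg_file.name} -> {summary}")
```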
+
+#### Run the MMDeploy SDK demo
+
+After model conversion, the SDK model is saved in the directory passed to `--work-dir` (`work_dirs` in the example above).
+Here is a recipe for building and running the object detection demo:
+```bash
+cd build/install/example
+
+# path to onnxruntime libraries
+export LD_LIBRARY_PATH=/path/to/onnxruntime/lib
+
+mkdir -p build && cd build
+cmake -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
+      -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
+make object_detection
+
+# suppress verbose logs
+export SPDLOG_LEVEL=warn
+
+# run the object detection example
+./object_detection cpu ${work_dirs} ${path/to/an/image}
+```
+If the demo runs successfully, an image named "output_detection.png" showing the detected objects will be generated.
+
+### Add New Model Support?
+
+If the models you want to deploy are not yet supported in MMDeploy, you can try to support them by yourself. Here are some documents that may help you:
+- Read [how_to_support_new_models](./tutorials/how_to_support_new_models.md) to learn more about the rewriter.
+
 
-Read [how_to_support_new_models](./tutorials/how_to_support_new_models.md) to learn more about the rewriter. And, PR is welcome!
+Finally, we welcome your PR!
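Aside (illustration only, not part of the diff): once `end2end.onnx` exists, a quick way to confirm the export is loadable is to open it directly with ONNX Runtime's Python API. This is a minimal sketch; the custom-op library path (assumed to be `libmmdeploy_onnxruntime_ops.so` under your build output) and the dummy input shape are assumptions to adjust for your setup.

```python
# Minimal sketch: load work_dirs/end2end.onnx with ONNX Runtime and run a dummy input.
# Detection models exported by MMDeploy may rely on the ONNX Runtime custom ops built
# earlier; the library path below is an assumption, adjust it to your build output.
import numpy as np
import onnxruntime as ort

so = ort.SessionOptions()
so.register_custom_ops_library("path/to/mmdeploy/build/lib/libmmdeploy_onnxruntime_ops.so")

sess = ort.InferenceSession("work_dirs/end2end.onnx", sess_options=so)

for i in sess.get_inputs():
    print("input:", i.name, i.shape, i.type)
for o in sess.get_outputs():
    print("output:", o.name, o.shape, o.type)

# The dynamic detection config accepts variable input sizes; feed a dummy image
# tensor just to exercise the graph (the outputs will be meaningless).
dummy = np.random.rand(1, 3, 800, 1344).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print("number of outputs:", len(outputs))
```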

‎docs/en/index.rst

+1

@@ -8,6 +8,7 @@ You can switch between Chinese and English documents in the lower-left corner of
    :caption: Get Started
 
    build.md
+   get_started.md
 
 .. toctree::
    :maxdepth: 1
