From d9eb71cc2bac00288bb8e6b1879a7129a32ed362 Mon Sep 17 00:00:00 2001 From: Junlin Chang <45223476+123456789asdfjkl@users.noreply.github.com> Date: Tue, 10 Jan 2023 21:05:05 +0800 Subject: [PATCH 1/8] Add files via upload Chinese document translation --- docs/zh_cn/user_guides/classification.md | 96 ++++++++++++------------ docs/zh_cn/user_guides/detection.md | 44 +++++------ docs/zh_cn/user_guides/segmentation.md | 41 +++++----- 3 files changed, 89 insertions(+), 92 deletions(-) diff --git a/docs/zh_cn/user_guides/classification.md b/docs/zh_cn/user_guides/classification.md index 63155bf8b..2f657c6f1 100644 --- a/docs/zh_cn/user_guides/classification.md +++ b/docs/zh_cn/user_guides/classification.md @@ -1,19 +1,18 @@ -# Classification +# 分类 -- [Classification](#classification) - - [VOC SVM / Low-shot SVM](#voc-svm--low-shot-svm) - - [Linear Evaluation and Fine-tuning](#linear-evaluation-and-fine-tuning) - - [ImageNet Semi-Supervised Classification](#imagenet-semi-supervised-classification) - - [ImageNet Nearest-Neighbor Classification](#imagenet-nearest-neighbor-classification) +- [分类](#classification) + - [VOC SVM/ Low-shot SVM](#voc-svm--low-shot-svm) + - [线性评估和微调](#linear-evaluation-and-fine-tuning) + - [ImageNet 半监督分类](#imagenet-semi-supervised-classification) + - [ImageNet 最近邻分类](#imagenet-nearest-neighbor-classification) -In MMSelfSup, we provide many benchmarks for classification, thus the models can be evaluated on different classification tasks. Here are comprehensive tutorials and examples to explain how to run all classification benchmarks with MMSelfSup. -We provide scripts in folder `tools/benchmarks/classification/`, which has 2 `.sh` files, 1 folder for VOC SVM related classification task and 1 folder for ImageNet nearest-neighbor classification task. +在MMSelfSup中,我们为分类任务提供了许多基线,因此模型可以在不同分类任务上进行评估。这里有详细的教程和例子来阐述如何使用MMSelfSup来运行所有的分类基线。我们在`tools/benchmarks/classification/`文件夹中提供了所有的脚本,包含 2 个`.sh` 文件,一个用于与VOC SVM相关的分类任务,另一个用于ImageNet最近邻分类任务。 ## VOC SVM / Low-shot SVM -To run these benchmarks, you should first prepare your VOC datasets. Please refer to [prepare_data.md](./2_dataset_prepare.md) for the details of data preparation. +为了运行这些基准,你首先应该准备好你的VOC数据集。请参考[prepare_data.md](./2_dataset_prepare.md)来获取数据准备的详细信息。 -To evaluate the pre-trained models, you can run the command below. +为了评估这些预训练的模型, 你可以运行如下指令。 ```shell # distributed version @@ -23,7 +22,7 @@ bash tools/benchmarks/classification/svm_voc07/dist_test_svm_pretrain.sh ${SELFS bash tools/benchmarks/classification/svm_voc07/slurm_test_svm_pretrain.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${PRETRAIN} ${FEATURE_LIST} ``` -Besides, if you want to evaluate the ckpt files saved by runner, you can run the command below. +此外,如果你想评估由runner保存的ckpt文件,你可以运行如下指令. ```shell # distributed version @@ -33,30 +32,29 @@ bash tools/benchmarks/classification/svm_voc07/dist_test_svm_epoch.sh ${SELFSUP_ bash tools/benchmarks/classification/svm_voc07/slurm_test_svm_epoch.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${EPOCH} ${FEATURE_LIST} ``` -**To test with ckpt, the code uses the epoch\_\*.pth file, there is no need to extract weights.** +**为了使用ckpt进行测试,代码使用epoch\_\*.pth文件,这里不需要提取权重.** -Remarks: +注意: -- `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment. 
-- `${FEATURE_LIST}` is a string to specify features from layer1 to layer5 to evaluate; e.g., if you want to evaluate layer5 only, then `FEATURE_LIST` is "feat5", if you want to evaluate all features, then `FEATURE_LIST` is "feat1 feat2 feat3 feat4 feat5" (separated by space). If left empty, the default `FEATURE_LIST` is "feat5". -- `PRETRAIN`: the pre-trained model file. -- if you want to change GPU numbers, you could add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command. -- `EPOCH` is the epoch number of the ckpt that you want to test +- `${SELFSUP_CONFIG}`是自监督实验的配置文件. +- `${FEATURE_LIST}` 是一个字符串,用于指定从layer1到layer5的要评估特征;例如,如果你只想评估layer5,那么`FEATURE_LIST`是"feat5",如果你想要评估所有的特征,那么`FEATURE_LIST`是"feat1 feat2 feat3 feat4 feat5" (用空格分隔)。如果为空,那么`FEATURE_LIST`默认是"feat5"。 +- `PRETRAIN`:预训练模型文件。 +- 如果你想改变GPU个数, 你可以在命令的前面加上`GPUS_PER_NODE=4 GPUS=4`。 +- `EPOCH`是你想要测试的ckpt的轮数 -## Linear Evaluation and Fine-tuning +## 线性评估和微调 -Linear evaluation and fine-tuning are two of the most general benchmarks. We provide config files and scripts to launch the training and testing -for Linear Evaluation and Fine-tuning. The supported datasets are **ImageNet**, **Places205** and **iNaturalist18**. +线性评估和微调是最常见的两个基准。我们为线性评估和微调提供了配置文件和脚本来进行训练和测试。支持的数据集有 **ImageNet**,**Places205** 和 **iNaturalist18**。 -First, make sure you have installed [MIM](https://github.com/open-mmlab/mim), which is also a project of OpenMMLab. +首先,确保你已经安装[MIM](https://github.com/open-mmlab/mim),这也是OpenMMLab的一个项目. ```shell pip install openmim ``` -Besides, please refer to MMClassification for [installation](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/install.md) and [data preparation](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/getting_started.md). +此外,请参考MMMMClassification的[安装](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/install.md)和[数据准备](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/getting_started.md)。 -Then, run the command below. +然后运行如下命令。 ```shell # distributed version @@ -66,13 +64,13 @@ bash tools/benchmarks/classification/mim_dist_train.sh ${CONFIG} ${PRETRAIN} bash tools/benchmarks/classification/mim_slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG} ${PRETRAIN} ``` -Remarks: +注意: -- The default GPU number is 8. When changing GPUS, please also change `samples_per_gpu` in the config file accordingly to ensure the total batch size is 256. -- `CONFIG`: Use config files under `configs/benchmarks/classification/`. Specifically, `imagenet` (excluding `imagenet_*percent` folders), `places205` and `inaturalist2018`. -- `PRETRAIN`: the pre-trained model file. +- 默认的GPU数量是8。当改变GPU数量时,请同时改变配置文件中的`samples_per_gpu`参数来确保总的batch size是256。 +- `CONFIG`:使用`configs/benchmarks/classification/`下的配置文件。具体来说,`imagenet` (除了`imagenet_*percent`文件), `places205` and `inaturalist2018`。 +- `PRETRAIN`:预训练模型文件。 -Example: +例子: ```shell bash ./tools/benchmarks/classification/mim_dist_train.sh \ @@ -80,7 +78,7 @@ configs/benchmarks/classification/imagenet/resnet50_linear-8xb32-coslr-100e_in1k work_dir/pretrained_model.pth ``` -If you want to test the well-trained model, please run the command below. +如果你想测试训练好的模型,请运行如下命令。 ```shell # distributed version @@ -90,11 +88,11 @@ bash tools/benchmarks/classification/mim_dist_test.sh ${CONFIG} ${CHECKPOINT} bash tools/benchmarks/classification//mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT} ``` -Remarks: +注意: -- `CHECKPOINT`: The well-trained classification model that you want to test. 
+- `CHECKPOINT`:你想测试的训练好的分类模型 -Example: +例子: ```shell bash ./tools/benchmarks/mmsegmentation/mim_dist_test.sh \ @@ -102,23 +100,23 @@ configs/benchmarks/classification/imagenet/resnet50_linear-8xb32-coslr-100e_in1k work_dir/model.pth ``` -## ImageNet Semi-Supervised Classification +## ImageNet半监督分类 -To run ImageNet semi-supervised classification, we still use the same `.sh` script as Linear Evaluation and Fine-tuning to launch training. +为了运行ImageNet半监督分类,我们将使用和线性评估和微调一样的`.sh`脚本进行训练。 -Remarks: +注意: -- The default GPU number is 4. -- `CONFIG`: Use config files under `configs/benchmarks/classification/imagenet/`, named `imagenet_*percent` folders. -- `PRETRAIN`: the pre-trained model file. +- 默认GPU数量是4. +- `CONFIG`:使用`configs/benchmarks/classification/imagenet/`下的配置文件,命名为`imagenet_*percent`的文件。 +- `PRETRAIN`:预训练模型文件。 -## ImageNet Nearest-Neighbor Classification +## ImageNet最近邻分类 -```Note -Only support CNN-style backbones (like ResNet50). +```注意 +仅支持CNN形式的主干网络 (例如ResNet50). ``` -To evaluate the pre-trained models using the nearest-neighbor benchmark, you can run the command below. +为评估用于ImageNet最近邻分类基准的预训练模型,你可以运行如下命令。 ```shell # distributed version @@ -128,14 +126,14 @@ bash tools/benchmarks/classification/knn_imagenet/dist_test_knn.sh ${SELFSUP_CON bash tools/benchmarks/classification/knn_imagenet/slurm_test_knn.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${CHECKPOINT} [optional arguments] ``` -Remarks: +注意: -- `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment. -- `CHECKPOINT`: the path of checkpoint model file. -- if you want to change GPU numbers, you could add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command. -- `[optional arguments]`: for optional arguments, you can refer to the [script](https://github.com/open-mmlab/mmselfsup/blob/1.x/tools/benchmarks/classification/knn_imagenet/test_knn.py) +- `${SELFSUP_CONFIG}`是自监督实验的配置文件。 +- `CHECKPOINT`:检查点模型文件的路径。 +- 如果你想改变GPU的数量,你可以在命令的前面加上`GPUS_PER_NODE=4 GPUS=4`。 +- `[optional arguments]`:用于可选参数,你可以参考这个[脚本](https://github.com/open-mmlab/mmselfsup/blob/1.x/tools/benchmarks/classification/knn_imagenet/test_knn.py) -An example of command +命令的一个例子 ```shell # distributed version diff --git a/docs/zh_cn/user_guides/detection.md b/docs/zh_cn/user_guides/detection.md index 0867a4cfc..4cb8cb2dd 100644 --- a/docs/zh_cn/user_guides/detection.md +++ b/docs/zh_cn/user_guides/detection.md @@ -1,23 +1,23 @@ -# Detection +# 检测 -- [Detection](#detection) - - [Train](#train) - - [Test](#test) +- [检测](#detection) + - [训练](#train) + - [测试](#test) -Here, we prefer to use MMDetection to do the detection task. First, make sure you have installed [MIM](https://github.com/open-mmlab/mim), which is also a project of OpenMMLab. +这里,我们更喜欢使用MMDetection做检测任务。首先确保你已经安装了[MIM](https://github.com/open-mmlab/mim),这也是OpenMMLab的一个项目。 ```shell pip install openmim mim install 'mmdet>=3.0.0rc0' ``` -It is very easy to install the package. +非常容易安装这个包。 -Besides, please refer to MMDet for [installation](https://mmdetection.readthedocs.io/en/dev-3.x/get_started.html) and [data preparation](https://mmdetection.readthedocs.io/en/dev-3.x/user_guides/dataset_prepare.html) +此外,请参考MMDet的[安装](https://mmdetection.readthedocs.io/en/dev-3.x/get_started.html)和[数据准备](https://mmdetection.readthedocs.io/en/dev-3.x/user_guides/dataset_prepare.html) -## Train +## 训练 -After installation, you can run MMDetection with simple command. 
+安装完后,你可以使用如下的简单命令运行MMDetection。 ```shell # distributed version @@ -29,20 +29,20 @@ bash tools/benchmarks/mmdetection/mim_slurm_train_c4.sh ${PARTITION} ${CONFIG} $ bash tools/benchmarks/mmdetection/mim_slurm_train_fpn.sh ${PARTITION} ${CONFIG} ${PRETRAIN} ``` -Remarks: +注意: -- `CONFIG`: Use config files under `configs/benchmarks/mmdetection/`. Since repositories of OpenMMLab have support referring config files across different repositories, we can easily leverage the configs from MMDetection like: +- `CONFIG`: 使用`configs/benchmarks/mmdetection/`下的配置文件。由于OpenMMLab的存储库支持跨不同存储库引用配置文件,因此我们可以轻松使用MMDetection的配置文件,例如: ```shell _base_ = 'mmdet::mask_rcnn/mask-rcnn_r50-caffe-c4_1x_coco.py' ``` -Writing your config files from scratch is also supported. +从头开始写你的配置文件也是支持的。 -- `PRETRAIN`: the pre-trained model file. -- `GPUS`: The number of GPUs that you want to use to train. We adopt 8 GPUs for detection tasks by default. +- `PRETRAIN`:预训练模型文件 +- `GPUS`: 你想用于训练的GPU数量,对于检测任务,我们默认采用8块GPU。 -Example: +例子: ```shell bash ./tools/benchmarks/mmdetection/mim_dist_train_c4.sh \ @@ -50,8 +50,8 @@ configs/benchmarks/mmdetection/coco/mask-rcnn_r50-c4_ms-1x_coco.py \ https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-200e_in1k/byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth 8 ``` -Or if you want to do detection task with [detectron2](https://github.com/facebookresearch/detectron2), we also provide some config files. -Please refer to [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) for installation and follow the [directory structure](https://github.com/facebookresearch/detectron2/tree/main/datasets) to prepare your datasets required by detectron2. +或者你想用[detectron2](https://github.com/facebookresearch/detectron2)来做检测任务,我们也提供了一些配置文件。 +请参考[INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md)用于安装并按照detectron2需要的[目录结构](https://github.com/facebookresearch/detectron2/tree/main/datasets)准备你的数据集。 ```shell conda activate detectron2 # use detectron2 environment here, otherwise use open-mmlab environment @@ -60,9 +60,9 @@ python convert-pretrain-to-detectron2.py ${WEIGHT_FILE} ${OUTPUT_FILE} # must us bash run.sh ${DET_CFG} ${OUTPUT_FILE} ``` -## Test +## 测试 -After training, you can also run the command below to test your model. +在训练之后,你可以运行如下命令测试你的模型。 ```shell # distributed version @@ -72,11 +72,11 @@ bash tools/benchmarks/mmdetection/mim_dist_test.sh ${CONFIG} ${CHECKPOINT} ${GPU bash tools/benchmarks/mmdetection/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT} ``` -Remarks: +注意: -- `CHECKPOINT`: The well-trained detection model that you want to test. +- `CHECKPOINT`:你想测试的训练好的检测模型。 -Example: +例子: ```shell bash ./tools/benchmarks/mmdetection/mim_dist_test.sh \ diff --git a/docs/zh_cn/user_guides/segmentation.md b/docs/zh_cn/user_guides/segmentation.md index c5272fb6b..e2819957c 100644 --- a/docs/zh_cn/user_guides/segmentation.md +++ b/docs/zh_cn/user_guides/segmentation.md @@ -1,23 +1,23 @@ -# Segmentation +# 分割 -- [Segmentation](#segmentation) - - [Train](#train) - - [Test](#test) +- [分割](#segmentation) + - [训练](#train) + - [测试](#test) -For semantic segmentation task, we use MMSegmentation. First, make sure you have installed [MIM](https://github.com/open-mmlab/mim), which is also a project of OpenMMLab. +对于语义分割任务我们使用MMSegmentation。首先确保你已经安装了[MIM](https://github.com/open-mmlab/mim),这也是OpenMMLab的一个项目。 ```shell pip install openmim mim install 'mmsegmentation>=1.0.0rc0' ``` -It is very easy to install the package. 
+非常容易安装这个包。 -Besides, please refer to MMSegmentation for [installation](https://mmsegmentation.readthedocs.io/en/dev-1.x/get_started.html) and [data preparation](https://mmsegmentation.readthedocs.io/en/dev-1.x/user_guides/2_dataset_prepare.html). +此外,请参考MMSegmentation的[安装](https://mmsegmentation.readthedocs.io/en/dev-1.x/get_started.html)和[数据准备](https://mmsegmentation.readthedocs.io/en/dev-1.x/user_guides/2_dataset_prepare.html)。 -## Train +## 训练 -After installation, you can run MMSeg with simple command. +在安装完后,可以使用如下简单命令运行MMSeg。 ```shell # distributed version @@ -27,21 +27,20 @@ bash tools/benchmarks/mmsegmentation/mim_dist_train.sh ${CONFIG} ${PRETRAIN} ${G bash tools/benchmarks/mmsegmentation/mim_slurm_train.sh ${PARTITION} ${CONFIG} ${PRETRAIN} ``` -Remarks: +注意: -- `CONFIG`: Use config files under `configs/benchmarks/mmsegmentation/`. Since repositories of OpenMMLab have support referring config files across different - repositories, we can easily leverage the configs from MMSegmentation like: +- `CONFIG`:使用`configs/benchmarks/mmsegmentation/`下的配置文件. S由于OpenMMLab的存储库支持跨不同存储库引用配置文件,因此我们可以轻松使用MMSegmentation的配置文件,例如: ```shell _base_ = 'mmseg::fcn/fcn_r50-d8_4xb2-40k_cityscapes-769x769.py' ``` -Writing your config files from scratch is also supported. +从头开始写你的配置文件也是支持的。 -- `PRETRAIN`: the pre-trained model file. -- `GPUS`: The number of GPUs that you want to use to train. We adopt 4 GPUs for segmentation tasks by default. +- `PRETRAIN`:预训练模型文件 +- `GPUS`: 你想用于训练的GPU数量,对于检测任务,我们默认采用4块GPU。 -Example: +例子: ```shell bash ./tools/benchmarks/mmsegmentation/mim_dist_train.sh \ @@ -49,9 +48,9 @@ configs/benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_4xb4-20k_voc12aug-512x512. https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-200e_in1k/byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth 4 ``` -## Test +## 测试 -After training, you can also run the command below to test your model. +在训练之后,你可以运行如下命令测试你的模型。 ```shell # distributed version @@ -61,11 +60,11 @@ bash tools/benchmarks/mmsegmentation/mim_dist_test.sh ${CONFIG} ${CHECKPOINT} ${ bash tools/benchmarks/mmsegmentation/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT} ``` -Remarks: +注意: -- `CHECKPOINT`: The well-trained segmentation model that you want to test. 
+- `CHECKPOINT`:你想测试的训练好的分割模型。 -Example: +例子: ```shell bash ./tools/benchmarks/mmsegmentation/mim_dist_test.sh \ From 0a7103cde79f0c4f89f068aa03a8685a078127bb Mon Sep 17 00:00:00 2001 From: Junlin Chang <45223476+123456789asdfjkl@users.noreply.github.com> Date: Wed, 11 Jan 2023 12:38:17 +0800 Subject: [PATCH 2/8] Update docs/zh_cn/user_guides/classification.md Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com> --- docs/zh_cn/user_guides/classification.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh_cn/user_guides/classification.md b/docs/zh_cn/user_guides/classification.md index 2f657c6f1..7de2b2ead 100644 --- a/docs/zh_cn/user_guides/classification.md +++ b/docs/zh_cn/user_guides/classification.md @@ -6,7 +6,7 @@ - [ImageNet 半监督分类](#imagenet-semi-supervised-classification) - [ImageNet 最近邻分类](#imagenet-nearest-neighbor-classification) -在MMSelfSup中,我们为分类任务提供了许多基线,因此模型可以在不同分类任务上进行评估。这里有详细的教程和例子来阐述如何使用MMSelfSup来运行所有的分类基线。我们在`tools/benchmarks/classification/`文件夹中提供了所有的脚本,包含 2 个`.sh` 文件,一个用于与VOC SVM相关的分类任务,另一个用于ImageNet最近邻分类任务。 +在MMSelfSup中,我们为分类任务提供了许多基线,因此模型可以在不同分类任务上进行评估。这里有详细的教程和例子来阐述如何使用MMSelfSup来运行所有的分类基线。我们在`tools/benchmarks/classification/`文件夹中提供了所有的脚本,包含 2 个`.sh` 文件,一个文件夹用于与VOC SVM相关的分类任务,另一个文件夹用于ImageNet最近邻分类任务。 ## VOC SVM / Low-shot SVM From 648bf0320614a88c0d5651dff57d38b8dc89f45e Mon Sep 17 00:00:00 2001 From: Junlin Chang <45223476+123456789asdfjkl@users.noreply.github.com> Date: Wed, 11 Jan 2023 12:39:23 +0800 Subject: [PATCH 3/8] Update docs/zh_cn/user_guides/classification.md Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com> --- docs/zh_cn/user_guides/classification.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh_cn/user_guides/classification.md b/docs/zh_cn/user_guides/classification.md index 7de2b2ead..9d25351c7 100644 --- a/docs/zh_cn/user_guides/classification.md +++ b/docs/zh_cn/user_guides/classification.md @@ -64,7 +64,7 @@ bash tools/benchmarks/classification/mim_dist_train.sh ${CONFIG} ${PRETRAIN} bash tools/benchmarks/classification/mim_slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG} ${PRETRAIN} ``` -注意: +备注: - 默认的GPU数量是8。当改变GPU数量时,请同时改变配置文件中的`samples_per_gpu`参数来确保总的batch size是256。 - `CONFIG`:使用`configs/benchmarks/classification/`下的配置文件。具体来说,`imagenet` (除了`imagenet_*percent`文件), `places205` and `inaturalist2018`。 From 74c09ddc49ead5eca38902afa896640803354193 Mon Sep 17 00:00:00 2001 From: Junlin Chang <45223476+123456789asdfjkl@users.noreply.github.com> Date: Wed, 11 Jan 2023 12:41:31 +0800 Subject: [PATCH 4/8] Update classification.md --- docs/zh_cn/user_guides/classification.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/zh_cn/user_guides/classification.md b/docs/zh_cn/user_guides/classification.md index 9d25351c7..644ecb733 100644 --- a/docs/zh_cn/user_guides/classification.md +++ b/docs/zh_cn/user_guides/classification.md @@ -34,7 +34,7 @@ bash tools/benchmarks/classification/svm_voc07/slurm_test_svm_epoch.sh ${PARTITI **为了使用ckpt进行测试,代码使用epoch\_\*.pth文件,这里不需要提取权重.** -注意: +备注: - `${SELFSUP_CONFIG}`是自监督实验的配置文件. 
- `${FEATURE_LIST}` 是一个字符串,用于指定从layer1到layer5的要评估特征;例如,如果你只想评估layer5,那么`FEATURE_LIST`是"feat5",如果你想要评估所有的特征,那么`FEATURE_LIST`是"feat1 feat2 feat3 feat4 feat5" (用空格分隔)。如果为空,那么`FEATURE_LIST`默认是"feat5"。 @@ -88,7 +88,7 @@ bash tools/benchmarks/classification/mim_dist_test.sh ${CONFIG} ${CHECKPOINT} bash tools/benchmarks/classification//mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT} ``` -注意: +备注: - `CHECKPOINT`:你想测试的训练好的分类模型 @@ -104,7 +104,7 @@ work_dir/model.pth 为了运行ImageNet半监督分类,我们将使用和线性评估和微调一样的`.sh`脚本进行训练。 -注意: +备注: - 默认GPU数量是4. - `CONFIG`:使用`configs/benchmarks/classification/imagenet/`下的配置文件,命名为`imagenet_*percent`的文件。 @@ -112,7 +112,7 @@ work_dir/model.pth ## ImageNet最近邻分类 -```注意 +```备注 仅支持CNN形式的主干网络 (例如ResNet50). ``` @@ -126,7 +126,7 @@ bash tools/benchmarks/classification/knn_imagenet/dist_test_knn.sh ${SELFSUP_CON bash tools/benchmarks/classification/knn_imagenet/slurm_test_knn.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${CHECKPOINT} [optional arguments] ``` -注意: +备注: - `${SELFSUP_CONFIG}`是自监督实验的配置文件。 - `CHECKPOINT`:检查点模型文件的路径。 From 4f7be5dd21fa0a0a0f2b73b07cf45a52f765c1d4 Mon Sep 17 00:00:00 2001 From: Junlin Chang <45223476+123456789asdfjkl@users.noreply.github.com> Date: Wed, 11 Jan 2023 12:44:58 +0800 Subject: [PATCH 5/8] Update detection.md --- docs/zh_cn/user_guides/detection.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh_cn/user_guides/detection.md b/docs/zh_cn/user_guides/detection.md index 4cb8cb2dd..8cf06d036 100644 --- a/docs/zh_cn/user_guides/detection.md +++ b/docs/zh_cn/user_guides/detection.md @@ -4,7 +4,7 @@ - [训练](#train) - [测试](#test) -这里,我们更喜欢使用MMDetection做检测任务。首先确保你已经安装了[MIM](https://github.com/open-mmlab/mim),这也是OpenMMLab的一个项目。 +这里,我们倾向使用MMDetection做检测任务。首先确保你已经安装了[MIM](https://github.com/open-mmlab/mim),这也是OpenMMLab的一个项目。 ```shell pip install openmim From ffc95497f2ac47d8a215f89013c898f9555dc219 Mon Sep 17 00:00:00 2001 From: Junlin Chang <45223476+123456789asdfjkl@users.noreply.github.com> Date: Wed, 11 Jan 2023 12:47:10 +0800 Subject: [PATCH 6/8] Update detection.md --- docs/zh_cn/user_guides/detection.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/zh_cn/user_guides/detection.md b/docs/zh_cn/user_guides/detection.md index 8cf06d036..7c46acd53 100644 --- a/docs/zh_cn/user_guides/detection.md +++ b/docs/zh_cn/user_guides/detection.md @@ -29,7 +29,7 @@ bash tools/benchmarks/mmdetection/mim_slurm_train_c4.sh ${PARTITION} ${CONFIG} $ bash tools/benchmarks/mmdetection/mim_slurm_train_fpn.sh ${PARTITION} ${CONFIG} ${PRETRAIN} ``` -注意: +备注: - `CONFIG`: 使用`configs/benchmarks/mmdetection/`下的配置文件。由于OpenMMLab的存储库支持跨不同存储库引用配置文件,因此我们可以轻松使用MMDetection的配置文件,例如: @@ -72,7 +72,7 @@ bash tools/benchmarks/mmdetection/mim_dist_test.sh ${CONFIG} ${CHECKPOINT} ${GPU bash tools/benchmarks/mmdetection/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT} ``` -注意: +备注: - `CHECKPOINT`:你想测试的训练好的检测模型。 From 17bd5fddecc797ea34ea752f6aff9502331d75af Mon Sep 17 00:00:00 2001 From: Junlin Chang <45223476+123456789asdfjkl@users.noreply.github.com> Date: Wed, 11 Jan 2023 12:47:54 +0800 Subject: [PATCH 7/8] Update segmentation.md --- docs/zh_cn/user_guides/segmentation.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/zh_cn/user_guides/segmentation.md b/docs/zh_cn/user_guides/segmentation.md index e2819957c..094e46634 100644 --- a/docs/zh_cn/user_guides/segmentation.md +++ b/docs/zh_cn/user_guides/segmentation.md @@ -27,7 +27,7 @@ bash tools/benchmarks/mmsegmentation/mim_dist_train.sh ${CONFIG} 
${PRETRAIN} ${G bash tools/benchmarks/mmsegmentation/mim_slurm_train.sh ${PARTITION} ${CONFIG} ${PRETRAIN} ``` -注意: +备注: - `CONFIG`:使用`configs/benchmarks/mmsegmentation/`下的配置文件. S由于OpenMMLab的存储库支持跨不同存储库引用配置文件,因此我们可以轻松使用MMSegmentation的配置文件,例如: @@ -60,7 +60,7 @@ bash tools/benchmarks/mmsegmentation/mim_dist_test.sh ${CONFIG} ${CHECKPOINT} ${ bash tools/benchmarks/mmsegmentation/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT} ``` -注意: +备注: - `CHECKPOINT`:你想测试的训练好的分割模型。 From 8dab7ddc5da6d84192f81389c6fd0920ee8b4c42 Mon Sep 17 00:00:00 2001 From: fangyixiao18 Date: Wed, 11 Jan 2023 19:35:41 +0800 Subject: [PATCH 8/8] update --- docs/en/user_guides/classification.md | 17 +++---- docs/en/user_guides/detection.md | 8 +-- docs/en/user_guides/segmentation.md | 8 +-- docs/zh_cn/user_guides/classification.md | 65 ++++++++++++------------ docs/zh_cn/user_guides/detection.md | 28 +++++----- docs/zh_cn/user_guides/segmentation.md | 24 ++++----- 6 files changed, 74 insertions(+), 76 deletions(-) diff --git a/docs/en/user_guides/classification.md b/docs/en/user_guides/classification.md index 63155bf8b..b1954aad3 100644 --- a/docs/en/user_guides/classification.md +++ b/docs/en/user_guides/classification.md @@ -39,9 +39,9 @@ Remarks: - `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment. - `${FEATURE_LIST}` is a string to specify features from layer1 to layer5 to evaluate; e.g., if you want to evaluate layer5 only, then `FEATURE_LIST` is "feat5", if you want to evaluate all features, then `FEATURE_LIST` is "feat1 feat2 feat3 feat4 feat5" (separated by space). If left empty, the default `FEATURE_LIST` is "feat5". -- `PRETRAIN`: the pre-trained model file. +- `${PRETRAIN}`: the pre-trained model file. - if you want to change GPU numbers, you could add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command. -- `EPOCH` is the epoch number of the ckpt that you want to test +- `${EPOCH}` is the epoch number of the ckpt that you want to test ## Linear Evaluation and Fine-tuning @@ -68,9 +68,8 @@ bash tools/benchmarks/classification/mim_slurm_train.sh ${PARTITION} ${JOB_NAME} Remarks: -- The default GPU number is 8. When changing GPUS, please also change `samples_per_gpu` in the config file accordingly to ensure the total batch size is 256. -- `CONFIG`: Use config files under `configs/benchmarks/classification/`. Specifically, `imagenet` (excluding `imagenet_*percent` folders), `places205` and `inaturalist2018`. -- `PRETRAIN`: the pre-trained model file. +- `${CONFIG}`: Use config files under `configs/benchmarks/classification/`. Specifically, `imagenet` (excluding `imagenet_*percent` folders), `places205` and `inaturalist2018`. +- `${PRETRAIN}`: the pre-trained model file. Example: @@ -92,7 +91,7 @@ bash tools/benchmarks/classification//mim_slurm_test.sh ${PARTITION} ${CONFIG} $ Remarks: -- `CHECKPOINT`: The well-trained classification model that you want to test. +- `${CHECKPOINT}`: The well-trained classification model that you want to test. Example: @@ -109,8 +108,8 @@ To run ImageNet semi-supervised classification, we still use the same `.sh` scri Remarks: - The default GPU number is 4. -- `CONFIG`: Use config files under `configs/benchmarks/classification/imagenet/`, named `imagenet_*percent` folders. -- `PRETRAIN`: the pre-trained model file. +- `${CONFIG}`: Use config files under `configs/benchmarks/classification/imagenet/`, named `imagenet_*percent` folders. +- `${PRETRAIN}`: the pre-trained model file. 
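For example, assuming the pre-trained weights sit at `work_dir/pretrained_model.pth` (the same placeholder used in the linear evaluation example above), a semi-supervised fine-tuning run could be launched as sketched below; `${IMAGENET_PERCENT_CONFIG}` is just a stand-in for one of the `imagenet_*percent` config files, not an actual file name:

```shell
# ${IMAGENET_PERCENT_CONFIG} is a placeholder: substitute one of the
# imagenet_*percent config files under configs/benchmarks/classification/imagenet/
bash ./tools/benchmarks/classification/mim_dist_train.sh \
configs/benchmarks/classification/imagenet/${IMAGENET_PERCENT_CONFIG} \
work_dir/pretrained_model.pth
```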
## ImageNet Nearest-Neighbor Classification @@ -131,7 +130,7 @@ bash tools/benchmarks/classification/knn_imagenet/slurm_test_knn.sh ${PARTITION} Remarks: - `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment. -- `CHECKPOINT`: the path of checkpoint model file. +- `${CHECKPOINT}`: the path of checkpoint model file. - if you want to change GPU numbers, you could add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command. - `[optional arguments]`: for optional arguments, you can refer to the [script](https://github.com/open-mmlab/mmselfsup/blob/1.x/tools/benchmarks/classification/knn_imagenet/test_knn.py) diff --git a/docs/en/user_guides/detection.md b/docs/en/user_guides/detection.md index 0867a4cfc..ed2e66679 100644 --- a/docs/en/user_guides/detection.md +++ b/docs/en/user_guides/detection.md @@ -31,7 +31,7 @@ bash tools/benchmarks/mmdetection/mim_slurm_train_fpn.sh ${PARTITION} ${CONFIG} Remarks: -- `CONFIG`: Use config files under `configs/benchmarks/mmdetection/`. Since repositories of OpenMMLab have support referring config files across different repositories, we can easily leverage the configs from MMDetection like: +- `${CONFIG}`: Use config files under `configs/benchmarks/mmdetection/`. Since repositories of OpenMMLab have support referring config files across different repositories, we can easily leverage the configs from MMDetection like: ```shell _base_ = 'mmdet::mask_rcnn/mask-rcnn_r50-caffe-c4_1x_coco.py' @@ -39,8 +39,8 @@ _base_ = 'mmdet::mask_rcnn/mask-rcnn_r50-caffe-c4_1x_coco.py' Writing your config files from scratch is also supported. -- `PRETRAIN`: the pre-trained model file. -- `GPUS`: The number of GPUs that you want to use to train. We adopt 8 GPUs for detection tasks by default. +- `${PRETRAIN}`: the pre-trained model file. +- `${GPUS}`: The number of GPUs that you want to use to train. We adopt 8 GPUs for detection tasks by default. Example: @@ -74,7 +74,7 @@ bash tools/benchmarks/mmdetection/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHE Remarks: -- `CHECKPOINT`: The well-trained detection model that you want to test. +- `${CHECKPOINT}`: The well-trained detection model that you want to test. Example: diff --git a/docs/en/user_guides/segmentation.md b/docs/en/user_guides/segmentation.md index c5272fb6b..261b53dac 100644 --- a/docs/en/user_guides/segmentation.md +++ b/docs/en/user_guides/segmentation.md @@ -29,7 +29,7 @@ bash tools/benchmarks/mmsegmentation/mim_slurm_train.sh ${PARTITION} ${CONFIG} $ Remarks: -- `CONFIG`: Use config files under `configs/benchmarks/mmsegmentation/`. Since repositories of OpenMMLab have support referring config files across different +- `${CONFIG}`: Use config files under `configs/benchmarks/mmsegmentation/`. Since repositories of OpenMMLab have support referring config files across different repositories, we can easily leverage the configs from MMSegmentation like: ```shell @@ -38,8 +38,8 @@ _base_ = 'mmseg::fcn/fcn_r50-d8_4xb2-40k_cityscapes-769x769.py' Writing your config files from scratch is also supported. -- `PRETRAIN`: the pre-trained model file. -- `GPUS`: The number of GPUs that you want to use to train. We adopt 4 GPUs for segmentation tasks by default. +- `${PRETRAIN}`: the pre-trained model file. +- `${GPUS}`: The number of GPUs that you want to use to train. We adopt 4 GPUs for segmentation tasks by default. Example: @@ -63,7 +63,7 @@ bash tools/benchmarks/mmsegmentation/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${ Remarks: -- `CHECKPOINT`: The well-trained segmentation model that you want to test. 
+- `${CHECKPOINT}`: The well-trained segmentation model that you want to test. Example: diff --git a/docs/zh_cn/user_guides/classification.md b/docs/zh_cn/user_guides/classification.md index 644ecb733..c8bf0a667 100644 --- a/docs/zh_cn/user_guides/classification.md +++ b/docs/zh_cn/user_guides/classification.md @@ -1,18 +1,18 @@ # 分类 -- [分类](#classification) - - [VOC SVM/ Low-shot SVM](#voc-svm--low-shot-svm) - - [线性评估和微调](#linear-evaluation-and-fine-tuning) - - [ImageNet 半监督分类](#imagenet-semi-supervised-classification) - - [ImageNet 最近邻分类](#imagenet-nearest-neighbor-classification) +- [分类](#分类) + - [VOC SVM / Low-shot SVM](#voc-svm--low-shot-svm) + - [线性评估和微调](#线性评估和微调) + - [ImageNet 半监督分类](#imagenet-半监督分类) + - [ImageNet 最近邻分类](#imagenet-最近邻分类) -在MMSelfSup中,我们为分类任务提供了许多基线,因此模型可以在不同分类任务上进行评估。这里有详细的教程和例子来阐述如何使用MMSelfSup来运行所有的分类基线。我们在`tools/benchmarks/classification/`文件夹中提供了所有的脚本,包含 2 个`.sh` 文件,一个文件夹用于与VOC SVM相关的分类任务,另一个文件夹用于ImageNet最近邻分类任务。 +在 MMSelfSup 中,我们为分类任务提供了许多基线,因此模型可以在不同分类任务上进行评估。这里有详细的教程和例子来阐述如何使用 MMSelfSup 来运行所有的分类基线。我们在`tools/benchmarks/classification/`文件夹中提供了所有的脚本,包含 2 个`.sh` 文件,一个文件夹用于与 VOC SVM 相关的分类任务,另一个文件夹用于 ImageNet 最近邻分类任务。 ## VOC SVM / Low-shot SVM -为了运行这些基准,你首先应该准备好你的VOC数据集。请参考[prepare_data.md](./2_dataset_prepare.md)来获取数据准备的详细信息。 +为了运行这些基准,您首先应该准备好您的 VOC 数据集。请参考 [prepare_data.md](./2_dataset_prepare.md) 来获取数据准备的详细信息。 -为了评估这些预训练的模型, 你可以运行如下指令。 +为了评估这些预训练的模型, 您可以运行如下指令。 ```shell # distributed version @@ -22,7 +22,7 @@ bash tools/benchmarks/classification/svm_voc07/dist_test_svm_pretrain.sh ${SELFS bash tools/benchmarks/classification/svm_voc07/slurm_test_svm_pretrain.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${PRETRAIN} ${FEATURE_LIST} ``` -此外,如果你想评估由runner保存的ckpt文件,你可以运行如下指令. +此外,如果您想评估由 runner 保存的ckpt文件,您可以运行如下指令。 ```shell # distributed version @@ -32,27 +32,27 @@ bash tools/benchmarks/classification/svm_voc07/dist_test_svm_epoch.sh ${SELFSUP_ bash tools/benchmarks/classification/svm_voc07/slurm_test_svm_epoch.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${EPOCH} ${FEATURE_LIST} ``` -**为了使用ckpt进行测试,代码使用epoch\_\*.pth文件,这里不需要提取权重.** +**使用 ckpt 进行测试,代码使用 epoch\_\*.pth 文件,这里不需要提取权重。** 备注: -- `${SELFSUP_CONFIG}`是自监督实验的配置文件. -- `${FEATURE_LIST}` 是一个字符串,用于指定从layer1到layer5的要评估特征;例如,如果你只想评估layer5,那么`FEATURE_LIST`是"feat5",如果你想要评估所有的特征,那么`FEATURE_LIST`是"feat1 feat2 feat3 feat4 feat5" (用空格分隔)。如果为空,那么`FEATURE_LIST`默认是"feat5"。 -- `PRETRAIN`:预训练模型文件。 -- 如果你想改变GPU个数, 你可以在命令的前面加上`GPUS_PER_NODE=4 GPUS=4`。 -- `EPOCH`是你想要测试的ckpt的轮数 +- `${SELFSUP_CONFIG}` 是自监督实验的配置文件. +- `${FEATURE_LIST}` 是一个字符串,用于指定从 layer1 到 layer5 的要评估特征;例如,如果您只想评估 layer5,那么 `FEATURE_LIST` 是 "feat5",如果您想要评估所有的特征,那么 `FEATURE_LIST` 是 "feat1 feat2 feat3 feat4 feat5" (用空格分隔)。如果为空,那么 `FEATURE_LIST` 默认是 "feat5"。 +- `${PRETRAIN}`:预训练模型文件。 +- 如果您想改变 GPU 个数, 您可以在命令的前面加上 `GPUS_PER_NODE=4 GPUS=4`。 +- `${EPOCH}` 是您想要测试的 ckpt 的轮数 ## 线性评估和微调 线性评估和微调是最常见的两个基准。我们为线性评估和微调提供了配置文件和脚本来进行训练和测试。支持的数据集有 **ImageNet**,**Places205** 和 **iNaturalist18**。 -首先,确保你已经安装[MIM](https://github.com/open-mmlab/mim),这也是OpenMMLab的一个项目. +首先,确保您已经安装 [MIM](https://github.com/open-mmlab/mim),这也是 OpenMMLab 的一个项目. 
```shell pip install openmim ``` -此外,请参考MMMMClassification的[安装](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/install.md)和[数据准备](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/getting_started.md)。 +此外,请参考 MMClassification 的[安装](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/install.md)和[数据准备](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/getting_started.md)。 然后运行如下命令。 @@ -66,9 +66,8 @@ bash tools/benchmarks/classification/mim_slurm_train.sh ${PARTITION} ${JOB_NAME} 备注: -- 默认的GPU数量是8。当改变GPU数量时,请同时改变配置文件中的`samples_per_gpu`参数来确保总的batch size是256。 -- `CONFIG`:使用`configs/benchmarks/classification/`下的配置文件。具体来说,`imagenet` (除了`imagenet_*percent`文件), `places205` and `inaturalist2018`。 -- `PRETRAIN`:预训练模型文件。 +- `${CONFIG}`:使用`configs/benchmarks/classification/`下的配置文件。具体来说,`imagenet` (除了`imagenet_*percent`文件), `places205` and `inaturalist2018`。 +- `${PRETRAIN}`:预训练模型文件。 例子: @@ -78,7 +77,7 @@ configs/benchmarks/classification/imagenet/resnet50_linear-8xb32-coslr-100e_in1k work_dir/pretrained_model.pth ``` -如果你想测试训练好的模型,请运行如下命令。 +如果您想测试训练好的模型,请运行如下命令。 ```shell # distributed version @@ -90,7 +89,7 @@ bash tools/benchmarks/classification//mim_slurm_test.sh ${PARTITION} ${CONFIG} $ 备注: -- `CHECKPOINT`:你想测试的训练好的分类模型 +- `${CHECKPOINT}`:您想测试的训练好的分类模型 例子: @@ -100,23 +99,23 @@ configs/benchmarks/classification/imagenet/resnet50_linear-8xb32-coslr-100e_in1k work_dir/model.pth ``` -## ImageNet半监督分类 +## ImageNet 半监督分类 -为了运行ImageNet半监督分类,我们将使用和线性评估和微调一样的`.sh`脚本进行训练。 +为了运行 ImageNet 半监督分类,我们将使用和线性评估和微调一样的`.sh`脚本进行训练。 备注: - 默认GPU数量是4. -- `CONFIG`:使用`configs/benchmarks/classification/imagenet/`下的配置文件,命名为`imagenet_*percent`的文件。 -- `PRETRAIN`:预训练模型文件。 +- `${CONFIG}`:使用`configs/benchmarks/classification/imagenet/`下的配置文件,命名为`imagenet_*percent`的文件。 +- `${PRETRAIN}`:预训练模型文件。 -## ImageNet最近邻分类 +## ImageNet 最近邻分类 ```备注 -仅支持CNN形式的主干网络 (例如ResNet50). 
+仅支持 CNN 形式的主干网络 (例如 ResNet50)。 ``` -为评估用于ImageNet最近邻分类基准的预训练模型,你可以运行如下命令。 +为评估用于 ImageNet 最近邻分类基准的预训练模型,您可以运行如下命令。 ```shell # distributed version @@ -128,10 +127,10 @@ bash tools/benchmarks/classification/knn_imagenet/slurm_test_knn.sh ${PARTITION} 备注: -- `${SELFSUP_CONFIG}`是自监督实验的配置文件。 -- `CHECKPOINT`:检查点模型文件的路径。 -- 如果你想改变GPU的数量,你可以在命令的前面加上`GPUS_PER_NODE=4 GPUS=4`。 -- `[optional arguments]`:用于可选参数,你可以参考这个[脚本](https://github.com/open-mmlab/mmselfsup/blob/1.x/tools/benchmarks/classification/knn_imagenet/test_knn.py) +- `${SELFSUP_CONFIG}`:是自监督实验的配置文件。 +- `${CHECKPOINT}`:检查点模型文件的路径。 +- 如果您想改变GPU的数量,您可以在命令的前面加上`GPUS_PER_NODE=4 GPUS=4`。 +- `[optional arguments]`:用于可选参数,您可以参考这个[脚本](https://github.com/open-mmlab/mmselfsup/blob/1.x/tools/benchmarks/classification/knn_imagenet/test_knn.py) 命令的一个例子 diff --git a/docs/zh_cn/user_guides/detection.md b/docs/zh_cn/user_guides/detection.md index 7c46acd53..e5869ae61 100644 --- a/docs/zh_cn/user_guides/detection.md +++ b/docs/zh_cn/user_guides/detection.md @@ -1,10 +1,10 @@ # 检测 -- [检测](#detection) - - [训练](#train) - - [测试](#test) +- [检测](#检测) + - [训练](#训练) + - [测试](#测试) -这里,我们倾向使用MMDetection做检测任务。首先确保你已经安装了[MIM](https://github.com/open-mmlab/mim),这也是OpenMMLab的一个项目。 +这里,我们倾向使用 MMDetection 做检测任务。首先确保您已经安装了 [MIM](https://github.com/open-mmlab/mim),这也是 OpenMMLab 的一个项目。 ```shell pip install openmim @@ -13,11 +13,11 @@ mim install 'mmdet>=3.0.0rc0' 非常容易安装这个包。 -此外,请参考MMDet的[安装](https://mmdetection.readthedocs.io/en/dev-3.x/get_started.html)和[数据准备](https://mmdetection.readthedocs.io/en/dev-3.x/user_guides/dataset_prepare.html) +此外,请参考 MMDetection 的[安装](https://mmdetection.readthedocs.io/en/dev-3.x/get_started.html)和[数据准备](https://mmdetection.readthedocs.io/en/dev-3.x/user_guides/dataset_prepare.html) ## 训练 -安装完后,你可以使用如下的简单命令运行MMDetection。 +安装完后,您可以使用如下的简单命令运行 MMDetection。 ```shell # distributed version @@ -31,16 +31,16 @@ bash tools/benchmarks/mmdetection/mim_slurm_train_fpn.sh ${PARTITION} ${CONFIG} 备注: -- `CONFIG`: 使用`configs/benchmarks/mmdetection/`下的配置文件。由于OpenMMLab的存储库支持跨不同存储库引用配置文件,因此我们可以轻松使用MMDetection的配置文件,例如: +- `${CONFIG}`: 使用`configs/benchmarks/mmdetection/`下的配置文件。由于 OpenMMLab 的算法库支持跨不同存储库引用配置文件,因此我们可以轻松使用 MMDetection 的配置文件,例如: ```shell _base_ = 'mmdet::mask_rcnn/mask-rcnn_r50-caffe-c4_1x_coco.py' ``` -从头开始写你的配置文件也是支持的。 +从头开始写您的配置文件也是支持的。 -- `PRETRAIN`:预训练模型文件 -- `GPUS`: 你想用于训练的GPU数量,对于检测任务,我们默认采用8块GPU。 +- `${PRETRAIN}`:预训练模型文件 +- `${GPUS}`: 您想用于训练的 GPU 数量,对于检测任务,我们默认采用 8 块 GPU。 例子: @@ -50,8 +50,8 @@ configs/benchmarks/mmdetection/coco/mask-rcnn_r50-c4_ms-1x_coco.py \ https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-200e_in1k/byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth 8 ``` -或者你想用[detectron2](https://github.com/facebookresearch/detectron2)来做检测任务,我们也提供了一些配置文件。 -请参考[INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md)用于安装并按照detectron2需要的[目录结构](https://github.com/facebookresearch/detectron2/tree/main/datasets)准备你的数据集。 +或者您想用 [detectron2](https://github.com/facebookresearch/detectron2) 来做检测任务,我们也提供了一些配置文件。 +请参考 [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) 用于安装并按照 detectron2 需要的[目录结构](https://github.com/facebookresearch/detectron2/tree/main/datasets)准备您的数据集。 ```shell conda activate detectron2 # use detectron2 environment here, otherwise use open-mmlab environment @@ -62,7 +62,7 @@ bash run.sh ${DET_CFG} ${OUTPUT_FILE} ## 测试 -在训练之后,你可以运行如下命令测试你的模型。 +在训练之后,您可以运行如下命令测试您的模型。 ```shell # distributed version @@ -74,7 +74,7 @@ bash 
tools/benchmarks/mmdetection/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHE 备注: -- `CHECKPOINT`:你想测试的训练好的检测模型。 +- `${CHECKPOINT}`:您想测试的训练好的检测模型。 例子: diff --git a/docs/zh_cn/user_guides/segmentation.md b/docs/zh_cn/user_guides/segmentation.md index 094e46634..145bc76a9 100644 --- a/docs/zh_cn/user_guides/segmentation.md +++ b/docs/zh_cn/user_guides/segmentation.md @@ -1,10 +1,10 @@ # 分割 -- [分割](#segmentation) - - [训练](#train) - - [测试](#test) +- [分割](#分割) + - [训练](#训练) + - [测试](#测试) -对于语义分割任务我们使用MMSegmentation。首先确保你已经安装了[MIM](https://github.com/open-mmlab/mim),这也是OpenMMLab的一个项目。 +对于语义分割任务我们使用 MMSegmentation。首先确保您已经安装了 [MIM](https://github.com/open-mmlab/mim),这也是 OpenMMLab 的一个项目。 ```shell pip install openmim @@ -13,11 +13,11 @@ mim install 'mmsegmentation>=1.0.0rc0' 非常容易安装这个包。 -此外,请参考MMSegmentation的[安装](https://mmsegmentation.readthedocs.io/en/dev-1.x/get_started.html)和[数据准备](https://mmsegmentation.readthedocs.io/en/dev-1.x/user_guides/2_dataset_prepare.html)。 +此外,请参考 MMSegmentation 的[安装](https://mmsegmentation.readthedocs.io/en/dev-1.x/get_started.html)和[数据准备](https://mmsegmentation.readthedocs.io/en/dev-1.x/user_guides/2_dataset_prepare.html)。 ## 训练 -在安装完后,可以使用如下简单命令运行MMSeg。 +在安装完后,可以使用如下简单命令运行 MMSegmentation。 ```shell # distributed version @@ -29,16 +29,16 @@ bash tools/benchmarks/mmsegmentation/mim_slurm_train.sh ${PARTITION} ${CONFIG} $ 备注: -- `CONFIG`:使用`configs/benchmarks/mmsegmentation/`下的配置文件. S由于OpenMMLab的存储库支持跨不同存储库引用配置文件,因此我们可以轻松使用MMSegmentation的配置文件,例如: +- `${CONFIG}`:使用`configs/benchmarks/mmsegmentation/`下的配置文件。由于 OpenMMLab 的算法库支持跨不同存储库引用配置文件,因此我们可以轻松使用 MMSegmentation 的配置文件,例如: ```shell _base_ = 'mmseg::fcn/fcn_r50-d8_4xb2-40k_cityscapes-769x769.py' ``` -从头开始写你的配置文件也是支持的。 +从头开始写您的配置文件也是支持的。 -- `PRETRAIN`:预训练模型文件 -- `GPUS`: 你想用于训练的GPU数量,对于检测任务,我们默认采用4块GPU。 +- `${PARTITION}`:预训练模型文件 +- `${GPUS}`: 您想用于训练的 GPU 数量,对于分割任务,我们默认采用 4 块 GPU。 例子: @@ -50,7 +50,7 @@ https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-20 ## 测试 -在训练之后,你可以运行如下命令测试你的模型。 +在训练之后,您可以运行如下命令测试您的模型。 ```shell # distributed version @@ -62,7 +62,7 @@ bash tools/benchmarks/mmsegmentation/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${ 备注: -- `CHECKPOINT`:你想测试的训练好的分割模型。 +- `${CHECKPOINT}`:您想测试的训练好的分割模型。 例子:
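下面给出一个测试命令的示意:配置文件沿用上文训练示例中的 VOC12Aug 配置,检查点路径 `work_dir/model.pth` 仅为占位示例,请替换为您实际训练得到的分割模型:

```shell
# 检查点路径 work_dir/model.pth 仅为占位示例,请替换为您自己训练得到的模型文件
bash ./tools/benchmarks/mmsegmentation/mim_dist_test.sh \
configs/benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_4xb4-20k_voc12aug-512x512.py \
work_dir/model.pth 4
```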