From 53ffbe9b847ff753681d6a389b0fbe9afecbefad Mon Sep 17 00:00:00 2001 From: ChaimZhu Date: Fri, 16 Sep 2022 14:28:11 +0800 Subject: [PATCH 1/6] update customize --- docs/en/advanced_guides/new_dataset.md | 257 +++++++++++++++++++++++++ 1 file changed, 257 insertions(+) create mode 100644 docs/en/advanced_guides/new_dataset.md diff --git a/docs/en/advanced_guides/new_dataset.md b/docs/en/advanced_guides/new_dataset.md new file mode 100644 index 000000000..5e087a4bb --- /dev/null +++ b/docs/en/advanced_guides/new_dataset.md @@ -0,0 +1,257 @@ +# 2: Train with customized datasets + +In this note, you will know how to train and test predefined models with customized datasets. We use the Waymo dataset as an example to describe the whole process. + +The basic steps are as below: + +1. Prepare data +2. Prepare a config +3. Train, test, inference models on the customized dataset. + +## Data Preparation + +The ideal situation is that we can reorganize the customized raw data and convert the annotation format into KITTI style. However, considering some calibration files and 3D annotations in KITTI format are difficult to obtain for customized datasets, we introduce the basic data format in the doc. + +### Basic Data Format + +#### Point cloud Format + +Currently, we only support `.bin` format point cloud training and inference, before training on your own datasets, you need to transform your point cloud format to `.bin` file. The common point cloud data formats include `.pcd` and `.las`, we list some open-source tools for reference. + +1. Convert pcd to bin: https://github.com/leofansq/Tools_RosBag2KITTI +2. Convert las to bin: The common conversion path is las -> pcd -> bin, and the conversion from las -> pcd can be achieved through [this tool](https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor). + +#### Label Format + +The most basic information: 3D bounding box and category label of each scene need to be contained in annotation `.txt` file. Each line represents a 3D box in a given scene as follow: + +``` +# format: [x, y, z, dx, dy, dz, yaw, category_name] +1.23 1.42 0.23 3.96 1.65 1.55 1.56 Car +3.51 2.15 0.42 1.05 0.87 1.86 1.23 Pedestrian +... +``` + +The 3D Box should be stored in unified 3D coordinates. + +#### Calibration Format + +During data collection, we will have multiple lidars and cameras with different sensor setup. For the point cloud data collected by each lidar, they are usually fused and converted to a certain LiDAR coordinate, So typically the calibration information file should contain the intrinsic matrix of each camera and the transformation extrinsic matrix from the lidar to each camera in calibration `.txt` file, while `Px` represents the intrinsic matrix of `camera_x` and `lidar2camx` represents the transformation extrinsic matrix from the `lidar` to `camera_x`. + +``` +P0 +P1 +P2 +P3 +P4 +... +lidar2cam0 +lidar2cam1 +lidar2cam2 +lidar2cam3 +lidar2cam4 +... +``` + +### Raw Data Structure + +#### LiDAR-Based 3D Detection + +The raw data for LiDAR-Based 3D object detection are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `points` include point cloud data which are supposed to be stored in `.bin` format and `labels` includes label files for 3D detection. 
+ +``` +mmdetection3d +├── mmdet3d +├── tools +├── configs +├── data +│ ├── custom +│ │ ├── ImageSets +│ │ │ ├── train.txt +│ │ │ ├── val.txt +│ │ ├── points +│ │ │ ├── 000000.bin +│ │ │ ├── 000001.bin +│ │ │ ├── ... +│ │ ├── labels +│ │ │ ├── 000000.txt +│ │ │ ├── 000001.txt +│ │ │ ├── ... +``` + +#### Vision-Based 3D Detection + +The raw data for Vision-Based 3D object detection are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `images` contains the images from different cameras, for example, images from `camera_x` need to be placed in `images\images_x`. `calibs` contains calibration information files which store the camera intrinsic matrix of each camera, and `labels` includes label files for 3D detection. + +``` +mmdetection3d +├── mmdet3d +├── tools +├── configs +├── data +│ ├── custom +│ │ ├── ImageSets +│ │ │ ├── train.txt +│ │ │ ├── val.txt +│ │ ├── calibs +│ │ │ ├── 000000.txt +│ │ │ ├── 000001.txt +│ │ │ ├── ... +│ │ ├── images +│ │ │ ├── images_0 +│ │ │ │ ├── 000000.png +│ │ │ │ ├── 000001.png +│ │ │ │ ├── ... +│ │ │ ├── images_1 +│ │ │ ├── images_2 +│ │ │ ├── ... +│ │ ├── labels +│ │ │ ├── 000000.txt +│ │ │ ├── 000001.txt +│ │ │ ├── ... +``` + +#### Multi-Modality 3D Detection + +The raw data for Multi-Modality 3D object detection are typically organized as follows. Different from Vision-based 3D Object detection, calibration information files in `calibs` store the camera intrinsic matrix of each camera and extrinsic matrix. + +``` +mmdetection3d +├── mmdet3d +├── tools +├── configs +├── data +│ ├── custom +│ │ ├── ImageSets +│ │ │ ├── train.txt +│ │ │ ├── val.txt +│ │ ├── calibs +│ │ │ ├── 000000.txt +│ │ │ ├── 000001.txt +│ │ │ ├── ... +│ │ ├── points +│ │ │ ├── 000000.bin +│ │ │ ├── 000001.bin +│ │ │ ├── ... +│ │ ├── images +│ │ │ ├── images_0 +│ │ │ │ ├── 000000.png +│ │ │ │ ├── 000001.png +│ │ │ │ ├── ... +│ │ │ ├── images_1 +│ │ │ ├── images_2 +│ │ │ ├── ... +│ │ ├── labels +│ │ │ ├── 000000.txt +│ │ │ ├── 000001.txt +│ │ │ ├── ... +``` + +#### LiDAR-Based 3D Semantic Segmentation + +The raw data for LiDAR-Based 3D semantic segmentation are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `points` includes point cloud data, and `semantic_mask` includes point-level label. + +``` +mmdetection3d +├── mmdet3d +├── tools +├── configs +├── data +│ ├── custom +│ │ ├── ImageSets +│ │ │ ├── train.txt +│ │ │ ├── val.txt +│ │ ├── points +│ │ │ ├── 000000.bin +│ │ │ ├── 000001.bin +│ │ │ ├── ... +│ │ ├── semantic_mask +│ │ │ ├── 000000.bin +│ │ │ ├── 000001.bin +│ │ │ ├── ... +``` + +### Data Converter + +Once you prepared the raw data following our instruction, you can directly use the following command to generate training/validation information files. + +``` +python tools/create_data.py base --root-path ./data/custom --out-dir ./data/custom +``` + +## An example of customized dataset + +Once we finish data preparation, we can create a new dataset in `mmdet3d/datasets/my_dataset.py` to load the data. + +``` +import mmengine + +from mmdet.base_det_dataset import BaseDetDataset +from mmdet.registry import DATASETS + + +@DATASETS.register_module() +class MyDataset(Det3DDataset): + + METAINFO = { + 'CLASSES': ('person', 'bicycle', 'car', 'motorcycle') + } + + def parse_ann_info(self, info): + """Get annotation info according to the given index. + + Args: + info (dict): Data information of single data sample. 
+ + Returns: + dict: annotation information consists of the following keys: + + - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): + 3D ground truth bboxes. + - bbox_labels_3d (np.ndarray): Labels of ground truths. + - gt_bboxes (np.ndarray): 2D ground truth bboxes. + - gt_labels (np.ndarray): Labels of ground truths. + - difficulty (int): Difficulty defined by KITTI. + 0, 1, 2 represent xxxxx respectively. + """ + ann_info = super().parse_ann_info(info) + if ann_info is None: + ann_info = dict() + # empty instance + ann_info['gt_bboxes_3d'] = np.zeros((0, 7), dtype=np.float32) + ann_info['gt_labels_3d'] = np.zeros(0, dtype=np.int64) + + lidar2cam = np.array(info['images']['CAM0']['lidar2cam']) + gt_bboxes_3d = LiDARInstance3DBoxes( + ann_info['gt_bboxes_3d']).convert_to(self.box_mode_3d) + ann_info['gt_bboxes_3d'] = gt_bboxes_3d + return ann_info +``` + +## Prepare a config + +The second step is to prepare configs such that the dataset could be successfully loaded. In addition, adjusting hyperparameters is usually necessary to obtain decent performance in 3D detection. + +Suppose we would like to train PointPillars on Waymo to achieve 3D detection for 3 classes, vehicle, cyclist and pedestrian, we need to prepare dataset config like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/waymoD5-3d-3class.py), model config like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/models/hv_pointpillars_secfpn_waymo.py) and combine them like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py), compared to KITTI [dataset config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/kitti-3d-3class.py), [model config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/models/hv_pointpillars_secfpn_kitti.py) and [overall](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py). + +## Train a new model + +To train a model with the new config, you can simply run + +```shell +python tools/train.py configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py +``` + +For more detailed usages, please refer to the [Case 1](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html). + +## Test and inference + +To test the trained model, you can simply run + +```shell +python tools/test.py configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py work_dirs/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/latest.pth --eval waymo +``` + +**Note**: To use Waymo evaluation protocol, you need to follow the [tutorial](https://mmdetection3d.readthedocs.io/en/latest/datasets/waymo_det.html) and prepare files related to metrics computation as official instructions. + +For more detailed usages for test and inference, please refer to the [Case 1](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html). 
From 79293bb769161ff667d43671548be86e00fb9e82 Mon Sep 17 00:00:00 2001 From: ChaimZhu Date: Fri, 16 Sep 2022 14:29:06 +0800 Subject: [PATCH 2/6] update customize dataset --- docs/en/advanced_guides/customize_dataset.md | 538 ++++++++----------- docs/en/advanced_guides/new_dataset.md | 257 --------- 2 files changed, 214 insertions(+), 581 deletions(-) delete mode 100644 docs/en/advanced_guides/new_dataset.md diff --git a/docs/en/advanced_guides/customize_dataset.md b/docs/en/advanced_guides/customize_dataset.md index fea9a2572..af1129b79 100644 --- a/docs/en/advanced_guides/customize_dataset.md +++ b/docs/en/advanced_guides/customize_dataset.md @@ -1,367 +1,257 @@ # Customize Datasets -## Support new data format - -To support a new data format, you can either convert them to existing formats or directly convert them to the middle format. You could also choose to convert them offline (before training by a script) or online (implement a new dataset and do the conversion at training). In MMDetection3D, for the data that is inconvenient to read directly online, we recommend to convert it into KITTI format and do the conversion offline, thus you only need to modify the config's data annotation paths and classes after the conversion. -For data sharing similar format with existing datasets, like Lyft compared to nuScenes, we recommend to directly implement data converter and dataset class. During the procedure, inheritation could be taken into consideration to reduce the implementation workload. - -### Reorganize new data formats to existing format - -For data that is inconvenient to read directly online, the simplest way is to convert your dataset to existing dataset formats. - -Typically we need a data converter to reorganize the raw data and convert the annotation format into KITTI style. Then a new dataset class inherited from existing ones is sometimes necessary for dealing with some specific differences between datasets. Finally, the users need to further modify the config files to use the dataset. An [example](https://mmdetection3d.readthedocs.io/en/latest/2_new_data_model.html) training predefined models on Waymo dataset by converting it into KITTI style can be taken for reference. - -### Reorganize new data format to middle format - -It is also fine if you do not want to convert the annotation format to existing formats. -Actually, we convert all the supported datasets into pickle files, which summarize useful information for model training and inference. - -The annotation of a dataset is a list of dict, each dict corresponds to a frame. -A basic example (used in KITTI) is as follows. A frame consists of several keys, like `image`, `point_cloud`, `calib` and `annos`. -As long as we could directly read data according to these information, the organization of raw data could also be different from existing ones. -With this design, we provide an alternative choice for customizing datasets. - -```python - -[ - {'image': {'image_idx': 0, 'image_path': 'training/image_2/000000.png', 'image_shape': array([ 370, 1224], dtype=int32)}, - 'point_cloud': {'num_features': 4, 'velodyne_path': 'training/velodyne/000000.bin'}, - 'calib': {'P0': array([[707.0493, 0. , 604.0814, 0. ], - [ 0. , 707.0493, 180.5066, 0. ], - [ 0. , 0. , 1. , 0. ], - [ 0. , 0. , 0. , 1. ]]), - 'P1': array([[ 707.0493, 0. , 604.0814, -379.7842], - [ 0. , 707.0493, 180.5066, 0. ], - [ 0. , 0. , 1. , 0. ], - [ 0. , 0. , 0. , 1. 
]]), - 'P2': array([[ 7.070493e+02, 0.000000e+00, 6.040814e+02, 4.575831e+01], - [ 0.000000e+00, 7.070493e+02, 1.805066e+02, -3.454157e-01], - [ 0.000000e+00, 0.000000e+00, 1.000000e+00, 4.981016e-03], - [ 0.000000e+00, 0.000000e+00, 0.000000e+00, 1.000000e+00]]), - 'P3': array([[ 7.070493e+02, 0.000000e+00, 6.040814e+02, -3.341081e+02], - [ 0.000000e+00, 7.070493e+02, 1.805066e+02, 2.330660e+00], - [ 0.000000e+00, 0.000000e+00, 1.000000e+00, 3.201153e-03], - [ 0.000000e+00, 0.000000e+00, 0.000000e+00, 1.000000e+00]]), - 'R0_rect': array([[ 0.9999128 , 0.01009263, -0.00851193, 0. ], - [-0.01012729, 0.9999406 , -0.00403767, 0. ], - [ 0.00847068, 0.00412352, 0.9999556 , 0. ], - [ 0. , 0. , 0. , 1. ]]), - 'Tr_velo_to_cam': array([[ 0.00692796, -0.9999722 , -0.00275783, -0.02457729], - [-0.00116298, 0.00274984, -0.9999955 , -0.06127237], - [ 0.9999753 , 0.00693114, -0.0011439 , -0.3321029 ], - [ 0. , 0. , 0. , 1. ]]), - 'Tr_imu_to_velo': array([[ 9.999976e-01, 7.553071e-04, -2.035826e-03, -8.086759e-01], - [-7.854027e-04, 9.998898e-01, -1.482298e-02, 3.195559e-01], - [ 2.024406e-03, 1.482454e-02, 9.998881e-01, -7.997231e-01], - [ 0.000000e+00, 0.000000e+00, 0.000000e+00, 1.000000e+00]])}, - 'annos': {'name': array(['Pedestrian'], dtype=' pcd -> bin, and the conversion from las -> pcd can be achieved through [this tool](https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor). + +#### Label Format + +The most basic information: 3D bounding box and category label of each scene need to be contained in annotation `.txt` file. Each line represents a 3D box in a given scene as follow: ``` -{'point_cloud': {'num_features': 6, 'lidar_idx': 'scene0000_00'}, 'pts_path': 'points/scene0000_00.bin', - 'pts_instance_mask_path': 'instance_mask/scene0000_00.bin', 'pts_semantic_mask_path': 'semantic_mask/scene0000_00.bin', - 'annos': {'gt_num': 27, 'name': array(['window', 'window', 'table', 'counter', 'curtain', 'curtain', - 'desk', 'cabinet', 'sink', 'garbagebin', 'garbagebin', - 'garbagebin', 'sofa', 'refrigerator', 'table', 'table', 'toilet', - 'bed', 'cabinet', 'cabinet', 'cabinet', 'cabinet', 'cabinet', - 'cabinet', 'door', 'door', 'door'], dtype=' pcd -> bin, and the conversion from las -> pcd can be achieved through [this tool](https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor). - -#### Label Format - -The most basic information: 3D bounding box and category label of each scene need to be contained in annotation `.txt` file. Each line represents a 3D box in a given scene as follow: - -``` -# format: [x, y, z, dx, dy, dz, yaw, category_name] -1.23 1.42 0.23 3.96 1.65 1.55 1.56 Car -3.51 2.15 0.42 1.05 0.87 1.86 1.23 Pedestrian -... -``` - -The 3D Box should be stored in unified 3D coordinates. - -#### Calibration Format - -During data collection, we will have multiple lidars and cameras with different sensor setup. For the point cloud data collected by each lidar, they are usually fused and converted to a certain LiDAR coordinate, So typically the calibration information file should contain the intrinsic matrix of each camera and the transformation extrinsic matrix from the lidar to each camera in calibration `.txt` file, while `Px` represents the intrinsic matrix of `camera_x` and `lidar2camx` represents the transformation extrinsic matrix from the `lidar` to `camera_x`. - -``` -P0 -P1 -P2 -P3 -P4 -... -lidar2cam0 -lidar2cam1 -lidar2cam2 -lidar2cam3 -lidar2cam4 -... 
-``` - -### Raw Data Structure - -#### LiDAR-Based 3D Detection - -The raw data for LiDAR-Based 3D object detection are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `points` include point cloud data which are supposed to be stored in `.bin` format and `labels` includes label files for 3D detection. - -``` -mmdetection3d -├── mmdet3d -├── tools -├── configs -├── data -│ ├── custom -│ │ ├── ImageSets -│ │ │ ├── train.txt -│ │ │ ├── val.txt -│ │ ├── points -│ │ │ ├── 000000.bin -│ │ │ ├── 000001.bin -│ │ │ ├── ... -│ │ ├── labels -│ │ │ ├── 000000.txt -│ │ │ ├── 000001.txt -│ │ │ ├── ... -``` - -#### Vision-Based 3D Detection - -The raw data for Vision-Based 3D object detection are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `images` contains the images from different cameras, for example, images from `camera_x` need to be placed in `images\images_x`. `calibs` contains calibration information files which store the camera intrinsic matrix of each camera, and `labels` includes label files for 3D detection. - -``` -mmdetection3d -├── mmdet3d -├── tools -├── configs -├── data -│ ├── custom -│ │ ├── ImageSets -│ │ │ ├── train.txt -│ │ │ ├── val.txt -│ │ ├── calibs -│ │ │ ├── 000000.txt -│ │ │ ├── 000001.txt -│ │ │ ├── ... -│ │ ├── images -│ │ │ ├── images_0 -│ │ │ │ ├── 000000.png -│ │ │ │ ├── 000001.png -│ │ │ │ ├── ... -│ │ │ ├── images_1 -│ │ │ ├── images_2 -│ │ │ ├── ... -│ │ ├── labels -│ │ │ ├── 000000.txt -│ │ │ ├── 000001.txt -│ │ │ ├── ... -``` - -#### Multi-Modality 3D Detection - -The raw data for Multi-Modality 3D object detection are typically organized as follows. Different from Vision-based 3D Object detection, calibration information files in `calibs` store the camera intrinsic matrix of each camera and extrinsic matrix. - -``` -mmdetection3d -├── mmdet3d -├── tools -├── configs -├── data -│ ├── custom -│ │ ├── ImageSets -│ │ │ ├── train.txt -│ │ │ ├── val.txt -│ │ ├── calibs -│ │ │ ├── 000000.txt -│ │ │ ├── 000001.txt -│ │ │ ├── ... -│ │ ├── points -│ │ │ ├── 000000.bin -│ │ │ ├── 000001.bin -│ │ │ ├── ... -│ │ ├── images -│ │ │ ├── images_0 -│ │ │ │ ├── 000000.png -│ │ │ │ ├── 000001.png -│ │ │ │ ├── ... -│ │ │ ├── images_1 -│ │ │ ├── images_2 -│ │ │ ├── ... -│ │ ├── labels -│ │ │ ├── 000000.txt -│ │ │ ├── 000001.txt -│ │ │ ├── ... -``` - -#### LiDAR-Based 3D Semantic Segmentation - -The raw data for LiDAR-Based 3D semantic segmentation are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `points` includes point cloud data, and `semantic_mask` includes point-level label. - -``` -mmdetection3d -├── mmdet3d -├── tools -├── configs -├── data -│ ├── custom -│ │ ├── ImageSets -│ │ │ ├── train.txt -│ │ │ ├── val.txt -│ │ ├── points -│ │ │ ├── 000000.bin -│ │ │ ├── 000001.bin -│ │ │ ├── ... -│ │ ├── semantic_mask -│ │ │ ├── 000000.bin -│ │ │ ├── 000001.bin -│ │ │ ├── ... -``` - -### Data Converter - -Once you prepared the raw data following our instruction, you can directly use the following command to generate training/validation information files. - -``` -python tools/create_data.py base --root-path ./data/custom --out-dir ./data/custom -``` - -## An example of customized dataset - -Once we finish data preparation, we can create a new dataset in `mmdet3d/datasets/my_dataset.py` to load the data. 
- -``` -import mmengine - -from mmdet.base_det_dataset import BaseDetDataset -from mmdet.registry import DATASETS - - -@DATASETS.register_module() -class MyDataset(Det3DDataset): - - METAINFO = { - 'CLASSES': ('person', 'bicycle', 'car', 'motorcycle') - } - - def parse_ann_info(self, info): - """Get annotation info according to the given index. - - Args: - info (dict): Data information of single data sample. - - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes. - - bbox_labels_3d (np.ndarray): Labels of ground truths. - - gt_bboxes (np.ndarray): 2D ground truth bboxes. - - gt_labels (np.ndarray): Labels of ground truths. - - difficulty (int): Difficulty defined by KITTI. - 0, 1, 2 represent xxxxx respectively. - """ - ann_info = super().parse_ann_info(info) - if ann_info is None: - ann_info = dict() - # empty instance - ann_info['gt_bboxes_3d'] = np.zeros((0, 7), dtype=np.float32) - ann_info['gt_labels_3d'] = np.zeros(0, dtype=np.int64) - - lidar2cam = np.array(info['images']['CAM0']['lidar2cam']) - gt_bboxes_3d = LiDARInstance3DBoxes( - ann_info['gt_bboxes_3d']).convert_to(self.box_mode_3d) - ann_info['gt_bboxes_3d'] = gt_bboxes_3d - return ann_info -``` - -## Prepare a config - -The second step is to prepare configs such that the dataset could be successfully loaded. In addition, adjusting hyperparameters is usually necessary to obtain decent performance in 3D detection. - -Suppose we would like to train PointPillars on Waymo to achieve 3D detection for 3 classes, vehicle, cyclist and pedestrian, we need to prepare dataset config like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/waymoD5-3d-3class.py), model config like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/models/hv_pointpillars_secfpn_waymo.py) and combine them like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py), compared to KITTI [dataset config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/kitti-3d-3class.py), [model config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/models/hv_pointpillars_secfpn_kitti.py) and [overall](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py). - -## Train a new model - -To train a model with the new config, you can simply run - -```shell -python tools/train.py configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py -``` - -For more detailed usages, please refer to the [Case 1](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html). - -## Test and inference - -To test the trained model, you can simply run - -```shell -python tools/test.py configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py work_dirs/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/latest.pth --eval waymo -``` - -**Note**: To use Waymo evaluation protocol, you need to follow the [tutorial](https://mmdetection3d.readthedocs.io/en/latest/datasets/waymo_det.html) and prepare files related to metrics computation as official instructions. - -For more detailed usages for test and inference, please refer to the [Case 1](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html). 
From 15133379f841a74b81b3e984f250b75e1659f01b Mon Sep 17 00:00:00 2001 From: ChaimZhu Date: Tue, 20 Sep 2022 17:23:23 +0800 Subject: [PATCH 3/6] update doc --- docs/en/advanced_guides/customize_dataset.md | 279 +++++++++++++++++-- 1 file changed, 249 insertions(+), 30 deletions(-) diff --git a/docs/en/advanced_guides/customize_dataset.md b/docs/en/advanced_guides/customize_dataset.md index af1129b79..c6713e45a 100644 --- a/docs/en/advanced_guides/customize_dataset.md +++ b/docs/en/advanced_guides/customize_dataset.md @@ -1,6 +1,6 @@ # Customize Datasets -In this note, you will know how to train and test predefined models with customized datasets. We use the Waymo dataset as an example to describe the whole process. +In this note, you will know how to train and test predefined models with customized datasets. The basic steps are as below: @@ -25,13 +25,15 @@ Currently, we only support `.bin` format point cloud training and inference, bef The most basic information: 3D bounding box and category label of each scene need to be contained in annotation `.txt` file. Each line represents a 3D box in a given scene as follow: -``` +```python # format: [x, y, z, dx, dy, dz, yaw, category_name] 1.23 1.42 0.23 3.96 1.65 1.55 1.56 Car 3.51 2.15 0.42 1.05 0.87 1.86 1.23 Pedestrian ... ``` +**Note**: Currently we only support KITTI Metric evaluation for customized datasets evaluation. + The 3D Box should be stored in unified 3D coordinates. #### Calibration Format @@ -183,18 +185,19 @@ python tools/create_data.py base --root-path ./data/custom --out-dir ./data/cust Once we finish data preparation, we can create a new dataset in `mmdet3d/datasets/my_dataset.py` to load the data. -``` +```python import mmengine -from mmdet.base_det_dataset import BaseDetDataset -from mmdet.registry import DATASETS +from mmdet3d.det3d_dataset import Det3DDataset +from mmdet3d.registry import DATASETS @DATASETS.register_module() class MyDataset(Det3DDataset): + # replace with all the classes in customized pkl info file METAINFO = { - 'CLASSES': ('person', 'bicycle', 'car', 'motorcycle') + 'CLASSES': ('Pedestrian', 'Cyclist', 'Car') } def parse_ann_info(self, info): @@ -209,10 +212,7 @@ class MyDataset(Det3DDataset): - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): 3D ground truth bboxes. - bbox_labels_3d (np.ndarray): Labels of ground truths. - - gt_bboxes (np.ndarray): 2D ground truth bboxes. - - gt_labels (np.ndarray): Labels of ground truths. - - difficulty (int): Difficulty defined by KITTI. - 0, 1, 2 represent xxxxx respectively. + """ ann_info = super().parse_ann_info(info) if ann_info is None: @@ -221,37 +221,256 @@ class MyDataset(Det3DDataset): ann_info['gt_bboxes_3d'] = np.zeros((0, 7), dtype=np.float32) ann_info['gt_labels_3d'] = np.zeros(0, dtype=np.int64) - lidar2cam = np.array(info['images']['CAM0']['lidar2cam']) - gt_bboxes_3d = LiDARInstance3DBoxes( - ann_info['gt_bboxes_3d']).convert_to(self.box_mode_3d) + # filter the gt classes not used in training + ann_info = self._remove_dontcare(ann_info) + gt_bboxes_3d = LiDARInstance3DBoxes(ann_info['gt_bboxes_3d']) ann_info['gt_bboxes_3d'] = gt_bboxes_3d return ann_info ``` -## Prepare a config - -The second step is to prepare configs such that the dataset could be successfully loaded. In addition, adjusting hyperparameters is usually necessary to obtain decent performance in 3D detection. +After the data pre-processing, there are two steps for users to train the customized new dataset: + +1. Modify the config file for using the customized dataset. +2. 
Check the annotations of the customized dataset. + +Here we take training PointPillars on customized dataset as an example: + +### Prepare a config + +Here we demonstrate a config sample for pure point cloud training: + +#### Prepare dataset config + +In `configs/_base_/datasets/custom.py`: + +```python +# dataset settings +dataset_type = 'MyDataset' +data_root = 'data/custom/' +class_names = ['Pedestrian', 'Cyclist', 'Car'] # replace with your dataset class +point_cloud_range = [0, -40, -3, 70.4, 40, 1] # adjust according to your dataset +input_modality = dict(use_lidar=True, use_camera=False) +metainfo = dict(CLASSES=class_names) + +train_pipeline = [ + dict( + type='LoadPointsFromFile', + coord_type='LIDAR', + load_dim=4, # replace with your point cloud data dimension + use_dim=4), # replace with the actual dimension used in training and inference + dict( + type='LoadAnnotations3D', + with_bbox_3d=True, + with_label_3d=True), + dict( + type='ObjectNoise', + num_try=100, + translation_std=[1.0, 1.0, 0.5], + global_rot_range=[0.0, 0.0], + rot_range=[-0.78539816, 0.78539816]), + dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), + dict( + type='GlobalRotScaleTrans', + rot_range=[-0.78539816, 0.78539816], + scale_ratio_range=[0.95, 1.05]), + dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), + dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), + dict(type='PointShuffle'), + dict( + type='Pack3DDetInputs', + keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) +] +test_pipeline = [ + dict( + type='LoadPointsFromFile', + coord_type='LIDAR', + load_dim=4, # replace with your point cloud data dimension + use_dim=4), + dict(type='Pack3DDetInputs', keys=['points']) +] +# construct a pipeline for data and gt loading in show function +eval_pipeline = [ + dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), + dict(type='Pack3DDetInputs', keys=['points']), +] +train_dataloader = dict( + batch_size=6, + num_workers=4, + persistent_workers=True, + sampler=dict(type='DefaultSampler', shuffle=True), + dataset=dict( + type='RepeatDataset', + times=2, + dataset=dict( + type=dataset_type, + data_root=data_root, + ann_file='custom_infos_train.pkl', # specify your training pkl info + data_prefix=dict(pts='points'), + pipeline=train_pipeline, + modality=input_modality, + test_mode=False, + metainfo=metainfo, + box_type_3d='LiDAR'))) +val_dataloader = dict( + batch_size=1, + num_workers=1, + persistent_workers=True, + drop_last=False, + sampler=dict(type='DefaultSampler', shuffle=False), + dataset=dict( + type=dataset_type, + data_root=data_root, + data_prefix=dict(pts='points'), + ann_file='custom_infos_val.pkl', # specify your validation pkl info + pipeline=test_pipeline, + modality=input_modality, + test_mode=True, + metainfo=metainfo, + box_type_3d='LiDAR')) +val_evaluator = dict( + type='KittiMetric', + ann_file=data_root + 'custom_infos_val.pkl', # specify your validation pkl info + metric='bbox') +``` -Suppose we would like to train PointPillars on Waymo to achieve 3D detection for 3 classes, vehicle, cyclist and pedestrian, we need to prepare dataset config like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/waymoD5-3d-3class.py), model config like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/models/hv_pointpillars_secfpn_waymo.py) and combine them like 
[this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py), compared to KITTI [dataset config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/kitti-3d-3class.py), [model config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/models/hv_pointpillars_secfpn_kitti.py) and [overall](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py). +#### Prepare model config + +For voxel-based detectors such as SECOND, PointPillars and CenterPoint, the point cloud range and voxel size should be adjusted according to your dataset. +Theoretically, `voxel_size` is linked to the setting of `point_cloud_range`. Setting a smaller `voxel_size` increases the number of voxels and the corresponding memory consumption. In addition, the following points need to be noted: + +If the `point_cloud_range` and `voxel_size` are set to be `[0, -40, -3, 70.4, 40, 1]` and `[0.05, 0.05, 0.1]` respectively, then the shape of the intermediate feature map should be `[(1-(-3))/0.1+1, (40-(-40))/0.05, (70.4-0)/0.05]=[41, 1600, 1408]`. When changing `point_cloud_range`, remember to change the shape of the intermediate feature map in the `middle_encoder` according to the `voxel_size`. + +Regarding the setting of `anchor_range`, it is generally adjusted according to your dataset. Note that the `z` value needs to be adjusted according to the position of the point cloud; please refer to this [issue](https://github.com/open-mmlab/mmdetection3d/issues/986). + +Regarding the setting of `anchor_size`, it is usually necessary to count the average length, width and height of the entire training dataset as `anchor_size` to obtain the best results.
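To make these settings concrete, the following is a minimal sketch, not an official tool, that derives the BEV grid size implied by a `point_cloud_range`/`voxel_size` pair and collects per-class average box dimensions from the label files described above; the `data/custom/labels` path and the concrete numbers are assumptions matching the PointPillars settings used in the config below.

```python
import glob

import numpy as np

# Settings assumed to match the PointPillars config shown below.
point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1]
voxel_size = [0.16, 0.16, 4]

# BEV grid implied by the range and voxel size; for PointPillars this is the
# `output_shape` of `PointPillarsScatter` (y size first, then x size).
grid_x = round((point_cloud_range[3] - point_cloud_range[0]) / voxel_size[0])
grid_y = round((point_cloud_range[4] - point_cloud_range[1]) / voxel_size[1])
print('output_shape =', [grid_y, grid_x])  # [496, 432]

# Per-class mean (dx, dy, dz) over the training labels described in the
# Label Format section, as a starting point for `anchor_generator.sizes`.
dims_per_class = {}
for label_file in glob.glob('data/custom/labels/*.txt'):
    with open(label_file) as f:
        for line in f:
            *values, name = line.split()
            x, y, z, dx, dy, dz, yaw = map(float, values)
            dims_per_class.setdefault(name, []).append([dx, dy, dz])

for name, dims in dims_per_class.items():
    print(name, np.mean(dims, axis=0).round(2))
```

If the averaged dimensions differ noticeably from the `sizes` in the anchor generator below, replacing them (together with the `z` values in `ranges`) with your own statistics is usually the first adjustment to try.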
+ +In `configs/_base_/models/pointpillars_hv_secfpn_custom.py`: + +```python +voxel_size = [0.16, 0.16, 4] # adjust according to your dataset +point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] # adjust according to your dataset +model = dict( + type='VoxelNet', + data_preprocessor=dict( + type='Det3DDataPreprocessor', + voxel=True, + voxel_layer=dict( + max_num_points=32, + point_cloud_range=point_cloud_range, + voxel_size=voxel_size, + max_voxels=(16000, 40000))), + voxel_encoder=dict( + type='PillarFeatureNet', + in_channels=4, + feat_channels=[64], + with_distance=False, + voxel_size=voxel_size, + point_cloud_range=point_cloud_range), + # the `output_shape` should be adjusted according to `point_cloud_range` + # and `voxel_size` + middle_encoder=dict( + type='PointPillarsScatter', in_channels=64, output_shape=[496, 432]), + backbone=dict( + type='SECOND', + in_channels=64, + layer_nums=[3, 5, 5], + layer_strides=[2, 2, 2], + out_channels=[64, 128, 256]), + neck=dict( + type='SECONDFPN', + in_channels=[64, 128, 256], + upsample_strides=[1, 2, 4], + out_channels=[128, 128, 128]), + bbox_head=dict( + type='Anchor3DHead', + num_classes=3, + in_channels=384, + feat_channels=384, + use_direction_classifier=True, + assign_per_class=True, + # adjust the `ranges` and `sizes` according to your dataset + anchor_generator=dict( + type='AlignedAnchor3DRangeGenerator', + ranges=[ + [0, -39.68, -0.6, 69.12, 39.68, -0.6], + [0, -39.68, -0.6, 69.12, 39.68, -0.6], + [0, -39.68, -1.78, 69.12, 39.68, -1.78], + ], + sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], + rotations=[0, 1.57], + reshape_out=False), + diff_rad_by_sin=True, + bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), + loss_cls=dict( + type='mmdet.FocalLoss', + use_sigmoid=True, + gamma=2.0, + alpha=0.25, + loss_weight=1.0), + loss_bbox=dict( + type='mmdet.SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), + loss_dir=dict( + type='mmdet.CrossEntropyLoss', use_sigmoid=False, + loss_weight=0.2)), + # model training and testing settings + train_cfg=dict( + assigner=[ + dict( # for Pedestrian + type='Max3DIoUAssigner', + iou_calculator=dict(type='mmdet3d.BboxOverlapsNearest3D'), + pos_iou_thr=0.5, + neg_iou_thr=0.35, + min_pos_iou=0.35, + ignore_iof_thr=-1), + dict( # for Cyclist + type='Max3DIoUAssigner', + iou_calculator=dict(type='mmdet3d.BboxOverlapsNearest3D'), + pos_iou_thr=0.5, + neg_iou_thr=0.35, + min_pos_iou=0.35, + ignore_iof_thr=-1), + dict( # for Car + type='Max3DIoUAssigner', + iou_calculator=dict(type='mmdet3d.BboxOverlapsNearest3D'), + pos_iou_thr=0.6, + neg_iou_thr=0.45, + min_pos_iou=0.45, + ignore_iof_thr=-1), + ], + allowed_border=0, + pos_weight=-1, + debug=False), + test_cfg=dict( + use_rotate_nms=True, + nms_across_levels=False, + nms_thr=0.01, + score_thr=0.1, + min_bbox_size=0, + nms_pre=100, + max_num=50)) +``` -## Train a new model +#### Prepare overall config -To train a model with the new config, you can simply run +We combine all the configs above in `configs/pointpillars/pointpillars_hv_secfpn_8xb6_custom.py`: -```shell -python tools/train.py configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py +```python +_base_ = [ + '../_base_/models/pointpillars_hv_secfpn_custom.py', + '../_base_/datasets/custom.py', + '../_base_/schedules/cyclic-40e.py', '../_base_/default_runtime.py' +] ``` -For more detailed usages, please refer to the [Case 1](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html). 
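#### Check the combined config (optional)

Before launching a full run, it can be worth a quick sanity check that the combined config resolves and that the customized dataset actually builds. The snippet below is only a sketch: it assumes a dev-1.x style environment where `mmdet3d.utils.register_all_modules` is available and that the info files produced by the data converter already exist under `data/custom/`.

```python
from mmengine import Config

from mmdet3d.registry import DATASETS
# `register_all_modules` is assumed to be available as in the dev-1.x tools scripts.
from mmdet3d.utils import register_all_modules

register_all_modules()  # register mmdet3d datasets/transforms so config types resolve

cfg = Config.fromfile('configs/pointpillars/pointpillars_hv_secfpn_8xb6_custom.py')
print(cfg.model.bbox_head.num_classes)                # 3, matching class_names
print(cfg.train_dataloader.dataset.dataset.ann_file)  # custom_infos_train.pkl

# Build the (repeated) training dataset and fetch one sample through the pipeline.
train_set = DATASETS.build(cfg.train_dataloader.dataset)
print(len(train_set))
print(train_set[0].keys())  # typically 'inputs' and 'data_samples' after Pack3DDetInputs
```

If one sample can be fetched without errors, the paths, class names and pipeline in the config are at least consistent with the prepared data.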
+## Evaluation -## Test and inference +Once the data and config has been prepared well, you can directly run the training / testing script following our doc. -To test the trained model, you can simply run +**Note**: we only provide an implementation for KITTI stype evaluation for customized dataset. It should be included in dataset config: -```shell -python tools/test.py configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py work_dirs/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/latest.pth --eval waymo +```python +val_evaluator = dict( + type='KittiMetric', + ann_file=data_root + 'custom_infos_val.pkl', # specify your validation pkl info + metric='bbox') ``` - -**Note**: To use Waymo evaluation protocol, you need to follow the [tutorial](https://mmdetection3d.readthedocs.io/en/latest/datasets/waymo_det.html) and prepare files related to metrics computation as official instructions. - -For more detailed usages for test and inference, please refer to the [Case 1](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html). From 7081c986ff6e22a1cef53be827c09a90d8ecbc63 Mon Sep 17 00:00:00 2001 From: ChaimZhu Date: Tue, 20 Sep 2022 17:28:21 +0800 Subject: [PATCH 4/6] update doc --- docs/en/advanced_guides/customize_dataset.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/en/advanced_guides/customize_dataset.md b/docs/en/advanced_guides/customize_dataset.md index c6713e45a..ce6b7a61d 100644 --- a/docs/en/advanced_guides/customize_dataset.md +++ b/docs/en/advanced_guides/customize_dataset.md @@ -462,6 +462,11 @@ _base_ = [ ] ``` +#### Visualize your dataset (optional) + +To valiate whether your prepared data and config are correct, it's highly recommended to use `tools/browse_dataest.py` script +to visualize your dataset and annotations before training and validation, more details refer to the visualization doc. + ## Evaluation Once the data and config has been prepared well, you can directly run the training / testing script following our doc. From 641e3cb61f3d76db344f0b8960f871028b6c10d5 Mon Sep 17 00:00:00 2001 From: ChaimZhu Date: Fri, 23 Sep 2022 19:53:13 +0800 Subject: [PATCH 5/6] fix comments --- docs/en/advanced_guides/customize_dataset.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/docs/en/advanced_guides/customize_dataset.md b/docs/en/advanced_guides/customize_dataset.md index ce6b7a61d..5c8eaf9a3 100644 --- a/docs/en/advanced_guides/customize_dataset.md +++ b/docs/en/advanced_guides/customize_dataset.md @@ -6,7 +6,7 @@ The basic steps are as below: 1. Prepare data 2. Prepare a config -3. Train, test, inference models on the customized dataset. +3. Train, test and inference models on the customized dataset. ## Data Preparation @@ -16,14 +16,14 @@ The ideal situation is that we can reorganize the customized raw data and conver #### Point cloud Format -Currently, we only support `.bin` format point cloud training and inference, before training on your own datasets, you need to transform your point cloud format to `.bin` file. The common point cloud data formats include `.pcd` and `.las`, we list some open-source tools for reference. +Currently, we only support '.bin' format point cloud for training and inference. Before training on your own datasets, you need to convert your point cloud files with other formats to '.bin' files. The common point cloud data formats include `.pcd` and `.las`, we list some open-source tools for reference. 1. 
Convert pcd to bin: https://github.com/leofansq/Tools_RosBag2KITTI 2. Convert las to bin: The common conversion path is las -> pcd -> bin, and the conversion from las -> pcd can be achieved through [this tool](https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor). #### Label Format -The most basic information: 3D bounding box and category label of each scene need to be contained in annotation `.txt` file. Each line represents a 3D box in a given scene as follow: +The most basic information: 3D bounding box and category label of each scene need to be contained in the annotation `.txt` file. Each line represents a 3D box in a certain scene as follow: ```python # format: [x, y, z, dx, dy, dz, yaw, category_name] @@ -38,7 +38,7 @@ The 3D Box should be stored in unified 3D coordinates. #### Calibration Format -During data collection, we will have multiple lidars and cameras with different sensor setup. For the point cloud data collected by each lidar, they are usually fused and converted to a certain LiDAR coordinate, So typically the calibration information file should contain the intrinsic matrix of each camera and the transformation extrinsic matrix from the lidar to each camera in calibration `.txt` file, while `Px` represents the intrinsic matrix of `camera_x` and `lidar2camx` represents the transformation extrinsic matrix from the `lidar` to `camera_x`. +The point cloud data collected by each lidar are usually fused and converted to a certain LiDAR coordinate system, so the calibration `.txt` file should typically contain the intrinsic matrix of each camera and the transformation extrinsic matrix from the lidar to each camera, where `Px` denotes the intrinsic matrix of `camera_x` and `lidar2camx` denotes the transformation extrinsic matrix from the `lidar` to `camera_x`. ``` P0 @@ -342,7 +342,7 @@ If the `point_cloud_range` and `voxel_size` are set to be `[0, -40, -3, 70.4, 40 Regarding the setting of `anchor_range`, it is generally adjusted according to your dataset. Note that the `z` value needs to be adjusted according to the position of the point cloud; please refer to this [issue](https://github.com/open-mmlab/mmdetection3d/issues/986). -Regarding the setting of `anchor_size`, it is usually necessary to count the average length, width and height of the entire training dataset as `anchor_size` to obtain the best results. +Regarding the setting of `anchor_size`, it is usually necessary to count the average length, width and height of objects in the entire training dataset as `anchor_size` to obtain the best results. In `configs/_base_/models/pointpillars_hv_secfpn_custom.py`: @@ -465,13 +465,14 @@ _base_ = [ #### Visualize your dataset (optional) To valiate whether your prepared data and config are correct, it's highly recommended to use `tools/browse_dataest.py` script -to visualize your dataset and annotations before training and validation, more details refer to the visualization doc. +to visualize your dataset and annotations before training and validation. For more details, refer to the [visualization](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/docs/en/user_guides/visualization.md/) doc. ## Evaluation -Once the data and config has been prepared well, you can directly run the training / testing script following our doc. +Once the data and config have been prepared, you can directly run the training/testing script following our doc.
-**Note**: we only provide an implementation for KITTI stype evaluation for customized dataset. It should be included in dataset config: +**Note**: we only provide an implementation for KITTI style evaluation for the customized dataset. It should be included in the dataset config: ```python val_evaluator = dict( From 4743076c1976e21c9ccb28223b3577abfec38042 Mon Sep 17 00:00:00 2001 From: ChaimZhu Date: Fri, 30 Sep 2022 20:18:45 +0800 Subject: [PATCH 6/6] fix comments --- docs/en/advanced_guides/customize_dataset.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/en/advanced_guides/customize_dataset.md b/docs/en/advanced_guides/customize_dataset.md index 5c8eaf9a3..9798ae5ad 100644 --- a/docs/en/advanced_guides/customize_dataset.md +++ b/docs/en/advanced_guides/customize_dataset.md @@ -23,7 +23,7 @@ Currently, we only support '.bin' format point cloud for training and inference. #### Label Format -The most basic information: 3D bounding box and category label of each scene need to be contained in annotation `.txt` file. Each line represents a 3D box in a certain scene as follow: +The most basic information: 3D bounding box and category label of each scene need to be contained in the annotation `.txt` file. Each line represents a 3D box in a certain scene as follow: ```python # format: [x, y, z, dx, dy, dz, yaw, category_name] @@ -59,7 +59,7 @@ lidar2cam4 #### LiDAR-Based 3D Detection -The raw data for LiDAR-Based 3D object detection are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `points` include point cloud data which are supposed to be stored in `.bin` format and `labels` includes label files for 3D detection. +The raw data for LiDAR-based 3D object detection are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `points` includes point cloud data which are supposed to be stored in `.bin` format and `labels` includes label files for 3D detection. ``` mmdetection3d @@ -83,7 +83,7 @@ mmdetection3d #### Vision-Based 3D Detection -The raw data for Vision-Based 3D object detection are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `images` contains the images from different cameras, for example, images from `camera_x` need to be placed in `images\images_x`. `calibs` contains calibration information files which store the camera intrinsic matrix of each camera, and `labels` includes label files for 3D detection. +The raw data for vision-based 3D object detection are typically organized as follows, where `ImageSets` contains split files indicating which files belong to training/validation set, `images` contains the images from different cameras, for example, images from `camera_x` need to be placed in `images\images_x`. `calibs` contains calibration information files which store the camera intrinsic matrix of each camera, and `labels` includes label files for 3D detection. ``` mmdetection3d @@ -115,7 +115,7 @@ mmdetection3d #### Multi-Modality 3D Detection -The raw data for Multi-Modality 3D object detection are typically organized as follows. Different from Vision-based 3D Object detection, calibration information files in `calibs` store the camera intrinsic matrix of each camera and extrinsic matrix. +The raw data for multi-modality 3D object detection are typically organized as follows. 
Different from vision-based 3D object detection, calibration information files in `calibs` store the intrinsic matrix of each camera as well as the lidar-to-camera extrinsic matrices. + + ``` mmdetection3d