Delving into Localization Errors for Monocular 3D Detection

By Xinzhu Ma, Yinmin Zhang, Dan Xu, Dongzhan Zhou, Shuai Yi, Haojie Li, Wanli Ouyang.

Introduction

This repository is an official implementation of the paper 'Delving into Localization Errors for Monocular 3D Detection'. In this work, through intensive diagnostic experiments, we quantify the impact introduced by each sub-task and find that localization error is the vital factor restricting monocular 3D detection. We also investigate the underlying reasons behind localization errors, analyze the issues they may bring, and propose three strategies accordingly.


Usage

Installation

This repo is tested in our local environment (python=3.6, cuda=9.0, pytorch=1.1), and we recommend you use Anaconda to create a virtual environment:

conda create -n monodle python=3.6

Then, activate the environment:

conda activate monodle

Install PyTorch:

conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch

and other requirements:

pip install -r requirements.txt
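
If you want to confirm that PyTorch and CUDA are wired up as expected before moving on, a quick sanity check along the lines below can help. This is only an illustrative snippet, not part of this repo:

# quick_env_check.py -- illustrative environment check (not part of this repo)
import torch
import torchvision

print('PyTorch:', torch.__version__)            # expected: 1.1.0
print('torchvision:', torchvision.__version__)  # expected: 0.3.0
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('CUDA version:', torch.version.cuda)  # expected: 9.0
    print('GPU:', torch.cuda.get_device_name(0))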

Data Preparation

Please download the KITTI dataset and organize the data as follows:

#ROOT
  |data/
    |KITTI/
      |ImageSets/ [already provided in this repo]
      |object/			
        |training/
          |calib/
          |image_2/
          |label/
        |testing/
          |calib/
          |image_2/
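
If you are unsure whether the data is laid out correctly, a small script like the following can verify the expected folders before training. It is only a sketch derived from the layout above, not a tool shipped with this repo:

# check_kitti_layout.py -- illustrative check of the layout above (not part of this repo)
import os

ROOT = '.'  # set this to your #ROOT
expected_dirs = [
    'data/KITTI/ImageSets',
    'data/KITTI/object/training/calib',
    'data/KITTI/object/training/image_2',
    'data/KITTI/object/training/label',
    'data/KITTI/object/testing/calib',
    'data/KITTI/object/testing/image_2',
]

for rel in expected_dirs:
    path = os.path.join(ROOT, rel)
    print('OK      ' if os.path.isdir(path) else 'MISSING ', path)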

Training & Evaluation

Move to the workspace and train the network:

 cd #ROOT
 cd experiments/example
 python ../../tools/train_val.py --config kitti_example.yaml

The model will be evaluated automatically when training completes. If you only want to evaluate your trained model (or the provided pre-trained model), modify the test part of the configuration in the .yaml file and use the following command:

python ../../tools/train_val.py --config kitti_example.yaml --e

For ease of use, we also provide a pre-trained checkpoint, which can be used for evaluation directly. See the table below for its performance.

                    AP40@Easy   AP40@Mod.   AP40@Hard
In original paper   17.45       13.66       11.68
In this repo        17.94       13.72       12.10
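
The checkpoint is a standard PyTorch file, so you can inspect it with torch.load before running evaluation if you want to see what it contains. The file name below is only a placeholder; point it at the checkpoint you downloaded and the path expected by your .yaml config:

# inspect_checkpoint.py -- illustrative only; the file name below is a placeholder
import torch

ckpt_path = 'checkpoint_epoch_140.pth'  # replace with the actual checkpoint path
ckpt = torch.load(ckpt_path, map_location='cpu')

# checkpoints are usually dicts; the exact keys depend on how they were saved
if isinstance(ckpt, dict):
    print('top-level keys:', list(ckpt.keys()))
else:
    print('loaded object of type:', type(ckpt))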

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Ma_2021_CVPR,
author = {Ma, Xinzhu and Zhang, Yinmin and Xu, Dan and Zhou, Dongzhan and Yi, Shuai and Li, Haojie and Ouyang, Wanli},
title = {Delving into Localization Errors for Monocular 3D Object Detection},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021}}

Acknowledgment

This repo benefits from the excellent work CenterNet. Please also consider citing it.

License

This project is released under the MIT License.

Contact

If you have any question about this project, please feel free to contact xinzhu.ma@sydney.edu.au.