This is the implementation of our IEEE Signal Processing Letters paper "MonoBooster: Semi-Dense Skip Connection with Cross-Level Attention for Boosting Self-Supervised Monocular Depth Estimation".
We use Python 3.8.13, CUDA 11.4, PyTorch 1.10.0, torchvision 0.11.0, and OpenCV 3.4.8 for training and evaluation.
For KITTI depth, download the KITTI raw dataset using the script provided on the official website. The data structure should be as follows (a small sanity-check sketch is given after the listing):
raw_data
├── 2011_09_26
├── 2011_09_28
├── 2011_09_29
├── 2011_09_30
└── 2011_10_03
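The snippet below is a minimal, optional sanity check (not part of this repository) that the raw_data folder contains the five KITTI recording-date directories listed above; the path in the example call is a placeholder.

# Optional sanity check for the KITTI raw_data layout shown above.
import os

EXPECTED_DATES = ["2011_09_26", "2011_09_28", "2011_09_29", "2011_09_30", "2011_10_03"]

def check_kitti_raw(root):
    # Report any missing KITTI recording-date folders under `root`.
    missing = [d for d in EXPECTED_DATES if not os.path.isdir(os.path.join(root, d))]
    if missing:
        print("Missing date folders:", ", ".join(missing))
    else:
        print("All expected KITTI raw date folders found.")

# Example: check_kitti_raw("/path/to/your/kitti/raw_data/root")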
In the main directory, run:
python main.py --gpu [gpu id] --dataset kitti_raw --kitti_raw_root [/path/to/your/kitti/raw_data/root] --kitti_raw_txt ./splits/eigen_zhou/train_files.txt
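The --kitti_raw_txt argument points to the eigen_zhou training split. Assuming the split files follow the monodepth2 convention of one "folder frame_index side" triplet per line (an assumption, since the format is not spelled out here), a quick way to inspect the split is:

# Sketch for inspecting the training split; assumes each line looks like
# "2011_09_26/2011_09_26_drive_XXXX_sync <frame_index> <l_or_r>".
def parse_split_line(line):
    folder, frame_index, side = line.split()
    return folder, int(frame_index), side

with open("./splits/eigen_zhou/train_files.txt") as f:
    samples = [parse_split_line(line) for line in f if line.strip()]
print(len(samples), "training samples")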
We provide pre-trained models here for evaluation.
Run the following commands to generate the ground-truth files for testing on the Eigen split.
cd ./splits/eigen
python export_gt_depth.py --data_path /path/to/your/kitti/raw_data/root
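If export_gt_depth.py follows the monodepth2 convention (an assumption; adjust the file name and key if this repository differs), it writes a compressed gt_depths.npz next to the split files, which can be inspected as follows:

# Sketch for inspecting the exported ground truth, assuming the script writes
# ./splits/eigen/gt_depths.npz with the depth maps stored under the "data" key.
import numpy as np

gt = np.load("./splits/eigen/gt_depths.npz", allow_pickle=True)["data"]
print(len(gt), "ground-truth depth maps; first map shape:", gt[0].shape)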
In the main directory, run:
python eval_kitti.py --gpu [gpu id] --pretrained_model [/path/to/saved/checkpoints] --raw_base_dir [/path/to/your/kitti/raw_data/root]
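For reference, eval_kitti.py reports the standard Eigen-split depth metrics. The sketch below shows how these metrics are conventionally computed from matched ground-truth and predicted depths; it is a generic illustration, not the exact code of this repository:

# Generic sketch of the standard monocular depth metrics (abs_rel, sq_rel,
# rmse, rmse_log, and the delta accuracy thresholds).
import numpy as np

def depth_metrics(gt, pred):
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3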
The code is released under the MIT license.
Acknowledgement: https://github.com/nianticlabs/monodepth2
If you find our code useful, please cite:
@ARTICLE{monobooster/spl24,
  author={Wang, Changhao and Zhang, Guanwen and Cheng, Zhengyun and Zhou, Wei},
  journal={IEEE Signal Processing Letters},
  title={MonoBooster: Semi-Dense Skip Connection With Cross-Level Attention for Boosting Self-Supervised Monocular Depth Estimation},
  year={2024},
  volume={31},
  pages={3069-3073},
}