LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels
Tuo Feng, Wenguan Wang, Fan Ma, Yi Yang
This is the official implementation of "LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels" (Accepted at CVPR 2024).
Autonomous systems need to process large-scale, sparse, and irregular point clouds with limited compute resources. Consequently, it is essential to develop LiDAR perception methods that are both efficient and effective. Although naively enlarging the 3D kernel size can enhance performance, it also leads to cubically increasing overhead. Therefore, it is crucial to develop streamlined 3D large-kernel designs that eliminate redundant weights and work effectively with larger kernels. In this paper, we propose an efficient and effective Large Sparse Kernel 3D Neural Network (LSK3DNet) that leverages dynamic pruning to amplify the 3D kernel size. Our method comprises two core components: Spatial-wise Dynamic Sparsity (SDS) and Channel-wise Weight Selection (CWS). SDS dynamically prunes and regrows volumetric weights from the beginning of training to learn a large sparse 3D kernel. It not only boosts performance but also significantly reduces model size and computational cost. Moreover, CWS selects the most important channels for 3D convolution during training and subsequently prunes the redundant channels to accelerate inference for 3D vision tasks. We demonstrate the effectiveness of LSK3DNet on three benchmark datasets and five tracks, comparing against classical models and large-kernel designs. Notably, LSK3DNet achieves state-of-the-art performance on SemanticKITTI (i.e., 75.6% on single-scan and 63.4% on multi-scan), with roughly 40% model size reduction and 60% reduction in computing operations compared to the naive large 3D kernel model.
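To make the two components concrete, below is a minimal PyTorch sketch of one SDS prune-and-regrow step and a CWS-style channel ranking. The function names, the magnitude-based pruning criterion, and the random regrowth are illustrative assumptions for exposition, not the exact implementation in network/:

```python
import torch

@torch.no_grad()
def sds_step(weight: torch.Tensor, mask: torch.Tensor, drop_frac: float = 0.1) -> torch.Tensor:
    """One illustrative prune-and-regrow update for a large sparse 3D kernel.

    weight: (C_out, C_in, k, k, k) kernel weights.
    mask:   binary tensor of the same shape; 1 marks an active weight.
    """
    w, m = weight.view(-1), mask.view(-1)
    active = m.bool()
    n_drop = int(drop_frac * int(active.sum()))
    if n_drop == 0:
        return mask

    # Positions that were free before this step; regrowing only here avoids
    # immediately reviving the weights pruned below.
    free_idx = (~active).nonzero(as_tuple=False).squeeze(1)

    # Prune: deactivate the n_drop active weights with the smallest magnitude.
    scores = w.abs().masked_fill(~active, float('inf'))
    m[torch.topk(scores, n_drop, largest=False).indices] = 0

    # Regrow: reactivate n_drop free positions (random here; gradient-based
    # criteria are a common alternative in dynamic sparse training).
    grow_idx = free_idx[torch.randperm(free_idx.numel())[:n_drop]]
    m[grow_idx] = 1
    w[grow_idx] = 0.0  # newly grown weights start from zero
    return mask

@torch.no_grad()
def cws_select(weight: torch.Tensor, keep: int) -> torch.Tensor:
    """Illustrative CWS-style ranking: score output channels by the L1 norm
    of their kernels and return the indices of the `keep` strongest ones."""
    importance = weight.abs().sum(dim=(1, 2, 3, 4))  # one score per output channel
    return torch.topk(importance, keep).indices
```

In this sketch, the mask would be applied to the kernel before each forward pass during training (e.g., conv.weight.data.mul_(mask)), and at deployment the channels outside the selected set can be physically removed to accelerate inference.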
Below is an overview of the key folders and scripts:
.
├── builder/ # Scripts to build or initialize models
├── config/ # Configuration files (.yaml) for training/testing
├── dataloader/ # Data loading scripts
├── network/ # Neural network architectures
├── utils/ # Utility functions
├── test_skitti.py # Script to test on the SemanticKITTI dataset
├── train_skitti.py # Training script for SemanticKITTI
├── test_nusc.py # Script to test on the nuScenes dataset
├── train_nusc.py # Training script for nuScenes
└── README.md # This README file
- Python 3.7+
- PyTorch >= 1.11.0
- c_gen_normal_map: refer to c_utils/README.md for details
Use the following command to install the required libraries:
pip install -r requirements.txt
- Download the SemanticKITTI dataset.
- Extract it into ./dataset/SemanticKitti/.
Example folder structure:
./dataset/SemanticKitti/
└── sequences/
├── 00/
│ ├── velodyne/ # .bin files
│ ├── labels/ # .label files
│ └── calib.txt
├── 08/ # validation
├── 11/ # testing
└── ...
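After extraction, each velodyne/*.bin file stores the scan as float32 (x, y, z, remission) quadruples, and each labels/*.label file stores one uint32 per point, whose lower 16 bits encode the semantic class and upper 16 bits the instance id. A quick sanity check (the frame path is illustrative):

```python
import numpy as np

# Illustrative frame from sequence 00; any (sequence, frame) pair works.
scan_path = './dataset/SemanticKitti/sequences/00/velodyne/000000.bin'
label_path = './dataset/SemanticKitti/sequences/00/labels/000000.label'

points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)  # x, y, z, remission
labels = np.fromfile(label_path, dtype=np.uint32)
semantic = labels & 0xFFFF   # lower 16 bits: semantic class
instance = labels >> 16      # upper 16 bits: instance id

assert points.shape[0] == labels.shape[0], 'point/label count mismatch'
print(points.shape, np.unique(semantic))
```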
- Download the Full dataset (v1.0) from nuScenes with the lidarseg annotations.
- Extract everything to ./dataset/nuscenes/.
Example folder structure:
./dataset/nuscenes/
├── v1.0-trainval
├── samples
├── sweeps
├── maps
├── lidarseg
└── ...
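To verify the extraction, the official nuscenes-devkit (pip install nuscenes-devkit) can load the tables from this layout; a minimal check, assuming the devkit is installed:

```python
from nuscenes.nuscenes import NuScenes

# Loads the v1.0-trainval tables from the layout above; the devkit also picks
# up the lidarseg annotations when the lidarseg/ folder is present.
nusc = NuScenes(version='v1.0-trainval', dataroot='./dataset/nuscenes', verbose=True)

sample = nusc.sample[0]
lidar_token = sample['data']['LIDAR_TOP']
print(nusc.get('sample_data', lidar_token)['filename'])  # path to the point cloud
print(len(nusc.lidarseg), 'lidarseg annotation records')
```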
Use the corresponding training script (train_skitti.py for SemanticKITTI or train_nusc.py for nuScenes) to train a model:
CUDA_VISIBLE_DEVICES=0,1 python train_xxxx.py | tee output/opensource_ks9_64.txt
Use the corresponding test script (test_skitti.py or test_nusc.py) to evaluate the trained models:
CUDA_VISIBLE_DEVICES=0,1 python test_xxxx.py
Our pretrained models are listed below.
| Model | mIoU (TTA) | Download |
|---|---|---|
| LSK3DNet (SemanticKITTI val) | 70.2% | Pretrained model |
| LSK3DNet (nuScenes val) | 80.1% | Pretrained model |
The SemanticKITTI benchmark test results are accessible at this link under the account Cluster3DSeg_, while the ScanNet test results can be viewed at this link under the method entry LSK3DNet.
Our work references or builds upon several open-source projects; we thank the authors for their open-source contributions.
This project is released under the MIT License.
If you find the code useful in your research, please consider citing our paper:
@inproceedings{feng2024lsk3dnet,
title={{LSK3DNet}: Towards Effective and Efficient 3D Perception with Large Sparse Kernels},
author={Feng, Tuo and Wang, Wenguan and Ma, Fan and Yang, Yi},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={14916--14927},
year={2024}
}
Contact
For any comments or questions, please email: feng.tuo@student.uts.edu.au.