OCCUQ: Exploring Efficient Uncertainty Quantification for 3D Occupancy Prediction

Severin Heidrich*1   Till Beemelmanns*1   Alexey Nekrasov*2   Bastian Leibe2   Lutz Eckstein1  
1Institute for Automotive Engineering, RWTH Aachen University, Germany  
2Computer Vision Institute, RWTH Aachen University, Germany  
*Denotes equal contribution

Abstract: Autonomous driving has the potential to significantly enhance productivity and provide numerous societal benefits. Ensuring robustness in these safety-critical systems is essential, particularly when vehicles must navigate adverse weather conditions and sensor corruptions that may not have been encountered during training. Current methods often overlook uncertainties arising from adversarial conditions or distributional shifts, limiting their real-world applicability. We propose an efficient adaptation of an uncertainty estimation technique for 3D occupancy prediction. Our method dynamically calibrates model confidence using epistemic uncertainty estimates. Our evaluation under various camera corruption scenarios, such as fog or missing cameras, demonstrates that our approach effectively quantifies epistemic uncertainty by assigning higher uncertainty values to unseen data. We introduce region-specific corruptions to simulate defects affecting only a single camera and validate our findings through both scene-level and region-level assessments. Our results show superior performance in Out-of-Distribution (OoD) detection and confidence calibration compared to common baselines such as Deep Ensembles and MC-Dropout. Our approach consistently demonstrates reliable uncertainty measures, indicating its potential for enhancing the robustness of autonomous driving systems in real-world scenarios.


News

  • [01/31/2025] OCCUQ has been accepted at ICRA 2025.

Introduction

Current autonomous driving methods use multi-camera setups to construct 3D occupancy maps, which consist of voxels representing space occupancy and different semantic classes, serving as input for trajectory planning and collision avoidance. While many approaches focus on dataset generation and model architecture improvements for 3D occupancy prediction, they often overlook uncertainties arising from adversarial conditions or distributional shifts, hindering real-world deployment.

In our work, we focus on the adaptation of an efficient uncertainty estimation method for 3D occupancy prediction. By incorporating an uncertainty module in the dense 3D occupancy detection head and separately training a Gaussian Mixture Model (GMM) at the feature level, we aim to disentangle aleatoric and epistemic uncertainty during inference.
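
The distinction can be illustrated with a small sketch (array names and shapes are hypothetical, not the actual OCCUQ code): a GMM is fitted on intermediate voxel features collected from the training set, and at inference the negative feature log-likelihood under that GMM serves as the epistemic uncertainty, while the predictive softmax entropy captures aleatoric uncertainty.

# Minimal sketch of feature-level GMM uncertainty (illustrative, not the OCCUQ implementation).
# Assumed inputs: train_feats (N x D voxel features), test_feats (M x D), test_logits (M x C).
import numpy as np
from scipy.special import softmax
from sklearn.mixture import GaussianMixture

def fit_feature_gmm(train_feats, n_components=16, seed=0):
    """Fit a GMM on feature vectors collected from the training set."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="full", random_state=seed).fit(train_feats)

def uncertainties(gmm, test_feats, test_logits):
    """Epistemic: negative feature log-likelihood. Aleatoric: predictive entropy."""
    epistemic = -gmm.score_samples(test_feats)             # low density -> high uncertainty
    probs = softmax(test_logits, axis=-1)
    aleatoric = -(probs * np.log(probs + 1e-12)).sum(-1)   # softmax entropy per voxel
    return epistemic, aleatoric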

Method

OCCUQ Overview

From multi-view camera images, our method provides 3D occupancy predictions with reliable epistemic and aleatoric uncertainties at the voxel level. We build on top of SurroundOCC and introduce an additional Uncertainty Quantification (UQ) module into the prediction head.
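
As a rough illustration of where such a module could sit (layer names and sizes are assumptions, not SurroundOCC's or OCCUQ's actual head), the UQ module can be thought of as a small MLP that maps decoder voxel features to both class logits and the feature embedding on which the GMM is later fitted:

# Hypothetical sketch of an occupancy head with an added UQ feature branch.
import torch.nn as nn

class OccHeadWithUQ(nn.Module):
    def __init__(self, in_dim=256, uq_dim=64, num_classes=17):
        super().__init__()
        self.uq_mlp = nn.Sequential(                    # UQ module: features for the GMM
            nn.Linear(in_dim, uq_dim), nn.ReLU(),
            nn.Linear(uq_dim, uq_dim),
        )
        self.cls_head = nn.Linear(uq_dim, num_classes)  # semantic occupancy logits

    def forward(self, voxel_feats):                     # voxel_feats: (num_voxels, in_dim)
        uq_feats = self.uq_mlp(voxel_feats)             # queried against the GMM at inference
        logits = self.cls_head(uq_feats)
        return logits, uq_feats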

Demo


Motorcycle


Scooter


Motorcycle Overtaking

Getting Started

Quick Start

1. Fit GMM

Download the trained model and fit the Gaussian Mixture Model (GMM) for uncertainty estimation. Run gmm_fit.py with the following command:

export CUDA_VISIBLE_DEVICES=0             # GPU to use
export PYTHONPATH=$PYTHONPATH:/workspace  # make the project modules importable

config=/workspace/projects/configs/occuq/occuq_mlpv5_sn.py   # model config
weight=/workspace/work_dirs/occuq_mlpv5_sn/epoch_6.pth       # downloaded checkpoint

python tools/gmm_fit.py \
$config \
$weight \
--eval bbox

2. Inference

Once the GMM is fitted, you can run model inference with GMM-based Uncertainty Quantification using the following command:

python tools/gmm_evaluate.py \
$config \
$weight \
--eval bbox
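
At inference, each voxel feature is scored under the fitted GMM. As a rough sketch of how such scoring can run directly on the GPU (tensor names and shapes are assumptions, not the repository's code), the mixture log-likelihood can be computed in batched form with PyTorch:

# Hedged sketch: batched GMM scoring on the GPU with PyTorch (not the repository's code).
# feats: (N, D) voxel features; means: (K, D); covariances: (K, D, D); weights: (K,).
import torch

def gmm_negative_log_likelihood(feats, means, covariances, weights):
    comp = torch.distributions.MultivariateNormal(means, covariance_matrix=covariances)
    log_probs = comp.log_prob(feats.unsqueeze(1))                # (N, K): log p(x | component k)
    log_mix = torch.logsumexp(log_probs + weights.log(), dim=1)  # mixture log-likelihood
    return -log_mix                                              # higher = more epistemic uncertainty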

3. OOD Detection

To reproduce the OOD detection results from the paper, see the gmm_multicorrupt_evaluate.sh script: it performs steps 1 and 2 and then iterates over the corruptions snow, fog, motionblur, brightness, and missingcamera, each at severity levels 1, 2, and 3. The OOD detection performance is then evaluated with scripts/ood_detection_evaluation.py, as sketched below.
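
A minimal sketch of how these OOD metrics (AUROC, AUPR, FPR95) could be computed from uncertainty scores with scikit-learn (variable names are illustrative; the actual evaluation lives in scripts/ood_detection_evaluation.py):

# Minimal sketch of the OOD metrics from uncertainty scores (illustrative, not the repo script).
# scores_id / scores_ood: 1D arrays of uncertainty values on clean vs. corrupted data.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def ood_metrics(scores_id, scores_ood):
    labels = np.concatenate([np.zeros_like(scores_id), np.ones_like(scores_ood)])
    scores = np.concatenate([scores_id, scores_ood])    # OOD data should get higher uncertainty
    auroc = roc_auc_score(labels, scores)
    aupr = average_precision_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    fpr95 = fpr[np.argmax(tpr >= 0.95)]                 # FPR at 95% true positive rate
    return auroc, aupr, fpr95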

Resolution 200x200x16 (Scale 3)

Measure              mAUROC ⬆️   mAUPR ⬆️   mFPR95 ⬇️
Softmax Entropy      54.63       56.21      94.47
Max. Softmax         56.16       57.52      93.17
GMM (Ours)           80.15       79.43      56.18

Results at other output scales

Resolution 100x100x8 (Scale 2)

Measure              mAUROC ⬆️   mAUPR ⬆️   mFPR95 ⬇️
Softmax Entropy      53.74       55.11      95.00
Max. Softmax         54.74       55.97      94.39
GMM (Ours)           75.60       74.85      69.65

Resolution 50x50x4 (Scale 1)

Measure              mAUROC ⬆️   mAUPR ⬆️   mFPR95 ⬇️
Softmax Entropy      51.07       52.25      95.79
Max. Softmax         52.32       53.24      94.90
GMM (Ours)           72.05       72.01      74.05

Resolution 20x20x2 (Scale 0)

Measure              mAUROC ⬆️   mAUPR ⬆️   mFPR95 ⬇️
Softmax Entropy      46.95       48.96      97.23
Max. Softmax         48.70       50.34      96.67
GMM (Ours)           57.93       61.44      90.77

Note: After refactoring the code and retraining the GMM, we obtained slightly different results than the values reported in our paper.

4. Generate Video

For video generation, run the following command:

python tools/gmm_video.py \
$config \
$weight \
--eval bbox

The voxel visualizations shown in the videos were generated with Mayavi. More detailed instructions will follow soon.
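
Until those instructions are available, a minimal Mayavi sketch for rendering occupied voxels as cubes could look as follows (array names, the free-space label, and the colormap are assumptions, not the repository's visualization code):

# Hedged sketch: render occupied voxels as cubes with Mayavi (not the repo's actual script).
# occ is assumed to be a (X, Y, Z) array of semantic class IDs, with 0 meaning "free".
import numpy as np
from mayavi import mlab

def show_voxels(occ, voxel_size=0.5):
    x, y, z = np.nonzero(occ)                    # indices of occupied voxels
    labels = occ[x, y, z].astype(float)          # color cubes by semantic class
    mlab.points3d(x * voxel_size, y * voxel_size, z * voxel_size, labels,
                  mode="cube", scale_mode="none", scale_factor=voxel_size,
                  colormap="viridis")
    mlab.show()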

TODOs

  • Upload MultiCorrupt dataset for evaluation
  • Add scripts for OOD detection
  • Explain which corruptions were used
  • Explain GMM GPU inference
  • Add scripts for Region OOD Detection
  • Add Monte Carlo Dropout and Deep Ensembles
  • Add Uncertainty Guided Temperature Scaling (UGTS)

Acknowledgement

Many thanks to these excellent projects ❤️

We thank the BMBF and EU for funding this project ❤️

This work has received funding from the European Union’s Horizon Europe Research and Innovation Programme under Grant Agreement No. 101076754 - AIthena project. The project was partially funded by the BMBF project “WestAI” (grant no. 01IS22094D).

Citation

If this work is helpful for your research, please consider citing it:

@inproceedings{heidrich2025occuq,
 title={{OCCUQ: Exploring Efficient Uncertainty Quantification for 3D Occupancy Prediction}},
 author={Heidrich, Severin and Beemelmanns, Till and Nekrasov, Alexey and Leibe, Bastian and Eckstein, Lutz},
 booktitle={International Conference on Robotics and Automation (ICRA)},
 year={2025}
}
