[Paper] [Code] [Video] [DeepREAL Lab]
This repository contains the PyTorch implementation of Distributionally Robust Explanations (DRE) from Are Data-driven Explanations Robust against Out-of-distribution Data? by Tang Li, Fengchun Qiao, Mengmeng Ma, and Xi Peng. If you find our code useful in your research, please consider citing:
@inproceedings{li2023dre,
title={Are Data-driven Explanations Robust against Out-of-distribution Data?},
author={Li, Tang and Qiao, Fengchun and Ma, Mengmeng and Peng, Xi},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2023}
}
We study the out-of-distribution (OOD) robustness of data-driven explanations. Our evaluations show that data-driven explanations are susceptible to distributional shifts. However, acquiring ground-truth explanations for all samples, or obtaining a one-to-one mapping between samples from different distributions, is prohibitively expensive or even impossible in practice. To this end, we propose Distributionally Robust Explanations (DRE), which, inspired by self-supervised learning, leverages mixed explanations to provide supervisory signals for the learning of explanations.
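As a rough illustration of the idea (not the exact implementation in this repository), a "mixed" explanation can be formed as a convex combination of two saliency maps, analogous to Mixup on inputs; the function and coefficient below are hypothetical:

```python
import numpy as np

def mix_explanations(expl_a, expl_b, lam=0.5):
    """Convexly combine two saliency maps (e.g., Grad-CAM heatmaps)
    to form a mixed explanation that can serve as a supervisory
    signal, analogous to Mixup on inputs. Illustrative sketch only."""
    return lam * expl_a + (1.0 - lam) * expl_b

# Toy 2x2 saliency maps for two samples.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
mixed = mix_explanations(a, b, lam=0.5)
```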
DRE models:
- Terra Incognita: [tst_env=0] [tst_env=1] [tst_env=2] [tst_env=3]
- VLCS: [tst_env=0] [tst_env=1] [tst_env=2] [tst_env=3]
This repository reproduces our results on Terra Incognita and VLCS. It is built upon Python 3, PyTorch v1.12.1, and CUDA v10.2 on Ubuntu 18.04. Please install all required packages by running:
pip install -r requirements.txt
To download the datasets, please run:
python download.py --data_dir=./
Please note that some URLs may be unavailable. If a download fails, you can copy the URL and download the file manually.
The results for explanation quality and prediction accuracy:
To reproduce the results of our DRE method, please run:
python -m dre.train \
--dataset terra_incognita \
--model DRE
To reproduce the results of the ERM baseline, please run:
python -m dre.train \
--dataset terra_incognita \
--model ERM
For other baselines, such as IRM, GroupDRO, and Mixup, please run the following (select the baseline method by changing the --algorithm argument):
python3 -m domainbed.scripts.train \
--data_dir=./data/ \
--algorithm IRM \
--dataset terra_incognita \
--test_env 0
The explanations using Grad-CAM:
To reproduce the explanation comparison between DRE and baseline methods, please run the notebooks in "./dre/explanations/visualizations/". For example, the Grad-CAM comparison between DRE and ERM:
./dre/explanations/visualizations/grad_cam_erm.ipynb
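For reference, the core Grad-CAM computation used in these comparisons can be sketched as follows. This is a minimal, framework-free version of the standard algorithm, not the notebook's exact code; it assumes you have already extracted a conv layer's activations and the gradients of the target class score with respect to them:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM: channel weights are the spatially averaged
    gradients; the map is a ReLU of the weighted activation sum.
    activations, gradients: (C, H, W) arrays from a conv layer."""
    weights = gradients.mean(axis=(1, 2))          # (C,) channel importance
    cam = np.tensordot(weights, activations, 1)    # (H, W) weighted sum
    cam = np.maximum(cam, 0)                       # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                      # normalize to [0, 1]
    return cam
```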
To reproduce the explanation fidelity results, please run:
python -m dre.explanations.fidelity.evaluate_auc \
--ckpt-path ../../ckpts/best_model.pth \
--root ../../../data/terra_incognita/location_38
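The exact fidelity metric is defined by the evaluation script; a common AUC-style fidelity measure is the deletion curve, sketched below under that assumption (the function name and interface are hypothetical, and `score_fn` stands in for a model's class-score function):

```python
import numpy as np

def deletion_auc(saliency, score_fn, image, steps=4):
    """Deletion-style fidelity: zero out pixels from most to least
    salient and track the model score; a faster drop (lower AUC)
    suggests a more faithful explanation. Illustrative sketch."""
    order = np.argsort(saliency.ravel())[::-1]     # most salient first
    img = image.copy()
    scores = [score_fn(img)]
    for idx in np.array_split(order, steps):
        img.ravel()[idx] = 0.0                     # delete a batch of pixels
        scores.append(score_fn(img))
    scores = np.asarray(scores, dtype=float)
    # Trapezoidal rule over equally spaced fractions of deleted pixels.
    return float(((scores[:-1] + scores[1:]) / 2).mean())
```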
- Training code
- Evaluation code
- Terra Incognita
- VLCS
- Urban Land
Part of our code is borrowed from the following repositories. We thank the authors for releasing their code; please also consider citing their works.