
XRAI Saliency Card

XRAI is a region-based extension of feature-based saliency methods.

Methodology

XRAI converts feature-based saliency methods into region-based saliency. It over-segments the input into many regions, computes saliency using another saliency method (e.g., integrated gradients or guided integrated gradients), and sums the saliency within each region.
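The aggregation step above can be sketched as follows. This is a hedged, minimal illustration (not the official XRAI implementation, which additionally grows a region mask greedily by per-area saliency gain); it assumes a per-pixel saliency map from some base method and a precomputed over-segmentation label map.

```python
# Sketch of XRAI's core aggregation step (illustrative, not the
# official implementation): sum per-pixel saliency within each
# segmentation region, then rank regions by total saliency.
import numpy as np

def region_saliency(pixel_saliency: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Sum per-pixel saliency over each segment label (0..K-1)."""
    # np.bincount sums the saliency values grouped by segment label.
    return np.bincount(segments.ravel(), weights=pixel_saliency.ravel())

# Toy example: a 4x4 saliency map split into four 2x2 segments.
saliency = np.arange(16, dtype=float).reshape(4, 4)
segments = np.repeat(np.repeat(np.arange(4).reshape(2, 2), 2, axis=0), 2, axis=1)
totals = region_saliency(saliency, segments)
ranked = np.argsort(totals)[::-1]  # most salient region first
```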

Developed by: Andrei Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry at Google.

References:

Implementations and Tutorials:

Example: The XRAI saliency map (right) on an ImageNet image for the class samoyed (left) using an Inception v3 model. This example is from the Google PAIR Blog on XRAI.

Example of XRAI on an image of a samoyed puppy. The saliency highlights the dog.

Determinism

XRAI is deterministic unless its underlying saliency method or segmentation method is non-deterministic.

Hyperparameter Dependence

XRAI relies on a choice of saliency method and segmentation method. It inherits the hyperparameter dependence of its saliency method (integrated gradients and guided integrated gradients are used in the original paper). The regions will depend on the segmentation method and its parameters (the original paper uses Felzenszwalb image segmentation).
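To make the segmentation dependence concrete, here is a hedged toy illustration (using hand-built label maps rather than the paper's Felzenszwalb segmentation): aggregating the same pixel saliency over a coarse versus a fine segmentation yields different region attributions, so XRAI's output shifts with segmentation granularity.

```python
# Illustrative only: the same saliency map aggregated over two
# different segmentations gives different region-level attributions.
import numpy as np

saliency = np.array([[0.0, 1.0, 2.0, 3.0],
                     [0.0, 1.0, 2.0, 3.0]])

# Fine segmentation: four 2x1 column regions (labels 0-3).
fine = np.tile(np.arange(4), (2, 1))
# Coarse segmentation: two 2x2 halves (labels 0-1).
coarse = np.tile(np.repeat(np.arange(2), 2), (2, 1))

fine_totals = np.bincount(fine.ravel(), weights=saliency.ravel())
coarse_totals = np.bincount(coarse.ravel(), weights=saliency.ravel())
# The coarse segmentation merges weakly and strongly salient columns,
# blurring the attribution that the fine segmentation keeps separate.
```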

Model Agnosticism

XRAI requires input features that can be meaningfully segmented. It inherits the model agnosticism of its underlying saliency method.

Computational Efficiency

Computing XRAI takes on the order of $1\mathrm{e}{1}$ seconds using the Captum implementation on a 224x224x3 dimensional ImageNet image, ResNet50 model, and one NVIDIA G100 GPU.
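Most of that runtime comes from the underlying saliency method (e.g., integrated gradients, which requires many forward/backward passes); the region aggregation itself is cheap. A rough, hedged timing sketch with stand-in arrays (no model involved):

```python
# Timing sketch with random stand-ins for the pixel saliency and the
# over-segmentation; only the aggregation step is measured.
import time
import numpy as np

rng = np.random.default_rng(0)
saliency = rng.random((224, 224))            # stand-in pixel saliency
segments = rng.integers(0, 500, (224, 224))  # stand-in segment labels

start = time.perf_counter()
totals = np.bincount(segments.ravel(), weights=saliency.ravel(), minlength=500)
elapsed = time.perf_counter() - start
# For a 224x224 map this is typically well under a millisecond,
# negligible next to the base attribution's model passes.
```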

Semantic Directness

XRAI represents the importance of input regions. Its semantic directness depends on the semantic directness of its underlying saliency method.

Sensitivity Testing

Input Sensitivity

Not tested for input sensitivity.

Label Sensitivity

Not tested for label sensitivity.

Model Sensitivity

🟥 Model Weight Randomization: XRAI fails the model weight randomization test: its saliency maps did not fully randomize even when all model weights were randomized. Evaluated on SIIM-ACR Pneumothorax and RSNA Pneumonia medical images.

🟢 Repeatability: XRAI has the highest repeatability across saliency methods (vanilla gradients, integrated gradients, SmoothGrad, Grad-CAM, guided backpropagation, and guided Grad-CAM) and outperformed the baseline. Evaluated using Inception v3 models on SIIM-ACR Pneumothorax and RSNA Pneumonia medical images.

🟢 Reproducibility: XRAI has the highest reproducibility across saliency methods (vanilla gradients, integrated gradients, SmoothGrad, Grad-CAM, guided backpropagation, and guided Grad-CAM). Evaluated using Inception v3 and DenseNet-121 on SIIM-ACR Pneumothorax and RSNA Pneumonia medical images.

Perceptibility Testing

Minimality

Not tested for minimality.

Perceptual Correspondence

🟢 Localization Utility: XRAI passes the localization utility test. It outperformed the other saliency methods (vanilla gradients, integrated gradients, SmoothGrad, Grad-CAM, guided backpropagation, and guided Grad-CAM) and the average saliency map. Evaluated using Inception v3 and DenseNet-121 on SIIM-ACR Pneumothorax and RSNA Pneumonia medical images.

Citation

BibTeX:

@inproceedings{xrai,
  author    = {Andrei Kapishnikov and
               Tolga Bolukbasi and
               Fernanda B. Vi{\'{e}}gas and
               Michael Terry},
  title     = {{XRAI:} {B}etter Attributions Through Regions},
  booktitle = {Proceedings of the International Conference on Computer Vision ({ICCV})},
  pages     = {4947--4956},
  publisher = {{IEEE}},
  year      = {2019},
}