Why not scale values when attribution values smaller than 1e-5? #393
Is there any update on this?
Hi @margiki, we have this check to try to catch instances where attribution values are all approximately 0, and to avoid cases where the user may be misled by visual artifacts in the attribution maps which might be magnifying small magnitude differences when normalizing (e.g. noise or floating point error). By not normalizing in these cases, the visualization would indicate that values are all approximately 0, and if outliers exist, they would be particularly salient. Do you have a use-case where normalization below this magnitude is meaningful? Alternatively, you can normalize attributions prior to calling visualize_image_attr and set the outlier_perc argument to 0.
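The workaround suggested above (pre-normalizing, then passing `outlier_perc=0`) could be sketched as follows; the `prenormalize` helper is illustrative and not part of Captum:

```python
import numpy as np

def prenormalize(attr, eps=1e-20):
    # Illustrative helper: scale so the max absolute value is 1,
    # guarding against division by zero with a small eps.
    return attr / (np.abs(attr).max() + eps)

# Tiny attributions, as in the use case described below
attr = np.random.randn(8, 8, 3) * 1e-6
attr_norm = prenormalize(attr)

# Visualization would then keep the full range, e.g.:
# from captum.attr import visualization as viz
# viz.visualize_image_attr(attr_norm, original_image, outlier_perc=0)
```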
**Use-case:** On robust models, the gradients with respect to the input are very small (see picture below, where the x-axis represents the attributions before rescaling). Notice that the range is around 1e-3. Using SmoothGrad, the gradients are around 1e-5 to 1e-6, which creates issues with Captum.

**Issue with the current warning:** In Jupyter this warning wasn't printed, which cost me hours of digging into Captum to understand why the saliency map was essentially white (because the inputs weren't scaled).

**Potential solution:**
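For concreteness, a small sketch (with made-up magnitudes matching the ones described) of how attributions at SmoothGrad scale fall below the 1e-5 threshold that gates normalization:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical attributions at the magnitudes described (~1e-6)
attr = rng.standard_normal((224, 224, 3)) * 1e-6

# The scale factor is roughly the max absolute value after clipping
# outliers (here outlier_perc=2, i.e. the 98th percentile)
scale_factor = np.percentile(np.abs(attr), 98)

# Pre-fix, a factor below 1e-5 meant scaling was skipped entirely,
# producing the essentially white saliency map described above.
print(abs(scale_factor) < 1e-5)
```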
Thank you very much @margiki for the useful insights.
Awesome! Maybe the best approach for users is to do the scaling anyway, and emit a warning if the values were small. What do you think? I'm happy to make a pull request.
@margiki Thanks for the details on your use case, makes sense! I agree, the cleanest solution is probably just to do the scaling regardless and update the warning message accordingly. If you want to make the pull request with the change, that would be great, thanks!
@margiki, @vivekmig, do you still want to work on the PR? Can we close this issue?
Thank you for the heads up! @vivekmig, please go ahead as you initially proposed and plan this change for a future release :)
Summary: Addresses issue #393: continue scaling small-magnitude attributions with a warning, asserting only when the scale factor is 0.
Pull Request resolved: #458
Reviewed By: bilalsal
Differential Revision: D23347489
Pulled By: vivekmig
fbshipit-source-id: 816a0ca98119a4fe7726325fcbd63dd0ce21f3c6
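The behavior described in the summary (scale regardless, warn on tiny magnitudes, assert only on an exact zero) can be sketched roughly as follows; this is a simplified sketch, not the exact Captum implementation:

```python
import warnings
import numpy as np

def normalize_scale(attr, scale_factor):
    # Sketch of the fixed behavior: always scale, warn when the
    # factor is approximately 0, and only fail when it is exactly 0.
    assert scale_factor != 0, "Cannot normalize by scale factor = 0"
    if abs(scale_factor) < 1e-5:
        warnings.warn(
            "Attempting to normalize by a value approximately 0; "
            "visualized results may be misleading."
        )
    return np.clip(attr / scale_factor, -1, 1)
```

With this, the white-map symptom disappears: tiny attributions are still stretched to a visible range, and the warning is the only trace of the small magnitude.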
When displaying the attribution, you normalise and scale the values.
However, why do you skip normalising if the scaling factor (i.e. the max value after removing outliers) is below 1e-5?