🐛 Bug

I get the following warning when using the Saliency or InputXGradient attribution method, but not with IntegratedGradients or GradientShap:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
warnings.warn("The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad "
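To Reproduce

A minimal sketch of the kind of script that triggers the warning; the toy model and variable names here are illustrative, not the original reproduction. Passing the output of an embedding layer (a non-leaf tensor) to Saliency is enough:

```python
import torch
import torch.nn as nn
from captum.attr import Saliency

# Toy text classifier, split so attributions can be computed with
# respect to the embedding output rather than the token ids.
class ToyClassifier(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=8, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward_from_embeddings(self, embedded):
        # embedded: (batch, seq_len, embed_dim)
        return self.fc(embedded.mean(dim=1))

model = ToyClassifier()
token_ids = torch.randint(0, 100, (4, 5))

# The embedding output requires grad (its weights do) but is not a
# leaf tensor, so accessing its .grad attribute emits the warning.
embedded = model.embedding(token_ids)

saliency = Saliency(model.forward_from_embeddings)
attr = saliency.attribute(embedded, target=0)  # UserWarning is emitted here
```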
Expected behavior

All algorithms should behave consistently. I think Captum either shouldn't raise the warning, or the tutorials that use models with embeddings should be corrected and a note added to the API docs.
Environment
PyTorch 1.6.0 + Captum 0.2 + Ubuntu 20.04
Additional context
I tested with the Saliency, InputXGradient, IntegratedGradients, and GradientShap gradient methods.
Finally, I think this bug is similar to #421.
Summary:
This removes the resetting of the grad attribute to zero, which causes the warnings mentioned in #491 and #421. Based on the torch [documentation](https://pytorch.org/docs/stable/autograd.html#torch.autograd.grad), resetting grad is only needed when using torch.autograd.backward, which accumulates results into the grad attribute of leaf nodes. Since we only use torch.autograd.grad (with only_inputs always set to True), the gradients obtained in Captum are never accumulated into grad attributes, so resetting the attribute is unnecessary.
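The distinction can be seen in a small, self-contained snippet (illustrative, not taken from the PR): torch.autograd.grad returns gradients directly and leaves .grad untouched, while backward() accumulates into .grad on leaf tensors.

```python
import torch

x = torch.ones(3, requires_grad=True)  # leaf tensor

# torch.autograd.grad returns the gradients and does not write to x.grad.
(gx,) = torch.autograd.grad((x * 2).sum(), x)
print(gx)      # tensor([2., 2., 2.])
print(x.grad)  # None -- nothing was accumulated, so there is nothing to reset

# backward() (i.e. torch.autograd.backward) is what accumulates into
# .grad of leaf tensors; only this path would need grad to be reset.
(x * 2).sum().backward()
print(x.grad)  # tensor([2., 2., 2.])
```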
This also adds a test to confirm that the grad attribute is not altered when gradients are utilized through Saliency.
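A rough sketch of what such a check could look like (hypothetical test; the actual test in the Captum suite may differ in name and structure):

```python
import torch
import torch.nn as nn
from captum.attr import Saliency

def test_saliency_does_not_alter_grad():
    model = nn.Linear(3, 1)
    inp = torch.randn(2, 3, requires_grad=True)

    # Populate inp.grad with an ordinary backward pass first.
    model(inp).sum().backward()
    grad_before = inp.grad.clone()

    # Computing Saliency attributions should leave inp.grad unchanged.
    Saliency(model).attribute(inp, target=0)
    assert torch.equal(inp.grad, grad_before)
```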
Pull Request resolved: #597
Reviewed By: bilalsal
Differential Revision: D26079970
Pulled By: vivekmig
fbshipit-source-id: f7ccee02a17f66ee75e2176f1b328672b057dbfa