non-leaf tensor warning in NoiseTunnel usage #421
Thank you for bringing this up, @arnoldjulian! We can have a fix for this.
facebook-github-bot pushed a commit that referenced this issue on Jul 13, 2020:
Summary: This fixes the warning specifically related to NoiseTunnel in #421. In addition, I moved almost everything under no_grad in the attribute method, which will hopefully also help runtime performance. In `_forward_layer_eval` I had to add a `grad_enabled` flag to allow enabling gradients externally, since it is also needed in the `test_neuron_gradient.py` test case.
Pull Request resolved: #426 Reviewed By: vivekmig Differential Revision: D22500566 Pulled By: NarineK fbshipit-source-id: d3170e1711012593ff421b964a02e54532a95b13
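The pattern described in that commit message can be sketched roughly as follows. This is only an illustrative example assuming a generic `forward_fn`, not Captum's actual `_forward_layer_eval` implementation:

```python
import torch

# Illustrative sketch (not Captum internals): run the forward pass under
# no_grad by default, but let callers re-enable gradients via a flag,
# similar in spirit to the `grad_enabled` flag described above.
def forward_eval(forward_fn, inputs, grad_enabled=False):
    context = torch.enable_grad() if grad_enabled else torch.no_grad()
    with context:
        return forward_fn(inputs)
```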
Fixed in #426
NarineK added a commit to NarineK/captum-1 that referenced this issue on Nov 19, 2020:
…#426) Summary: This fixes the warning specifically related to NoiseTunnel in pytorch#421. In addition, I moved almost everything under no_grad in the attribute method, which will hopefully also help runtime performance. In `_forward_layer_eval` I had to add a `grad_enabled` flag to allow enabling gradients externally, since it is also needed in the `test_neuron_gradient.py` test case.
Pull Request resolved: pytorch#426 Reviewed By: vivekmig Differential Revision: D22500566 Pulled By: NarineK fbshipit-source-id: d3170e1711012593ff421b964a02e54532a95b13
facebook-github-bot pushed a commit that referenced this issue on Jan 27, 2021:
Summary: This removes the resetting of the grad attribute to zero, which was causing the warnings mentioned in #491 and #421. Based on the torch [documentation](https://pytorch.org/docs/stable/autograd.html#torch.autograd.grad), resetting grad is only needed when using torch.autograd.backward, which accumulates results into the grad attribute of leaf nodes. Since we only use torch.autograd.grad (with only_inputs always set to True), the gradients obtained in Captum are never accumulated into grad attributes, so resetting the attribute is not necessary. This also adds a test confirming that the grad attribute is not altered when gradients are computed through Saliency.
Pull Request resolved: #597 Reviewed By: bilalsal Differential Revision: D26079970 Pulled By: vivekmig fbshipit-source-id: f7ccee02a17f66ee75e2176f1b328672b057dbfa
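As a quick check of the distinction that commit message relies on, here is a minimal sketch (a toy tensor, not Captum code): `torch.autograd.grad` returns gradients directly and does not accumulate them into `.grad`, so there is nothing to reset.

```python
import torch

# torch.autograd.grad returns gradients directly; nothing is accumulated
# into .grad, unlike torch.autograd.backward.
x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()

grads = torch.autograd.grad(y, x)
print(grads[0])  # gradient of y w.r.t. x
print(x.grad)    # still None: no accumulation happened, so no reset is needed
```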
I'm still getting this warning.
I get the following warning when using the saliency attribution method in combination with NoiseTunnel:
A minimal working example (MWE):
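(The original code block is not reproduced here; the following is a minimal sketch of the setup described, assuming a toy model and the standard Captum Saliency/NoiseTunnel API.)

```python
import torch
import torch.nn as nn
from captum.attr import Saliency, NoiseTunnel

# Toy model and input; the model in the original MWE is an assumption here.
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 2))
model.eval()

inputs = torch.randn(1, 4, requires_grad=True)

saliency = Saliency(model)
nt = NoiseTunnel(saliency)

# Attribution via NoiseTunnel-wrapped Saliency; this is the combination
# for which the non-leaf tensor warning was observed.
attributions = nt.attribute(inputs, nt_type="smoothgrad", target=0)
```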
When using the saliency attribution method as defined above without NoiseTunnel I do not get any warning.
Based on the API reference for NoiseTunnel and Saliency, I would assume that `requires_grad` should be set to `True` for the input. And indeed, when removing the `requires_grad` statement I get the following warning:
The same warning also appears when using the saliency attribution method without NoiseTunnel.
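For reference, the non-leaf situation behind this kind of warning can be reproduced in isolation. This is a minimal sketch of my assumption about the mechanism, since NoiseTunnel perturbs the inputs before attribution:

```python
import torch

# A tensor created directly by the user is a leaf; the result of adding
# noise to it is not, and accessing .grad on such non-leaf tensors is what
# PyTorch's non-leaf warning refers to.
x = torch.randn(1, 4, requires_grad=True)   # leaf tensor
noisy = x + torch.randn_like(x)             # non-leaf tensor (op output)
print(x.is_leaf, noisy.is_leaf)             # True False
```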
When following the same approach as in the MWE but using IntegratedGradients instead of Saliency, no warnings are thrown.
Is there a way to use the saliency attribution method in combination with NoiseTunnel without getting any warnings?