Select targets gpu fix #317
Conversation
The current build is now failing! The test code has duplication; discuss with the maintainer how to refactor.
On my machine, isort is making the wrong changes. I am running it on Windows, if that has any impact.
Looks great, thanks so much for adding this fix @kai-tub! The `_target_batch_test_assert` method works well there; just one small change to avoid code duplication.
Regarding the question of whether to automatically move the target tensor, I think it might be better for the user to be responsible for this when they provide a tensor. Throughout the algorithms, there are multiple tensor arguments, such as baselines, masks, etc., which are not automatically moved to the appropriate device, so for consistency, it would likely be best to leave the behavior as is for now. What do you think @NarineK ?
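To make that convention concrete, here is a minimal sketch (an illustration of the suggested usage, not Captum API; variable names are hypothetical) of the caller keeping the target tensor on the same device as the inputs, just as they already must for baselines or masks:

```python
import torch

# Sketch of the convention discussed above: the caller, not the library,
# places the target tensor on the same device as the inputs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

inputs = torch.randn(4, 3, device=device)
target = torch.tensor([0, 1, 1, 0])

# The caller moves the target explicitly before invoking an attribution method.
target = target.to(inputs.device)

assert target.device == inputs.device
```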
tests/attr/test_targets.py (outdated diff excerpt):

    target = torch.tensor([0, 1, 1, 0]).cuda()
    self._target_batch_test_assert(Saliency, net, inputs=inp, targets=target)

    def _target_batch_test_assert(
Can you share this method between the GPUTest and Test classes by putting it after the classes (and removing the copy in the Test class as well)? The first argument can be renamed to test instead of self, and self can be passed as the first argument at each call site: essentially, replace `self._target_batch_test_assert(` with `_target_batch_test_assert(self, `.
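A minimal sketch of that refactor (the test bodies here are placeholders; the real assertions live in tests/attr/test_targets.py): the helper becomes a module-level function placed after both classes, taking the TestCase instance as its first argument so both classes can share it:

```python
import unittest

class Test(unittest.TestCase):
    def test_cpu_targets(self):
        # was: self._target_batch_test_assert(...)
        _target_batch_test_assert(self, [0, 1, 1, 0], [0, 1, 1, 0])

class GPUTest(unittest.TestCase):
    def test_gpu_targets(self):
        # in the real suite, the tensors would be moved to CUDA first
        _target_batch_test_assert(self, [1, 0], [1, 0])

# Shared helper placed after both classes, as suggested. `test` replaces
# `self` and is the TestCase instance, so assertion failures are reported
# against the calling test. The body is a placeholder for the real checks.
def _target_batch_test_assert(test, outputs, expected):
    test.assertEqual(list(outputs), list(expected))
```

Because Python resolves module-level names at call time, the helper can be defined after the classes that use it.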
Thank you for working on the PR, @kai-tub, and for reviewing, @vivekmig! I agree that we don't want to do that check here. If we did, we would have to do it in other places as well, and I don't think it is a critical check: the user will get an error about the wrong device at some point if they provide it incorrectly.
Thanks! I've changed the tests accordingly. I probably could have figured that one out on my own. But I would like to ask what version of isort you are using: it makes a lot of different changes to the current repo, and those would be rejected. Maybe you could add a note about which specific version has to be used?
This looks great @kai-tub , thanks again! I'll go ahead and merge this PR.
Regarding the isort issues, thanks for bringing them to our attention! We're also using version 4.3.21, so it doesn't seem to be a version issue. I will need to debug further and possibly test on a Windows machine in case that makes a difference. For now, I'll make the CircleCI output verbose so that it shows the proposed changes, which should make things easier when running isort locally doesn't work.
@vivekmig has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:

Hey,

Here is my proposed fix for https://github.com/pytorch/captum/issues/316.

I've added a test case that uses the BaseGPUTest class and simply runs the saliency target tests again, with everything on the GPU. As I did not fully understand the `_target_batch_test_assert` function and what the requirements for the tests are, I simply copied the function for now. I assume there may be an obvious way to integrate these tests. :)

Another question: do we want to enforce the move if we receive a target tensor that is not on the right device? We could print a UserWarning, similar to automatically setting requires_grad as in gradient.py:33. Or should the user be responsible for moving the target tensor to the correct device?

Best regards,
Kai

Pull Request resolved: pytorch#317
Reviewed By: NarineK
Differential Revision: D20371529
Pulled By: vivekmig
fbshipit-source-id: 92461d358f589bf47487d6170fac6a54e1d83123