Apply device context when converting to torch tensors #135

Merged
edknv merged 1 commit into NVIDIA-Merlin:main on Apr 17, 2023

Conversation

edknv (Contributor) commented on Apr 17, 2023

Similar to #132, but this PR applies the torch.cuda.device context only when converting numpy/cupy arrays to torch tensors. Unlike #132, it does not use cupy.cuda.Device, which did not work with TensorFlow, as discovered in the Models multi-GPU tests.
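
As a rough illustration of the idea (not the code from this PR), the conversion runs inside a `torch.cuda.device` context so the resulting tensor lands on the intended GPU rather than the default device. The `to_torch` helper and `device_index` argument below are hypothetical names used only for this sketch:

```python
import numpy as np
import torch

def to_torch(array, device_index: int) -> torch.Tensor:
    # Hypothetical helper: run the conversion under torch.cuda.device so any
    # CUDA allocation it triggers goes to the selected GPU, not device 0.
    with torch.cuda.device(device_index):
        if isinstance(array, np.ndarray):
            # Host (numpy) array: copy it onto the currently active CUDA device.
            return torch.as_tensor(array).cuda()
        # Device (cupy) array: torch.as_tensor reads __cuda_array_interface__,
        # giving a zero-copy tensor on the GPU the array already lives on.
        return torch.as_tensor(array)

t = to_torch(np.arange(4), device_index=0)
print(t.device)  # cuda:0
```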

Note: in addition to the fix in this PR, users still have to set cupy.cuda.Device() manually; see here for details, and the sketch below for what that step might look like. Follow-ups: enable setting the device in Core/Dataset, and add 2-GPU unit tests in the dataloader.
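
A minimal sketch of that manual step, assuming one dataloader per GPU; the device index is an example value, not something prescribed by this PR:

```python
import cupy as cp

# Select the CuPy device explicitly before building the dataset/dataloader,
# so that cupy allocations made while loading land on this GPU.
cp.cuda.Device(1).use()

# ... construct the Dataset / dataloader for this worker afterwards ...
```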

@edknv edknv self-assigned this Apr 17, 2023
@edknv edknv added the bug Something isn't working label Apr 17, 2023
@edknv edknv added this to the Merlin 23.04 milestone Apr 17, 2023
@edknv edknv marked this pull request as ready for review April 17, 2023 16:12
@edknv edknv merged commit d83285e into NVIDIA-Merlin:main Apr 17, 2023
@edknv edknv deleted the multi_gpu_cupy_device branch April 17, 2023 19:24