[DistDGL] Dataloader throws error when sampler is not 0 for torch versions > 1.12 #5731
Comments
Could you add more details about how to reproduce this issue? Please share the key part of …
This issue happens even with …
When reproducing this issue, I hit another known issue: #5528 (comment)
@Rhett-Ying I reproduced the issue on GraphStorm: awslabs/graphstorm#199
Please check if it is a duplicate of #5480, due to a bug from PyT's …
This issue has been automatically marked as stale due to lack of activity. It will be closed if no further activity occurs. Thank you.
🐛 Bug
The dataloader cannot handle the case where the number of samplers is greater than 0 in distributed training with PyTorch versions > 1.12. The same script runs fine with PyTorch 1.12.
Will add more details.
To Reproduce
Steps to reproduce the behavior:
(Will add more details)
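Since detailed steps have not been posted yet, below is a minimal sketch of the kind of DistDGL training script that appears to trigger the error, assuming the report refers to sampler worker processes enabled at launch time (e.g. a non-zero `--num_samplers` passed to `tools/launch.py`). The graph name, partition config path, mask field, and fan-outs are placeholders, not taken from the report.

```python
# Hypothetical minimal reproduction sketch for DistDGL with sampler processes.
# Assumptions: graph name "mygraph", partition config path, "train_mask" field,
# and fan-outs are placeholders. Sampler subprocess count is controlled by the
# launch tool, not by this script.
import argparse

import dgl
import torch as th

parser = argparse.ArgumentParser()
parser.add_argument("--ip_config", type=str, default="ip_config.txt")
parser.add_argument("--part_config", type=str, default="data/mygraph.json")
args = parser.parse_args()

# Initialize DistDGL; sampler worker processes are attached here when the job
# is launched with a non-zero number of samplers.
dgl.distributed.initialize(args.ip_config)
th.distributed.init_process_group(backend="gloo")

g = dgl.distributed.DistGraph("mygraph", part_config=args.part_config)
train_nids = dgl.distributed.node_split(
    g.ndata["train_mask"], g.get_partition_book()
)

sampler = dgl.dataloading.NeighborSampler([10, 25])
dataloader = dgl.dataloading.DistNodeDataLoader(
    g, train_nids, sampler, batch_size=1024, shuffle=True, drop_last=False
)

# With PyTorch 1.12 this loop runs; with newer PyTorch versions it reportedly
# fails as soon as sampler processes are enabled.
for step, (input_nodes, output_nodes, blocks) in enumerate(dataloader):
    pass
```

Presumably the script would be started with the DGL launch tool, e.g. `python tools/launch.py --num_samplers 1 ...`, since that is where the number of sampler processes is configured; the exact command used by the reporter is not given in the issue.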
Expected behavior
Environment

How you installed DGL (conda, pip, source): pip

Additional context