
RuntimeError #57

Open
Jayku88 opened this issue Sep 14, 2023 · 1 comment

Jayku88 commented Sep 14, 2023

Traceback (most recent call last):
  File "train.py", line 907, in <module>
    main()
  File "train.py", line 97, in main
    main_worker(args.train_gpu, args.ngpus_per_node, args)
  File "train.py", line 415, in main_worker
    loss_train, mIoU_train, mAcc_train, allAcc_train = train(train_loader, model, criterion, optimizer, epoch, scaler, scheduler, gpu)
  File "train.py", line 519, in train
    scaler.scale(loss).backward()
  File "/home/vrlabhlbs/anaconda3/envs/spheretest/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/vrlabhlbs/anaconda3/envs/spheretest/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: transform: failed to synchronize: cudaErrorAssert: device-side assert triggered
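
A "device-side assert triggered" raised during `loss.backward()` usually means an assert fired in an earlier CUDA kernel (a common cause is class labels outside the valid range reaching the loss), so the Python frame shown above is not necessarily the real failure site. Below is a minimal debugging sketch, not code from this repository: it sets `CUDA_LAUNCH_BLOCKING=1` so the failing kernel is reported at its actual call site, and sanity-checks the labels before the loss. The names `target`, `num_classes`, and `ignore_index` are assumptions for illustration, not identifiers from train.py.

```python
# Hedged debugging sketch (not from this repo): surface the real failure site
# and rule out out-of-range labels, a common cause of device-side asserts.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before the first CUDA call

import torch

def check_targets(target: torch.Tensor, num_classes: int, ignore_index: int = 255) -> None:
    """Raise a readable error if any label lies outside [0, num_classes)."""
    valid = target[target != ignore_index]
    if valid.numel() > 0 and (valid.min() < 0 or valid.max() >= num_classes):
        raise ValueError(
            f"labels out of range: min={int(valid.min())}, "
            f"max={int(valid.max())}, num_classes={num_classes}"
        )
```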


Jaywxy commented Sep 16, 2023

I got the same error.
