Thank you for your brilliant CUDA kernel for sparse spiking BP. However, I get a RuntimeError when executing the CUDA code.
The error occurs at line 459 of train_and_time.py:
grad_weights = s3gd_backward_cuda(spk_trace, aout_b, aout_t, aout_i, ds_out, nb_inputs, nb_hidden)
CUDA itself works fine for other commands, e.g. print(torch.tensor([1.0, 2.0]).cuda()) correctly returns tensor([1., 2.], device='cuda:0').
My PyTorch (1.7.1) and CUDA (11.0) versions are identical to your setting.
Do you have any idea what might cause this? Since it only happens in the s3gd kernel, I suspect the problem is hidden in the CUDA code. Could you provide some hints for this error?
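For reference, this is a minimal sketch of the check I can run on my side. It uses only plain PyTorch (no s3gd code) and sets CUDA_LAUNCH_BLOCKING=1, the standard CUDA debugging switch; re-running train_and_time.py with the same setting should make the RuntimeError point at the actual failing kernel launch rather than a later, unrelated line:

```python
import os
# Force synchronous kernel launches so a failing CUDA kernel raises the
# RuntimeError at the offending call instead of at a later operation.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

# Plain GPU sanity check (this already works on my machine):
print(torch.tensor([1.0, 2.0]).cuda())  # tensor([1., 2.], device='cuda:0')
```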