I have run this code after changing some parameters.

1. Smaller channel counts:

```python
context_channels = 384 // 4
forecast_steps = 20
forenum = 20
input_channels = 3  # using my own dataset
```

2. Hardware:

- CPU: Intel(R) Xeon(R) Silver 4210R @ 2.40GHz, 9 cores
- GPU: 1x RTX 3090
- RAM: 256 GB
- PyTorch 1.10.0
- Python 3.8
- CUDA 11.3

3. Training parameters:

```python
traindataset = MyDataset(
    datapath=opt.datapath,
    dataname="./train/TestAname.npy",
    presize=opt.prenum,
    foresize=opt.forenum,
    inputchannel=opt.input_channels,
)
tran_loader = DataLoader(
    traindataset,
    batch_size=opt.batchsize,
    num_workers=9,
    pin_memory=True,
    shuffle=True,
    prefetch_factor=8,
    persistent_workers=True,
)
```

4. Issue:

GPU utilization stays between 0% and 50%. I used `torch.profiler.profile` to analyze the model; the profiler output is in analyze.md.

analyze.md

Any good advice, please?
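For context, here is a minimal sketch of how the profiler can be wrapped around a few training steps (assumptions: hypothetical `model` and `optimizer` objects, a single-tensor batch, and a placeholder loss; this is not the exact training loop from the repository):

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

# Profile a handful of training steps so CPU time (data loading, host-side ops)
# can be compared against CUDA kernel time.
with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3, repeat=1),
    on_trace_ready=tensorboard_trace_handler("./profiler_logs"),
    record_shapes=True,
    profile_memory=True,
) as prof:
    for step, batch in enumerate(tran_loader):
        if step >= 6:
            break
        batch = batch.cuda(non_blocking=True)  # assumes each batch is one tensor
        loss = model(batch).mean()             # placeholder loss for the sketch
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        prof.step()                            # advance the profiler schedule

print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```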
Is the bottleneck loading data into RAM?
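One quick way to check is to time how long each iteration spends waiting on the DataLoader versus running the forward/backward pass. A rough sketch, again assuming a single-tensor batch and the same hypothetical `model`/`optimizer` names as above:

```python
import time
import torch

# Rough split of where each iteration spends its time:
# "data" = blocked waiting on the DataLoader, "compute" = forward/backward on the GPU.
data_time, compute_time = 0.0, 0.0
end = time.time()
for i, batch in enumerate(tran_loader):
    fetched = time.time()
    data_time += fetched - end               # time spent waiting for the next batch

    batch = batch.cuda(non_blocking=True)    # assumes each batch is one tensor
    loss = model(batch).mean()               # placeholder loss for the sketch
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    torch.cuda.synchronize()                 # include queued GPU work in the timing
    end = time.time()
    compute_time += end - fetched

    if i >= 50:
        break

print(f"data loading: {data_time:.1f}s, forward/backward: {compute_time:.1f}s")
```

If the data-loading total dominates, the GPU is starving while batches are read from disk and preprocessed, which would match the 0-50% utilization pattern.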
@all-contributors please add @hedaobaishui for question
@peterdudfield
I've put up a pull request to add @hedaobaishui! 🎉