
Operational Problems Caused by GPU Use #25

Open
X86Git opened this issue Nov 6, 2023 · 0 comments

Comments


X86Git commented Nov 6, 2023

I have just started working with NeRF for satellite image reconstruction, so I wanted to try to reproduce your work. However, when I follow the Testing and Training commands given in the README, I run into the errors below and would appreciate your advice:

1. Testing
Running the Testing command, the program crashes with: RuntimeError: CUDA error: out of memory
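Would rendering the test rays in smaller chunks be the right way to work around this? The sketch below is only what I have in mind; render_rays, rays, and chunk are placeholder names, not from your code:

```python
import torch

# Placeholder sketch: evaluate rays in small chunks under no_grad so the
# whole test batch never sits in GPU memory at once.
@torch.no_grad()
def render_in_chunks(render_rays, rays, chunk=1024):
    outputs = [render_rays(rays[i:i + chunk]) for i in range(0, rays.shape[0], chunk)]
    return torch.cat(outputs, dim=0)
```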

2. Training
When running the Training command, the program first fails because of a missing gpu_id argument. Since this option does not appear in your instructions, I commented the argument out of the parser and hard-coded the value in main.py instead.
After this modification, the program loads until "Validation sanity check: 0%" and then fails with: RuntimeError: cuda runtime error (2): out of memory at /pytorch/aten/src/THC/THCCachingHostAllocator.cpp:278.
I then tried to use multiple graphics cards: since the parser specifies gpu_id as an int, I commented that out as well and assigned three RTX 3090 cards instead, because your source file appears to use two GPUs. A rough sketch of my edit is shown below.
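This is roughly what my change to main.py looks like (illustrative only; the actual argument names in your parser may differ):

```python
import argparse
import os

parser = argparse.ArgumentParser()
# parser.add_argument('--gpu_id', type=int)  # original single-GPU argument, commented out
args, _ = parser.parse_known_args()

# Hard-coded to expose three RTX 3090s instead of the single id the parser expected.
os.environ.setdefault('CUDA_VISIBLE_DEVICES', '0,1,2')
```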

I wonder whether my hardware resources are simply insufficient for this project. Do you have any suggestions for solving this problem?
Thank you.
