First of all, thank you for publishing your implementation.
I want to generate the ScanNet dataset using the learned weights.
For this, I downloaded the files, including last.ckpt, from Hugging Face.
Then, using the demo code, I tried to render the images of the first scene (scene0000_00).
For rendering without additional training or evaluation, I slightly modified the final block of scannet.gin. After that, I ran the demo code with:

python -m run --ginc configs/scannet.gin --scene_name scene0000_00
However, when I run the demo code, it takes too much memory and fails with the following message:

Unable to allocate array with shape (1210619520, 3) and data type float64
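For reference, the reported array is roughly 27 GiB in float64, which explains the out-of-memory failure on a typical machine; casting to float32 would only halve that, so the real fix is probably to avoid holding all frames at once. A quick back-of-the-envelope check:

```python
# Size of the array the error message reports: (1210619520, 3) in float64.
shape = (1210619520, 3)
bytes_needed = shape[0] * shape[1] * 8   # 8 bytes per float64 element
print(bytes_needed / 2**30)              # ≈ 27.06 GiB
```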
This issue was also mentioned in #11.
The rendering loop (predict_step in model/plenoxel_torch/model.py) seems to render the image tensors sequentially while keeping all of them in RAM. It might be worth fixing this part to make dataset generation more accessible.
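As a possible workaround, the loop could write each frame to disk as soon as it is rendered instead of accumulating everything in memory. This is only a minimal sketch of the idea, not the repository's actual API; render_frame, poses, and out_dir are placeholder names:

```python
import os
import numpy as np

def render_and_save(poses, render_frame, out_dir):
    """Render one pose at a time and write each frame straight to disk,
    so only a single image is ever held in RAM."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, pose in enumerate(poses):
        img = render_frame(pose)                        # (H, W, 3) array
        path = os.path.join(out_dir, f"frame_{i:05d}.npy")
        np.save(path, img.astype(np.float32))           # float32 halves the footprint
        paths.append(path)
        del img                                         # let this frame be freed
    return paths
```

The same idea would apply inside predict_step: return file paths (or nothing) rather than the stacked image tensors.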
Anyway, in my case, I just picked one pose (frame_id=0) and rendered a single image.
The code runs without error, but it returns an unexpected result.
Fortunately, I can at least see a room-like shape (probably the room of scene0000_00, right?).
It seems that there is a pose-related problem.
The following (intermediate) pose tensors might be helpful for figuring out what is wrong.
original pose (before the pcd-related processing)
render_pose (the one that is finally returned)
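In case it helps debugging: one frequent cause of "the scene is recognizable but the view is wrong" in NeRF pipelines is a camera-axis convention mismatch (e.g. OpenCV-style vs OpenGL-style camera-to-world matrices). This is only a guess at what might be wrong here, but flipping the y and z camera axes of the pose is a cheap thing to try:

```python
import numpy as np

def flip_yz(c2w):
    """Flip the y and z camera axes of a 4x4 camera-to-world pose.
    This converts between OpenCV-style and OpenGL-style conventions."""
    return c2w @ np.diag([1.0, -1.0, -1.0, 1.0])
```

The transform is an involution, so applying it twice recovers the original pose, which makes it safe to experiment with.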
I'm not very familiar with NeRF-related code, so my attempts above might be wrong somewhere.
Any help would be greatly appreciated.