
what do I need to do if I need to make the network only predict the model? #14

Open
MNILjj opened this issue Sep 1, 2022 · 4 comments


MNILjj commented Sep 1, 2022

Dear author,

Thanks for your excellent work! I want to obtain a model without the background, so I downloaded your training set and removed the background from the images.
No matter whether "use_white_bkgd" is set to "True" or "False", the reconstructed model still contains background. Do I need to modify some parameters in the "general_lod0.conf" file?
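For reference, the background removal I did can be sketched as follows (`composite_background` is an illustrative helper I wrote for this issue, not code from this repo):

```python
import numpy as np

def composite_background(image, mask, bkgd_color=1.0):
    """Replace the background of `image` (H, W, 3, floats in [0, 1]) with a
    constant color, using `mask` (H, W), where 1 = foreground, 0 = background.
    Illustrative helper only."""
    mask = mask[..., None].astype(np.float32)  # (H, W, 1) for broadcasting
    return image * mask + bkgd_color * (1.0 - mask)

# Toy example: a 2x2 gray image whose right column is background.
img = np.full((2, 2, 3), 0.5, dtype=np.float32)
fg_mask = np.array([[1, 0], [1, 0]], dtype=np.float32)
out = composite_background(img, fg_mask, bkgd_color=1.0)
# Background pixels become white (1.0); foreground pixels keep their color.
```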

The part I'm talking about is the region outside the predicted model, as shown in the screenshot below:
[screenshot 2022-09-01 14-37-46]

The parameter I modified is here:
[screenshot 2022-09-01 14-48-32]

These are the training images after I removed the background and the resulting predicted models. I have tried both white and black backgrounds, and neither result is very good.
[image rect_001_0_r5000]
[image rect_001_0_r5000]
[screenshot 2022-09-01 15-00-13]
[screenshot 2022-09-01 15-02-36]

To quickly verify my idea, I am currently training on only one scan (scan7).
What do I need to do to make the network predict only the model? Thanks.

@flamehaze1115 (Collaborator)

Hello.
Spurious background surfaces are a common problem for neural-rendering-based methods, including NeuS and VolSDF, because only images are used as supervision.
For a texture-less background, any surface prediction is acceptable, since those regions incur a very small rendering loss. This is why we propose a consistency-aware fine-tuning (FT) to enhance the model predicted by the generic network and remove the free surfaces in the background.

Modifying this parameter will only influence the color of the synthesized images.
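In NeRF-style pipelines this flag typically enters only at the final compositing step, roughly like this (a sketch with illustrative names, not the repo's exact code):

```python
import numpy as np

def composite_with_background(rgb, weights_sum, use_white_bkgd):
    """Typical NeRF-style background compositing (illustrative sketch):
    the leftover transmittance (1 - accumulated opacity) is filled with a
    constant background color. This changes only the rendered pixel colors;
    the learned geometry (SDF) is untouched."""
    if use_white_bkgd:
        rgb = rgb + (1.0 - weights_sum[:, None])  # white background = 1.0
    return rgb

# Toy example: ray 0 is fully opaque, ray 1 hits nothing.
rgb = np.zeros((2, 3), dtype=np.float32)
weights_sum = np.array([1.0, 0.0], dtype=np.float32)
out = composite_with_background(rgb, weights_sum, use_white_bkgd=True)
# Ray 0 keeps its color; ray 1 becomes pure white.
```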

If you want the generic model to predict clean surfaces of the target object, you have to preprocess the input images to mask out the background and enforce the background region to be empty during training.
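A minimal sketch of what such mask supervision could look like (illustrative names and weights, not the repo's actual loss):

```python
import numpy as np

def masked_render_loss(pred_rgb, gt_rgb, pred_opacity, fg_mask):
    """Illustrative mask-supervised loss: supervise colors only on
    foreground rays, and push the per-ray accumulated opacity toward 0 on
    background rays (and toward 1 on foreground rays) so the background
    stays empty. `pred_opacity` is the sum of volume-rendering weights."""
    fg = fg_mask.astype(np.float32)
    # L1 color loss restricted to foreground rays.
    color_loss = (np.abs(pred_rgb - gt_rgb) * fg[:, None]).sum() / (fg.sum() * 3 + 1e-8)
    # Binary cross-entropy on accumulated opacity vs. the mask.
    p = np.clip(pred_opacity, 1e-4, 1 - 1e-4)
    mask_loss = -(fg * np.log(p) + (1 - fg) * np.log(1 - p)).mean()
    return color_loss + 0.1 * mask_loss  # 0.1 is an illustrative weight

# Toy batch: 2 foreground rays (wrong color), 2 background rays.
pred_rgb = np.zeros((4, 3))
gt_rgb = np.ones((4, 3))
pred_opacity = np.array([0.9, 0.9, 0.1, 0.1])
fg_mask = np.array([1.0, 1.0, 0.0, 0.0])
loss = masked_render_loss(pred_rgb, gt_rgb, pred_opacity, fg_mask)
```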

In the figures of the paper, we simply use a smaller bounding box to crop the predicted mesh of the generic model so that only the foreground part is kept.
If you train the generic model on a larger dataset rather than just one scene, the surfaces of the foreground and background will be separated, as in the first figure you posted in this issue.
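The bounding-box crop can be sketched like this (an illustrative helper assuming a plain vertex/face mesh representation, not the repo's code):

```python
import numpy as np

def crop_mesh_to_bbox(vertices, faces, bbox_min, bbox_max):
    """Keep only the faces whose three vertices all lie inside an
    axis-aligned bounding box, then reindex the remaining vertices.
    A rough foreground crop, as described above."""
    inside = np.all((vertices >= bbox_min) & (vertices <= bbox_max), axis=1)
    keep_face = inside[faces].all(axis=1)       # faces fully inside the box
    new_faces = faces[keep_face]
    used = np.unique(new_faces)                 # compact vertex reindexing
    remap = np.full(len(vertices), -1, dtype=np.int64)
    remap[used] = np.arange(len(used))
    return vertices[used], remap[new_faces]

# Toy mesh: one triangle near the origin, one far away (background).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [5, 5, 5], [6, 5, 5], [5, 6, 5]], dtype=np.float64)
faces = np.array([[0, 1, 2], [3, 4, 5]])
new_v, new_f = crop_mesh_to_bbox(verts, faces, [-1, -1, -1], [2, 2, 2])
# Only the near triangle survives.
```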


MNILjj commented Sep 1, 2022

Thank you for your reply! I will try it.



zjhthu commented Sep 22, 2022

I find that using the occupancy_mask when extracting the mesh can remove the background. More specifically, I uncommented this line.
But I'm not sure whether this will hurt the final mesh accuracy if the initial occupancy mask is not good enough. @flamehaze1115
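The idea behind using an occupancy mask at extraction time, as I understand it, can be sketched as follows: force the SDF to be positive (empty space) outside the mask before running marching cubes, so no surface is extracted there (illustrative code, not the repo's exact implementation):

```python
import numpy as np

def apply_occupancy_mask(sdf_grid, occupancy_mask, fill=1.0):
    """Force voxels outside the occupancy mask to a positive SDF value so
    that a subsequent marching-cubes pass (e.g. skimage.measure.marching_cubes
    at level 0) extracts no surface there. Illustrative sketch."""
    sdf = sdf_grid.copy()
    sdf[~occupancy_mask] = fill   # positive SDF = empty space
    return sdf

# Toy grid: surface everywhere, but the mask keeps only the center voxel.
sdf = np.full((3, 3, 3), -1.0)
mask = np.zeros((3, 3, 3), dtype=bool)
mask[1, 1, 1] = True
masked = apply_occupancy_mask(sdf, mask)
```

If the initial occupancy mask is too tight, this would indeed clip valid foreground geometry along with the background, which matches the accuracy concern above.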
