
About the dataset depth. #26

Open
huahangc opened this issue Sep 22, 2023 · 9 comments
@huahangc

How did you get the groundtruth of depth in the dataset?

@yuehaowang
Collaborator

Rigorously speaking, there is no ground truth of depth.
As mentioned in the paper, those depths in the dataset are estimated via STTR-light.

@huahangc
Author

Thanks for your reply.

@darthandvader

Can I compare the rendered depth map with those in the dataset? How can I obtain the rendered depth map?

@huahangc
Author

@darthandvader

```python
disp_map = 1. / torch.max(1e-10 * torch.ones_like(depth_map),
                          depth_map / (torch.sum(weights, -1) + 1e-6))
```

This variable is the inverse depth (disparity); I recommend using 1/disp rather than using the depth directly.
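For context, a common way to obtain a rendered depth map from NeRF-style volume rendering is to take the expectation of the sample distances under the per-ray weights, and the disparity is then the guarded inverse of that depth. A minimal sketch of that idea (the function and variable names here are my own, not necessarily those in this repo):

```python
import torch

def render_depth(weights, z_vals):
    """Expected depth and disparity from volume-rendering weights.

    weights: (num_rays, num_samples) per-sample weights along each ray
    z_vals:  (num_rays, num_samples) sample distances along each ray
    """
    depth_map = torch.sum(weights * z_vals, dim=-1)  # expectation of depth
    # Inverse depth, guarded against division by zero (NeRF convention).
    disp_map = 1.0 / torch.max(1e-10 * torch.ones_like(depth_map),
                               depth_map / (torch.sum(weights, -1) + 1e-6))
    return depth_map, disp_map
```

With uniform weights over two samples at distances 1 and 3, the expected depth is 2 and the disparity is approximately 0.5.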

@junzastar

> Rigorously speaking, there is no ground truth of depth. As mentioned in the paper, those depths in the dataset are estimated via STTR-light.

Hi, I have a simple question about the GT depth map. Why can't you get the GT depth map directly from the binocular video? Is it because the error is too large, or something else? Or is it feasible to obtain a GT depth map this way for this task? Thank you.

@yuehaowang
Collaborator

> Hi, I have a simple question about the GT depth map. Why can't you get the GT depth map directly with binocular video? Is it because the error is too large or something else? OR is it feasible to use this way to obtain GT depth map in this task? Thank you

I don't fully understand your question, but are you asking whether there is a way to obtain the real "GT depth" from binocular videos? I mean the real depths, not the estimated ones. IMO, the only way is to use depth sensors, which most endoscopes are not equipped with.

@junzastar

> I don't fully understand your question but wonder if there is a way to obtain the real "GT depth" from binocular videos? I mean the real depths, not the estimated ones. IMO, the only way is to use depth sensors, which are not equipped with most endoscopes.

Thank you for your reply. Yes, the best way is to use depth sensors, but that is impossible in such scenarios. So, for the binocular video, I mean we can obtain the real depth by stereo matching if we have the camera parameters, right?

@yuehaowang
Collaborator

> Thank you for your reply. Yes, the best way is to use the depth sensors. But it is impossible in such scenarios. So, for the binocular video, I mean we can obtain the real depth by stereo matching if we have the camera parameters, right?

Stereo matching is still a way to estimate the depth, and it requires correspondence information on the image pairs, which is not available in our case.
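For reference, when rectified stereo pairs, calibration, and reliable correspondences are all available, depth follows from the standard triangulation relation depth = focal × baseline / disparity. A minimal sketch under those assumptions (names like `disparity_to_depth`, `focal_px`, and `baseline_m` are illustrative, not from this repo):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Depth from disparity for a rectified stereo rig.

    disparity: per-pixel disparity in pixels
    focal_px:  focal length in pixels
    baseline_m: stereo baseline in meters
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    # Zero disparity corresponds to points at infinity; mask it to
    # avoid division by zero.
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```

For example, a 10 px disparity with a 500 px focal length and a 4 mm baseline gives a depth of 0.2 m. The hard part in endoscopic scenes is obtaining dense, reliable disparities in the first place, which is why learned estimators such as STTR-light are used.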

@junzastar

> Stereo matching is still a way to estimate the depth. And it requires correspondence information on the image pairs, which is not available in our case.

I see, thank you very much for your reply.
