Capturing an AHAT depth frame with the SensorRecording sample app, converting it to PLY, and visualizing it with Open3D gives the result below (closer pixels are red). Notice the envelope-like area around the hand: it is supposed to be a wall at some distance.
When visualized with the SensorVisualization sample app, the background wall in this region is likewise reported at roughly the same depth as the hand. Is it possible to filter this area out?
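For reference, a minimal sketch of the Open3D visualization step described above, assuming the converted frame was saved as "ahat_frame.ply" (hypothetical filename) with points in camera coordinates:

```python
import numpy as np
import open3d as o3d

# Load the point cloud produced by the AHAT-to-PLY conversion step.
pcd = o3d.io.read_point_cloud("ahat_frame.ply")
pts = np.asarray(pcd.points)

# Color each point by its distance from the camera origin: closer = red, farther = blue.
dist = np.linalg.norm(pts, axis=1)
t = (dist - dist.min()) / (dist.max() - dist.min() + 1e-9)
colors = np.stack([1.0 - t, np.zeros_like(t), t], axis=1)
pcd.colors = o3d.utility.Vector3dVector(colors)

o3d.visualization.draw_geometries([pcd])
```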
I found the description of "aliased depth" in the arXiv paper, which says every depth pixel beyond 1 m is wrapped back into a smaller range. Is there any possibility of recovering the actual depth value?
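To illustrate what that wrapping means in practice, here is a small sketch assuming a 1 m ambiguity range (the value implied by the report); the exact range is an assumption here:

```python
import numpy as np

ambiguity_range_m = 1.0                      # assumed single-frequency ambiguity range
true_depth_m = np.array([0.4, 0.9, 1.3, 2.1])

# What the sensor reports: depth folded back into [0, 1 m).
aliased_depth_m = np.mod(true_depth_m, ambiguity_range_m)
print(aliased_depth_m)                       # approximately [0.4 0.9 0.3 0.1]

# Recovering the true depth would require the integer wrap count k in
# true = aliased + k * ambiguity_range, which a single frequency cannot provide.
```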
Correct, we tried to highlight the characteristics of depth in AHAT mode in both the API docs and the tech report to make it as clear as possible.
The mode uses a single frequency, and it is not possible to run de-aliasing to compute true depth from it. However, it should be possible to develop a method that identifies "wrapped" depth pixels and at least invalidates them -- for example, by leveraging DNNs and the active brightness images.
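As a much simpler starting point than a DNN, one could threshold on active brightness: surfaces beyond the ambiguity range return far less IR light than a genuinely near surface, so a dim pixel with a small reported depth is suspicious. A minimal sketch, assuming `depth_mm` and `active_brightness` are uint16 arrays read from the recorded AHAT frames and that the threshold is tuned per scene:

```python
import numpy as np

def invalidate_wrapped_pixels(depth_mm: np.ndarray,
                              active_brightness: np.ndarray,
                              ab_threshold: float = 1000.0) -> np.ndarray:
    """Return a copy of the depth frame with likely-wrapped pixels set to 0 (invalid).

    The threshold value is an assumption; IR return falls off roughly with 1/r^2,
    so a depth-dependent threshold may work better than a single constant.
    """
    filtered = depth_mm.copy()
    suspicious = active_brightness.astype(np.float32) < ab_threshold
    filtered[suspicious] = 0  # treat 0 as invalid depth downstream (skip when building the PLY)
    return filtered
```

This is only a heuristic: dark or distant materials near the hand can also be dim, so it will over-invalidate in some scenes, which is why a learned classifier over depth plus active brightness is the more robust route.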