1- Where does the z-axis value for the generated virtual points come from? Is it calculated from the interpolated depth?
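In case it clarifies what I mean by "interpolated": this is roughly how I imagine the depth assignment, with nearest-neighbor as the interpolation (the function and variable names below are mine, not from the repo):

```python
import numpy as np

def assign_virtual_depth(sampled_uv, lidar_uv, lidar_depth):
    """Give each sampled 2D point the depth of the nearest projected
    lidar point inside the same instance mask. This is only my guess
    at the 'interpolation'; names here are mine, not from the repo.

    sampled_uv:  (S, 2) pixel coordinates sampled from the mask
    lidar_uv:    (P, 2) pixel coordinates of lidar points in the mask
    lidar_depth: (P,)   depths of those lidar points
    """
    # pairwise squared pixel distances between samples and lidar points
    d2 = ((sampled_uv[:, None, :] - lidar_uv[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)      # index of the closest lidar point
    return lidar_depth[nearest]      # (S,) depth per virtual point
```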
2- Are the virtual points generated from the instance masks rather than from the frustums?
The paper says: "We start by randomly sampling 2D points s ∈ m from each instance mask m."
but it also says:
"we generate virtual points from each frustum Fj ."
and:
"We augment sparse Lidar point cloud with dense semantic virtual points generated from 2D detections".
I think the first sentence is the accurate one.
3- Why was dist_thresh=3000 chosen? What is it based on?
4- Are virtual points generated even if there is only one lidar point in the instance mask?
5- How are the class scores of the virtual points used once the real and virtual points are fed into the 3D detector? Is a fixed weighting applied, for example 50% of the 2D detector score and 50% of the 3D detector score?
6- For example, suppose a distant car's mask contains a single projected lidar point and 50 virtual points are generated, each carrying a 50% car score. How would the final result be computed in the 3D detector? (A hypothetical weighting is sketched below.)
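To make questions 5 and 6 concrete, this is the kind of weighting I am imagining (purely hypothetical; I don't know whether the paper does anything like this):

```python
def fuse_scores(score_2d, score_3d, alpha=0.5):
    """Hypothetical linear blend of the 2D and 3D detector scores;
    alpha=0.5 corresponds to the 50/50 split asked about above."""
    return alpha * score_2d + (1.0 - alpha) * score_3d
```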
7- How are the virtual points unprojected into 3D? Where does the inverse matrix transform come from? Is the depth taken from the neighboring lidar points?
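For question 7, my guess is a standard pinhole back-projection with the inverse camera intrinsics, something like this sketch (names are mine; whether the depth really comes from the neighboring lidar points is exactly what I am asking):

```python
import numpy as np

def unproject(uv, depth, K):
    """Back-project pixels to 3D camera coordinates with the inverse
    of the camera intrinsic matrix K (standard pinhole model; this is
    my guess at the transform, not code from the repo).

    uv:    (N, 2) pixel coordinates of the virtual points
    depth: (N,)   per-point depth (from neighboring lidar points?)
    K:     (3, 3) camera intrinsic matrix
    """
    ones = np.ones((uv.shape[0], 1))
    pix = np.concatenate([uv, ones], axis=1)  # homogeneous pixels (N, 3)
    rays = (np.linalg.inv(K) @ pix.T).T       # normalized rays with z = 1
    return rays * depth[:, None]              # scale by depth -> (N, 3)
```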
8- What are the painted points here?
https://github.com/tianweiy/CenterPoint/blob/8fcdc944bbb455bd25943b331aaf961aa0ab32cd/det3d/models/readers/dynamic_voxel_encoder.py#L28
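My guess is that "painted" refers to PointPainting-style features, i.e., each lidar point gets the 2D semantic scores of the pixel it projects to appended as extra channels, roughly like this (a sketch under that assumption, not the repo's actual code):

```python
import numpy as np

def paint_points(points, proj_uv, seg_scores):
    """Append per-pixel 2D class scores to each lidar point
    (PointPainting-style; my guess at what 'painted' means here).

    points:     (N, 3+) lidar points
    proj_uv:    (N, 2)  integer pixel coordinates of each point
    seg_scores: (H, W, C) per-pixel class scores from the 2D network
    """
    feats = seg_scores[proj_uv[:, 1], proj_uv[:, 0]]  # (N, C)
    return np.concatenate([points, feats], axis=1)    # (N, point dims + C)
```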