Question about the "share_planes" #66

Open · BuLingBin opened this issue Nov 4, 2022 · 0 comments

Hi, this is nice work!
But I am confused about the "share_planes" in PointTransformerLayer:
```python
n, nsample, c = x_v.shape; s = self.share_planes
x = ((x_v + p_r).view(n, nsample, s, c // s) * w.unsqueeze(2)).sum(1).view(n, c)
```
Apparently, w's channel dimension is reduced by the Linear layers to c // share_planes, which is not illustrated in the paper. This operation does not seem consistent with vector attention; it looks more like a compromise between scalar attention and vector attention.
Why partition the feature dimension of (x_v + p_r) into share_planes groups?
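
For reference, here is a minimal, shape-only sketch of what the quoted lines compute. This is not the repository's code; the tensor shapes and random inputs are assumptions for illustration. It shows how the reduced weight w of width c // share_planes is broadcast across the share_planes groups of channels, and contrasts it with full vector attention, where w would need all c channels:

```python
# Sketch only: illustrates the share_planes broadcasting in the quoted snippet.
# Shapes and inputs are assumed; x_v, p_r, w follow the names used above.
import torch

n, nsample, c, s = 4, 8, 32, 8          # s = share_planes, must divide c

x_v = torch.randn(n, nsample, c)         # per-neighbor value features
p_r = torch.randn(n, nsample, c)         # relative positional encoding
w   = torch.softmax(torch.randn(n, nsample, c // s), dim=1)  # reduced weights

# Split the c channels into s groups of c // s channels; the same c // s
# weights are reused ("shared") for every group via broadcasting on dim 2.
v   = (x_v + p_r).view(n, nsample, s, c // s)    # (n, nsample, s, c // s)
out = (v * w.unsqueeze(2)).sum(1).view(n, c)     # (n, c)

# Full vector attention for comparison would need w with all c channels:
w_full   = torch.softmax(torch.randn(n, nsample, c), dim=1)
out_full = ((x_v + p_r) * w_full).sum(1)         # (n, c)

print(out.shape, out_full.shape)
```

So each attention weight is shared by share_planes channels, which sits between scalar attention (one weight per neighbor) and full vector attention (one weight per channel per neighbor).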
