Question about the attention block. #6

Open
SongYupei opened this issue Jul 15, 2022 · 0 comments

Comments

@SongYupei

Thanks for sharing the code for the paper. I have a question about the code in lines 234-237.

```python
geo_feature = torch.cat(geo_feature, dim=0)
print('attention input shape:{0}'.format(geo_feature.shape))
# [4, 1, 4, 320, 5000]

if self.opt.coarse_part:
    geo_feature = self.attention(geo_feature, self.feature_fusion)
    # [1, 1, 4, 320, 5000]
    print('attention output shape:{0}'.format(geo_feature.shape))
```
The shape[0] of geo_feature goes from 4 to 1, but geo_feature.shape[0] is the number of levels, not the number of views, so it is the level dimension rather than the view dimension that gets reduced to one. Can anyone help me understand this?
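For context, here is a minimal sketch (not the actual implementation in this repo) of how an attention-style fusion can collapse the first dimension of a [4, 1, 4, 320, 5000] tensor to 1 via a softmax-weighted sum over dim 0; the function and layer names are hypothetical and only illustrate the shape change I am asking about:

```python
import torch
import torch.nn.functional as F

def fuse_over_dim0(geo_feature, score_layer):
    """Hypothetical sketch: softmax-weighted sum over dim 0.

    geo_feature: [4, 1, 4, 320, 5000] -> returns [1, 1, 4, 320, 5000]
    """
    # One scalar score per slice along dim 0 (pooled over the last three dims).
    scores = score_layer(geo_feature.mean(dim=(2, 3, 4)))   # [4, 1]
    weights = F.softmax(scores, dim=0)                       # normalized over dim 0
    # Weighted sum collapses dim 0 from 4 to 1.
    fused = (weights[:, :, None, None, None] * geo_feature).sum(dim=0, keepdim=True)
    return fused

# Usage with random data, just to check the shapes:
geo_feature = torch.randn(4, 1, 4, 320, 5000)
score_layer = torch.nn.Linear(1, 1)
print(fuse_over_dim0(geo_feature, score_layer).shape)        # torch.Size([1, 1, 4, 320, 5000])
```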
