Hello authors, thank you very much for your work.

You emphasize in the paper that the purpose of strong augmentation is to produce prediction disagreement, but you do not explain in much depth why prediction disagreement improves performance.

I would like to check whether my understanding is correct: by analogy with unsupervised contrastive learning, under strong augmentation, eliminating the S-T (student-teacher) inconsistency forces the student network to filter out the low-level information corrupted by the augmentation (e.g., color, texture) and to focus on extracting semantic information instead.

I hope the authors can clarify this. Thanks!

BTW, in Equations 4 and 5 of the arXiv version of the paper, theta_s and theta_t seem to be written the wrong way around?
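For reference, here is a minimal sketch of the general S-T consistency setup the question describes (Mean-Teacher/FixMatch-style, not necessarily the paper's exact formulation): the teacher predicts on a weakly augmented view, the student is trained to match it on a strongly augmented view, and the teacher parameters are an EMA of the student's. All function names here are illustrative assumptions, not from the paper; the EMA line is the usual convention that the theta_s/theta_t question is about.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ema_update(theta_t, theta_s, m=0.99):
    """Standard Mean-Teacher convention: the TEACHER params are an EMA
    of the STUDENT params, i.e. theta_t <- m * theta_t + (1 - m) * theta_s.
    (Writing it the other way around would be the swap the question suspects.)"""
    return m * theta_t + (1 - m) * theta_s

def consistency_loss(teacher_logits_weak, student_logits_strong):
    """Cross-entropy between the teacher's pseudo-label (argmax on the
    weakly augmented view) and the student's prediction on the strongly
    augmented view. Minimizing this forces the student to produce the
    teacher's label despite the strong augmentation."""
    pseudo = teacher_logits_weak.argmax(axis=-1)          # hard pseudo-labels
    probs = softmax(student_logits_strong)                # student predictions
    idx = np.arange(len(pseudo))
    return -np.log(probs[idx, pseudo] + 1e-12).mean()
```

Under this reading, the loss is zero only when the student recovers the teacher's label from the corrupted (strongly augmented) view, which is consistent with the intuition that the student must rely on augmentation-invariant semantic features rather than low-level cues.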