Hi, I have a question about the DSDG test code (FaceX-Zoo/addition_module/DSDG/DUM/test.py). When computing the score during testing, a normalization step is applied (`score_norm = torch.sum(mu) / torch.sum(test_maps[:, frame_t, :, :])`), and I have two questions about it:
1. For a fake sample, shouldn't `torch.sum(test_maps[:, frame_t, :, :])` be 0? If so, shouldn't a small bias term be added to the denominator to avoid division by zero?
2. If the network is trained well, shouldn't `mu` and `test_maps[:, frame_t, :, :]` be approximately equal? In that case `score_norm` should be close to 1 for both real and fake samples, so how are they distinguished when computing the metrics?
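To make the first concern concrete, here is a minimal sketch of the kind of epsilon guard the question is asking about. The function name `normalized_score` and the epsilon value are my own illustration, not part of the DSDG code; it only shows that without a small additive term, an all-zero target depth map (as expected for a fake sample) would make the ratio divide by zero.

```python
import torch

def normalized_score(mu: torch.Tensor, test_map: torch.Tensor,
                     eps: float = 1e-8) -> torch.Tensor:
    # Hypothetical guard: add a small epsilon to the denominator so an
    # all-zero target map (a fake sample) does not produce inf/nan.
    return torch.sum(mu) / (torch.sum(test_map) + eps)

# Fake sample: the ground-truth depth map is all zeros.
mu = torch.full((32, 32), 0.01)       # small predicted depth values
fake_map = torch.zeros(32, 32)        # zero target map for a fake face
score = normalized_score(mu, fake_map)
print(score)  # a large but finite value instead of inf
```

With the guard in place, a fake sample yields a large finite ratio rather than `inf`, which is easier to threshold downstream; whether the original code needs this depends on whether `torch.sum(test_maps[...])` can actually reach exactly zero at test time.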