I have the same question. As the paper defines it, y_FM = reduce_sum(first_order, 1) + reduce_sum(second_order, 1) and y_DNN = reduce_sum(y_deep, 1). That is not equivalent to concat([first_order, second_order, y_deep]) × weights["concat_projection"], is it? After all, weights["concat_projection"] is a trainable vector that is generally not all ones, and only the wx term and the last DNN layer should carry a learned weight; the second-order <v_i, v_j> x_i x_j terms should not be multiplied by any weight. Or is my understanding off?
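For concreteness, here is a minimal sketch of the two formulations being compared, written in TF 2.x eager style. The tensor names come from this thread; the shapes are my own assumptions, not the repo's actual code:

```python
# Minimal sketch of the two output formulations under discussion (assumes
# TF 2.x eager mode). Names follow the thread; the shapes (N examples,
# F fields, K embedding dims, D deep units) are illustrative only.
import tensorflow as tf

N, F, K, D = 4, 10, 8, 32
first_order  = tf.random.normal([N, F])   # w_i * x_i terms
second_order = tf.random.normal([N, K])   # <v_i, v_j> x_i x_j terms (summed form)
y_deep       = tf.random.normal([N, D])   # last hidden layer of the DNN

# Paper-style output: plain sums of the FM terms, no extra weights on them.
y_fm  = tf.reduce_sum(first_order, 1) + tf.reduce_sum(second_order, 1)
y_dnn = tf.reduce_sum(y_deep, 1)
out_paper = tf.sigmoid(y_fm + y_dnn)                          # shape [N]

# Repo-style output: concat everything, then one learned projection vector.
concat_proj  = tf.Variable(tf.random.normal([F + K + D, 1]))  # generally not all ones
concat_input = tf.concat([first_order, second_order, y_deep], axis=1)
out_repo = tf.sigmoid(tf.matmul(concat_input, concat_proj))   # shape [N, 1]
```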
Projecting after the concat gives, in terms of the computed result, the same thing as the paper's sigmoid(y_FM + y_DNN), where the two parts are computed separately and then added.
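One way to check this claim: the concat-projection collapses to exactly y_FM + y_DNN precisely when weights["concat_projection"] is all ones; a trained vector reweights each term. A quick numeric illustration (numpy, my own sketch, not repo code):

```python
# With an all-ones projection, the concat form reduces exactly to the
# paper's plain sum; a trained, non-ones vector is a per-term reweighting.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=5), rng.normal(size=8), rng.normal(size=3)

paper_sum = a.sum() + b.sum() + c.sum()            # y_FM + y_DNN
w_ones = np.ones(5 + 8 + 3)
concat_proj = np.concatenate([a, b, c]) @ w_ones   # concat-projection with w = 1

assert np.isclose(paper_sum, concat_proj)          # identical when w is all ones
```

So the two agree at w = 1, and training the projection only generalizes that sum; whether they are "the same" is a question of parameterization rather than of the computation graph.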
I think multiplying the first-order term by feature_bias is redundant. The embedding output is concatenated with the deep and second-order parts and fed through a final projection layer, so the feat_value → projection path by itself is already equivalent to the LR form <w, x> (Equation 2 in the paper); multiplying by feature_bias beforehand, with no nonlinear activation in between, is entirely unnecessary.
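The reparameterization behind this point fits in a few lines: with no nonlinearity between feature_bias and the projection, the two weight vectors multiply into a single effective weight, so one of them is a free parameter the model does not need. A sketch with hypothetical shapes, following the thread's naming:

```python
# feature_bias can be absorbed into the projection weights when nothing
# nonlinear sits between them (shapes hypothetical, names from the thread).
import numpy as np

rng = np.random.default_rng(1)
feat_value   = rng.normal(size=10)   # x
feature_bias = rng.normal(size=10)   # w, the allegedly redundant first-order weight
w_proj       = rng.normal(size=10)   # slice of weights["concat_projection"]

with_bias = w_proj @ (feature_bias * feat_value)   # proj(w ⊙ x)
absorbed  = (w_proj * feature_bias) @ feat_value   # (proj ⊙ w)·x, one free vector

assert np.isclose(with_bias, absorbed)             # same function, <w', x>
```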
PS: For discussions on GitHub, wouldn't it be more appropriate to use English?
Originally posted by @futureer in #32 (comment)