
how to tune hyper-parameters to balance between sentiment accuracy and BLEU score #15

Open
jind11 opened this issue Oct 2, 2019 · 3 comments

Comments

@jind11

jind11 commented Oct 2, 2019

Hi, I tried re-running your code without changing any hyper-parameters, but I got this result: 0-1_Test(Batch:1600) Senti:77.300 BLEU(4ref):61.125(A:55.520+B:66.730) G-score:68.738 H-score:68.267 Cost time:2.57.
Could you share some experience on how to tune the hyper-parameters so that I can balance sentiment accuracy against the BLEU score? Thank you so much!
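For reference, the metrics in the log line above appear to be related as follows. This is a sketch, assuming G-score and H-score are the geometric and harmonic means of sentiment accuracy and BLEU, and that the overall BLEU is the average of the two directions; the variable names are mine, not from the repo:

```python
import math

# Scores taken from the log line quoted above
senti = 77.300
bleu_a, bleu_b = 55.520, 66.730

# Overall BLEU looks like the average of the two directions
bleu = (bleu_a + bleu_b) / 2                  # 61.125

# G-score: geometric mean of sentiment accuracy and BLEU
g_score = math.sqrt(senti * bleu)             # ~68.738

# H-score: harmonic mean of sentiment accuracy and BLEU
h_score = 2 * senti * bleu / (senti + bleu)   # ~68.267

print(bleu, round(g_score, 3), round(h_score, 3))
```

The computed values match the logged G-score and H-score to three decimals, which supports this reading.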

@luofuli
Owner

luofuli commented Oct 2, 2019

You can change 0.25 to a larger value, which will give better sentiment accuracy at the cost of worse content preservation.

reward = (1+0.25) * style_reward * content_reward / (style_reward + 0.25 * content_reward)

Note: The printed logs just show the results on one test set. 0-1_Test..... is the performance of test.0 and 1-0_Test.... is the performance of test.1.
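The quoted reward line is a weighted harmonic mean of the two rewards. A minimal sketch of that trade-off (the function name `combined_reward` and the parameter name `beta` are mine; `beta` corresponds to the 0.25 in the line above):

```python
def combined_reward(style_reward, content_reward, beta=0.25):
    """Weighted harmonic mean of style and content rewards.

    As beta -> 0 the reward approaches content_reward; as beta grows
    it approaches style_reward. So raising beta above 0.25 pushes
    training toward sentiment accuracy and away from content
    preservation, as described above.
    """
    return (1 + beta) * style_reward * content_reward / (
        style_reward + beta * content_reward
    )
```

For example, with `style_reward=0.9` and `content_reward=0.5`, raising `beta` from 0.25 to 4 moves the combined reward from about 0.55 toward 0.78, i.e. closer to the style reward.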

@jind11
Author

jind11 commented Oct 3, 2019

I thought that in the result log A:55.520+B:66.730, A and B correspond to the two directions 0->1 and 1->0, and the BLEU score is the average of the two. Am I right?

@jind11
Author

jind11 commented Oct 6, 2019

Hi, Fuli, I have tried increasing the content reward coefficient from 0.25 to 1.0, and the highest sentiment accuracy I got is 78.7%. Should I increase this coefficient further, or should I tune other hyper-parameters to raise the sentiment accuracy to what is reported in the paper? Thank you so much!
