Hello,

I've been trying to reproduce the results reported in the paper with the provided code, but the results I obtained after Stage I are (slightly) different from the ones reported. These are my results on BEA-2019:

| Model | Precision | Recall | F0.5 |
| --- | --- | --- | --- |
| RoBERTa from the paper (Table 10) | 40.8 | 22.1 | 34.9 |
| RoBERTa from my run | 42.7 | 19.8 | 34.7 |

It was mentioned in previous issues that your best model came from epoch 18 of Stage 1, but my best epoch was 16. In addition, my training was considerably faster than what you reported in other issues, taking 2.5 days on a single RTX 6000.

I'm wondering whether these differences are to be expected given the randomness in initialization and data order, or whether something is wrong with how I'm running the code.
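For reference, to check how much of the gap comes from run-to-run randomness, I could pin all the seeds before training. Below is a minimal sketch of what I mean, assuming a standard PyTorch setup; the `set_seed` helper and the seed value 42 are illustrative and not taken from this repository:

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Pin the common sources of randomness so two runs of the same
    training command start from identical weights and shuffle data
    in the same order (up to non-deterministic CUDA kernels)."""
    random.seed(seed)                 # Python-level RNG (e.g. data shuffling)
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # CPU weight initialization
    torch.cuda.manual_seed_all(seed)  # GPU weight initialization
    # Trade some speed for determinism in cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)  # call once, before building the model and data loaders
```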
Please find my training command:
Thank you for your time :)