finetune results seems not very stable #125
Hello, has your issue been resolved? I'm not sure why the eval_loss keeps fluctuating around 0.69 while the eval_accuracy stays at 0.5. {'loss': 0.6966, 'learning_rate': 2.9946319642130948e-05, 'epoch': 0.04}
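The stuck value itself is informative: for binary classification, a model that always outputs probability 0.5 has a cross-entropy loss of ln 2 ≈ 0.693, which is exactly where the loss is hovering, and chance-level accuracy is 0.5. A quick sanity check in plain Python (nothing from the repo is assumed):

```python
import math

# Binary cross-entropy of a constant 0.5 prediction is -ln(0.5) = ln(2),
# the loss floor of a model that has learned nothing.
random_baseline = -math.log(0.5)
print(round(random_baseline, 4))  # ≈ 0.6931, matching the stuck eval_loss
```

A loss pinned near this value alongside 0.5 accuracy usually means the run never escaped the random-guessing regime rather than that the metric is noisy.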
Which datasets are you using? It is possible that the model fails to converge with some random seeds and hyperparameters. We observed the same phenomenon on the COVID dataset before.
For the first run, the evaluation at steps 200, 400, and 800:
{'eval_loss': 0.6967583894729614, 'eval_accuracy': 0.5025337837837838, 'eval_f1': 0.3344575604272063, 'eval_matthews_correlation': 0.0, 'eval_precision': 0.2512668918918919, 'eval_recall': 0.5, 'eval_runtime': 3.1287, 'eval_samples_per_second': 378.436, 'eval_steps_per_second': 23.652, 'epoch': 0.15}
{'eval_loss': 0.6942448019981384, 'eval_accuracy': 0.49746621621621623, 'eval_f1': 0.332205301748449, 'eval_matthews_correlation': 0.0, 'eval_precision': 0.24873310810810811, 'eval_recall': 0.5, 'eval_runtime': 3.0649, 'eval_samples_per_second': 386.309, 'eval_steps_per_second': 24.144, 'epoch': 0.3}
{'eval_loss': 0.6936447620391846, 'eval_accuracy': 0.5025337837837838, 'eval_f1': 0.3344575604272063, 'eval_matthews_correlation': 0.0, 'eval_precision': 0.2512668918918919, 'eval_recall': 0.5, 'eval_runtime': 3.1124, 'eval_samples_per_second': 380.411, 'eval_steps_per_second': 23.776, 'epoch': 0.9}
At step 3000:
{'eval_loss': 0.6934958696365356, 'eval_accuracy': 0.49746621621621623, 'eval_f1': 0.332205301748449, 'eval_matthews_correlation': 0.0, 'eval_precision': 0.24873310810810811, 'eval_recall': 0.5, 'eval_runtime': 3.0942, 'eval_samples_per_second': 382.652, 'eval_steps_per_second': 23.916, 'epoch': 2.4}
Then, running again, the evaluation at steps 200, 400, and 800:
{'eval_loss': 0.6764485239982605, 'eval_accuracy': 0.543918918918919, 'eval_f1': 0.500248562558037, 'eval_matthews_correlation': 0.1047497035448165, 'eval_precision': 0.5646432374866879, 'eval_recall': 0.5424348347148706, 'eval_runtime': 3.1702, 'eval_samples_per_second': 373.483, 'eval_steps_per_second': 23.343, 'epoch': 0.15}
{'eval_loss': 0.6603909730911255, 'eval_accuracy': 0.7170608108108109, 'eval_f1': 0.7006777463594056, 'eval_matthews_correlation': 0.4870947362226591, 'eval_precision': 0.7747432713117492, 'eval_recall': 0.7158936240030817, 'eval_runtime': 3.0877, 'eval_samples_per_second': 383.453, 'eval_steps_per_second': 23.966, 'epoch': 0.45}
{'eval_loss': 0.3846745193004608, 'eval_accuracy': 0.8386824324324325, 'eval_f1': 0.8381642045361026, 'eval_matthews_correlation': 0.6827625873383905, 'eval_precision': 0.8437887048419396, 'eval_recall': 0.8389907406086374, 'eval_runtime': 3.088, 'eval_samples_per_second': 383.423, 'eval_steps_per_second': 23.964, 'epoch': 0.6}
I used the default parameters; do I need to set a different learning rate?
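One common workaround when fine-tuning intermittently collapses to chance level, as the maintainer's comment suggests, is to repeat the same configuration under several random seeds and keep the best run. A minimal sketch of that sweep, where `train_and_eval` is a hypothetical stand-in for one full fine-tuning run (its toy body below only simulates the observed seed-dependence, it is not the repo's training code):

```python
import random

def train_and_eval(seed: int) -> float:
    """Hypothetical stand-in for one fine-tuning run with the given seed.
    The body merely simulates the reported behaviour: some seeds stay at
    chance (~0.5 accuracy), others converge to a useful model."""
    rng = random.Random(seed)
    return 0.5 if rng.random() < 0.4 else rng.uniform(0.7, 0.85)

# Sweep a handful of seeds and keep the best run instead of trusting one.
results = {seed: train_and_eval(seed) for seed in (0, 1, 2, 42, 1234)}
best_seed, best_acc = max(results.items(), key=lambda kv: kv[1])
print(f"best seed {best_seed}: accuracy {best_acc:.3f}")
```

In a real run the seed would be passed to the training script (e.g. the `--seed` argument of the HuggingFace `Trainer`), and a run stuck at ln 2 loss after a few hundred steps can be aborted early to save compute.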