
bad accuracy on test data #31

Open
Caiji-123 opened this issue Dec 9, 2020 · 3 comments

Comments

@Caiji-123

Hi, I noticed that your accuracy is calculated on the training data, but generally we evaluate a model on test data. I computed the accuracy on the test data and the result is not that good. Is there anything wrong?

@Caiji-123
Author

The accuracy I got on the test data is 0.588 at epoch 38.
[screenshot of training log]

@plkmo
Owner

plkmo commented Sep 24, 2023

Hi there, for the SemEval task the model is evaluated on test data. However, for pre-training there is no test set, as pre-training is self-supervised. You were probably looking at the pre-training code.

Also, the accuracy is evaluated on a multi-class (> 2 classes) dataset, so the baseline accuracy is not 0.5. You can use the F1 score for a better comparison.
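For reference, here is a minimal sketch of computing macro-averaged F1 alongside accuracy on test-set predictions with scikit-learn. The label lists below are placeholders, not the actual SemEval outputs; substitute the model's test-set predictions and gold labels.

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder predictions and gold labels for a multi-class task;
# replace these with the model's outputs on the test split.
y_true = [0, 2, 1, 3, 2, 0, 1, 3]
y_pred = [0, 2, 1, 1, 2, 0, 3, 3]

acc = accuracy_score(y_true, y_pred)
# Macro-averaged F1 weights every class equally, which is more
# informative than raw accuracy when there are more than two classes.
f1_macro = f1_score(y_true, y_pred, average="macro")

print(f"accuracy: {acc:.3f}, macro F1: {f1_macro:.3f}")
```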

@Caiji-123
Author

Caiji-123 commented Sep 24, 2023 via email
