should we calculate F1-score with micro-average or macro-average? #26

Open
Junpliu opened this issue Nov 18, 2019 · 0 comments
Junpliu commented Nov 18, 2019

In the Jupyter notebook "conll2003 BERTBiLSTMCRF" in the "examples" folder, the result report is as follows:

(screenshot of the notebook's result report)

I notice you put the macro-average "0.9221" in the "README.md" file, but it seems that the leaderboard at "https://paperswithcode.com/sota/named-entity-recognition-ner-on-conll-2003" adopts the micro-average value as the final F1 score.

I would appreciate it very much if you could tell me why. Thanks.
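For context, the two averages can differ substantially when classes are imbalanced (as entity types in CoNLL-2003 are). Below is a minimal sketch contrasting them on hypothetical toy labels; the function and label names are illustrative, not taken from this repository's code:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Compute micro- and macro-averaged F1 over the label set."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but it was wrong
            fn[t] += 1  # true label t was missed

    def f1(tp_, fp_, fn_):
        prec = tp_ / (tp_ + fp_) if tp_ + fp_ else 0.0
        rec = tp_ / (tp_ + fn_) if tp_ + fn_ else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

    # Macro: unweighted mean of per-class F1 -- rare classes count equally.
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    # Micro: pool TP/FP/FN across all classes -- frequent classes dominate.
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    return micro, macro

# Hypothetical toy labels, one per token:
y_true = ["PER", "PER", "ORG", "LOC"]
y_pred = ["PER", "ORG", "ORG", "ORG"]
micro, macro = f1_scores(y_true, y_pred)
print(micro, macro)  # micro = 0.5, macro = 7/18 ≈ 0.389
```

Note this sketch averages over token-level labels; the standard CoNLL-2003 evaluation (conlleval) instead counts exact entity spans, and the leaderboard F1 is the micro-average over those spans.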
