Thanks for reporting! That is indeed a mistake. The model page was still showing metrics for the complete WBM test set, whereas the landing page recently transitioned to showing metrics on a subset of the WBM test set restricted to new and unique structure prototypes. See #75 for details.
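To illustrate why the two pages can disagree: F1 is not invariant under subsetting, so restricting the test set to unique prototypes generally shifts precision, recall, and F1 even with identical predictions. Below is a minimal sketch with toy data; the `is_unique_prototype` flag, record values, and the plain-Python F1 helper are all hypothetical, not the benchmark's actual code or data.

```python
def f1(y_true: list[bool], y_pred: list[bool]) -> float:
    """Binary F1 from true/predicted stability labels."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy records: (is_unique_prototype, true_stable, pred_stable) — invented data.
records = [
    (True, True, True),
    (True, False, True),
    (False, True, False),
    (False, True, True),
    (True, True, False),
    (False, False, False),
]

# Metrics on the complete toy "test set" (what the model page was showing).
full_f1 = f1([t for _, t, _ in records], [p for _, _, p in records])

# Metrics restricted to unique prototypes (what the landing page shows).
sub_f1 = f1(
    [t for u, t, _ in records if u],
    [p for u, _, p in records if u],
)

print(f"F1 on full set:              {full_f1:.3f}")
print(f"F1 on unique-proto subset:   {sub_f1:.3f}")
```

With these toy labels the full-set F1 (≈0.571) and the subset F1 (0.500) differ, mirroring the page mismatch: same model, same predictions, different evaluation sets.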
* breaking: LabelEnum.dict() -> val_dict(), add label_dict() method
* add training_set col of main metrics table into urls
* fix gnome.yml targets: EFS -> EF, also add missing predictions note
* test mbd/enums.py and LabelEnum
* show model-stats-uniq-protos.json on /models to fix mismatch with landing page metrics (closes #91)
* render missing_preds notes in ModelCard tooltip
* rename model schema .yml + d.ts files
* clickable links to training sets in metrics table
I noticed that the F1 scores for GNoME listed on two different Matbench Discovery pages appear to be inconsistent:
https://matbench-discovery.materialsproject.org/models
and
https://matbench-discovery.materialsproject.org/
Could someone please clarify why there is a difference in the reported F1 scores for GNoME? Thanks!