First and foremost, I would like to extend my sincere appreciation for your dedication to assembling such an impressive collection of models in this repository, as well as for your continued efforts to keep them up to date. Your work is invaluable to the community.
While using the resources provided, I have run into an issue I hope you can help me address. Despite having cloned the latest version of the main branch and ensuring all dependencies are correctly installed, I've observed a slight discrepancy between the metrics produced by the preds.py script (I printed df_metrics) and the results published on your website.
Would you be able to direct me to any additional steps or considerations I might have overlooked which could account for this difference? Any guidance you could provide would be greatly appreciated.
Thank you for your time and assistance.
Best regards,
Anyang
By default, the site shows metrics on the subset of the WBM test set that contains only unique structure prototypes (removing prototype overlap with MP as well as duplicates within WBM itself). While this has only a minor effect on the metrics, the goal was to maximize the OODness of the test set. If you print df_metrics_uniq_protos instead, the numbers should match what you see on the site. See #75, #69 (reply in thread), #69 (reply in thread) for additional context.
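A minimal sketch of the comparison, assuming both dataframes are exposed by the same preds module that defines df_metrics (the import path below is illustrative and may differ in your checkout):

```python
# Hypothetical import path; adjust to wherever preds.py lives in your clone.
from matbench_discovery.preds import df_metrics, df_metrics_uniq_protos

# Metrics on the full WBM test set (what preds.py prints by default).
print(df_metrics)

# Metrics restricted to unique structure prototypes -- this is the subset
# shown on the website, so these numbers should match the site.
print(df_metrics_uniq_protos)
```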