
Bbeh #1925

Merged · 5 commits · Mar 12, 2025

Conversation

epsilondylan
Contributor

Here’s a quick description of this contribution to BBEH, which supports the OpenCompass evaluation project with a BBEH evaluation function:

BIG-Bench Extra Hard (BBEH) - OpenCompass Evaluation Support
Overview
This update enhances the BIG-Bench Extra Hard (BBEH) benchmark by integrating support for the OpenCompass evaluation project. I have contributed a dedicated BBEH evaluation function to streamline the assessment of large language models (LLMs) using OpenCompass, a popular framework for evaluating reasoning capabilities.

Contribution
- Motivation: The goal is to make BBEH more accessible to researchers and developers by enabling seamless integration with OpenCompass, thereby broadening the evaluation ecosystem for advanced LLM reasoning tasks.
- Modification: Added a new evaluate_bbeh_opencompass.py script in the bbeh/evaluation/ directory. This script implements a BBEH-specific evaluation function compatible with OpenCompass, allowing users to run BBEH tasks and aggregate results within the OpenCompass framework.
- Use Case: Researchers can now use OpenCompass to evaluate LLMs on BBEH’s challenging reasoning tasks (e.g., Quantum Reasoning, Spatial Reasoning) with minimal setup, leveraging OpenCompass’s visualization and comparison tools.
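Since the script itself isn't shown in this thread, here is a minimal sketch of the kind of scoring logic a BBEH subset evaluator might use. The function names, the answer-extraction regex, and the exact-match criterion are all illustrative assumptions, not the actual contents of evaluate_bbeh_opencompass.py:

```python
import re


def extract_answer(prediction: str) -> str:
    """Pull a final answer out of a model response.

    Looks for an 'answer is X' pattern first, then falls back to the
    last non-empty line. (Illustrative heuristic, not the exact logic
    of evaluate_bbeh_opencompass.py.)
    """
    match = re.search(r"answer is[:\s]+(.+?)(?:\.|$)",
                      prediction, re.IGNORECASE)
    if match:
        return match.group(1).strip().lower()
    lines = [ln.strip() for ln in prediction.splitlines() if ln.strip()]
    return lines[-1].lower() if lines else ""


def bbeh_score(predictions: list[str], references: list[str]) -> dict:
    """Exact-match accuracy over one BBEH subset, scaled to 0-100."""
    assert len(predictions) == len(references)
    correct = sum(
        extract_answer(p) == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return {"score": 100.0 * correct / max(len(references), 1)}
```

An OpenCompass-compatible evaluator would wrap a function like `bbeh_score` so the framework can call it per subset and collect the `score` field into the results table below.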

Initial results
| dataset | version | metric | mode | Meta-Llama-3-8B-Instruct-LMDeploy-API |
|---|---|---|---|---|
| bbeh_boolean_expressions | d7a200 | score | gen | 14.00 |
| bbeh_disambiguation_qa | d7a200 | score | gen | 33.33 |
| bbeh_geometric_shapes | d7a200 | score | gen | 13.50 |
| bbeh_hyperbaton | d7a200 | score | gen | 1.00 |
| bbeh_movie_recommendation | d7a200 | score | gen | 28.00 |
| bbeh_nycc | d7a200 | score | gen | 11.00 |
| bbeh_shuffled_objects | d7a200 | score | gen | 10.00 |
| bbeh_boardgame_qa | d7a200 | score | gen | 18.50 |
| bbeh_buggy_tables | d7a200 | score | gen | 0.00 |
| bbeh_causal_understanding | d7a200 | score | gen | 42.50 |
| bbeh_dyck_languages | d7a200 | score | gen | 3.50 |
| bbeh_linguini | d7a200 | score | gen | 2.00 |
| bbeh_multistep_arithmetic | d7a200 | score | gen | 0.00 |
| bbeh_object_counting | d7a200 | score | gen | 0.00 |
| bbeh_object_properties | d7a200 | score | gen | 1.00 |
| bbeh_sarc_triples | d7a200 | score | gen | 17.00 |
| bbeh_spatial_reasoning | d7a200 | score | gen | 4.00 |
| bbeh_sportqa | d7a200 | score | gen | 5.00 |
| bbeh_temporal_sequence | d7a200 | score | gen | 2.00 |
| bbeh_time_arithmetic | d7a200 | score | gen | 3.00 |
| bbeh_web_of_lies | d7a200 | score | gen | 7.50 |
| bbeh_word_sorting | d7a200 | score | gen | 2.00 |
| bbeh_zebra_puzzles | d7a200 | score | gen | 3.50 |
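For a single headline number, the per-subset scores above can be aggregated. A macro-average is the simplest choice; the BBEH paper's headline metric is, as I recall, a harmonic mean, and the +1 shift below is one common workaround for the zero scores rather than necessarily the official formula:

```python
from statistics import harmonic_mean

# Per-subset scores copied from the results table above (same order).
scores = [14.00, 33.33, 13.50, 1.00, 28.00, 11.00, 10.00, 18.50, 0.00,
          42.50, 3.50, 2.00, 0.00, 0.00, 1.00, 17.00, 4.00, 5.00, 2.00,
          3.00, 7.50, 2.00, 3.50]

# Simple macro-average over the 23 subsets.
macro = sum(scores) / len(scores)  # ≈ 9.67

# Harmonic mean with a +1 shift so the zero scores don't collapse the
# mean to zero (the shift is an assumption, not confirmed as BBEH's
# official aggregation).
harm = harmonic_mean([s + 1 for s in scores]) - 1

print(f"macro-average: {macro:.2f}, harmonic mean (+1 shift): {harm:.2f}")
```

As expected, the harmonic mean sits well below the macro-average, since it penalizes the near-zero subsets heavily.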

Acknowledgments
This contribution builds on the existing BBEH framework and aligns with the OpenCompass project’s mission to advance LLM evaluation. Feedback and suggestions are welcome!

Collaborator


Do we need this file? Your dataset config file already includes those subset names.

Contributor Author


Not needed here; sorry for the inconvenience.

Collaborator

@MaiziXiao MaiziXiao left a comment


LGTM

@MaiziXiao MaiziXiao merged commit bc2969d into open-compass:main Mar 12, 2025
7 checks passed