
Evaluation on Objective Benchmarks #40

Open
jingmingzhuo opened this issue Aug 3, 2024 · 0 comments

Comments

@jingmingzhuo

I think this work is meaningful and provides remarkable results. However, all of the benchmarks tested are subjective benchmarks whose outputs are judged by LLMs. Have you tried using MoA on objective tasks such as MMLU or MATH? I think this could make MoA even more valuable. Thanks!
