
Add benchmarks to compare solvers #68

Open
Specy opened this issue Dec 2, 2024 · 4 comments

Comments

@Specy (Contributor) commented Dec 2, 2024

It could be useful to have benchmarks that run with all (or most) solvers across different kinds of problems, with different domains and sizes, to show the performance differences and feasibility of all the solvers in good_lp.

These could also serve as additional test cases to run.

@lovasoa (Collaborator) commented Dec 2, 2024

That would be great! Do you want to make it?

@Specy (Contributor, Author) commented Dec 2, 2024

I made this mostly as a tracking issue for future work on implementing this. I don't currently have time to add it, but a good start would be actually finding test cases to run, along with their models and expected results.

Maybe we could take some tests from the repositories of other solvers? Perhaps a bunch from each solver. Or, if anyone knows where to get a dataset of models to run in benchmarks.

@lovasoa (Collaborator) commented Dec 2, 2024

@Specy (Contributor, Author) commented Dec 2, 2024

That is definitely a good place to get some problems from.

We also need some continuous-only problems so that we can include clarabel too.
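To make the idea above concrete, here is a minimal, self-contained sketch of what such a benchmark loop could look like. It uses only the standard library's `Instant` for timing; the solver names and the `fake_solve` workload are placeholders, since a real version would build a good_lp model of each size and call the actual solver backend (and would more likely use a proper harness such as criterion).

```rust
use std::time::Instant;

/// Times `f` over `iters` runs and returns the average per-run duration
/// in nanoseconds. A minimal stand-in for a real harness like criterion.
fn bench<F: FnMut()>(mut f: F, iters: u32) -> u128 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_nanos() / iters as u128
}

/// Placeholder workload standing in for "build a problem of size `n` and
/// solve it with one backend". A real benchmark would construct the model
/// here and invoke the selected solver instead.
fn fake_solve(n: usize) -> f64 {
    (0..n).map(|i| (i as f64).sqrt()).sum()
}

fn main() {
    // Hypothetical solver names; the real list would come from the
    // solver backends enabled via good_lp's feature flags.
    for &n in &[100, 1_000, 10_000] {
        for solver in ["cbc", "highs", "clarabel"] {
            let avg_ns = bench(|| { std::hint::black_box(fake_solve(n)); }, 10);
            println!("{solver:>10}  n={n:<6} avg={avg_ns} ns");
        }
    }
}
```

The `black_box` call keeps the compiler from optimizing the workload away; a real run would also want warm-up iterations and per-problem correctness checks (comparing each solver's objective value against a known optimum), which would double as the extra test cases mentioned above.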
