It could be useful to have benchmarks that run with all (or most) solvers, covering different kinds of problems, with different domains and different sizes, to show the performance differences and the feasibility of each solver in good_lp.
These could also double as additional test cases. A rough sketch of what one benchmark could look like is included below.
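For illustration, here is a minimal sketch using Criterion and the two-variable example model from the good_lp README. This is an assumption about how such a suite could start, not a final design; the benchmark name and `expect` message are made up. Since good_lp selects its backend through cargo features, `default_solver` resolves to whichever single solver feature is enabled, so comparing solvers would mean running this once per feature combination.

```rust
// Sketch only: a Criterion benchmark that repeatedly solves the tiny
// example model from the good_lp README with the currently enabled solver.
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use good_lp::{constraint, default_solver, variables, Solution, SolverModel};

fn bench_small_lp(c: &mut Criterion) {
    // "small_lp_default_solver" is an illustrative name, not a convention.
    c.bench_function("small_lp_default_solver", |b| {
        b.iter(|| {
            // Two-variable model taken from the good_lp README.
            variables! {
                vars:
                       a <= 1;
                  2 <= b <= 4;
            }
            let solution = vars
                .maximise(10 * (a - b / 5) - b)
                .using(default_solver)
                .with(constraint!(a + 2 <= b))
                .with(constraint!(1 + a >= 4 - b))
                .solve()
                .expect("the benchmark model should be feasible");
            // Keep the result alive so the solve is not optimised away.
            black_box((solution.value(a), solution.value(b)))
        })
    });
}

criterion_group!(benches, bench_small_lp);
criterion_main!(benches);
```

A real suite would of course also vary the problem kind and size (larger LPs, MIPs, different variable domains), which is the interesting part of the comparison.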
I made this mostly as a tracking issue for future work. I don't currently have time to implement it, but a good start would be actually finding test cases to run, together with their models and expected results.
Maybe we could take some tests from the repositories of the other solvers? Perhaps a handful from each. Or, if anyone knows where to find a dataset of models suitable for benchmarks, that would work too.