Mixed-integer differential evolution for next point suggestion #549
base: master
Conversation
Code linter optimisation with Ruff
@udicaprio I realized the coverage bot is broken, so let me fix that first before moving on with this PR.
@till-m, fine, no problem. I just realized that I could make some easy modifications that improve the computational performance without affecting the identification, and I would like to implement them. I will be working on this today; it is quite an easy one, but I need to fine-tune some DE parameters.
Great! Could you tag me or request a review when you're done?
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##           master     #549      +/-   ##
==========================================
- Coverage   96.04%   95.99%    -0.06%
==========================================
  Files          12       12
  Lines        1139     1149       +10
==========================================
+ Hits         1094     1103        +9
- Misses         45       46        +1
Computational efficiency of the MIP identification increased
Linter improved
FYI I need to manually approve the tests since you're a first-time contributor. If I forget to run them, please tag me and I will make sure they run. Also, don't worry about small changes in coverage. We're trying to cover the code as much as makes sense, not chase a number.
Hello @till-m, thank you for the notification. It seems the code passed all the tests except the coverage one.
Generally speaking, this is great code: clear, concise, etc. I've left some minor comments, though some are just to clarify things. One big issue, however, is assuming that any non-continuous parameter is necessarily integer-valued... I don't see any easy way around this right now, though I assume that the algorithm isn't inherently incompatible with discrete, non-integer parameters.
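One possible workaround, sketched here and not part of this PR: represent each discrete, non-integer parameter by an integer index into its set of allowed values, let the DE search over the indices, and decode before evaluating. Everything below (`categories`, `objective`) is hypothetical; it only relies on SciPy's public `differential_evolution` and its `integrality` flag (SciPy >= 1.9):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical discrete, non-integer parameter: the solver searches over
# integer indices 0..len(categories)-1 and we map back to the real values.
categories = np.array([0.1, 0.5, 2.5, 10.0])

def objective(x):
    cont = x[0]
    value = categories[int(round(x[1]))]  # decode index -> discrete value
    return (cont - 1.0) ** 2 + (value - 2.5) ** 2

result = differential_evolution(
    objective,
    bounds=[(-5.0, 5.0), (0, len(categories) - 1)],
    integrality=[False, True],  # second dimension restricted to integers
    seed=0,
)
best_cont = result.x[0]
best_value = categories[int(round(result.x[1]))]
```

The same indexing trick should apply to any finite set of discrete values, at the cost of an extra decode step inside the acquisition wrapper.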
Should I make any further modifications in order to reach at least the old coverage percentage?
The line that is currently not covered is perfectly fine to not cover, imo.
Ah, also, the name of the function should probably be adjusted somewhat. I'm not sure what the best name would be -- something like
Working on this.
@till-m, thank you for revising the code and your comments. I am working on them. I will push the modifications as soon as possible.
Excellent, thank you, much appreciated 🙏
Regarding the discrete != integer discussion, I know that this significantly complicates the PR and it might not be what you signed up for when you offered to implement DE optimization. If there's something I can do to make things easier for you, let me know. I'm convinced that this is an extremely valuable contribution to the library, so I'd like to make sure it gets added!
Hello @till-m, I have modified the DE search including the transformation with the
Hey @udicaprio, things have been/are a bit busy on my end, sorry about that. I think this is almost there, but I left a few small comments that should be addressed, I think.
Thanks for all the work!
ntrials = max(1, len(x_seeds) // 100)
for _ in range(ntrials):
    xinit = space.random_sample(15 * len(space.bounds), random_state=self.random_state)
    de = DifferentialEvolutionSolver(acq, bounds=space.bounds, init=xinit, rng=self.random_state)
I think we need to use polish=False to turn off gradient-based polishing?
# Store it if better than previous minimum (maximum).
if min_acq is None or np.squeeze(res.fun) >= min_acq:
    x_try_sc = de._unscale_parameters(res.x)
I think this will unscale the result to be between [0, 1], right, which we don't want to do(?) Maybe this call to _unscale_parameters is not necessary?
@@ -219,7 +219,7 @@ def _acq_min(
         acq, space, n_random=max(n_random, n_l_bfgs_b), n_x_seeds=n_l_bfgs_b
     )
     if n_l_bfgs_b:
-        x_min_l, min_acq_l = self._l_bfgs_b_minimize(acq, space, x_seeds=x_seeds)
+        x_min_l, min_acq_l = self._smart_minimize(acq, space, x_seeds=x_seeds)
I think the name is good!
x_try_sc = de._unscale_parameters(res.x)
x_try = space.kernel_transform(x_try_sc).flatten()
x_min = x_try
min_acq = np.squeeze(res.fun)
What do you think about adding a manual polishing step here, i.e. we do one single L-BFGS-B optimization over continuous dimensions, starting from the x_min found by the DE? If I understand correctly, this should work as the polish argument of the DE solver does.
Hi,
this commit addresses #548. It implements mixed-integer differential evolution (MIP-DE) to minimise the acquisition function in the case of a mixed-integer optimisation.
MIP-DE is used only when discrete variables are present in the problem.