Replies:
Can you share a bit more about your setup and ideally a repro with the code you used to generate these plots?
Great to hear. Do you happen to know which changes in particular fixed this? Mostly so that we can be on the lookout for similar issues in the future. My suspicion is that this may be a change @SebastianAment made that avoids Boltzmann sampling on the initial candidates, in order to prevent "overexploitative" behavior in the random restarts.
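For context, the Boltzmann-sampling heuristic in question picks random-restart starting points by sampling raw candidates with probability proportional to a softmax of their acquisition values, rather than deterministically. A minimal sketch of the idea (not BoTorch's actual implementation; `boltzmann_select`, `greedy_select`, and the `eta` default are illustrative):

```python
import torch

def boltzmann_select(X: torch.Tensor, acq_vals: torch.Tensor, n: int, eta: float = 2.0) -> torch.Tensor:
    """Sample `n` restart points from raw candidates `X`, weighted by a
    softmax (Boltzmann distribution) over standardized acquisition values.
    Larger `eta` concentrates mass on the best candidates."""
    z = (acq_vals - acq_vals.mean()) / acq_vals.std().clamp_min(1e-9)
    weights = torch.softmax(eta * z, dim=0)
    idx = torch.multinomial(weights, n, replacement=False)
    return X[idx]

def greedy_select(X: torch.Tensor, acq_vals: torch.Tensor, n: int) -> torch.Tensor:
    """Deterministic alternative: simply take the top-`n` candidates."""
    return X[torch.topk(acq_vals, n).indices]
```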
I would think so, but we'd first have to understand the mechanism that causes this. One obvious guess would be that, as we're optimizing a nonconvex function (over a potentially high-dimensional domain), our random restart initializations miss the basin of attraction of the global optimum (or are otherwise degenerate), so the optimization only finds a local optimum. However, this could also be some other failure of the optimization. If you provide a reproducible example (ideally with a fixed seed or set of initial conditions), we could take a look at what's going on when this failure occurs.
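For reference, a minimal repro along those lines could look like the following. This is a sketch on toy data; the model, acquisition function, bounds, and restart settings are placeholders to be swapped for the setup that actually triggers the issue:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qLogNoisyExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)  # fixed seed so the failure is reproducible

# Toy problem on [0, 1]^2; substitute the data that triggers the issue.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = (train_X ** 2).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = qLogNoisyExpectedImprovement(model=model, X_baseline=train_X)
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)

# Re-run the optimization many times without refitting the model and
# collect the candidates to quantify how much they scatter.
candidates = []
for _ in range(100):
    cand, _ = optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=128)
    candidates.append(cand)
candidates = torch.cat(candidates)
print(candidates.std(dim=0))  # large spread indicates unstable optimization
```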
The main reason this is the default is that, before it was, we observed many users struggle because they were not normalizing their inputs, so we opted to do it for them automatically.
It seems like the difference in speed comes from getting rid of the overhead of the transform during the optimization, but it would be good to profile this. I'd need to know more about the problem to say something more concrete: what dimension, how many observations, what exactly are you optimizing, etc. That said, a 5x boost seems quite substantial. @saitcakmak, you have observed some slowdowns from the transforms, but IIRC not nearly as large. Any thoughts on this?
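One quick way to check where the time goes (a sketch; `acqf` and `bounds` stand in for whichever of the two setups is being timed, and the operator names to look for depend on the transform used):

```python
import time
from torch.profiler import profile, ProfilerActivity
from botorch.optim import optimize_acqf

# Wall-clock timing of a single call.
start = time.perf_counter()
optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=128)
print(f"optimize_acqf took {time.perf_counter() - start:.2f}s")

# Operator-level profile; if the input transform dominates, the ops it
# dispatches should show up near the top of this table.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=128)
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=15))
```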
Hello,
I wanted to discuss two topics regarding acquisition function optimization.
1. If I run `optimize_acqf` 100 times (without updating the model) and plot the candidates, there is a clear issue on version 0.11.3: the generated candidates are entirely different, quite often at the bounds. This issue doesn't persist in version 0.13. However, there is still the occasional outlier (which is not seen with UCB). Is there a possibility of fixing this?
2. Why does `optimize_acqf` perform the optimization over unnormalized bounds? I tested two cases using a `SingleTaskGP` (see the sketch after this list):
   - Defining the GP with an `input_transform` using `Normalize`, and passing the unnormalized bounds to `optimize_acqf`.
   - Defining the GP without an `input_transform` but applying a manual min-max transformation to the inputs, and passing the normalized bounds to `optimize_acqf`.

   The latter method is much quicker; my tests show a 5x speed boost. Is there a reason this isn't done by default?
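For concreteness, a sketch of the two setups (toy data; model fitting via `fit_gpytorch_mll` is omitted for brevity, and the bounds and hyperparameters are illustrative):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms import Normalize
from botorch.acquisition import UpperConfidenceBound
from botorch.optim import optimize_acqf

train_X = 10.0 * torch.rand(20, 2, dtype=torch.double)  # inputs on [0, 10]^2
train_Y = train_X.sum(dim=-1, keepdim=True)
raw_bounds = torch.tensor([[0.0, 0.0], [10.0, 10.0]], dtype=torch.double)

# Case 1: let the model normalize inputs via an input transform and
# optimize over the original (unnormalized) bounds.
gp1 = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=2))
acqf1 = UpperConfidenceBound(gp1, beta=2.0)
cand1, _ = optimize_acqf(acqf1, bounds=raw_bounds, q=1, num_restarts=10, raw_samples=128)

# Case 2: min-max normalize the inputs by hand, optimize over the unit
# cube, and map the candidate back to the original scale afterwards.
lo, hi = raw_bounds[0], raw_bounds[1]
gp2 = SingleTaskGP((train_X - lo) / (hi - lo), train_Y)
acqf2 = UpperConfidenceBound(gp2, beta=2.0)
unit_bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
cand2_norm, _ = optimize_acqf(acqf2, bounds=unit_bounds, q=1, num_restarts=10, raw_samples=128)
cand2 = lo + cand2_norm * (hi - lo)  # candidate in the original space
```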
Thank you in advance! Please let me know if you need any additional information.