[Bug] condition_on_observation with SaasFullyBayesianGP #1680
Comments
Hmm, yeah, I'm not surprised; I don't think we've tried or tested this before. We'll take a look.
Thanks! I'm trying to sort something out on my end too, but it may be a bit patchy if one is to avoid GPyTorch.
Yeah, we may well have to change something on the gpytorch end, but that's ok.
So, I have a suggested solution to this problem that seems to be relatively pain-free. However, it doesn't quite fit within the BoTorch ecosystem, as it involves duplicating the training data in the FBGP and moving the
I guess this equates to a regular batched model. The only issue is that `num_mcmc_samples` is not available when the model is initialized (or re-initialized once the MCMC samples are obtained). In my own runs, I have simply hacked the `num_mcmc_samples` in there.
Yeah, like you said, this is effectively just using a batched model. I think what you have is a decent solution.
Ideally we could do this in a way that doesn't require changing the forward/posterior methods. I haven't dug into this in detail, but is the issue here fundamentally that the underlying gpytorch model currently doesn't actually have training data of the right size, so that the conditioning logic there fails? I wonder if it would be possible to expand that functionality on the gpytorch end to support this.
That is the way I have understood things. The FBGP doesn't have a batch dimension in the inputs, since there's only one set of training data. The length- and outputscales obviously do, however, which gives a batch dimension in the output.
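For concreteness, here is a minimal sketch of where the batch dimension lives in a fitted SAAS model (placeholder data; the attribute layout below is my assumption about the current implementation):

```python
import torch
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
from botorch.fit import fit_fully_bayesian_model_nuts

train_X = torch.rand(8, 3, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)

gp = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
fit_fully_bayesian_model_nuts(
    gp, warmup_steps=32, num_samples=16, thinning=4, disable_progbar=True
)

# The training inputs carry no MCMC batch dimension ...
print(gp.train_inputs[0].shape)                       # torch.Size([8, 3])
# ... but the hyperparameters loaded from the MCMC samples do, so anything
# computed through the kernel picks up an extra leading batch dimension.
print(gp.covar_module.base_kernel.lengthscale.shape)  # (num_mcmc_samples, 1, 3)
print(gp.covar_module.outputscale.shape)              # (num_mcmc_samples,)
```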
So, I tried to address it by adding a
So one possibility here would be to add a dedicated
@dme65, @saitcakmak does this sound reasonable?
Yeah, that sounds good. We already have a
Started a PR for this in cornellius-gp/gpytorch#2307. @hvarfner, can you check if this works for this use case? I haven't looked super closely through the prediction strategy, but as the
Not quite - the same issue still persists! It seems like the forward pass through the GP
The batch_shape attribute in
Once again, only my understanding of things =)
@hvarfner Is this fixed?
Yes!
Awesome! Do you know what PR fixed it?
🐛 Bug
At the risk of looking a bit silly, I don't think conditioning on additional observations works out of the box when using fully Bayesian GPs.
To reproduce
The setup from the SAASBO tutorial works fine for reproducing this.
**Code snippet to reproduce**
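The original snippet was not preserved here; a minimal sketch of the kind of reproduction described (loosely following the SAASBO tutorial, with placeholder data) might look like this:

```python
import torch
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
from botorch.fit import fit_fully_bayesian_model_nuts

train_X = torch.rand(10, 4, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)

model = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
fit_fully_bayesian_model_nuts(
    model, warmup_steps=32, num_samples=16, thinning=4, disable_progbar=True
)

new_X = torch.rand(2, 4, dtype=torch.double)
new_Y = new_X.sum(dim=-1, keepdim=True)

model.posterior(new_X)  # populate the prediction cache before conditioning
conditioned = model.condition_on_observations(new_X, new_Y)  # fails as described below
```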
**Stack trace/error message**
For both cases, this occurs:
Expected Behavior
Conditioning on the new observations for every constituent GP in the FBGP.
My two cents on the issue
Notably, creating a BatchedMultiOutputModel with `num_mcmc_samples` batches of train inputs and targets handles this without issue. The issue seems to appear here: https://github.com/cornellius-gp/gpytorch/blob/master/gpytorch/models/exact_gp.py#L220, where the output is of shape `num_mcmc_samples x num_mcmc_samples x num_inputs` (it should probably be `num_mcmc_samples x num_inputs`).
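For comparison, here is a hedged sketch of that batched-model behaviour (placeholder data; the leading batch dimension plays the role of the MCMC samples):

```python
import torch
from botorch.models import SingleTaskGP

m, n, d = 16, 10, 4  # stand-ins for num_mcmc_samples, num training points, input dim
train_X = torch.rand(n, d, dtype=torch.double).repeat(m, 1, 1)  # duplicate the data per batch
train_Y = train_X.sum(dim=-1, keepdim=True)

batched_model = SingleTaskGP(train_X, train_Y)

new_X = torch.rand(2, d, dtype=torch.double).repeat(m, 1, 1)
new_Y = new_X.sum(dim=-1, keepdim=True)

batched_model.posterior(new_X)  # populate the prediction cache before conditioning
conditioned = batched_model.condition_on_observations(new_X, new_Y)  # works without issue
print(conditioned.train_inputs[0].shape)  # torch.Size([16, 12, 4]): n + 2 points per batch
```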
System information
Please complete the following information: