

get_fantasy_model for multi-task GPs #800

Closed
eytan opened this issue Jul 21, 2019 · 9 comments
Labels: enhancement, fantasy points, multitask (for questions about multitask models)

Comments

@eytan

eytan commented Jul 21, 2019

🚀 Feature Request

I would like to be able to use get_fantasy_model() with a GP model that includes derivative observations (along the lines of the GPModelWithDerivatives tutorial notebook). However, when I try to do this, I get the following error:

RuntimeError: Cannot yet add fantasy observations to multitask GPs, but this is coming soon!

Is support for multi-task GPs coming soon? I know that @Balandat has implemented them for the specific case of IndependentModelList models, but it doesn't appear that there has been more general development for other types of MTGPs.

Motivation

Derivative observations have been shown to substantially accelerate Bayesian optimization. It should be fairly straightforward to plug derivative information into BoTorch if it were possible to get fantasy models for data with derivative observations.

Pitch

Describe the solution you'd like
Ability to use fantasies with generic multi-task models (or at least the kinds of multi-task models that arise when derivative information is available for all observations).
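To make the request concrete, here is a minimal NumPy sketch (not GPyTorch code; the names `joint_block` and `posterior_cov` are illustrative, not library API) of what a fantasy update means for a GP with derivative observations: the joint (f, f') GP is conditioned on a hypothetical new observation, which shrinks the posterior covariance there.

```python
import numpy as np

ELL = 1.0  # RBF lengthscale (assumed fixed for this sketch)

def joint_block(a, b):
    """2x2 covariance block between (f(a), f'(a)) and (f(b), f'(b))
    under an RBF kernel k(a, b) = exp(-(a - b)^2 / (2 * ELL^2))."""
    d = a - b
    k = np.exp(-d**2 / (2 * ELL**2))
    return np.array([
        [k,                d / ELL**2 * k],                    # cov(f(a), f(b)),  cov(f(a), f'(b))
        [-d / ELL**2 * k,  (1 / ELL**2 - d**2 / ELL**4) * k],  # cov(f'(a), f(b)), cov(f'(a), f'(b))
    ])

def joint_cov(xs1, xs2):
    """Full covariance over interleaved (value, derivative) pairs."""
    return np.block([[joint_block(a, b) for b in xs2] for a in xs1])

def posterior_cov(train_x, noise, test_x):
    """Posterior covariance of (f, f') at test_x. Note it depends only on the
    observation *locations*, not the targets -- which is why fantasizing at
    candidate points is cheap and useful."""
    Ktt = joint_cov(train_x, train_x) + noise * np.eye(2 * len(train_x))
    Kst = joint_cov([test_x], train_x)
    Kss = joint_cov([test_x], [test_x])
    return Kss - Kst @ np.linalg.solve(Ktt, Kst.T)

train_x, noise = [0.0, 1.0], 1e-4
var_before = posterior_cov(train_x, noise, 0.5)
# "Fantasize" an (f, f') observation at x = 0.5 and recondition:
var_after = posterior_cov(train_x + [0.5], noise, 0.5)
print(var_before[0, 0], var_after[0, 0])  # variance at 0.5 collapses
```

A get_fantasy_model() implementation would do this conditioning as an efficient low-rank update of the existing model's caches rather than refactorizing from scratch, but the posterior it produces should match the direct computation above.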

Describe alternatives you've considered
None.

Are you willing to open a pull request? (We LOVE contributions!!!)
I'm sorry.

Additional context

@cahity

cahity commented Nov 16, 2022

Hey there!
I see that #805 was never merged. Are there any improvements expected?

@gpleiss
Member

gpleiss commented Nov 28, 2022

@cahity This was a while ago, so I'm not sure. It looks like there were test failures that weren't resolved.

We would probably want to start the PR from scratch though (3 years is a long time for code...). Would you be willing to revive the PR?

@cahity

cahity commented Nov 28, 2022

I would be happy to contribute, so I will give it a go.
Anyone who beats me to it is more than welcome though.

@yyexela
Contributor

yyexela commented Feb 13, 2023

Hey all! Any update on this?

@cahity

cahity commented Feb 13, 2023

Unfortunately, I didn't have the time. I doubt anyone is working on it, so... I guess no updates for now.

@yyexela
Contributor

yyexela commented Feb 13, 2023

No worries at all! I appreciate the quick reply :)

@yyexela
Contributor

yyexela commented Feb 15, 2023

I would like to implement this, since it's needed as part of my research project. Is there any guidance I could get on where to start, or on what exactly should be implemented? I originally thought my derivative-enabled GP should subclass BatchedMultiOutputGPyTorchModel instead of GPyTorchModel, but the documentation for the former says the outputs are independent, which is not the case when working with derivatives. Otherwise, I'm getting an error with qNoisyExpectedImprovement: `linalg.cholesky: (Batch element 0): The factorization could not be completed because the input is not positive-definite (the leading minor of order 6 is not positive-definite).` I'm trying to resolve it, but I'm not sure if that's the right direction. Any help is appreciated!
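As an aside on the Cholesky failure above: the usual workaround is to retry the factorization with increasing diagonal jitter (GPyTorch does something similar internally via a `psd_safe_cholesky` utility, if I recall correctly). A standalone NumPy sketch, not BoTorch/GPyTorch code:

```python
import numpy as np

def safe_cholesky(K, max_tries=6, base_jitter=1e-8):
    """Retry Cholesky with growing diagonal jitter until K + jitter*I is PD."""
    jitter = 0.0
    for i in range(max_tries):
        try:
            return np.linalg.cholesky(K + jitter * np.eye(K.shape[0]))
        except np.linalg.LinAlgError:
            jitter = base_jitter * 10**i  # 1e-8, 1e-7, 1e-6, ...
    raise np.linalg.LinAlgError("matrix is not PSD even with jitter")

# A singular (PSD but not strictly PD) covariance: two identical inputs,
# which is exactly the kind of matrix derivative GPs tend to produce.
K = np.array([[1.0, 1.0],
              [1.0, 1.0]])
L = safe_cholesky(K)  # plain np.linalg.cholesky(K) would raise here
```

Jitter only masks the symptom, though; near-duplicate training points or very small lengthscales in a derivative GP can still make the conditioning bad enough that the fantasy posterior is unreliable.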

@gpleiss
Member

gpleiss commented Mar 17, 2023

cc/ @jacobrgardner

@yyexela
Contributor

yyexela commented May 19, 2023

I think this can be closed as well, @jacobrgardner, because of #805 and #2317.


5 participants