🐛 Bug
Hey all!
We have an online learning setting where we add datapoints to the GP using the `get_fantasy_model` method. I noticed a significant speed-up in computation time once the dataset crosses the 2048-datapoint threshold, which I guess can be traced back to #1224 (specifically these lines in `linear_operator`). Is this still up to date?

I tried recreating the computation-time graph in #1224, but I am getting slightly different results: the crossover point is already at around 100 datapoints instead of 2000. The test was performed in a Google Colab with a Tesla T4 runtime and locally on a Surface Book 2, with similar results (I haven't gotten around to testing this on an RTX 4090, but I assume it will look similar).

I suppose it makes sense to update this value, or to create a setting where this parameter can be changed, since it is hardware dependent, right? I am happy to propose a PR for this if you want :)
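For illustration, such a setting could follow the pattern of the existing context-manager settings in `gpytorch.settings` / `linear_operator.settings`. The name `fantasy_update_threshold` below is hypothetical and does not exist in either library; this is just a sketch of the idea:

```python
# Hypothetical sketch only: `fantasy_update_threshold` is a made-up name,
# modeled on the context-manager style settings already used by GPyTorch
# and linear_operator.
class fantasy_update_threshold:
    """Size above which the iterative fantasy-update path would be used
    instead of the dense one (currently hardcoded to 2048)."""

    _value = 2048  # default, matching the current hardcoded threshold

    def __init__(self, new_value):
        self._new_value = new_value

    @classmethod
    def value(cls):
        return cls._value

    def __enter__(self):
        self._prev = type(self)._value
        type(self)._value = self._new_value

    def __exit__(self, exc_type, exc_val, exc_tb):
        type(self)._value = self._prev
        return False
```

Callers could then tune the crossover point per machine, e.g. `with fantasy_update_threshold(128): model.get_fantasy_model(new_x, new_y)`.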
To reproduce
Code snippet used to generate the plot
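A minimal sketch of the kind of benchmark used (the model, data, and grid of sizes here are illustrative, not the exact snippet behind the plot):

```python
import time

import torch
import gpytorch


class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


def time_fantasy_update(n_train, n_new=1, device="cpu"):
    """Time a single get_fantasy_model call starting from n_train points."""
    train_x = torch.rand(n_train, 1, device=device)
    train_y = torch.sin(6 * train_x).squeeze(-1) + 0.1 * torch.randn(
        n_train, device=device
    )

    likelihood = gpytorch.likelihoods.GaussianLikelihood().to(device)
    model = ExactGPModel(train_x, train_y, likelihood).to(device)
    model.eval()
    likelihood.eval()

    # get_fantasy_model requires an existing prediction strategy, so prime
    # the caches with one posterior call before timing.
    with torch.no_grad():
        test_x = torch.rand(10, 1, device=device)
        likelihood(model(test_x))

    new_x = torch.rand(n_new, 1, device=device)
    new_y = torch.sin(6 * new_x).squeeze(-1)

    if device != "cpu":
        torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        model.get_fantasy_model(new_x, new_y)
    if device != "cpu":
        torch.cuda.synchronize()
    return time.perf_counter() - start


# Sweep dataset sizes around the hardcoded 2048 threshold.
for n in [64, 128, 256, 512, 1024, 2047, 2048, 4096]:
    print(n, time_fantasy_update(n))
```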
Expected Behavior
`get_fantasy_model` should not have a sudden decrease in computation time at 2048 datapoints.

System information
GPyTorch version: 1.11
PyTorch version: 2.1.0
OS: Ubuntu 22.04