Improve sampling from GP predictive posteriors. #2498

Open
gpleiss wants to merge 1 commit into main

Conversation

@gpleiss (Member) commented Mar 18, 2024

In GaussianLikelihood#marginal the covariance matrix is now a PsdSumLinearOperator
rather than an AddedDiagLinearOperator. This change improves samples from GP predictive posteriors.
Rather than applying a low-rank approximation to K + \sigma^2 I, the PsdSumLinearOperator
applies a low-rank approximation only to K for sampling, and then adds i.i.d. N(0, \sigma^2 I)
noise.
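
To make the strategy concrete, here is a minimal sketch of the idea (not the actual PsdSumLinearOperator implementation; sample_posterior and its arguments are illustrative names, and K_root stands for whatever low-rank root of K is available, e.g. from Lanczos or a pivoted Cholesky):

```python
import torch

# Illustrative sketch of the sampling strategy described above (not the
# actual PsdSumLinearOperator code). `mean` is the predictive mean (shape n)
# and `K_root` is an n x k (possibly low-rank) root R of the latent
# covariance K, so that K ~= R @ R.T.
def sample_posterior(mean, K_root, noise_variance, num_samples):
    n, k = K_root.shape
    # The low-rank approximation touches K only: draw f ~ N(mean, R R^T).
    latent = mean.unsqueeze(-1) + K_root @ torch.randn(k, num_samples)
    # Exact i.i.d. N(0, sigma^2 I) noise is added afterwards, instead of
    # low-rank-approximating K + sigma^2 I as a whole.
    noise = noise_variance ** 0.5 * torch.randn(n, num_samples)
    return latent + noise  # shape: n x num_samples
```

Because the noise draw is exact, the only approximation error in a sample comes from the rank-k root of K, not from the (full-rank) diagonal term.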

@Balandat (Collaborator)

Technically, the noise_covar here can be arbitrary, right? I.e. in the general case this would be K + Sigma where Sigma is p.d. (either non-uniform noise levels, or potentially even a full covariance matrix if the observation noise is correlated) and things should still work, right?
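
For illustration, a hedged sketch of that general case (names are hypothetical, and it assumes Sigma is available as a dense p.d. matrix); only the noise draw changes relative to the i.i.d. sketch above:

```python
import torch

# Sketch of the general case: correlated observation noise with a full
# p.d. covariance Sigma instead of sigma^2 I. The low-rank latent draw is
# unchanged; only the noise term differs.
def sample_posterior_general(mean, K_root, Sigma, num_samples):
    n, k = K_root.shape
    latent = mean.unsqueeze(-1) + K_root @ torch.randn(k, num_samples)
    # Exact correlated noise via a Cholesky root L with L @ L.T == Sigma.
    L = torch.linalg.cholesky(Sigma)
    return latent + L @ torch.randn(n, num_samples)
```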

@gpleiss (Member, Author) commented Mar 19, 2024

@Balandat yes, noise_covar can be arbitrary!
Unfortunately, this PR is going to be slightly more challenging than I thought (due to the special behavior we need for the RFF kernel, etc.). It'll become easier once we merge #2342, so maybe it's time to revive that thread.
