
Adds settings.trace_mode and skip cholesky jitter #27

Merged 4 commits on Nov 8, 2022

Conversation

@feynmanliang (Contributor) commented Oct 19, 2022

We would like to use gpytorch in our NUTS sampler. While the linked code works, it cannot use JIT compilation (use_nnc) because of data-dependent control flow.

I understand trace_mode was originally used to disable custom pytorch functions back when JIT did not support compiling through them. This PR (along with cornellius-gp/gpytorch#2163) revives the setting for the AoT JIT compiler (https://pytorch.org/functorch/nightly/aot_autograd.html).

  • Moves settings.trace_mode from gpytorch to linear_operators
  • Skips conditional control flow when trace_mode is true, currently:
    • PSD check / jitter in linear_operators.utils.cholesky
    • Diagonal x1_eq_x2 check in gpytorch.kernels.kernel

@gpleiss (Member) commented Oct 22, 2022

This makes sense to me! It seems like we'd want a PR in GPyTorch to ensure that gpytorch.settings.trace_mode also calls linear_operator.settings.trace_mode.
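The delegation suggested here can be sketched as one package's setting entering another's on activation. Both settings below are stand-ins for illustration, not the real gpytorch or linear_operator classes.

```python
from contextlib import contextmanager


class _Flag:
    """Stand-in for a settings feature flag (e.g. linear_operator's)."""

    def __init__(self):
        self.on = False

    @contextmanager
    def __call__(self):
        prev, self.on = self.on, True
        try:
            yield
        finally:
            self.on = prev


linear_operator_trace_mode = _Flag()


@contextmanager
def gpytorch_trace_mode():
    """Entering gpytorch's setting also enables linear_operator's."""
    with linear_operator_trace_mode():
        yield
```

With this pattern, user code only touches the gpytorch-level setting, and the downstream linear_operator flag follows automatically.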

@gpleiss gpleiss enabled auto-merge (squash) October 22, 2022 20:28
@feynmanliang (Contributor, Author) commented:
Sounds good, I will send a commit for the unit tests soon.

auto-merge was automatically disabled October 26, 2022 21:20 (head branch was pushed to by a user without write access)

@gpleiss gpleiss enabled auto-merge (squash) November 8, 2022 01:01
@gpleiss gpleiss merged commit 5b1e019 into cornellius-gp:main Nov 8, 2022