
[Feature Request] Choose which dimensions to differentiate with respect to in derivative multitask GPs #2525

Codesflow-Simon opened this issue May 16, 2024 · 0 comments


🚀 Feature Request

Derivative GP components such as
ConstantMeanGrad
RBFKernelGrad
differentiate with respect to all input dimensions and concatenate those derivatives to the output. I propose that you should be able to choose which dimensions are differentiated.
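For instance, a quick shape check against the current modules (a minimal sketch; the exact evaluation call may differ across GPyTorch versions) shows that the derivative block always covers every input dimension:

```python
import torch
from gpytorch.kernels import RBFKernelGrad
from gpytorch.means import ConstantMeanGrad

x = torch.randn(5, 4)  # 5 points, 4 input dimensions (x, y, a, b)

mean = ConstantMeanGrad()
print(mean(x).shape)  # (5, 5): the value plus one derivative per input dimension

kernel = RBFKernelGrad()
# .to_dense() may be .evaluate() on older GPyTorch versions
print(kernel(x, x).to_dense().shape)  # (25, 25): 5 points * (4 dims + 1)
```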

Here's an example.
Take a GP that maps inputs (x, y, a, b) to outputs (f, dfdx, dfdy), where f is some function output and dfd* is the derivative of the output w.r.t. that input.

Currently, you can solve problems like this, but derivatives with respect to all inputs are included in the output, meaning the output would be (f, dfdx, dfdy, dfda, dfdb) even though we know nothing about (dfda, dfdb).
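For concreteness, here is a minimal sketch of how this looks today, modelled on the GPyTorch derivative-GP tutorial pattern (train_x and train_y are assumed to exist): the likelihood and targets must cover all five tasks even though two of them carry no information.

```python
import gpytorch

class GPModelWithDerivatives(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMeanGrad()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernelGrad(ard_num_dims=4)
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)

# Inputs are (x, y, a, b), so num_tasks must be 4 + 1 = 5,
# and train_y needs shape (n, 5) even though dfda and dfdb are unknown.
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=5)
model = GPModelWithDerivatives(train_x, train_y, likelihood)
```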

Motivation

This aims to clean up and simplify problems like the one specified above. One may be able to set all ground-truth values of dfda or dfdb to NaN, but I hope the reader agrees this is a poor solution.

Here is a more concrete example. Let's say we are modelling some kind of flow over a 2D field: temperature as f, with the flux (dfdx, dfdy) being the quantity of interest. At each input point (x, y) we also have additional information, perhaps material properties a, b at that point. We aim to model the function (x, y, a, b) => (f, dfdx, dfdy); the information (dfda, dfdb) just doesn't make much sense in this context, and since we aren't interested in it, I don't want it in the output.

Pitch

I propose that we add a dims argument to all kernels and means with Grad in their name, using an approach similar to torch.diff.

Selecting an axis (or axes) in dims adds it to the list of inputs to be differentiated against. (You could also add a special case for all dimensions as the default behaviour, perhaps 'all'.)

ConstantMeanGrad(dims='all') will act as the class currently does
ConstantMeanGrad(dims=()) will perform no differentiation and will be identical to ConstantMean
ConstantMeanGrad(dims=(0, 1)) will act as in the example provided above (see the sketch below)
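A rough sketch of what the proposed usage could look like for the (x, y, a, b) example (the dims argument is hypothetical and does not exist yet):

```python
# Hypothetical API sketch: only dimensions 0 and 1 (x and y) are differentiated.
mean_module = gpytorch.means.ConstantMeanGrad(dims=(0, 1))
covar_module = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.RBFKernelGrad(dims=(0, 1), ard_num_dims=4)
)
# Outputs would be (f, dfdx, dfdy), so num_tasks = 1 + len(dims) = 3.
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=3)
```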

Describe the solution you'd like
I would like the code in these classes to be updated, or even a base class for *Grad mean and kernel modules for the others to inherit from.
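As a very rough sketch of what such a shared base (or mixin) might provide, assuming a dims argument as proposed above:

```python
# Hypothetical mixin: shared dim-selection logic for *Grad means and kernels.
class GradDimsMixin:
    def __init__(self, dims="all", **kwargs):
        super().__init__(**kwargs)
        self.dims = dims  # 'all', an empty tuple, or a tuple of input indices

    def grad_dims(self, d):
        """Indices of the input dimensions to differentiate, given d inputs."""
        if self.dims == "all":
            return tuple(range(d))
        return tuple(self.dims)
```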

Describe alternatives you've considered
Perhaps we could publish a guide or similar describing a workaround.

Are you willing to open a pull request? (We LOVE contributions!!!)
Yes!! But I am limited on time and my understanding of GPyTorch is still developing.

Additional context
