Combining rapidtide regressors with other confounds #16

Open · tsalo opened this issue Sep 6, 2024 · 6 comments

tsalo commented Sep 6, 2024

This stems from #7 (comment) and #10 (comment). Basically, I want to make sure that my plan for chaining fMRIPost-Rapidtide with other fMRIPost workflows (including XCP-D and giga_connectome) makes sense.

My idea was that users could take the voxel-wise lagged regressor and combine it with other sets of confounds in an omnibus denoising step. However, I see that retroglm uses a number of derivatives from the rapidtide run for its GLM, which has me a little worried.
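
To make that concrete, here is a rough sketch (not fMRIPost-Rapidtide, XCP-D, or giga_connectome code; the file names, the desc- entity, and the confounds format are placeholders) of what an omnibus denoising step could look like if the voxel-wise lagged regressor is available as a 4D image and the other confounds as a purely numeric matrix:

```python
# Illustrative sketch only: combine the voxel-wise lagged sLFO regressor with a
# shared confound matrix (e.g. motion parameters) and regress both out in one step.
import nibabel as nib
import numpy as np

bold_img = nib.load("sub-01_task-rest_desc-preproc_bold.nii.gz")
lagged_img = nib.load("sub-01_task-rest_desc-lfofilterEV_bold.nii.gz")  # assumed name
bold = bold_img.get_fdata()      # (x, y, z, t)
lagged = lagged_img.get_fdata()  # (x, y, z, t): one shifted regressor per voxel
shared = np.loadtxt("confounds.tsv", skiprows=1)  # (t, n_confounds), numeric columns only

nt = bold.shape[-1]
intercept = np.ones((nt, 1))
denoised = np.zeros_like(bold)

for idx in np.ndindex(*bold.shape[:3]):
    y = bold[idx]
    if not y.any():
        continue  # skip voxels outside the brain
    # Per-voxel design: intercept + shared confounds + this voxel's lagged regressor
    X = np.column_stack([intercept, shared, lagged[idx]])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    denoised[idx] = y - X[:, 1:] @ beta[1:]  # remove everything except the mean

nib.save(nib.Nifti1Image(denoised, bold_img.affine), "denoised_bold.nii.gz")
```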

bbfrederick commented Sep 6, 2024

Retroglm uses the derived sLFO regressor, some masks, and the delay map (plus the original input data). It then generates voxelwise delayed regressors from that information and regresses them out. I could split that into two parts - the voxelwise regressor generation step and the actual filtering step - if that would be helpful.

The main reason retroglm exists is that I realized that 90% of the runtime of rapidtide goes into extracting the regressor and getting the voxelwise delay, and that this only generates a tiny fraction of the output data (23 MB vs. 5 GB for a single HCP resting-state run). It's only once you start saving the delayed regressors and the products of filtering that the output data size explodes. So pausing the analysis at that point captures the majority of the effort at a tiny data size. But generating the voxel-specific regressors is very fast - doing it on the fly would certainly be doable.
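
For clarity, a minimal sketch of the voxel-specific regressor generation step described above (an illustration of the idea, not rapidtide's actual code; the sign convention on the delay is assumed):

```python
import numpy as np
from scipy.interpolate import interp1d

def make_voxelwise_regressors(slfo, delays, tr):
    """Shift the single sLFO regressor by each voxel's fitted delay.

    slfo:   (t,) derived sLFO regressor
    delays: (n_voxels,) delay map values in seconds (masked voxels only)
    tr:     repetition time in seconds
    """
    t = np.arange(slfo.shape[0]) * tr
    # Interpolate so the regressor can be sampled at arbitrary shifted times
    interp = interp1d(t, slfo, kind="cubic", bounds_error=False, fill_value=0.0)
    # Assumed convention: positive delay means the voxel lags the regressor,
    # so sample the regressor at t - delay
    return np.stack([interp(t - d) for d in delays])  # (n_voxels, t)
```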

tsalo commented Sep 6, 2024

That makes sense. I don't think the regressor generation step requires a command-line interface. A function in the rapidtide package that accepts the lag map and the regressor file would be amazing though. That would be easier to incorporate into the Nipype workflow than something that accepts the rapidtide output directory.
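
To illustrate, a hypothetical sketch of how such a function could be wired into a Nipype workflow (the function name, inputs, and outputs are made up, just stand-ins for whatever rapidtide ends up providing):

```python
# Hypothetical sketch: the rapidtide function does not exist yet; everything
# below is a placeholder for illustration only.
from nipype.interfaces.utility import Function
from nipype.pipeline import engine as pe

def _gen_voxelwise_regressors(lag_map, regressor_file, out_file):
    # ...call the (hypothetical) rapidtide function on the lag map + regressor here...
    return out_file

gen_regressors = pe.Node(
    Function(
        input_names=["lag_map", "regressor_file", "out_file"],
        output_names=["out_file"],
        function=_gen_voxelwise_regressors,
    ),
    name="gen_voxelwise_regressors",
)
```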

Also, would it make sense to average the lag map across runs from the same subject/session before generating the voxel-wise regressor? I figured that might reduce run-wise noise, since the lag should be fairly stable over runs, right?

tsalo commented Sep 8, 2024

I was just watching your coffee chat with Ben Inglis, and having the first derivative of the voxel-wise regressor come straight out of this function would be wonderful. It would be trivial to compute separately, but it would still be nice to get it directly from the function.
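
For reference, computing it separately really is a one-liner - e.g. with numpy, assuming the voxel-wise regressors come back as an (n_voxels, t) array:

```python
import numpy as np

def first_temporal_derivative(voxelwise_regressors, tr):
    """First time derivative of each voxel's shifted regressor (units per second)."""
    return np.gradient(voxelwise_regressors, tr, axis=1)
```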

bbfrederick commented Sep 8, 2024

Rapidtide and retroglm already have this!

--glmderivs. When doing final GLM, include derivatives up to NDERIVS order. Default is 0

When you invoke the option, the voxelwise derivatives are saved.

XXX_desc-lfofilterEV_bold (nii.gz, json) - Shifted sLFO regressor to filter
XXX_desc-lfofilterEVDerivN_bold (nii.gz, json) - Nth time derivative of shifted sLFO regressor
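
(For example, a hypothetical invocation might look like `rapidtide <preprocessed_bold> <outputroot> --glmderivs 1`, which would save the first-derivative regressors alongside the shifted sLFO regressors; the exact positional arguments depend on your setup.)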

bbfrederick commented:

> Also, would it make sense to average the lag map across runs from the same subject/session before generating the voxel-wise regressor? I figured that might reduce run-wise noise, since the lag should be fairly stable over runs, right?

You'd like to think so, but the fits are often kind of noisy. We have a paper in revision about this - you can dramatically improve the reliability of the delay maps using a PCA decomposition, but that's not currently part of rapidtide.

tsalo commented Sep 9, 2024

Just to make sure I understand, you'd run a PCA on the delay maps after concatenating them across runs? If we take HCP-YA as an example, you get four resting-state runs and... I dunno, four or so task runs. Would you run rapidtide on each of the ~8 runs separately, then concatenate the delay maps, run PCA on that, keep the first component's map, and then feed that delay map into RetroGLM for each run?

EDIT: Because I can totally implement that in fMRIPost-rapidtide.
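
A rough sketch of that procedure as described (one possible reading of it; the actual PCA approach from the paper in revision may well differ):

```python
# Illustrative only: stack per-run delay maps, keep the first principal component,
# and reconstruct a single "consensus" delay map to feed into retroglm for every run.
import numpy as np
from sklearn.decomposition import PCA

def consensus_delay_map(delay_maps):
    """delay_maps: (n_runs, n_voxels) array of per-run delay maps (masked voxels only)."""
    pca = PCA(n_components=1)
    scores = pca.fit_transform(delay_maps)  # (n_runs, 1) per-run loadings
    component = pca.components_[0]          # (n_voxels,) first spatial component
    # Reconstruct the "average run" from the first component plus the voxelwise mean
    return scores.mean() * component + pca.mean_
```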
