
Requests to consider #41

Open
Rikkkkkin opened this issue Jul 11, 2023 · 5 comments

Comments

@Rikkkkkin

Hi Michael,

I am writing a paper describing proper approaches to ITR-related research in epidemiology, in which your package will be promoted. I would like to suggest some functions that are relevant to applied research.

• Comparison of the average outcome under the ITR versus treating everyone: I understand the concept of PAPE, but applied researchers are primarily interested in this comparison.

• GATE-like comparison of those with CATE < 0 and CATE > 0: Although the RCT population is not representative, this comparison is a more intuitive illustration for applied researchers.

• Integration of DML (DR-learner or R-learner): Since these approaches need additional cross-fitting, is it acceptable to use this package by estimating the CATE for all samples, as with the T-learner (i.e. by further splitting the data within the cross-fitting steps)?

I hope these can be integrated into the next version. Thanks for your consideration.

Rik

@Rikkkkkin
Author

In addition: please consider adding a function to output the AUPEC plot. Thanks.

@xiaolong-y
Collaborator

Dear @Rikkkkkin,

Thank you very much for these suggestions; we will start working on them now and will let you know when the changes are implemented.

Best regards,
Xiaolong

@jialul
Collaborator

jialul commented Jul 13, 2023

Hi @Rikkkkkin,

Thanks for reaching out. We have a new version of the evalITR package under development. It is not on CRAN yet, but it already incorporates many of the features you suggested. You can install it from GitHub with:

# install.packages("devtools")
devtools::install_github("MichaelLLi/evalITR", ref = "causal-ml")

Please feel free to test it out if it fits your needs. We would really appreciate any feedback you might have on the new development!

  • Comparison of average outcome under ITR versus treat everyone
    • If I understand your question correctly, the estimate you are proposing would be the AUPEC value with Maximum Proportion Treated = 100%. Please note, however, that as we increase the Maximum Proportion Treated, we eventually reach a point where the remaining units do not benefit from receiving the treatment. We stop treating these units when we compute the metrics used to evaluate the ITR.
    • Alternatively, you might be interested in the average outcome when treating everyone, even if some subgroups would be harmed by receiving the treatment (a rough sketch of this comparison follows the list below). It would be great if you could share a bit more context on why applied researchers are interested in this quantity in particular (versus stopping treatment for units that would be harmed), so that we can try to incorporate it in the next iteration of the package.
  • GATE-like comparison of those with CATE<0 and >0
    • We want to provide a flexible tool that allows people to estimate and compare ITRs under different scoring rules (the score can be the CATE but is not limited to it). If you have a particular use case in mind that needs to compare different subgroups, we are happy to explore ways to better accommodate your needs.
    • Also, the new version of the package has a function that plots the GATE directly; you might want to check it out here.
  • Integration of DML (DR-learner or R-learner)
    • The new version already incorporates the widely used R-learner proposed by Nie and Wager (2017). We are currently working on integrating the DR-learner and will keep you posted on any updates!
  • The function to plot the AUPEC is also incorporated, and you can find an example here. The new version allows users to compare ITRs estimated by different machine learning models and to plot the AUPEC with a simple plot() function.
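
For concreteness, here is a rough base-R sketch (simulated data and made-up names; this is not an evalITR function) of the comparison between the average outcome under an ITR and the average outcome when treating everyone, assuming a completely randomized trial with known treatment probability p:

# Illustration only: inverse-probability-weighting estimates of E[Y(g(X))] and E[Y(1)]
# in a completely randomized trial with treatment probability p.
value_itr_vs_treat_all <- function(Y, W, g, p = 0.5) {
  # E[Y(g(X))]: use units whose assigned treatment happens to match the rule g
  v_itr <- mean(Y * (W == g) / ifelse(g == 1, p, 1 - p))
  # E[Y(1)]: use the treated units
  v_all <- mean(Y * W / p)
  c(value_itr = v_itr, value_treat_all = v_all, difference = v_itr - v_all)
}

# toy data: the treatment helps only when X > 0, and the ITR treats exactly those units
set.seed(123)
n <- 1000
X <- rnorm(n)
W <- rbinom(n, 1, 0.5)
Y <- X + W * X + rnorm(n)
g <- as.numeric(X > 0)
value_itr_vs_treat_all(Y, W, g)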

If you encounter any problems running the new version of the package, please feel free to reply to this thread and we are happy to help!

Jialu

@Rikkkkkin
Author

Thank you so much; these are very helpful. The AUPEC plot in particular is very informative.
Let me follow up on a few additional points.

• Suppose a new drug A has, on average, a better outcome than the previous drug B, i.e. E[Y(1)] > E[Y(0)]. The guideline would then recommend treating every patient with drug A. However, since drug A is newer and more costly, a practical question is whether we can select patients who could instead be treated with drug B. If E[Y(g(X))] > E[Y(1)], where g(X) denotes an ITR, then adopting that selection is justified, so this comparison is of interest.

• In practice, we need to select those who need drug A, and contrasting the GATEs with a simple cutoff of zero (rather than by quartile, etc.) is informative.

• Thank you for integrating DML. I would like to check whether the first cross-fitting stage works as follows, to maximize efficiency. In the R-learner, the first data split is used to fit the nuisance models (step 1) and to compute residuals (step 2). If we naively split the data in half, residuals are computed for only half of the sample, which is undesirable; with cross-fitting, say 5-fold, residuals are computed for every observation (a rough sketch of what I mean follows below). For example, the causal_forest() function does not let us specify such cross-fitting, and it is generally not efficient enough for medical RCT data. I am currently coding this manually, and it would be very helpful if your function integrated it.
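
To clarify the cross-fitting I have in mind, here is a rough base-R sketch (simulated data; simple linear and logistic nuisance models used purely for illustration, not a proposal for the package). Each fold's nuisance models are fit on the other folds and its residuals are predicted out-of-fold, so every observation contributes a residual:

set.seed(1)
n  <- 500
df <- data.frame(X1 = rnorm(n), X2 = rnorm(n), X3 = rnorm(n))
df$W <- rbinom(n, 1, 0.5)                         # randomized treatment
df$Y <- df$X1 + df$W * (0.5 + df$X2) + rnorm(n)   # outcome

K    <- 5
fold <- sample(rep(1:K, length.out = n))
m_hat <- numeric(n)   # out-of-fold estimates of m(x) = E[Y | X = x]
e_hat <- numeric(n)   # out-of-fold estimates of e(x) = P(W = 1 | X = x)

for (k in 1:K) {
  train <- df[fold != k, ]
  test  <- df[fold == k, ]
  # fit nuisance models without fold k (step 1), predict on fold k (step 2)
  m_fit <- lm(Y ~ X1 + X2 + X3, data = train)
  m_hat[fold == k] <- predict(m_fit, newdata = test)
  # the propensity is known (0.5) in an RCT; it is estimated here only for illustration
  e_fit <- glm(W ~ X1 + X2 + X3, data = train, family = binomial)
  e_hat[fold == k] <- predict(e_fit, newdata = test, type = "response")
}

# every observation receives an out-of-fold residual, so no half of the sample is wasted;
# these residuals then enter the second-stage R-learner regression
y_resid <- df$Y - m_hat
w_resid <- df$W - e_hat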

Thanks for your consideration.

Rik

@MichaelLLi
Owner

@Rikkkkkin We have released a brand new version of evalITR, v1.0.0, which incorporates many of the suggestions you mentioned above and also offers new functionality to integrate the training and evaluation of ITRs in one step. We hope you can check it out and let us know what you think!
