
AutoTuner should not export the learner parameters that are being set by the tuner #141

Open
mb706 opened this issue Aug 9, 2019 · 0 comments


mb706 commented Aug 9, 2019

I am not sure this is possible with reasonable effort, because the tuner does not know which parameters it sets. Using something like what I propose in mlr-org/paradox#225 might make this possible, since that would be a $trafo() that knows which parameters are being set. Alternatively, the user could somehow inform the AutoTuner which parameters are set by the tuner and should therefore not be set by the user (see the sketch below).
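A minimal sketch of that second alternative, assuming the user supplies a list of tuner-managed parameter names (no such hook exists today; tuner_managed and exported_ids below are purely illustrative names):

library(paradox)
# Hypothetical: the learner's parameter set, as the AutoTuner would see it.
ps_learner = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0, upper = 1),
  ParamInt$new("maxdepth", lower = 1, upper = 30)
))
# The user declares which parameters the tuner will set ...
tuner_managed = c("cp", "maxdepth")
# ... and the AutoTuner would export only the remaining ones.
exported_ids = setdiff(ps_learner$ids(), tuner_managed)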

Here is an example of a trafo that sets either the maxdepth or the cp parameter, but does not make it obvious to the outside that it refers to maxdepth or cp:

ps = ParamSet$new(list(ParamLgl$new("x")))
ps$trafo = function(x, param_set) {
  if (!"x" %in% names(x)) return(x)  # circumvent bug in EvalPerf
  if (x$x) list(cp = 0.5) else list(maxdepth = 1)
}
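Evaluating the trafo directly (a quick check, not part of the report above) shows that the mapping to cp or maxdepth only surfaces once the trafo is applied:

ps$trafo(list(x = TRUE), ps)   # returns list(cp = 0.5)
ps$trafo(list(x = FALSE), ps)  # returns list(maxdepth = 1)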

Tuning then happens as follows:

tune = TunerRandomSearch$new(
  pe = PerfEval$new(task = "iris", learner = "classif.rpart",
    resampling = "cv", measure = "classif.ce", param_set = ps),
  terminator = TerminatorEvaluations$new(3))$tune()