adapt test-convGDX2mif.R to compareScenarios2 #238

Closed
orichters opened this issue Apr 26, 2022 · 11 comments · Fixed by #249

@orichters (Contributor) commented Apr 26, 2022

Is it really a good idea that test-convGDX2mif.R checks the mifs using the deprecated compareScenarios? Wouldn't it be preferable to use a more recent fulldata.gdx and check with compareScenarios2?

@Renato-Rodrigues (Member)

I understand that compareScenarios2 is being developed as the natural replacement for the original compareScenarios, but are we there already?

If so, the correct approach would be to replace the compareScenarios function code with the new compareScenarios2 function code. If not, calling it deprecated is wrong in my opinion.

Tests should not be changed until that point is reached, and even then they would not need to be changed, as the default compareScenarios code would simply include the newer function code instead.

As a long-term development goal, we shouldn't have functions named with numeric suffixes like this, or the same objective duplicated across two different functions. The number suffix is just a temporary device to allow further testing before moving to the new approach.

@orichters (Contributor, Author)

Ok, thanks for the explanation. The fact is that compareScenarios does not work on current gdx files (which certainly is not good), which makes compareScenarios2 the de facto standard for everyone using fairly recent REMIND runs, and it looked a bit strange to me that cs1 is used in the tests.

@LaviniaBaumstark (Member)

I would say we can delete compareScenarios and do the test for compareScenarios2. This is the new standard, and I think nothing that compareScenarios provided is missing.

@chroetz (Contributor) commented May 2, 2022

The old compareScenarios() has an optional argument sr15marker_RCP, which is used in only one plot. As far as I know, this is the only feature that is not implemented in compareScenarios2().
The naming and order of arguments in cs2 differ from the old cs. Furthermore, cs2 has additional arguments and features.
I also favor just deleting the old cs and using cs2 in the test, even though having functions named cs1, cs2, cs3, ... is not the cleanest solution...

@orichters (Contributor, Author)

My impression is that compareScenarios2 takes substantially longer than the old compareScenarios. I would prefer not to wait even longer for lucode2::buildLibrary; it is already annoyingly long.

@chroetz (Contributor) commented May 3, 2022

I agree, the test takes annoyingly long!

cs2 takes roughly the same time as cs1 to create the PDF output, at least this was the case when it was new. Creating HTML takes longer.

It's probably not necessary to test cs2 on all plots. E.g., we could just test the creation of the summary section, or even just the info section at the beginning. In contrast to cs1, cs2 tolerates errors and missing variables in the plots without failing completely. The plotting sections basically cannot fail, so there is no gain in testing them.

My suggestion would be:

tmpDir <- tempdir()
compareScenarios2(
  mifScen = myMifs,
  mifHist = histMif,
  outputFormat = "pdf",
  outputFile = "cs2_test",
  outputDir = tmpDir,
  sections = 0) # render only the info section

@orichters (Contributor, Author) commented May 3, 2022

@chroetz: How does testing cs2 in this setting actually help us? If variables are missing in the mif file, it just issues a warning, right? So either we write a very careful test with expect_warning or the like, or I currently don't see much sense in running compareScenarios2 at all. But I may be missing something obvious ;)

Edit: Yes, the obvious thing was to check whether there is an execution error in cs2. But then, I agree, the first section should suffice; see the sketch below.
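
A minimal testthat sketch of that check, reusing the placeholder mif objects from the snippet above (expect_error with regexp = NA asserts that no error is thrown):

testDir <- tempdir()
testthat::expect_error(
  compareScenarios2(
    mifScen = myMifs,
    mifHist = histMif,
    outputFormat = "pdf",
    outputFile = "cs2_test",
    outputDir = testDir,
    sections = 0),  # render only the info section, as suggested above
  regexp = NA)      # regexp = NA means: expect no error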

And you are right about cs2 vs. cs1: I ran cs2 on some NGFS runs yesterday and it took very long, but it was also 7 scenarios, which I hadn't factored into my expected waiting time.

@chroetz (Contributor) commented May 3, 2022

@orichters: cs2 first loads the data and puts it into the right format. Then the plots are created as single, temporary pdf or png files. Finally, the output file is compiled. With the suggestion above, we would test steps 1 and 3.

@Renato-Rodrigues (Member)

@chroetz

Would it be possible to add an option to run compareScenarios2 without creating an output file, and just save all plots to an object instead? A rough sketch of what I mean follows below.

This way we would still be able to test all plot generation and get all possible warnings, but we wouldn't lose time on any external output library, file access requests, and so on. I would guess this would let us test the entire function much more quickly in the remind2 build tests.
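
A sketch of such an interface (entirely hypothetical: neither the returnPlots argument nor the list return value exist in the current compareScenarios2):

# Hypothetical interface sketch, not existing cs2 API:
plots <- compareScenarios2(
  mifScen = myMifs,
  mifHist = histMif,
  returnPlots = TRUE)  # assumed new argument: skip file output, return plot objects
length(plots)          # e.g., one list entry per generated plot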

@giannou (Contributor) commented May 3, 2022

I like the idea of testing whether the generation of a reduced pdf works, but I wouldn't remove the pdf generation from the tests altogether, as it is something that can easily break.

@chroetz (Contributor) commented May 3, 2022

@Renato-Rodrigues It is possible to use knitr::purl() to extract the R code from an Rmd file into an R source file and then run it via source(); see the sketch below. This could easily be added to cs2. But I agree with @giannou that it makes sense to keep the pdf generation in the test.
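
A rough sketch of the purl-then-source idea (the Rmd path inside remind2 is an assumption, and any chunks referring to Rmd parameters would need those objects defined before sourcing):

# knitr::purl() extracts the chunk code from an Rmd into a plain R script.
# The template path below is a guess, not the confirmed location in remind2.
rmdFile <- system.file("markdown", "compareScenarios2", "cs2_main.Rmd",
                       package = "remind2")
scriptFile <- file.path(tempdir(), "cs2_main.R")
knitr::purl(rmdFile, output = scriptFile, documentation = 0)
# Running the extracted script generates the plots without rendering pdf/html.
# Chunks using Rmd parameters (params$...) need those defined beforehand.
source(scriptFile)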
