adapt test-convGDX2mif.R to compareScenarios2 #238
I understand that compareScenarios2 is being worked on as the natural replacement for the original compareScenarios, but are we there already? If yes, the correct approach would be to replace the compareScenarios function code with the new compareScenarios2 code. If not, using the word "deprecated" is wrong in my opinion. Tests should not be changed until that point is reached, and even then they do not need to be changed, as the default compareScenarios code will simply include the newer function code instead. As a long-term development goal, we shouldn't have functions named with numeric suffixes like this, or the same objective duplicated in two different functions. The number suffix is just a temporary thing to allow further testing before moving to the new approach.
Ok, thanks for the explanation. The fact is that
I would say: we can delete compareScenarios and do the test for compareScenarios2. This is the new standard, and I think nothing that was provided by compareScenarios is missing.
The old
my impression is that compareScenarios2 takes substantially longer than the old compareScenarios. I would rather prefer not to wait even longer for
I agree, the test takes annoyingly long! cs2 takes roughly the same time as cs1 to create the PDF output, at least this was the case when it was new. Creating HTML takes longer. It's probably not necessary to test cs2 on all plots. E.g., we could just test the creation of the summary section or even just the info section at the beginning. In contrast to cs1, cs2 tolerates errors and missing variables in the plots without failing completely. The plotting sections basically cannot fail, so there is no gain in testing them. My suggestion would be:
@chroetz: What does testing cs2 in this setting actually help us? If variables are missing in the mif file, it just issues a warning, right? So either we write a very careful test with

Edit: Yes, the obvious thing was to check whether there is an execution error in cs2. But then, I agree, the first section should suffice. And you are right about cs2 vs. cs1: I did an NGFS cs2 run yesterday and it took very long, but it was also 7 scenarios, which I didn't factor into my expected waiting time.
@orichters cs2 first loads the data and puts it into the right format. Then the plots are created as individual temporary pdf or png files. Finally, the output file is compiled. We would test steps 1 and 3.
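A test along these lines could look like the sketch below: rendering only the introductory section exercises the data loading (step 1) and output compilation (step 3) while skipping almost all plotting (step 2). The `sections` value, the mif file names, and the output-file pattern are assumptions for illustration, not details confirmed in this thread.

```r
# Sketch only: minimal cs2 test covering data loading and PDF compilation.
# The `sections` argument value and the mif fixtures are assumptions.
library(testthat)
library(remind2)

test_that("compareScenarios2 loads data and compiles a minimal PDF", {
  outDir <- tempdir()
  expect_error(
    compareScenarios2(
      mifScen      = c("scenario1.mif", "scenario2.mif"),  # assumed fixtures
      mifHist      = "historical.mif",
      outputDir    = outDir,
      outputFormat = "pdf",
      sections     = 0  # assumed: render only the info/summary section
    ),
    regexp = NA  # regexp = NA means: expect no error at all
  )
  expect_true(length(list.files(outDir, pattern = "\\.pdf$")) > 0)
})
```

Such a test would still catch breakage in the data pipeline and the PDF toolchain, which are the parts that can actually fail, without paying for the full plotting run.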
Would it be possible to add an option to run compareScenarios2 without creating an output file, saving all plots to an object instead? This way we would still be able to test all plot generation and get all possible warnings, but we wouldn't lose time on any external output library, file access requests and so on. I would guess this would let us test the entire function much more quickly in the remind2 build tests.
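Such an option could look like the following sketch. The `outputFile = NULL` behavior and the returned plot list are hypothetical, proposed here only to illustrate the idea; they are not a confirmed remind2 interface.

```r
# Hypothetical: if compareScenarios2 accepted outputFile = NULL and returned
# its plots as a list, plot generation could be tested entirely in memory,
# with warnings surfacing during creation and no pdf/html rendering step.
plots <- compareScenarios2(
  mifScen    = c("scenario1.mif", "scenario2.mif"),  # assumed fixtures
  mifHist    = "historical.mif",
  outputFile = NULL  # hypothetical switch: skip file output, return plots
)
# Every element would be a ggplot object that the test can inspect.
stopifnot(all(vapply(plots, inherits, logical(1), what = "ggplot")))
```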
I like the idea of testing whether the generation of a reduced pdf works, but I wouldn't remove the pdf generation from the tests altogether, as it is something that can easily break.
@Renato-Rodrigues It is possible using |
Is it really a good idea that test-convGDX2mif.R checks the mifs using the deprecated compareScenarios? Wouldn't it be preferable to use a more recent fulldata.gdx and check with compareScenarios2?