
Add model evaluation with ILAMB #193

Open
1 of 2 tasks
SeanBryan51 opened this issue Oct 27, 2023 · 2 comments
Labels
priority:medium Medium priority issues to become high priority issues after a release.

Comments

@SeanBryan51
Collaborator

SeanBryan51 commented Oct 27, 2023

The plan is to use ILAMB to evaluate CABLE model outputs for the offline spatial configuration.

I've made a start on an ILAMB configuration which we can use to evaluate CABLE model output with benchcab. The ILAMB configuration file will need to be iterated on; some input from the community on how to implement the configuration would be helpful (e.g. which metrics to use, weights for different metrics, derived metrics).
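For context, an ILAMB configuration file groups benchmark datasets under headed sections, with per-metric and per-dataset weights. The sketch below shows the kind of entry we would iterate on; the headings, dataset, paths and weights are placeholders rather than the actual benchcab configuration:

```
[h1: Ecosystem and Carbon Cycle]
bgcolor = "#ECFFE6"

[h2: Gross Primary Productivity]
variable = "gpp"            # variable name looked up in the model output
cmap     = "Greens"
weight   = 5                # weight of this metric within the h1 section

[GBAF]
source     = "DATA/gpp/GBAF/gpp_0.5x0.5.nc"  # path relative to $ILAMB_ROOT
weight     = 15                              # weight of this benchmark dataset
table_unit = "g m-2 d-1"
plot_unit  = "g m-2 d-1"
space_mean = True
```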

TODO

  • Get in touch with modelevaluation.org folks to discuss how we can run ILAMB in me.org.
  • How do we organise the $ILAMB_ROOT directory structure? We will need to think about how this will impact the plots produced (see the sketch of the expected layout below).
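As context for the second item: ILAMB expects $ILAMB_ROOT to hold the benchmark data and the model output side by side, with the `source` paths in the config file resolved against the data side. A rough sketch of the layout (dataset and model names are placeholders):

```
$ILAMB_ROOT/
├── DATA/                        # benchmark datasets referenced by `source` in the config
│   └── gpp/
│       └── GBAF/
│           └── gpp_0.5x0.5.nc
└── MODELS/                      # one sub-directory per model run to evaluate
    └── CABLE-offline-spatial/   # placeholder name for a benchcab run
        └── gpp.nc
```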
@SeanBryan51
Collaborator Author

CABLE model output does not work directly with ILAMB (see CABLE-LSM/benchcab-ilamb-config#1).

We will need to either enforce that CABLE versions write ILAMB-compliant output, and/or implement a post-processing step to allow for usage with ILAMB.
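A minimal sketch of what such a post-processing step could look like, assuming we use xarray; the variable-name mapping and units below are hypothetical placeholders, since the actual CABLE-to-ILAMB mapping still needs to be worked out (see CABLE-LSM/benchcab-ilamb-config#1):

```python
import xarray as xr

# Hypothetical mapping from CABLE output variable names to the CF/CMIP-style
# names referenced in the ILAMB config; the real mapping is still to be decided.
RENAME = {"GPP": "gpp", "Qle": "hfls", "Qh": "hfss"}

def make_ilamb_compliant(in_file: str, out_file: str) -> None:
    """Rewrite a CABLE output file so ILAMB can read it (sketch only)."""
    ds = xr.open_dataset(in_file)

    # Rename only the variables that are actually present in this file.
    ds = ds.rename({k: v for k, v in RENAME.items() if k in ds})

    # ILAMB relies on CF metadata, so make sure units are recorded
    # (placeholder value shown; CABLE's native units may differ).
    if "gpp" in ds and "units" not in ds["gpp"].attrs:
        ds["gpp"].attrs["units"] = "kg m-2 s-1"

    ds.to_netcdf(out_file)

if __name__ == "__main__":
    # Placeholder file names for illustration.
    make_ilamb_compliant("cable_out.nc", "gpp.nc")
```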

@SeanBryan51
Collaborator Author

SeanBryan51 commented Nov 1, 2023

ILAMB in modelevaluation.org

  • An ILAMB configuration file can be specified in a modelevaluation.org experiment. Me.org will parse the config file and build up the ILAMB_ROOT directory structure by inferring the paths specified in the config file.
  • Benchmark datasets are uploaded to a modelevaluation.org ILAMB experiment. The file names of the datasets must match the base name of a path specified in the ILAMB config file.
  • To run ILAMB experiments, select the ILAMB experiment when uploading model output. The upload will create a model output "instance" tied to that experiment. Behind the scenes, this will create a single "model" in the model root directory tree.
    • Each time we upload model output to an experiment, we are appending to the list of models. Me.org will know internally all the model outputs that have been uploaded to a specific experiment. When we hit "run analysis", me.org will run ILAMB for all the model output instances that are associated with the experiment.
      Gavin and I discussed whether a one-to-one relationship between model output and experiment would be better. This would make it possible to upload model outputs in a single step instead of multiple upload steps (one for each "model").
  • Command line arguments to ilamb-run are currently not configurable. We discussed adding an additional form which could be used to supply extra command line arguments to ILAMB (an example invocation is sketched after this list).
  • An ILAMB experiment exists in a test workspace set up by Gab which contains some benchmark datasets. We can clone this experiment (note we will need to get access to the test workspace to do this).
  • If we want to use the datasets in the ilamb-data collection, we may have to upload the collection manually to me.org.
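For reference, a typical ilamb-run invocation looks roughly like the one below (the config file name, model name and build directory are placeholders); this is the kind of command line the extra-arguments form would need to expose:

```
ilamb-run --config cable.cfg \
          --model_root $ILAMB_ROOT/MODELS/ \
          --models CABLE-offline-spatial \
          --build_dir ./_build
```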

@ccarouge added the priority:medium label on May 1, 2024