
FAIRshake rubric has 16 metrics whereas only 14 are described on http://fairmetrics.org #138

Open
rickbeeloo opened this issue Jan 27, 2020 · 3 comments

Comments

@rickbeeloo

We want to assess the FAIRness of our dataset according to fairmetrics.org.
However, when filling in the form via FAIRshake there are 16 metrics, whereas only 14 are described in the documentation on fairmetrics.org.

What causes this difference, and how can we tell which description from fairmetrics.org corresponds to which question on FAIRshake?

@u8sand
Contributor

u8sand commented Jan 27, 2020

This rubric was, in fact, created by Michel Dumontier, co-author of the FAIRMetrics paper, whom we will reach out to about this.

From comparing them side by side myself, the metrics are basically in the same order, with two discrepancies:

  • accessible usage license: replaced by two separate metrics:
    • metadata for the license
    • metadata for the digital resource
  • standardized metadata: added as a new metric

The metrics that directly correspond to fairmetrics have their metric identifier PURL as a URL. Perhaps this helps if you hope to map these programmatically.
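As a rough sketch of that idea, you could partition a rubric's metrics by whether their identifier starts with the fairmetrics PURL prefix. The dict layout and the `identifier`/`title` field names below are illustrative assumptions, not the actual FAIRshake API schema:

```python
# Hypothetical sketch: mapping FAIRshake rubric metrics back to the
# fairmetrics.org definitions via their PURL identifiers.
# The metric dicts and field names here are illustrative assumptions.

FAIRMETRICS_PURL_PREFIX = "https://purl.org/fair-metrics/"

def map_to_fairmetrics(metrics):
    """Split rubric metrics into those with a fairmetrics PURL and the rest."""
    mapped, unmapped = {}, []
    for metric in metrics:
        identifier = metric.get("identifier", "")
        if identifier.startswith(FAIRMETRICS_PURL_PREFIX):
            mapped[identifier] = metric["title"]
        else:
            unmapped.append(metric["title"])
    return mapped, unmapped

# Example with made-up entries:
metrics = [
    {"title": "globally unique identifier",
     "identifier": "https://purl.org/fair-metrics/FM_F1A"},
    {"title": "standardized metadata", "identifier": ""},
]
mapped, unmapped = map_to_fairmetrics(metrics)
```

Metrics that land in `unmapped` would be the FAIRshake-only additions with no fairmetrics.org counterpart.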

@rickbeeloo
Author

Thank you @u8sand for the quick reply!
Indeed, I meant those two discrepancies. It's not that we need programmatic access; rather, we want to ask others to fill in the metrics and would like to give them more information about exactly what they need to fill in, so we would like to have the full descriptions from fairmetrics.org.

@u8sand
Contributor

u8sand commented Jan 27, 2020

Interestingly enough, there is another version: https://github.com/FAIRMetrics/Metrics/tree/master/MaturityIndicators -- unfortunately, Gen 2 seems to have 15 metrics now. Personally, I would go by the 16 metrics on FAIRshake, given that over 1,000 assessments have already been made with that rubric; your evaluations would then be comparable to those, and furthermore, more metrics can always be consolidated into fewer, but not the other way around.

When you make an evaluation, a description is provided beyond the title of each metric, but if you want an exact copy of the fairmetrics.org descriptions, you could construct a new rubric that matches them.
