
[HPC] Proposal: Update terminology for clarity #511

Open
nvaprodromou opened this issue Feb 7, 2023 · 4 comments

@nvaprodromou
Contributor

Introduction:

After collecting feedback from engineers, clients, and press, NVIDIA presented a list of proposals that aim to improve the popularity of the MLPerf HPC benchmark suite. Please see our slide deck for more information on our feedback-gathering process and insights.

Proposal: Rename strongly- and weakly-scaled benchmarks

Slide 12 in proposals slide deck.

We propose updating the benchmark suite's terminology to reduce confusion when parsing results:

  1. Rename "weak scaling" to "throughput"
  2. Rename "strong scaling" to "Time To Solution" (TTS)

This proposal aims to improve the popularity of the MLPerf HPC benchmark suite in the following way:

  1. Simplifies parsing and understanding of results [improves press interest]

Discussion

Pros:

  1. Quick, easy fix
  2. Reduces the public's and reporters' confusion when faced with our results

Cons:

  1. Some reporters have already invested time in understanding our current terminology.
    • We believe that reporters will appreciate the clarity and simplicity of the revised terminology.
@sparticlesteve
Contributor

  • I support this change. It is easy and IMO better. I've already pretty much been using these terms anyway.
  • We could consider "time to train" instead of "time to solution".

@TheKanter
Contributor

I would endorse "throughput" and then either "time to train" or "time to solution". I would suggest using a term that is consistent between Training and HPC, especially as we contemplate making the two more similar.

Appreciate the suggestions from the team at Nvidia!

@nvaprodromou
Contributor Author

TTT sounds good too. Aligning with MLPerf-Training on it is a good idea as well.

@sparticlesteve
Contributor

This can probably be closed now. Is that right, @nvaprodromou?
