
New metrics for benchmark #652

Open
mathmax12 opened this issue Aug 29, 2024 · 2 comments
Comments

@mathmax12

🚀 The feature, motivation and pitch

Very nice work, Llama team. Llama is the most popular open-source LLM project and has been adopted across many platforms. It would be great to see more metrics, e.g., tokens/s, samples/s, and % of peak TFLOP/s, collected during training for benchmarking purposes.

Alternatives

No response

Additional context

No response

@init27
Contributor

init27 commented Aug 29, 2024

Thanks for the feedback!
Could you please share what metrics you'd love to see (if any) apart from the ones shared in our evals?

@mathmax12
Author

Here are some thoughts that come to mind:

- Average TFLOP/s per GPU
- Average tokens/s per GPU
- Average samples/s
- GPU utilization (achieved % of peak TFLOP/s per GPU)
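The metrics above can be derived from quantities most training loops already track (token/sample counts and wall-clock time). A minimal sketch, with illustrative helper names (not an existing API in this repo) and assuming the common 6·N·T FLOPs estimate for transformer training (forward + backward):

```python
def training_throughput(num_tokens, num_samples, elapsed_s, num_gpus):
    """Per-interval throughput metrics: tokens/s per GPU and samples/s."""
    return {
        "tokens_per_s_per_gpu": num_tokens / (elapsed_s * num_gpus),
        "samples_per_s": num_samples / elapsed_s,
    }

def model_flops_utilization(n_params, num_tokens, elapsed_s, num_gpus,
                            peak_tflops_per_gpu):
    """Approximate MFU (% of peak TFLOP/s) using the rough estimate of
    ~6 FLOPs per parameter per token for transformer training."""
    total_flops = 6.0 * n_params * num_tokens
    achieved_tflops_per_gpu = total_flops / elapsed_s / num_gpus / 1e12
    return 100.0 * achieved_tflops_per_gpu / peak_tflops_per_gpu

# Illustrative numbers only: 8 GPUs, a 7B-param model, 300k tokens in 10 s,
# 312 TFLOP/s assumed peak (A100 bf16 tensor-core peak).
tp = training_throughput(300_000, 64, 10.0, 8)
mfu = model_flops_utilization(7e9, 300_000, 10.0, 8, 312.0)
```

Averaging these per logging interval (and across ranks) would give the "average per GPU" numbers listed above.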
