Update the CM MLPerf inference docs for CUDA device running on host #307

Open

arjunsuresh opened this issue Sep 27, 2024 · 0 comments

Labels: documentation (Improvements or additions to documentation), enhancement (New feature or request)

Comments

@arjunsuresh (Contributor):
We need to update the MLPerf inference docs for native CUDA runs:

  1. Add a remark that unless CUDA, cuDNN, and TensorRT are already available in the host environment, it is recommended to use the docker option.
  2. In the run options, specify the flags used to pass in the cuDNN and TensorRT run files.
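As a sketch of what the updated docs might show, a native CUDA run could look like the following. The `--env.CM_CUDNN_TAR_FILE_PATH` and `--env.CM_TENSORRT_TAR_FILE_PATH` flag names (and the placeholder paths) are assumptions based on common CM conventions, not confirmed by this issue:

```shell
# Hypothetical example for the updated docs: running MLPerf inference
# natively on a CUDA host rather than via the recommended docker option.
# Flag names and paths below are assumptions, not confirmed by this issue.
cm run script --tags=run-mlperf,inference \
   --model=resnet50 \
   --device=cuda \
   --env.CM_CUDNN_TAR_FILE_PATH=/path/to/cudnn-linux-x86_64.tar.xz \
   --env.CM_TENSORRT_TAR_FILE_PATH=/path/to/TensorRT.tar.gz
```

The docs remark in item 1 would sit directly above a command like this, steering users without a working CUDA/cuDNN/TensorRT setup toward the docker option instead.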
@arjunsuresh added the documentation (Improvements or additions to documentation) and enhancement (New feature or request) labels on Sep 27, 2024