Excessive time spent on MPI Bcast in nnp-train #177

Open · mikejwaters opened this issue Oct 19, 2022 · 1 comment

mikejwaters commented Oct 19, 2022

I am running n2p2 on NERSC's newest machine, Perlmutter (https://docs.nersc.gov/performance/readiness/), using their PrgEnv-gnu compiler wrappers (https://docs.nersc.gov/systems/perlmutter/software/#compilers). After seeing terrible performance, the NERSC staff and I profiled the run and found that 98% of the time was spent in MPI, with 96.4% of the total spent in MPI_Bcast alone.

While the NERSC staff continue to work with their vendor's software team, is there anything I can try to mitigate this?
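
In case it helps narrow things down, a standalone microbenchmark could separate the two likely culprits: the MPI stack itself versus nnp-train's communication pattern. The following is my own minimal sketch (not n2p2 code, and the message sizes are arbitrary guesses); if bare MPI_Bcast is also slow on Perlmutter, the problem is below n2p2.

```cpp
// bcast_bench.cpp -- standalone sketch, not part of n2p2.
// Build with the Cray compiler wrapper: CC -O2 bcast_bench.cpp -o bcast_bench
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Sweep a few message sizes (1 KiB to 16 MiB); adjust as needed.
    for (long n : {1L << 10, 1L << 16, 1L << 20, 1L << 24})
    {
        std::vector<char> buf(n);
        const int reps = 100;
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; ++i)
        {
            MPI_Bcast(buf.data(), static_cast<int>(n), MPI_CHAR, 0,
                      MPI_COMM_WORLD);
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
        {
            std::printf("%10ld bytes: %8.3f ms per MPI_Bcast on %d ranks\n",
                        n, 1e3 * (t1 - t0) / reps, size);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Run with something like `srun -n 128 ./bcast_bench` across a few nodes. If the per-call times look reasonable, the slowdown is more likely in how often or how much data nnp-train broadcasts rather than in the MPI library itself.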

Edit: I'm on the master branch.

singraber (Member) commented

Hello!

What kind of run did you execute? Was it nnp-train or a LAMMPS/n2p2 run? What kind of nodes and how many cores did you use? Please provide as much information as you have; otherwise I can only speculate...

Best,
Andreas
