
PR #973 (Draft): Add some utils to support m-to-n restart.

Wants to merge 41 commits into base branch: node-local-partitioning

Changes from all commits (41 commits):
- f9babfd: Add some utils to support m-to-n restart. (MTCam, Sep 22, 2023)
- c815fb7: Add some utils to support m-to-n restart (MTCam, Sep 25, 2023)
- 2b662d6: Add stand-alone redist util. (MTCam, Sep 29, 2023)
- a7ab1b2: Beef up comments (MTCam, Sep 29, 2023)
- c0f06e6: Check that the restart path is a directory. (MTCam, Sep 29, 2023)
- d1bb8a3: Merge branch 'node-local-partitioning' into m-to-n-restart (MTCam, Sep 29, 2023)
- 9bb89ff: Move redist to restart module. (MTCam, Oct 1, 2023)
- a286c08: Move redist to restart module, correct glob path. (MTCam, Oct 4, 2023)
- 1cd912e: Merge remote-tracking branch 'origin/m-to-n-restart' into save-my-cha… (MTCam, Oct 4, 2023)
- 08622dd: Use array context for pickling, zeros_like, debugging diagnostics (MTCam, Oct 4, 2023)
- cdded96: Get mesh on reader_rank 0, and share some timings with the user. (MTCam, Oct 4, 2023)
- 9a2d8a7: Add options to specify target decomp map (MTCam, Oct 5, 2023)
- 5bf5d58: Refactor for access to multivol partition datastructure (MTCam, Oct 6, 2023)
- a5fba6b: Fix bug in refactored code (MTCam, Oct 6, 2023)
- b4328bb: Back out changes to redist, for now. (MTCam, Oct 6, 2023)
- 2445a03: Add overlap mapping utility for multi-volume datasets. (MTCam, Oct 9, 2023)
- 51f5305: Util for overlap compute of inverted decomps, convenience (MTCam, Oct 9, 2023)
- d995e65: Add some tests of multivol overlap mapping. (MTCam, Oct 9, 2023)
- 23e9fe7: Path checking on restart files, broken redist util (MTCam, Oct 9, 2023)
- 3ad363a: Add m-to-n api for multi-volume datasets. (MTCam, Oct 10, 2023)
- 3f4cef5: Sharpen doc slightly (MTCam, Oct 10, 2023)
- 6473120: Sharpen some docs (MTCam, Oct 10, 2023)
- fd6225a: Extend redist to multi-volume datasets. (MTCam, Oct 10, 2023)
- c35c86d: Unnewify function name (MTCam, Oct 10, 2023)
- 878412a: Cleanup after refactor (MTCam, Oct 11, 2023)
- 4b2e565: Merge branch 'node-local-partitioning' into mrgup (MTCam, Oct 11, 2023)
- 82609e9: Deep changes to remove complexity, requires input mappings now. (MTCam, Oct 11, 2023)
- 89ea9ab: Deep refactor to simplify overlap mapping, processing. (MTCam, Oct 11, 2023)
- 43b2990: Deep refactor to simplify overlap mapping, processing. (MTCam, Oct 11, 2023)
- 0ceb081: More extensive testing to catch errors encountered in prediction case. (MTCam, Oct 11, 2023)
- a00aee3: add force_compile function (#974) (matthiasdiener, Oct 11, 2023)
- 98c114f: Add a bunch of debugging/diagnostics for restart data structures and … (MTCam, Oct 11, 2023)
- 74c0edb: Add more diagnostics, handle ndarrays in zero util (MTCam, Oct 12, 2023)
- 170411a: Merge branch 'main' into m-to-n-restart (MTCam, Oct 12, 2023)
- 2e638d3: fix meshdist for dimensionality (anderson2981, Oct 14, 2023)
- 09339c6: Add some timings, fix issue with parallel redist. (MTCam, Oct 16, 2023)
- f80afc0: Merge branch 'update-m2n' into m-to-n-restart (MTCam, Oct 16, 2023)
- 99b4003: Fix dim parse bug in meshdist. (MTCam, Oct 16, 2023)
- 69a6f84: Add mapdecomp util. (MTCam, Oct 16, 2023)
- b5c8d30: Merge remote-tracking branch 'origin/m-to-n-restart' into m-to-n-restart (MTCam, Oct 18, 2023)
- a4eed52: Merge main (MTCam, Nov 14, 2023)
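Several commits above refer to "overlap mapping" of "inverted decomps" for m-to-n restart: redistributing restart data written by M ranks so it can be read by N ranks. The core idea can be sketched as follows; this is a minimal illustration of the concept, not the PR's actual API, and the names `invert_decomp` and `overlap_map` are hypothetical.

```python
# Hedged sketch of the m-to-n overlap-mapping idea: given a source
# decomposition over M ranks and a target decomposition over N ranks
# (each mapping rank -> set of global element ids), determine which
# source parts each target part must read its data from.

def invert_decomp(decomp):
    """Invert a rank -> elements map into an element -> rank map."""
    return {el: rank for rank, els in decomp.items() for el in els}


def overlap_map(src_decomp, tgt_decomp):
    """Map (src_rank, tgt_rank) -> set of elements they share."""
    el_to_src = invert_decomp(src_decomp)
    overlaps = {}
    for tgt_rank, els in tgt_decomp.items():
        for el in els:
            key = (el_to_src[el], tgt_rank)
            overlaps.setdefault(key, set()).add(el)
    return overlaps


if __name__ == "__main__":
    # Toy case: 2 writer ranks restarted on 3 reader ranks, 6 elements.
    src = {0: {0, 1, 2}, 1: {3, 4, 5}}
    tgt = {0: {0, 1}, 1: {2, 3}, 2: {4, 5}}
    print(overlap_map(src, tgt))
```

Each target rank then only needs to open the restart files of the source ranks it overlaps, rather than all M of them.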
11 changes: 10 additions & 1 deletion .github/workflows/ci.yaml
@@ -119,12 +119,16 @@ jobs:

- name: Run examples
run: |
set -x
MINIFORGE_INSTALL_DIR=.miniforge3
. "$MINIFORGE_INSTALL_DIR/bin/activate" testing
export XDG_CACHE_HOME=/tmp
mamba install vtk # needed for the accuracy comparison
[[ $(hostname) == "porter" ]] && export PYOPENCL_TEST="port:nv" && unset XDG_CACHE_HOME
# && export POCL_DEBUG=cuda

# This is only possible because actions run sequentially on porter
[[ $(hostname) == "porter" ]] && rm -rf /tmp/githubrunner/pocl-scratch && rm -rf /tmp/githubrunner/xdg-scratch

scripts/run-integrated-tests.sh --examples

doc:
@@ -195,8 +199,13 @@ jobs:
fetch-depth: '0'
- name: Prepare production environment
run: |
set -x
[[ $(uname) == Linux ]] && [[ $(hostname) != "porter" ]] && sudo apt-get update && sudo apt-get install -y openmpi-bin libopenmpi-dev
[[ $(uname) == Darwin ]] && brew upgrade && brew install mpich

# This is only possible because actions run sequentially on porter
[[ $(hostname) == "porter" ]] && rm -rf /tmp/githubrunner/pocl-scratch && rm -rf /tmp/githubrunner/xdg-scratch

MIRGEDIR=$(pwd)
cat scripts/production-testing-env.sh
. scripts/production-testing-env.sh
7 changes: 6 additions & 1 deletion .readthedocs.yml
@@ -2,7 +2,12 @@ version: 2

conda:
environment: .rtd-env-py3.yml


build:
os: "ubuntu-22.04"
tools:
python: "mambaforge-22.9"

sphinx:
fail_on_warning: true

5 changes: 2 additions & 3 deletions .rtd-env-py3.yml
@@ -4,13 +4,12 @@ channels:
- conda-forge
- nodefaults
dependencies:
# readthedocs does not yet support Python 3.11
# See e.g. https://readthedocs.org/api/v2/build/18650881.txt
- python=3.10
- python=3.11
- mpi4py
- islpy
- pip
- pyopencl
- graphviz
- scipy
- pip:
- "git+https://github.com/inducer/pymbolic.git#egg=pymbolic"
106 changes: 106 additions & 0 deletions bin/mapdecomp.py
@@ -0,0 +1,106 @@
"""Read per-rank mesh pkl files and build the intra-decomposition neighbor map."""

__copyright__ = """
Copyright (C) 2020 University of Illinois Board of Trustees
"""

__license__ = """
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
import argparse
import pickle
import os

from meshmode.distributed import get_connected_parts
from grudge.discretization import PartID


def main(mesh_filename=None, output_path=None):
"""Build and save the intradecomp map from per-rank mesh pkl files."""
if output_path is None:
output_path = "./"
output_path = output_path.strip("'")  # str.strip returns a new string; assign it

if mesh_filename is None:
# Try to detect the mesh filename
raise AssertionError("No mesh filename.")

intradecomp_map = {}
nranks = 0
nvolumes = 0
volumes = set()
for r in range(10000):
mesh_pkl_filename = mesh_filename + f"_rank{r}.pkl"
if os.path.exists(mesh_pkl_filename):
nranks = nranks + 1
with open(mesh_pkl_filename, "rb") as pkl_file:
global_nelements, volume_to_local_mesh_data = \
pickle.load(pkl_file)
for vol, meshdat in volume_to_local_mesh_data.items():
local_partid = PartID(volume_tag=vol, rank=r)
volumes.add(vol)
connected_parts = get_connected_parts(meshdat[0])
if connected_parts:
intradecomp_map[local_partid] = connected_parts
else:
break
nvolumes = len(volumes)
rank_rank_nbrs = {r: set() for r in range(nranks)}
for part, nbrs in intradecomp_map.items():
local_rank = part.rank
for nbr in nbrs:
if nbr.rank != local_rank:
rank_rank_nbrs[local_rank].add(nbr.rank)
min_rank_nbrs = nranks
max_rank_nbrs = 0
num_nbr_dist = {}
total_nnbrs = 0
for _, rank_nbrs in rank_rank_nbrs.items():
nrank_nbrs = len(rank_nbrs)
total_nnbrs += nrank_nbrs
if nrank_nbrs not in num_nbr_dist:
num_nbr_dist[nrank_nbrs] = 0
num_nbr_dist[nrank_nbrs] += 1
min_rank_nbrs = min(min_rank_nbrs, nrank_nbrs)
max_rank_nbrs = max(max_rank_nbrs, nrank_nbrs)

mean_nnbrs = (1.0*total_nnbrs) / (1.0*nranks)

print(f"Number of ranks: {nranks}")
print(f"Number of volumes: {nvolumes}")
print(f"Volumes: {volumes}")
print("Number of rank neighbors (min, max, mean): "
f"({min_rank_nbrs}, {max_rank_nbrs}, {mean_nnbrs})")
print(f"Distribution of num nbrs: {num_nbr_dist=}")

# print(f"{intradecomp_map=}")
with open(f"intradecomp_map_np{nranks}.pkl", "wb") as pkl_file:
pickle.dump(intradecomp_map, pkl_file)


if __name__ == "__main__":

parser = argparse.ArgumentParser(
description="MIRGE-Com Intradecomp mapper")
parser.add_argument("-m", "--mesh", type=str, dest="mesh_filename",
nargs="?", action="store", help="root filename for mesh")

args = parser.parse_args()

main(mesh_filename=args.mesh_filename)
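The neighbor-statistics pass in mapdecomp.py above can be illustrated on a toy intradecomp map. In this sketch, plain `(rank, volume)` tuples stand in for grudge's `PartID` objects, and the map contents are hypothetical example data, not output of the utility:

```python
# Toy illustration of the rank-neighbor statistics mapdecomp.py computes:
# for each rank, collect the set of *other* ranks whose parts it is
# connected to, then report min/max/mean neighbor counts.

# (rank, volume) -> list of connected (rank, volume) parts
intradecomp_map = {
    (0, "fluid"): [(1, "fluid"), (0, "wall")],   # rank 0 touches rank 1
    (1, "fluid"): [(0, "fluid"), (2, "fluid")],  # rank 1 touches ranks 0 and 2
    (2, "fluid"): [(1, "fluid")],                # rank 2 touches rank 1
    (0, "wall"): [(0, "fluid")],                 # same-rank overlap: not a neighbor
}

nranks = 3
rank_rank_nbrs = {r: set() for r in range(nranks)}
for (rank, _vol), nbrs in intradecomp_map.items():
    for nbr_rank, _nbr_vol in nbrs:
        if nbr_rank != rank:  # only count cross-rank connections
            rank_rank_nbrs[rank].add(nbr_rank)

nnbrs = [len(v) for v in rank_rank_nbrs.values()]
print(min(nnbrs), max(nnbrs), sum(nnbrs) / nranks)  # 1 2 1.333...
```

Same-rank, cross-volume connections (here fluid/wall on rank 0) are deliberately excluded, matching the `nbr.rank != local_rank` check in the script.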
27 changes: 17 additions & 10 deletions bin/meshdist.py
@@ -1,5 +1,4 @@
#!/usr/bin/env python
"""mirgecom mesh distribution utility"""
"""Read gmsh mesh, partition it, and create a pkl file per mesh partition."""

__copyright__ = """
Copyright (C) 2020 University of Illinois Board of Trustees
@@ -69,15 +68,18 @@ class MyRuntimeError(RuntimeError):


@mpi_entry_point
def main(actx_class, mesh_source=None, ndist=None,
def main(actx_class, mesh_source=None, ndist=None, dim=None,
output_path=None, log_path=None,
casename=None, use_1d_part=None, use_wall=False):

"""The main function."""
if mesh_source is None:
raise ApplicationOptionsError("Missing mesh source file.")

mesh_source = mesh_source.strip("'")  # str.strip returns a new string; assign it

if dim is None:
dim = 3

if log_path is None:
log_path = "log_data"

@@ -177,7 +179,7 @@ def main(actx_class, mesh_source=None, ndist=None,
def get_mesh_data():
from meshmode.mesh.io import read_gmsh
mesh, tag_to_elements = read_gmsh(
mesh_source,
mesh_source, force_ambient_dim=dim,
return_tag_to_elements_map=True)
volume_to_tags = {
"fluid": ["fluid"]}
@@ -200,7 +202,7 @@ def my_partitioner(mesh, tag_to_elements, num_ranks):
if os.path.exists(output_path):
if not os.path.isdir(output_path):
raise ApplicationOptionsError(
"Mesh dist mode requires \"output\""
"Mesh dist mode requires 'output'"
" parameter to be a directory for output.")
if rank == 0:
if not os.path.exists(output_path):
@@ -218,7 +220,7 @@ def my_partitioner(mesh, tag_to_elements, num_ranks):
partition_generator_func=part_func, logmgr=logmgr)

comm.Barrier()

logmgr_set_time(logmgr, 0, 0)
logmgr
logmgr.tick_before()
@@ -239,12 +241,17 @@ def my_partitioner(mesh, tag_to_elements, num_ranks):
action="store_true", help="Include wall domain in mesh.")
parser.add_argument("-1", "--1dpart", dest="one_d_part",
action="store_true", help="Use 1D partitioner.")
parser.add_argument("-d", "--dimen", type=int, dest="dim",
nargs="?", action="store",
help="Number of dimensions")
parser.add_argument("-n", "--ndist", type=int, dest="ndist",
nargs="?", action="store", help="Number of distributed parts")
nargs="?", action="store",
help="Number of distributed parts")
parser.add_argument("-s", "--source", type=str, dest="source",
nargs="?", action="store", help="Gmsh mesh source file")
parser.add_argument("-o", "--ouput-path", type=str, dest="output_path",
nargs="?", action="store", help="Output path for distributed mesh pkl files")
nargs="?", action="store",
help="Output path for distributed mesh pkl files")
parser.add_argument("-c", "--casename", type=str, dest="casename", nargs="?",
action="store", help="Root name of distributed mesh pkl files.")
parser.add_argument("-g", "--logpath", type=str, dest="log_path", nargs="?",
@@ -256,7 +263,7 @@ def my_partitioner(mesh, tag_to_elements, num_ranks):
actx_class = get_reasonable_array_context_class(
lazy=False, distributed=True, profiling=False, numpy=False)

main(actx_class, mesh_source=args.source,
main(actx_class, mesh_source=args.source, dim=args.dim,
output_path=args.output_path, ndist=args.ndist,
log_path=args.log_path, casename=args.casename,
use_1d_part=args.one_d_part, use_wall=args.use_wall)
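The meshdist.py diff above threads a new `-d/--dimen` flag through to `read_gmsh(..., force_ambient_dim=dim)`. The argparse wiring can be checked in isolation; this stand-alone parser mirrors the flags shown in the diff (only `-d`, `-n`, and `-s` are reproduced here):

```python
import argparse

# Stand-alone mirror of part of the meshdist.py argument parser,
# including the new -d/--dimen flag. Useful for verifying how a
# command line is interpreted without invoking the MPI entry point.
parser = argparse.ArgumentParser(description="MIRGE-Com Mesh Distribution")
parser.add_argument("-d", "--dimen", type=int, dest="dim", nargs="?",
                    action="store", help="Number of dimensions")
parser.add_argument("-n", "--ndist", type=int, dest="ndist", nargs="?",
                    action="store", help="Number of distributed parts")
parser.add_argument("-s", "--source", type=str, dest="source", nargs="?",
                    action="store", help="Gmsh mesh source file")

args = parser.parse_args(["-s", "mesh.msh", "-n", "16", "-d", "2"])
print(args.source, args.ndist, args.dim)
```

With `dim` left unset the diff's `main()` falls back to `dim = 3`, so existing 3D invocations keep working unchanged.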