
WeeklyTelcon_20161122


Open MPI Weekly Telcon


  • Dialup Info: (Do not post to public mailing list or public wiki)

Attendees

  • Ralph
  • Howard
  • Josh Hursey
  • Josh Ladd
  • Sylvain Jeaugey
  • Todd Kordenbrock

Agenda

Review 1.10.x: v1.10.5

Review 2.0.x: v2.0.2

Review 2.x: v2.1.0

  • All issues and pull requests for v2.1.0
  • Desired / must-haves for v2.1.x series
  • Known / ongoing issues to discuss
    • PMIx 1.2.0: status?
      • Waiting on Issue #144 to be resolved. PR #217 is pending to fix this issue; it needs review and testing.
      • Once that fix is in, we need to measure the memory footprint again.
    • Performance Issue #1831
      • PSM2 seems to be impacted more than others. Intel to take another look.
      • Both the master and v2.x branches are giving poorer performance compared to the v1.10 branch.
  • Will probably release in early 2017 (January), after the v2.0.x release

Question to user community: should we make a v2.2.x?

  • This question was posed to the user community at the BOF
  • We were requested to make two lists:
    1. Features that we anticipate we could port to a v2.2.x release
    2. Features that we anticipate would be too difficult to port to a v2.2.x release
  • Here are the lists as discussed on the con-call:
    1. Features that we anticipate we could port to a v2.2.x release
      • Improved collective performance (new “tuned” module)
      • Enable Linux CMA shared memory support by default
    2. Features that we anticipate would be too difficult to port to a v2.2.x release
      • THREAD_MULTIPLE improvements for MTLs (not sure about this being difficult)
      • Revamped CUDA support
      • PMIx 3.0 integration (actually rhc doesn't think this would be hard but questions whether we should do it if PMIx 3.0 functionality is not being used)
      • MPI_ALLOC_MEM integration with memkind (see the sketch after this list)
      • OpenMP affinity / placement integration
    3. Features that are not in master yet, but might land soon(ish), and would make keeping the branches in sync difficult.
      • ULFM changes
      • New CUDA features
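
For the MPI_ALLOC_MEM / memkind item above, here is a minimal sketch of how an application requests memory through MPI today. The info key used is purely hypothetical; any actual hint names would be defined by the memkind integration work, not by this page.

```c
/* Minimal sketch of MPI_Alloc_mem usage, the call that the proposed
 * memkind integration would extend. */
#include <mpi.h>

int main(int argc, char **argv)
{
    char *buf = NULL;
    MPI_Info info;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    /* Hypothetical hint: the real key/value spelling (if any) would come
     * out of the memkind integration work, not this example.  Unknown
     * info keys are ignored by the library. */
    MPI_Info_set(info, "memory_kind", "high_bandwidth");

    /* Ask the MPI library for 1 MiB of memory it manages itself. */
    MPI_Alloc_mem(1 << 20, info, &buf);

    /* ... use buf as a send/receive buffer ... */

    MPI_Free_mem(buf);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```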

Master review

  • N/A
  • We have been officially invited to SPI
  • Ralph shared new information about the two organizations.
  • Will try to invite representatives from SPI and SFC to a teleconf so folks can ask questions.
    • Will discuss this more next week.

Review Master MTT testing (https://mtt.open-mpi.org/)

  • Not seeing morning MTT reports, tarball-generation emails, or Coverity reports.

MTT Dev status:

  • Not getting morning MTT result emails. Jeff looked into that last week, and went back and forth with Brian.

Open MPI Developer's Meeting


Status Update Rotation

  1. LANL, Houston, IBM
  2. Cisco, ORNL, UTK, NVIDIA
  3. Mellanox, Sandia, Intel

Back to 2016 WeeklyTelcon-2016
