The Message Passing Interface (MPI) is a community standard for inter-process communication and is used by the majority of the US Department of Energy’s parallel scientific applications running on pre-exascale systems. Implementations of the MPI standard are available on all large-scale systems. The OMPI-X project ensures that the MPI standard, and its specific implementation in Open MPI, meets the needs of the Exascale Computing Project (ECP) community in terms of performance, scalability, and capabilities.

Project Details

Since its inception, the MPI standard has evolved in response to the changing needs of massively parallel libraries and applications, as well as the systems on which they are run. With the impending exascale era, the pace of change and growing diversity and complexity of architectures pose new challenges that the MPI standard must address. The OMPI-X project team is active in the MPI Forum standards organization and works within it to raise and resolve key issues facing exascale applications and libraries.

The OMPI-X team also contributes to the development of Open MPI, an open-source, community-based implementation of the MPI standard that is freely available and used by several prominent vendors as the basis for their commercial MPI offerings. The OMPI-X team is focused on prototyping and demonstrating exascale-relevant proposals under consideration by the MPI Forum, as well as improving the fundamental performance and scalability of Open MPI, particularly for exascale-relevant platforms and job sizes. MPI users will be able to take advantage of these enhancements simply by linking against recent builds of the Open MPI library.

In addition to Open MPI, the OMPI-X project delivers two more products. The Process Management Interface for Exascale (PMIx) is a specification and reference implementation that Open MPI and a growing number of other software tools rely on for the startup and wire-up of parallel processes. It also provides key capabilities that can underpin work on runtime systems. Qthreads is a library for lightweight user-level threads; as part of the OMPI-X project, it is being integrated into MPI implementations to improve the support for and performance of threading within MPI libraries.

Principal Investigator(s):

David Bernholdt, Oak Ridge National Laboratory


Collaborators:

Oak Ridge National Laboratory; Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Sandia National Laboratories; University of Tennessee, Knoxville

Progress to Date

  • The OMPI-X team championed several significant additions that will appear in the forthcoming MPI 4.0 version of the standard. These include (1) partitioned communication, which supports increased flexibility and the overlap of communication and computation in highly threaded environments—including GPUs—and (2) sessions, which enhance flexibility in how complex MPI applications, such as coupled multiphysics simulations, are “constructed” and how their communication is managed. OMPI-X also championed the standardization of error management within MPI. Implementations of all of these capabilities are available in the Open MPI library.
  • The OMPI-X team delivered performance and scalability enhancements to the Open MPI implementation. The remote memory access (RMA) implementation was reworked for better performance, scalability, and memory usage, and progress was made on incorporating topology and congestion awareness.
  • The team demonstrated that Qthreads can achieve performance equivalent to that of OpenMP.
  • The team made a concerted effort to enhance the quality assurance and testing of this project’s products, including improving the Open MPI testing and continuous integration infrastructure, deploying that testing infrastructure on pre-exascale platforms, and adding tests to the test suite that are relevant for exascale libraries and applications.
