Programming Models and Runtimes

Lead: Rajeev Thakur, Argonne National Laboratory

The team is developing exascale-ready programming models and runtimes, with particular attention to the design and implementation challenges of combining massive intra-node and inter-node concurrency in an application. It is also developing a diverse collection of products that target next-generation node architectures to improve realized performance, ease of expression, and performance portability.

Exascale MPI / MPICH

Objective: Enhance the MPI standard and the MPICH implementation of MPI for exascale

Efficient communication among the compute elements within high performance computing systems is essential for simulation performance. The Message Passing Interface (MPI) is a community standard developed by the MPI Forum for programming these systems and handling the communication needed. The goal of the Exascale MPI project is to evolve the MPI standard to fully support the complexity of exascale systems and deliver MPICH—a reliable, performant implementation of the MPI standard.
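The two-sided send/receive model that MPI standardizes can be sketched conceptually in plain Python (this is not MPICH or the MPI API; it only simulates the idea with threads and queues, and the rank functions are hypothetical):

```python
import threading
import queue

# Conceptual sketch of two-sided message passing (the model MPI
# standardizes): each "rank" owns a mailbox; send() deposits a message
# into the destination's mailbox, recv() blocks until one arrives.
mailboxes = {rank: queue.Queue() for rank in (0, 1)}

def send(dest, payload):
    mailboxes[dest].put(payload)

def recv(rank):
    return mailboxes[rank].get()

result = {}

def rank0():
    send(1, "halo data from rank 0")   # analogous to MPI_Send to rank 1
    result["reply"] = recv(0)          # analogous to MPI_Recv from rank 1

def rank1():
    msg = recv(1)
    send(0, f"ack: {msg}")

t0, t1 = threading.Thread(target=rank0), threading.Thread(target=rank1)
t0.start(); t1.start(); t0.join(); t1.join()
print(result["reply"])  # ack: halo data from rank 0
```

In real MPI the two ranks would be separate processes on separate nodes, and the "mailbox" delivery would be handled by the network fabric.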

Principal Investigator: Yanfei Guo, Argonne National Laboratory


Legion

Objective: Develop/enhance this task-based programming model

The complexity of the exascale systems that will be delivered, from processors with many cores to accelerators and heterogeneous memory, makes it challenging for scientists to achieve high performance from their simulations. Legion provides a data-centric programming system that allows scientists to describe the properties of their program data and dependencies, along with a runtime that extracts tasks and executes them using knowledge of the exascale systems to improve performance.
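The data-centric idea can be illustrated with a toy scheduler (this is not the Legion API; it is a minimal sketch, with hypothetical task and region names, of how a runtime can derive an execution order purely from declared data usage):

```python
# Conceptual sketch (not the Legion API): tasks declare which data
# "regions" they read and write; the runtime derives a valid execution
# order from those declarations, as a data-centric system like Legion
# does at much larger scale.
tasks = {
    "init":    {"writes": {"grid"}, "reads": set()},
    "stencil": {"writes": {"new_grid"}, "reads": {"grid"}},
    "reduce":  {"writes": {"norm"}, "reads": {"new_grid"}},
}

def schedule(tasks):
    produced, order = set(), []
    pending = dict(tasks)
    while pending:
        for name, t in list(pending.items()):
            if t["reads"] <= produced:       # all inputs available?
                order.append(name)
                produced |= t["writes"]
                del pending[name]
    return order

print(schedule(tasks))  # ['init', 'stencil', 'reduce']
```

A production runtime additionally uses this information to overlap independent tasks, place data in appropriate memories, and move computation to accelerators.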

Principal Investigator: Pat McCormick, Los Alamos National Laboratory


PaRSEC

Objective: Develop/enhance this task-based programming model

One difficulty in programming exascale systems is expressing the tasks that comprise a scientific simulation and then mapping them to the heterogeneous computational resources of such a system while achieving high performance. PaRSEC supports the development of domain-specific languages and tools to simplify and improve the productivity of scientists using a task-based system, and it provides a low-level runtime.
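A distinctive abstraction in this space is the parameterized task graph, in which dependencies between task instances are stated as symbolic rules rather than enumerated edges. The sketch below (not the PaRSEC API; the dependency rule and task body are hypothetical) shows the idea:

```python
# Conceptual sketch of a parameterized task graph: tasks are indexed by
# an integer i, and the dependency "task(i) consumes the output of
# task(i-1)" is stated as a rule rather than an explicit list of edges.
N = 5

def deps(i):
    """Symbolic dependency rule: task i depends on task i-1 (if any)."""
    return [i - 1] if i > 0 else []

def run(i, outputs):
    # Hypothetical task body: each task doubles its predecessor's output.
    prev = outputs[i - 1] if i > 0 else 1
    return prev * 2

outputs = {}
for i in range(N):            # a real runtime discovers a valid order itself
    for d in deps(i):
        assert d in outputs   # dependency already satisfied
    outputs[i] = run(i, outputs)

print(outputs[N - 1])  # 32
```

Because the graph is described by rules rather than stored explicitly, it can be evaluated lazily and scales to task counts far beyond what an enumerated graph could hold in memory.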

Principal Investigator: Hartwig Anzt, University of Tennessee, Knoxville

Pagoda: UPC++/GASNet

Objective: Develop/enhance a Partitioned Global Address Space (PGAS) programming model

A computation being performed on one part of a large system often needs to access or provide data to another part of the system in order to complete a scientific simulation. The Partitioned Global Address Space (PGAS) model provides the appearance of shared memory accessible to all the compute nodes while implementing this shared memory behind the scenes using memory physically local to the nodes and communication primitives such as remote direct memory access (RDMA).
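The partitioning idea can be sketched in a few lines (this is not the UPC++ API; it is a conceptual model with hypothetical names, using a block distribution as one common choice):

```python
# Conceptual sketch of a partitioned global address space: one logical
# array is split into per-rank partitions, and get/put route each
# global index to the rank that owns it, much as a PGAS runtime routes
# remote accesses over RDMA.
RANKS, N = 4, 16
BLOCK = N // RANKS
local = {r: [0] * BLOCK for r in range(RANKS)}  # each rank's own memory

def owner(i):
    return i // BLOCK                 # block distribution of indices

def put(i, value):
    local[owner(i)][i % BLOCK] = value

def get(i):
    return local[owner(i)][i % BLOCK]

put(5, 42)               # global index 5 lives in rank 1's partition
print(owner(5), get(5))  # 1 42
```

The programmer sees one global index space; the runtime's job is to make remote `get`/`put` operations efficient, ideally as one-sided network operations that do not interrupt the owning node's computation.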

Principal Investigator: Paul Hargrove, Lawrence Berkeley National Laboratory


SICM

Objective: Develop an interface and library for accessing a complex memory hierarchy

Exascale systems will have complex, heterogeneous memories that need to be effectively managed, either directly by the programmer or by the runtime, in order to achieve high performance. Natively supporting each memory technology is challenging, as each has its own separate programming interface. The SICM project addresses the emerging complexity of exascale memory hierarchies by providing a portable, simplified interface to complex memory.
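The shape of such a unified interface can be sketched as follows (this is not the actual SICM API; the class and method names here are hypothetical and stand in for kind-specific native allocators):

```python
# Conceptual sketch of a single allocation interface over heterogeneous
# memory kinds (hypothetical names, not the SICM API). In a real
# library, each kind would dispatch to a different native allocator
# (DRAM, high-bandwidth memory, persistent memory); the caller sees
# one alloc() entry point regardless of the backing technology.
class Arena:
    def __init__(self, kind):
        self.kind = kind
        self.allocations = []

    def alloc(self, nbytes):
        buf = bytearray(nbytes)   # stand-in for a kind-specific allocator
        self.allocations.append(buf)
        return buf

dram = Arena("dram")
hbm = Arena("hbm")               # high-bandwidth memory, if present

buf = hbm.alloc(1024)
print(hbm.kind, len(buf))  # hbm 1024
```

The value of the facade is that application code requests memory by intent (capacity, bandwidth, persistence) rather than coding against each vendor's separate interface.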

Principal Investigator: Scott Pakin, Los Alamos National Laboratory


OMPI-X

Objective: Enhance the MPI standard and the Open MPI implementation of MPI for exascale

The Message Passing Interface (MPI) is a community standard for inter-process communication and is used by the majority of DOE’s parallel scientific applications running on pre-exascale systems. Because MPI is a portable standard, it can be implemented on all of these large systems. The OMPI-X project ensures that the MPI standard and its specific implementation in Open MPI meet the needs of the ECP community in terms of performance, scalability, and capabilities.

Principal Investigator: David Bernholdt, Oak Ridge National Laboratory


Kokkos/RAJA

Objective: Develop abstractions for node-level performance portability

Exascale systems are characterized by computer chips with a large number of cores, a smaller amount of memory per core, and a variety of architectures, which can result in decreased productivity for library and application developers who need to write specialized software for each system. The Kokkos/RAJA project provides high-level abstractions that express the necessary parallel constructs, which are then mapped onto a runtime to achieve portable performance across current and future architectures.
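The pattern these C++ libraries provide can be illustrated in miniature (this is not the Kokkos or RAJA API; it is a conceptual Python analogue in which an execution policy selects the backend for a single, unchanged loop body):

```python
import concurrent.futures

# Conceptual sketch of the performance-portability pattern Kokkos and
# RAJA provide in C++ (not their actual APIs): the caller writes one
# parallel_for over a loop body, and an execution-policy argument
# selects how it is mapped onto the hardware (serial or threads here;
# on real systems, also GPU backends).
def parallel_for(n, body, policy="serial"):
    if policy == "serial":
        return [body(i) for i in range(n)]
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(body, range(n)))

# The same loop body runs unchanged under either policy.
squares = parallel_for(8, lambda i: i * i, policy="threads")
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The key property is that the loop body is written once; retargeting to a new architecture means changing the policy, not rewriting the computation.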

Principal Investigator: Christian Trott, Sandia National Laboratories


Argo

Objective: Optimize existing low-level system software components to improve the performance, scalability, and functionality of exascale applications and runtime systems

The operating system provides necessary functionality to libraries and applications, such as allocating memory and spawning processes, and manages the resources on the nodes in an exascale system. The Argo project is building portable, open-source system software that improves performance and scalability and provides increased functionality to exascale libraries, applications, and runtime systems, with a focus on resource management, memory management, and power management.

Principal Investigator: Pete Beckman, Argonne National Laboratory
