Software Technology

Programming Models and Runtimes

Exascale MPI

Principal Investigators: Pavan Balaji, lead, Argonne National Laboratory (ANL); Marc Snir, ANL; Abdelhalim Amer, ANL; Yanfei Guo, ANL; Rob Latham, ANL; Kenneth Raffenetti, ANL; Min Si, ANL; Kavitha Tiptur Madhu, ANL; Giuseppe Congiu, ANL; Neelima Bayyapu, ANL

This project enables applications to use the latest advances in MPI effectively so they can scale to the largest supercomputers in the world. It will produce a high-performance MPI implementation and drive changes to the MPI standard that meet application and architectural requirements. MPI is the de facto standard programming model for large-scale scientific computing today: the vast majority of DOE's parallel scientific applications running on the largest HPC systems use MPI, and these codes represent billions of dollars of investment. MPI must therefore evolve to run as efficiently as possible on exascale systems. MPI remains a viable programming model at exascale; however, both the MPI standard and MPI implementations must address the challenges posed by the increased scale, performance characteristics, and evolving architectural features expected in exascale systems, as well as the capabilities and requirements of the applications targeted at those systems.
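As a concrete illustration of the kind of "latest advances in MPI" applications can adopt, the minimal sketch below uses a standard MPI-3 nonblocking collective, MPI_Iallreduce, to overlap a global reduction with local work. This example is illustrative only and is not taken from the project's code; the contributed values and printed output are hypothetical.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank;   /* each rank contributes an illustrative value */
    double global = 0.0;
    MPI_Request req;

    /* Start the reduction without blocking (MPI-3 nonblocking collective). */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... independent local computation could proceed here while the
       collective makes progress ... */

    /* Complete the collective before using its result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("sum of ranks = %.0f\n", global);

    MPI_Finalize();
    return 0;
}

Compiled with an MPI wrapper compiler (e.g., mpicc) and run under mpiexec, this pattern lets an implementation overlap communication with computation, one of the capabilities a high-performance exascale MPI implementation is expected to deliver efficiently.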