Modernizing Computing to Usher in the Next Generation of Scientific Discoveries

By Pat McCormick, Los Alamos National Laboratory

Exascale can mean different things depending on your experience and the role you play in taking our computing capabilities to the next level. Early on, I was fortunate to see the Exascale Computing Project (ECP) from the vantage point of its initial leadership team. Today, as a technical project leader for three separate efforts, I see ECP from a similar yet different perspective. The projects I lead share a common theme of modernization, but each follows a different path to that goal: pushing boundaries and developing new technologies while also providing a stable future foundation for “tried-and-true” capabilities long established in the high-performance computing community.

For example, the Flang project is actively working to provide a modern, open-source compiler infrastructure for the Fortran programming language, whose roots go back to 1953, when John Backus proposed it as an alternative to assembly language. It is hard to argue that any other programming language has had a more significant impact on computational science. This effort will help ensure that Fortran continues its long-established role well into the exascale era of computing.

In contrast, the Legion project, which recently won an R&D 100 Award, aims to provide a new data-centric programming system explicitly targeted at the exascale generation of distributed-memory and heterogeneous architectures. While Legion delivers a foundation for many forms of computing, it has most recently served as the basis for accelerating the training of deep neural networks (DNNs) for machine learning workloads. These results have already delivered performance and scalability beyond what has been accomplished with industry-established frameworks such as Keras and PyTorch, along with a promising capability for running large-scale problems on DOE’s upcoming exascale platforms.
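To give a flavor of Legion’s task-based, data-centric model, below is a minimal sketch written against Legion’s C++ runtime API, following the structure of its public “hello world” tutorial. The task name and ID here are illustrative, and a real application would also declare the logical regions its tasks access.

```cpp
#include <cstdio>
#include "legion.h"

using namespace Legion;

// Illustrative task ID; any unique integer works.
enum TaskIDs { TOP_LEVEL_TASK_ID };

// A Legion task: the runtime decides where and when it runs,
// scheduling work based on the data each task declares it will touch.
void top_level_task(const Task *task,
                    const std::vector<PhysicalRegion> &regions,
                    Context ctx, Runtime *runtime) {
  printf("Hello from a Legion task!\n");
}

int main(int argc, char **argv) {
  Runtime::set_top_level_task_id(TOP_LEVEL_TASK_ID);
  {
    TaskVariantRegistrar registrar(TOP_LEVEL_TASK_ID, "top_level");
    registrar.add_constraint(ProcessorConstraint(Processor::LOC_PROC));
    Runtime::preregister_task_variant<top_level_task>(registrar, "top_level");
  }
  // Hand control to the runtime, which launches the top-level task.
  return Runtime::start(argc, argv);
}
```

Because tasks declare the data they operate on, the runtime, rather than the programmer, can discover parallelism and map work onto distributed, heterogeneous machines.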

Finally, in the Kitsune project, we are extending today’s compiler infrastructure to be aware of parallelism so that parallel applications can be analyzed and optimized more effectively. Today’s compilers reflect the evolution of technologies that proceeded hand-in-hand with Moore’s Law and Dennard scaling. This heritage makes them very good at optimizing sequential programs, but they rarely capture a paradigm-independent representation of the parallel semantics within an application. Our efforts aim to capture this parallelism in the compiler’s intermediate representation, enable additional optimizations, and improve performance portability across different architectures and underlying software mechanisms. Ideally, this work will enable a more productive suite of software development tools for the next several generations of cutting-edge scientific applications.
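As a small illustration of the kind of parallel semantics such a compiler could capture, consider a loop written with a standard C++17 parallel algorithm. This is ordinary C++, not Kitsune-specific syntax, and the function name is hypothetical; the point is that the execution policy states the iterations are independent, a fact a parallelism-aware compiler can carry into its intermediate representation and use to retarget the loop, rather than treating the call as opaque.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

// A simple loop whose iterations are independent: y = a*x + y.
// The par_unseq policy declares that parallel and vectorized
// execution are both legal, making the loop's parallel semantics
// explicit to the toolchain instead of leaving them implicit.
void saxpy(float a, const std::vector<float> &x, std::vector<float> &y) {
  std::transform(std::execution::par_unseq,
                 x.begin(), x.end(), y.begin(), y.begin(),
                 [a](float xi, float yi) { return a * xi + yi; });
}
```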

Regardless of these projects’ different technical viewpoints, the overarching goal remains the same: to modernize key aspects of our approach to computing. In my mind, exascale is not a predetermined set of guidelines but a mindset and a software capability, one that should enable creative uses of exascale-class computing platforms and help usher in the next generation of scientific discoveries.