Ristra

The properties and behavior of materials under a wide variety of extreme conditions are central to many applications within the realm of national security. Such modeling requires multiple length scales and timescales and drives requirements for exascale computing. Los Alamos National Laboratory (LANL) is developing a next-generation multiphysics code for national security applications that focuses on 3D multiphysics, insight at the mesoscale for extreme condition materials, and low-energy density physics simulations.

Project Details

Computer science technologies that allow emerging high-performance computing (HPC) architectures to be used efficiently suggest a need for physics algorithms that permit increased concurrency at many scales. This motivates a fresh look at the numerical decisions made throughout the simulation process from setup through analysis. With this in mind, Ristra is casting a wide net across available physics algorithms for multiphysics simulations and is concurrently exploring novel programming models for emerging architectures.

Ristra’s focus is on two application domains, both of which feature multiscale methods that will be an important component of future extreme-scale multiphysics simulations.

Low-Energy Density Physics for Experiment Design. Ristra’s Moya code is an unstructured multimaterial Lagrangian hydrodynamics application. In addition, Moya uses the Tangram and Portage libraries for interface reconstruction and multimaterial remap to enable arbitrary Lagrangian-Eulerian (ALE) capabilities. This set of codes is targeting machines with accelerators, such as El Capitan at LLNL and Venado at LANL, and is under active development.

High-Energy Density Physics for Inertial Confinement Fusion. Ristra’s Symphony code is an unstructured multimaterial radiation hydrodynamics application that features a multiscale algorithm for the radiation solve. A fully coupled low-order radiation hydrodynamics system is updated by a high-order radiation solver that can be executed asynchronously; this capability is a work in progress.
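To make the high-order/low-order (HOLO) coupling concrete, the following is a generic sketch in the standard variable-Eddington-factor form; it illustrates the class of method, not necessarily Symphony’s exact discretization. The high-order solver updates the angular intensity $I(\mathbf{x},\boldsymbol{\Omega},t)$,

```latex
\frac{1}{c}\,\partial_t I + \boldsymbol{\Omega}\cdot\nabla I + \sigma_t I
  = \frac{\sigma_s}{4\pi}\,\phi + q ,
```

and supplies a closure, the Eddington tensor

```latex
\mathbf{E} \;=\; \left.\int \boldsymbol{\Omega}\boldsymbol{\Omega}\, I \,\mathrm{d}\Omega
  \;\middle/\; \int I \,\mathrm{d}\Omega \right. ,
```

which closes the low-order moment system solved fully coupled to the hydrodynamics:

```latex
\frac{1}{c}\,\partial_t \phi + \nabla\cdot\mathbf{F} + \sigma_a \phi = Q ,
\qquad
\frac{1}{c}\,\partial_t \mathbf{F} + \nabla\cdot(\mathbf{E}\,\phi) + \sigma_t \mathbf{F} = \mathbf{0} .
```

Because the expensive high-order solve feeds back only through the closure, it can lag the tightly coupled low-order system, which is what makes asynchronous execution of the high-order solver attractive.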

FleCSI is a compile-time configurable framework designed to support multiphysics application development. As such, FleCSI provides a very general set of infrastructure design patterns that can be specialized and extended to suit the needs of a broad variety of solver and data requirements. FleCSI provides an abstract data model that supports compile-time and runtime configurability for implementing a variety of discretizations (e.g., mesh and mesh-free), physics fields, and operators over them. FleCSI also provides an abstract execution model that can target a variety of underlying parallel programming runtimes, ranging from well-established options, such as the Message Passing Interface (MPI), to ambitious new programming systems, such as Legion, a data-centric model with out-of-order task execution. The intent is to provide developers with a concrete set of user-friendly programming tools that can be used now while allowing flexibility in choosing runtime implementations and optimizations that can be applied to future architectures and runtimes. This effort also provides a realistic infrastructure for evaluating programming models and data management technologies.

Over the course of the project, the Ristra team will continue to push the boundaries on the development of multiscale, multiphysics applications and on the programming models needed to demonstrate performance on exascale-class computer architectures. Particular effort will be given to adding key physics capabilities needed for the effective solution of the inertial confinement fusion and multiscale hydrodynamics problems that are the focus of this effort.

This project will allow the Ristra team to solve the next-generation challenge problems associated with the national security problems of interest to LANL efficiently and flexibly on emerging HPC architectures. The separation of concerns between the computer science and the expression of complex physics will allow for much more agile responses to future drivers from mission needs and computing technologies.

Principal Investigator(s):

Chris Malone and Jonathan Pietarila Graham, Los Alamos National Laboratory

Collaborators:

Los Alamos National Laboratory, Sandia National Laboratories

Progress to Date

The new version of the FleCSI infrastructure is nearly complete, undergoing a co-design effort with the multiphysics developers. The Symphony code currently utilizes an older version of FleCSI but is mature enough to support the required physics modules for target challenge problems of interest. The Moya code is built upon the new version of FleCSI and is poised to serve as an eventual replacement for existing production code capability. Recent accomplishments of the project include the following.

  • Favorable results were demonstrated on 2D and 3D inertial confinement fusion (ICF) calculations from Ristra’s Symphony code, which is an unstructured mesh, multimaterial radiation hydrodynamics code that uses a novel high-order/low-order multiscale method for the radiation solver.
  • The fidelity in ICF simulations was enhanced by adopting a new discontinuous Galerkin low-order radiation solver.
  • FleCSI’s parallel back-end capability has been demonstrated in Moya by using the MPI and Legion back ends with no changes to the physics implementation. An additional HPX back end is being developed.
  • Scaling studies were performed on half of LANL’s Intel Knights Landing-based HPC machine, Trinity; the entirety of Sandia National Laboratories’ ARM-based HPC machine, Astra; and about one-quarter of Lawrence Livermore National Laboratory’s GPU-based HPC machine, Sierra.
  • Moya’s physics capabilities and a majority of Symphony’s capabilities have been ported to GPUs by leveraging Kokkos. Optimization efforts are underway to find the right balance between task granularity and GPU kernel granularity.
