Adaptive Mesh Refinement

Project Details

Adaptive mesh refinement (AMR) is like a computational microscope; it allows scientists to “zoom in” on particular regions of space that are more interesting than others. Cosmologists might want to zoom in on detailed cosmic filaments, astrophysicists might focus on regions of nucleosynthesis, and combustion scientists might investigate the details of the chemistry near a flame front.

The AMReX (AMR for the Exascale) project supports the development of block-structured AMR algorithms for solving systems of partial differential equations on exascale architectures. Block-structured AMR provides the basis for the temporal and spatial discretization strategy for several ECP applications in the areas of accelerator design, additive manufacturing, astrophysics, combustion, cosmology, multiphase flow, and wind plant modeling.

AMReX is a software framework that provides a unified infrastructure with the functionality needed for these and other AMR applications to use machines effectively and efficiently, from laptops to exascale architectures. AMR reduces the computational cost and memory footprint compared with a uniform mesh while preserving accurate descriptions of different physical processes in complex multiphysics algorithms.

AMReX supports a wide range of algorithms that solve systems of partial differential equations. In addition to the core capability to support field data on a hierarchical mesh, AMReX also provides data structures to represent particles and support for different particle and particle/mesh algorithms. It also offers tools for an embedded boundary representation of complex problem geometry and components for designing algorithms that use that representation. Additionally, it provides native geometric multigrid solvers needed to support implicit discretizations and native asynchronous I/O capabilities used to write data for analysis and visualization and for checkpoint/restart.
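To make the I/O piece concrete, the sketch below shows how a single-level field might be written to a plotfile with the framework's (synchronous) plotfile utility; the function name, field name, and file-naming convention are illustrative assumptions rather than code taken from an AMReX application.

```cpp
#include <string>

#include <AMReX.H>
#include <AMReX_Vector.H>
#include <AMReX_MultiFab.H>
#include <AMReX_Geometry.H>
#include <AMReX_PlotFileUtil.H>

// Minimal sketch: write one mesh field to a plotfile that downstream
// analysis and visualization tools can read. The "density" variable name
// and the "plt<step>" naming scheme are placeholder choices.
void write_density_plotfile (const amrex::MultiFab& density,
                             const amrex::Geometry& geom,
                             amrex::Real time, int step)
{
    amrex::Vector<std::string> varnames = {"density"};
    amrex::WriteSingleLevelPlotfile("plt" + std::to_string(step),
                                    density, varnames, geom, time, step);
}
```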

The AMReX design allows application developers to interact with the software at several different levels of abstraction. It is possible to use only the AMReX data containers and iterators and none of the higher-level functionality. A more popular approach is to use the data structures and iterators for single- and multilevel operations but retain complete control over the time evolution algorithm (i.e., the ordering of algorithmic components at each level and across levels). In an alternative approach, the developer exploits additional functionality in AMReX that is designed specifically to support traditional subcycling-in-time algorithms. In this approach, stubs are provided for the necessary operations, such as advancing the solution on a level, correcting coarse grid fluxes with time- and space-averaged fine grid fluxes, averaging data from fine to coarse, and interpolating in space and time from coarse to fine. This layered design provides users with the ability to have complete control over their algorithm or to use an application template that can provide higher-level functionality.
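For readers unfamiliar with the lowest level of that stack, here is a minimal sketch of the container-and-iterator layer: a domain is chopped into grids, the grids are distributed across MPI ranks, and an iterator visits the grids each rank owns. The grid sizes and field name are illustrative assumptions, and the per-grid work is reduced to a print statement.

```cpp
#include <AMReX.H>
#include <AMReX_Print.H>
#include <AMReX_MultiFab.H>
#include <AMReX_ParallelDescriptor.H>

// Minimal sketch of the container/iterator layer: build a domain, chop it
// into grids, distribute the grids across MPI ranks, and iterate over the
// grids this rank owns. Sizes and the field name are illustrative.
int main (int argc, char* argv[])
{
    amrex::Initialize(argc, argv);
    {
        amrex::Box domain(amrex::IntVect(0), amrex::IntVect(127)); // 128 cells per dimension
        amrex::BoxArray ba(domain);
        ba.maxSize(32);                            // chop into grids of at most 32 cells per side
        amrex::DistributionMapping dm(ba);         // assign grids to MPI ranks
        amrex::MultiFab phi(ba, dm, 1, 1);         // 1 component, 1 ghost cell
        phi.setVal(0.0);

        // MFIter visits only the grids owned by the calling rank.
        for (amrex::MFIter mfi(phi); mfi.isValid(); ++mfi) {
            amrex::AllPrint() << "rank " << amrex::ParallelDescriptor::MyProc()
                              << " owns " << mfi.validbox() << "\n";
        }
    }
    amrex::Finalize();
    return 0;
}
```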

AMReX is designed for performance portability across a range of accelerator-based architectures for a variety of different applications. AMReX isolates applications from any particular architecture and programming model without sacrificing performance. A lightweight abstraction layer effectively hides the details of the architecture from the application. This layer provides constructs that allow users to specify what operations they want to perform on a block of data without specifying how those operations are carried out. AMReX then maps those operations onto the hardware at compile time so that the hardware is used effectively. For example, on a many-core node, an operation would be mapped onto a tiled execution model by using OpenMP to guarantee good cache performance, whereas on a different architecture, the same operation might be mapped to a kernel launch appropriate to a particular GPU.
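A hedged sketch of what that looks like in user code follows: ParallelFor and the tiling iterator are the AMReX constructs described above, while the field and the per-cell scaling operation are placeholders chosen for illustration.

```cpp
#include <AMReX_MultiFab.H>
#include <AMReX_Gpu.H>

// Sketch of the "describe the work, not the mapping" style: the lambda says
// what to do in each cell, and AMReX decides at build time whether that
// becomes tiled OpenMP loops on a many-core CPU or a GPU kernel launch.
void scale_field (amrex::MultiFab& phi, amrex::Real factor)
{
    // TilingIfNotGPU() enables cache-friendly tiling on CPU builds; for GPU
    // builds each box instead becomes a kernel launch via ParallelFor.
#ifdef AMREX_USE_OMP
#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
#endif
    for (amrex::MFIter mfi(phi, amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi) {
        const amrex::Box& bx = mfi.tilebox();
        amrex::Array4<amrex::Real> const& p = phi.array(mfi);
        amrex::ParallelFor(bx,
            [=] AMREX_GPU_DEVICE (int i, int j, int k) noexcept
            {
                p(i,j,k) *= factor;   // placeholder per-cell operation
            });
    }
}
```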

Additionally, AMReX provides native functionality to support efficient parallel communication, parallel reductions, and memory management.
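As a final illustrative sketch, the fragment below exercises two of those pieces: a ghost-cell exchange between neighboring grids and a pair of global reductions. The field and geometry passed in are assumed placeholders.

```cpp
#include <AMReX.H>
#include <AMReX_Print.H>
#include <AMReX_MultiFab.H>
#include <AMReX_Geometry.H>

// Sketch of the built-in communication and reduction helpers: FillBoundary
// exchanges ghost cells between neighboring grids (including across periodic
// boundaries), and sum/max combine values across all MPI ranks.
void exchange_and_report (amrex::MultiFab& phi, const amrex::Geometry& geom)
{
    phi.FillBoundary(geom.periodicity());   // parallel ghost-cell exchange

    amrex::Real total   = phi.sum(0);       // global sum of component 0
    amrex::Real largest = phi.max(0);       // global max of component 0

    amrex::Print() << "sum = " << total << ", max = " << largest << "\n";
}
```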

Principal Investigator(s):

John Bell, Lawrence Berkeley National Laboratory

Collaborators:

Lawrence Berkeley National Laboratory, Argonne National Laboratory, National Renewable Energy Laboratory
