Many scientific applications need access to scalable algorithms for efficiency, must pass data between grids with different parallel distributions, or require reduced representations of high-dimensional data, for example, to optimize storage. The Accelerated Libraries for Exascale (ALExa) project provides technologies to address these needs for exascale applications, including applications written in Fortran.
Complex scientific applications might need to combine results from different computational grids to perform their required simulations, where each computational grid represents only part of the physics. Moreover, the simulations on each grid might be written in Fortran and require access to scalable solvers in C++. The ALExa project is developing four components to address these issues and enable applications to better use exascale systems: the Data Transfer Kit (DTK), ArborX, Tasmanian, and ForTrilinos.
DTK provides the ability to transfer computed solutions between grids with different layouts on parallel accelerated architectures, enabling applications to seamlessly combine results from different computational grids. The team is focused on adding new features needed by applications and on ensuring that the library performs well on pre-exascale and exascale architectures.
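A minimal, serial sketch of the simplest form of such a transfer, mapping a field from source grid points to target grid points by nearest-neighbor lookup, is shown below. It only illustrates the underlying idea; the point and function names are invented for this example, and DTK's actual interfaces operate on distributed, accelerator-resident data rather than plain vectors.

```cpp
#include <cstddef>
#include <iostream>
#include <limits>
#include <vector>

// A point of a source or target grid (hypothetical, for illustration).
struct Point { double x, y, z; };

double squaredDistance(const Point& a, const Point& b)
{
  double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return dx * dx + dy * dy + dz * dz;
}

// Transfer a field defined at source points onto target points by copying
// the value of the nearest source point (the lowest-order form of transfer).
std::vector<double> transferNearest(const std::vector<Point>& sourcePoints,
                                    const std::vector<double>& sourceField,
                                    const std::vector<Point>& targetPoints)
{
  std::vector<double> targetField(targetPoints.size());
  for (std::size_t t = 0; t < targetPoints.size(); ++t)
  {
    std::size_t best = 0;
    double bestDist = std::numeric_limits<double>::max();
    for (std::size_t s = 0; s < sourcePoints.size(); ++s)
    {
      double d = squaredDistance(targetPoints[t], sourcePoints[s]);
      if (d < bestDist) { bestDist = d; best = s; }
    }
    targetField[t] = sourceField[best];
  }
  return targetField;
}

int main()
{
  // Source grid: a temperature field sampled on a coarse line of points.
  std::vector<Point> source = {{0, 0, 0}, {1, 0, 0}, {2, 0, 0}};
  std::vector<double> temperature = {300.0, 350.0, 400.0};

  // Target grid: a finer, differently laid-out set of points.
  std::vector<Point> target = {{0.4, 0, 0}, {1.6, 0, 0}};

  for (double v : transferNearest(source, temperature, target))
    std::cout << v << "\n";  // prints 300 and 400
}
```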
ArborX provides performance-portable geometric search algorithms, such as finding all objects within a certain distance of a query (rNN) or a fixed number of closest objects (kNN), similar to the nanoflann and Boost.Geometry.Index libraries but designed for high-performance computing environments. The team focuses on providing functionality required by other Exascale Computing Project efforts (e.g., ExaWind and ExaSky), including clustering algorithms. ArborX is a required dependency of DTK.
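The sketch below shows what a kNN query against ArborX's bounding volume hierarchy could look like, following the publicly documented pattern of building an index over primitives and then issuing predicates created with ArborX::nearest. Exact headers and signatures vary between ArborX versions, so treat this as an assumption-laden illustration rather than authoritative usage.

```cpp
#include <ArborX.hpp>
#include <Kokkos_Core.hpp>

#include <iostream>

int main(int argc, char* argv[])
{
  Kokkos::ScopeGuard guard(argc, argv);

  // Host spaces keep the example simple; device spaces work the same way.
  using ExecutionSpace = Kokkos::DefaultHostExecutionSpace;
  using MemorySpace = Kokkos::HostSpace;

  // A small cloud of 3D points to index.
  Kokkos::View<ArborX::Point*, MemorySpace> points("points", 4);
  points(0) = ArborX::Point{0.f, 0.f, 0.f};
  points(1) = ArborX::Point{1.f, 0.f, 0.f};
  points(2) = ArborX::Point{0.f, 1.f, 0.f};
  points(3) = ArborX::Point{1.f, 1.f, 1.f};

  // Build a bounding volume hierarchy over the points.
  ArborX::BVH<MemorySpace> bvh(ExecutionSpace{}, points);

  // One kNN predicate: the two points nearest to the origin.
  using Query = decltype(ArborX::nearest(ArborX::Point{}, 0));
  Kokkos::View<Query*, MemorySpace> queries("queries", 1);
  queries(0) = ArborX::nearest(ArborX::Point{0.f, 0.f, 0.f}, 2);

  // Results come back in compressed sparse row form: matched indices
  // plus per-query offsets into that list.
  Kokkos::View<int*, MemorySpace> indices("indices", 0);
  Kokkos::View<int*, MemorySpace> offsets("offsets", 0);
  bvh.query(ExecutionSpace{}, queries, indices, offsets);

  for (int i = offsets(0); i < offsets(1); ++i)
    std::cout << "neighbor index: " << indices(i) << "\n";
}
```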
Tasmanian provides the ability to construct surrogate models with a low memory footprint, low cost, and optimal computational throughput, enabling optimization and uncertainty quantification for large-scale engineering problems, as well as efficient multiphysics simulations. The team is focused on reducing GPU memory overhead and accelerating evaluation of the surrogate models produced.
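To illustrate the surrogate idea in isolation, the sketch below builds a small polynomial surrogate of an invented "expensive" model by sampling it at Chebyshev nodes and then evaluating the interpolant cheaply. This is a generic stand-in for the sparse-grid surrogates Tasmanian constructs, not Tasmanian's API, and every name in it is hypothetical.

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Stand-in for an expensive model evaluation (e.g., a full simulation).
double expensiveModel(double x) { return std::exp(-x) * std::cos(4.0 * x); }

// A polynomial surrogate built by sampling the model at Chebyshev nodes on
// [-1, 1]; only the nodes and sampled values need to be stored.
struct Surrogate
{
  std::vector<double> nodes, values;

  explicit Surrogate(int n)
  {
    const double pi = std::acos(-1.0);
    for (int i = 0; i < n; ++i)
    {
      double x = std::cos(pi * (2.0 * i + 1.0) / (2.0 * n));
      nodes.push_back(x);
      values.push_back(expensiveModel(x));  // the only expensive calls
    }
  }

  // Cheap evaluation via Lagrange interpolation through the samples.
  double evaluate(double x) const
  {
    double result = 0.0;
    for (std::size_t i = 0; i < nodes.size(); ++i)
    {
      double basis = 1.0;
      for (std::size_t j = 0; j < nodes.size(); ++j)
        if (j != i) basis *= (x - nodes[j]) / (nodes[i] - nodes[j]);
      result += values[i] * basis;
    }
    return result;
  }
};

int main()
{
  Surrogate surrogate(12);  // 12 expensive samples taken up front
  double x = 0.3;
  std::cout << "model:     " << expensiveModel(x) << "\n"
            << "surrogate: " << surrogate.evaluate(x) << "\n";
}
```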
ForTrilinos provides the capability to automatically generate Fortran interfaces to any C/C++ library, as well as a seamless pathway for large, complex Fortran-based codes to access the Trilinos library through automatically generated interface code.
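The sketch below illustrates the kind of glue such interface generation produces: a C++ class exposed through flat extern "C" functions with opaque handles, which a Fortran bind(C) interface block can then call. The class and function names are invented for illustration; ForTrilinos generates both this glue and the matching Fortran module automatically, and the generated code differs in detail.

```cpp
// Hypothetical example of C++-to-Fortran glue code, not ForTrilinos output.

// An existing C++ capability, standing in for a Trilinos solver object.
class Solver
{
public:
  void setTolerance(double tol) { tolerance_ = tol; }
  double tolerance() const { return tolerance_; }

private:
  double tolerance_ = 1.0e-8;
};

// Flat C API: opaque handles plus plain functions, callable from Fortran
// through an interface block such as
//   interface
//     function solver_create() bind(C, name="solver_create")
//       use iso_c_binding
//       type(c_ptr) :: solver_create
//     end function
//   end interface
extern "C" {

void* solver_create() { return new Solver; }

void solver_set_tolerance(void* handle, double tol)
{
  static_cast<Solver*>(handle)->setTolerance(tol);
}

void solver_destroy(void* handle) { delete static_cast<Solver*>(handle); }

}  // extern "C"
```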