ALExa

The Accelerated Libraries for Exascale (ALExa) project provides efficient, scalable algorithms for geometric search and clustering, for transferring data between grids with non-matching parallel distributions, and for constructing reduced representations of high-dimensional data (e.g., to optimize storage). The project's libraries address these needs for exascale applications, including those written in Fortran.

Project Details

The ALExa project consists of four libraries that address the needs of applications for efficient use of exascale systems: ArborX, Tasmanian, DataTransferKit, and ForTrilinos.

The ArborX library provides performance-portable geometric search algorithms, such as finding all objects within a given distance of a query point or finding a fixed number of closest objects. While similar in scope to the well-known nanoflann and Boost.Geometry.Index libraries, ArborX emphasizes efficient parallel algorithms for high-performance computing (HPC) environments. The library also provides several algorithms built on top of the geometric search functionality, including the popular density-based clustering algorithms DBSCAN and HDBSCAN.
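
The typical usage pattern is to build a search index over a set of geometric objects and then query it with spatial or nearest-neighbor predicates. Below is a minimal sketch of that pattern, assuming an ArborX 1.x-style API (ArborX::BVH, ArborX::nearest, and Kokkos views of points); exact headers and signatures may differ between releases.

    // Minimal sketch, assuming an ArborX 1.x-style API; names and signatures
    // may differ between releases.
    #include <ArborX.hpp>
    #include <Kokkos_Core.hpp>

    int main(int argc, char *argv[])
    {
      Kokkos::ScopeGuard guard(argc, argv);

      using ExecutionSpace = Kokkos::DefaultExecutionSpace;
      using MemorySpace = ExecutionSpace::memory_space;
      ExecutionSpace space;

      // A small set of 3D points serving as the search primitives.
      int const n = 100;
      Kokkos::View<ArborX::Point *, MemorySpace> points("points", n);
      Kokkos::parallel_for(
          "fill_points", Kokkos::RangePolicy<ExecutionSpace>(space, 0, n),
          KOKKOS_LAMBDA(int i) { points(i) = {(float)i, (float)i, (float)i}; });

      // Build the bounding volume hierarchy (the search index) on the device.
      ArborX::BVH<MemorySpace> bvh(space, points);

      // One query: the three points closest to the origin.
      using Query = decltype(ArborX::nearest(ArborX::Point{}, 1));
      Kokkos::View<Query *, MemorySpace> queries("queries", 1);
      Kokkos::parallel_for(
          "fill_queries", Kokkos::RangePolicy<ExecutionSpace>(space, 0, 1),
          KOKKOS_LAMBDA(int) {
            queries(0) = ArborX::nearest(ArborX::Point{0.f, 0.f, 0.f}, 3);
          });

      // Results come back in compressed sparse row form: offsets(q)..offsets(q+1)
      // delimit the entries of indices satisfying query q.
      Kokkos::View<int *, MemorySpace> indices("indices", 0);
      Kokkos::View<int *, MemorySpace> offsets("offsets", 0);
      bvh.query(space, queries, indices, offsets);

      return 0;
    }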

The Tasmanian library provides the ability to construct surrogate models with low memory footprint, low cost, and optimal computational throughput, enabling optimization and uncertainty quantification for large-scale engineering problems, as well as efficient multiphysics simulations.
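
As a rough illustration of the workflow, the sketch below builds a sparse-grid surrogate of a two-dimensional model using Tasmanian's C++ interface. The calls (makeGlobalGrid, loadNeededPoints, evaluate) reflect the TasGrid API as commonly documented, and the grid type and rule are arbitrary choices; treat it as a sketch rather than a definitive recipe.

    // Sketch of constructing a sparse-grid surrogate with Tasmanian; function
    // names and enums are assumed from the TasGrid C++ interface and may
    // differ between versions.
    #include "TasmanianSparseGrid.hpp"

    #include <cmath>
    #include <vector>

    int main()
    {
      // Surrogate for a model f : [-1,1]^2 -> R on a level-6 Clenshaw-Curtis grid.
      TasGrid::TasmanianSparseGrid grid;
      grid.makeGlobalGrid(2, 1, 6, TasGrid::type_level, TasGrid::rule_clenshawcurtis);

      // Sample the expensive model only at the sparse-grid points.
      std::vector<double> points = grid.getNeededPoints(); // 2 doubles per point
      int num_points = grid.getNumNeeded();
      std::vector<double> values(num_points);
      for (int i = 0; i < num_points; ++i)
      {
        double x = points[2 * i], y = points[2 * i + 1];
        values[i] = std::exp(-x * x - y * y); // stand-in for the expensive model
      }
      grid.loadNeededPoints(values);

      // The surrogate can now be evaluated cheaply anywhere in the domain.
      std::vector<double> x = {0.3, -0.7}, fx(1);
      grid.evaluate(x, fx);

      return 0;
    }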

The DataTransferKit library provides the ability to transfer computed solutions between grids with different layouts on parallel accelerated architectures, enabling applications to seamlessly combine results from different computational grids within a single simulation.

The ForTrilinos project developed SWIG-Fortran, a tool for the automatic generation of Fortran interfaces to any C/C++ library. This tool generates the ForTrilinos interface library, which provides a seamless pathway for large, complex Fortran-based codes to access Trilinos numerical solvers. SWIG-Fortran also provides Fortran bindings to other scientific libraries, including DTK, STRUMPACK, SUNDIALS, Tasmanian, and numerical components of the C++ standard library.
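
To give a flavor of the workflow, the sketch below shows a hypothetical SWIG interface file for a small C++ class; the class, the file names, and the exact SWIG-Fortran invocation are illustrative assumptions, not part of any ALExa library.

    // solver.i -- a hypothetical SWIG interface file wrapping a C++ class for
    // Fortran. Running SWIG-Fortran on it (something like `swig -fortran -c++
    // solver.i`) generates a Fortran module plus C wrapper code, so the class
    // can be used from Fortran as a derived type with type-bound procedures.
    // All names here are illustrative.
    %module solver

    %{
    #include "solver.hh"   // the C++ header being wrapped
    %}

    // Wrap everything declared in the header.
    %include "solver.hh"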

Principal Investigator(s):

Andrey Prokopenko, Oak Ridge National Laboratory

Progress to date

  • The team developed a performance-portable indexing structure based on a bounding volume hierarchy, with support for accelerators (through Kokkos) and distributed computations (through MPI). Novel approaches for clustering data on GPUs with the DBSCAN and HDBSCAN algorithms (see the sketch after this list) yielded a 200× speedup on a single NVIDIA V100 GPU over a serial baseline implementation when clustering a 37-million-point 3D cosmology dataset provided by the ExaSky project.
  • The team enabled GPU-accelerated surrogate model simulations in Tasmanian, developed new algorithms for asynchronous surrogate construction that exploit extreme concurrency, and demonstrated a 100× reduction in the memory footprint of the sparse representation of neutrino opacities for the ExaStar project. In addition, Tasmanian has been used to calibrate several models used by ExaAM.
  • The team developed SWIG-Fortran, a tool that automatically generates Fortran object-oriented interfaces and the necessary wrapper code for any given C/C++ interface, demonstrated advanced inversion-of-control functionality that allows a C++ solver to invoke user-provided Fortran routines, and used the tool to provide Fortran access to a wide variety of linear and nonlinear solvers in the Trilinos library.
  • All GPU-accelerated algorithms are implemented for the CUDA, HIP, and SYCL environments, ensuring performance portability to NVIDIA, AMD, and Intel accelerators.
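
As a rough sketch of the clustering capability referenced in the first bullet, the snippet below assumes ArborX exposes DBSCAN as a free function (ArborX::dbscan) returning per-point cluster labels; the header name, parameter names, and noise-label convention are assumptions and may differ between releases.

    // Minimal sketch, assuming ArborX's DBSCAN interface; header and signature
    // details may differ between releases.
    #include <ArborX_DBSCAN.hpp>
    #include <Kokkos_Core.hpp>

    using ExecutionSpace = Kokkos::DefaultExecutionSpace;
    using MemorySpace = ExecutionSpace::memory_space;

    // `points` is a device view of 3D positions (e.g., particles from a cosmology
    // snapshot); `eps` is the linking radius and `core_min_size` the minimum
    // neighborhood size for a point to be considered a core point.
    Kokkos::View<int *, MemorySpace>
    cluster(Kokkos::View<ArborX::Point *, MemorySpace> points, float eps,
            int core_min_size)
    {
      ExecutionSpace space;
      // One cluster label per point; noise points receive a special (negative) label.
      return ArborX::dbscan(space, points, eps, core_min_size);
    }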
