Growing preCICE from an as-is Coupling Library to a Sustainable, Batteries-included Ecosystem
Jul 6 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE computing facilities ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), organizes the webinar series Best Practices for HPC Software Developers.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The July webinar is titled Growing preCICE from an as-is Coupling Library to a Sustainable, Batteries-included Ecosystem and will be presented by Gerasimos Chourdakis (Technical University of Munich). The webinar will take place on Wednesday, July 6, 2022, at 1:00 pm ET.


Starting humbly as a coupling library for fluid-structure interaction problems used by just a few academic groups in Germany, preCICE has grown to a complete coupling ecosystem used by more than 100 research groups worldwide, and for a wide range of multi-physics applications. How did that happen? Apart from the library itself, preCICE now maintains ready-to-use adapters for several open-source solvers, tutorial cases, documentation, and more. Users can thus easily couple popular open-source solvers (such as OpenFOAM, SU2, deal.II, or FEniCS) with their in-house simulation software (written in C++, C, Fortran, Python, Matlab, or Julia). In parallel to this, the developers of preCICE had to learn how to write more effective documentation (avoiding fragmentation and getting the user in the loop), how to manage the rapidly growing community (switching from a mailing list to a chatroom and then to a dedicated Discourse forum), and how to organize workshops and training courses. This webinar will focus on lessons learned that can help any research software project grow in a sustainable way.

Coordinating Dynamic Ensembles of Computations with libEnsemble
Jul 7 @ 1:00 pm – 2:30 pm


This tutorial will introduce libEnsemble, a Python toolkit for coordinating asynchronous and dynamic ensembles of calculations across massively parallel resources.

Target participants are researchers running large numbers of computations who would like to train models, perform optimizations based on simulation results, or perform other adaptive parameter studies. Participants will learn to use libEnsemble’s generation and simulation functions to express portable ensembles, and to utilize the growing library of example functions.

The presenters will address how to couple libEnsemble workflows with any user application and apply advanced features including the allocation of variable resources and the cancellation of simulations based on intermediate outputs. Using examples from current ECP software technology and application integrations, the presenters will demonstrate how libEnsemble’s mix-and-match approach can help interface libraries and applications with exascale-level resources.
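To make the generation/simulation-function idea above concrete, here is a minimal sketch modeled on the "simple sine" example mentioned in the agenda. The function signature `(H, persis_info, sim_specs, libE_info)` follows libEnsemble's simulator-function convention; the field names `"x"` and `"f"` and the specs dictionary are illustrative, so check the libEnsemble documentation for the exact interface of your version.

```python
import numpy as np

# Sketch of a libEnsemble-style simulator function: it reads input points
# from the history array H and returns a structured array whose fields
# match the "out" declaration in sim_specs.
def sine_sim(H, persis_info, sim_specs, libE_info):
    out = np.zeros(len(H), dtype=sim_specs["out"])
    out["f"] = np.sin(H["x"])  # evaluate the "simulation" at each input point
    return out, persis_info

# How the simulator would be described to libEnsemble (illustrative):
sim_specs = {
    "sim_f": sine_sim,      # the simulator defined above
    "in": ["x"],            # history fields passed in to the simulator
    "out": [("f", float)],  # fields the simulator produces
}
```

In a real run, `sim_specs` would be passed to libEnsemble's top-level driver together with matching generator specs; here the function is shown standalone so the calling convention is visible.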

The tutorial will be presented by Stephen Hudson, Jeffrey Larson and John-Luke Navarro.


  • Overview of libEnsemble
  • Simple sine tutorial (with hands-on)
  • Data workflow
  • Running with user applications (with hands-on)
  • GPU example
  • Variable resource management
  • Optimization using APOSMM
  • Running ensembles across multiple systems
E4S at NERSC 2022
Aug 25 @ 12:00 pm – 5:30 pm


The Extreme-scale Scientific Software Stack (E4S) is a collection of open-source software packages for high-performance computing. The E4S stack includes more than 100 HPC applications, libraries, and tools, among them MPI implementations, development tools such as HPCToolkit, TAU, and PAPI, and math libraries including PETSc and Trilinos. E4S is available via containers, a Spack build cache, an AWS EC2 image, and facility-tuned Spack environments in the form of spack.yaml files. E4S provides a new model for delivering a standard software stack to HPC centers, with dedicated support to help bridge the gap between HPC facilities and the developers of E4S products. NERSC has several deployments of E4S on Cori and Perlmutter using the Spack package manager, and we plan to use E4S as the vehicle for installing and supporting much of the software we provide for users.
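For readers unfamiliar with Spack environments, a minimal spack.yaml sketch might look like the following; the package list is purely illustrative, not the actual E4S or NERSC manifest:

```yaml
# Hypothetical spack.yaml sketch of an E4S-style Spack environment.
spack:
  specs:
    - petsc
    - trilinos
    - hpctoolkit
  view: true          # expose installed packages through a single directory view
  concretizer:
    unify: true       # resolve one consistent version of each dependency
```

A facility-tuned environment would additionally pin compilers, MPI providers, and external system packages for the target machine.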

Richard Gerber, HPC Department Head, will start with opening remarks and present a brief overview of current workloads, software usage, and the science applications that run on NERSC systems. Mike Heroux leads the ECP Software Technology (ST) focus area, which develops software to run efficiently on exascale systems; E4S consists of many open-source products developed by ECP ST teams and installed on DOE systems at OLCF, ALCF, and NERSC. Sameer Shende, who leads the E4S project, will present the components of E4S and the different ways to access the E4S stack.

Katie Antypas leads the ECP Hardware Integration (HI) focus area, which covers application integration at the facilities, hardware evaluation, training and productivity, and software deployment at the facilities. Katie will present an update on current activities and the roadmap for the upcoming year.

We will discuss the E4S software deployment process at HPC centers, with a particular focus on what we’re doing here at NERSC to bring you reliable, performant HPC software. Shahzeb Siddiqui will present an overview of the E4S stacks installed at NERSC. The session will be a mix of hands-on exercises and a walkthrough of the NERSC E4S documentation; participants who have access to NERSC systems are encouraged to follow along. Shahzeb will also present the Spack Infrastructure project at NERSC, which leverages GitLab continuous integration (CI) to automate Spack deployments.
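As a rough illustration of such a CI-driven deployment, a hypothetical .gitlab-ci.yml job might look like this; the job name, stage, and environment path are invented, and only the Spack commands themselves are standard:

```yaml
# Hypothetical GitLab CI job that deploys a Spack environment.
deploy_e4s:
  stage: deploy
  script:
    - git clone --depth=1 https://github.com/spack/spack.git
    - . spack/share/spack/setup-env.sh          # put spack on PATH
    - spack env activate -d ./envs/e4s          # activate the environment (path is illustrative)
    - spack install                             # concretize and build the environment's specs
```

A production pipeline would typically add caching, per-system runner tags, and a build-cache push step.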

The Software Deployment (SD) group is responsible for deploying ECP software at the DOE facilities via E4S. The SD group partners with Application Development (AD) and ST projects to tune their software to run efficiently on facility systems, and provides CI infrastructure to help AD/ST teams automate their workflows using GitLab CI. Ryan Adamson will provide an overview of the Software Deployment group, including current challenges and the future roadmap.

We will conclude this event with a hands-on exercise on using Spack on Perlmutter to deploy a software stack. Sameer will present how to use E4S containers, replace the MPI in an E4S container with the host MPI, create custom containers for your application, use E4S on AWS and at DOE facilities, and build applications against a bare-metal E4S installation. He will highlight the use of E4S on Perlmutter and answer questions about applying E4S to your projects.


  • Welcome, Richard Gerber
  • E4S for NERSC and its Users, Richard Gerber
  • What is E4S, Sameer Shende
  • Overview of Software Technology, Mike Heroux
  • Overview of Hardware Integration, Katie Antypas
  • NERSC Spack Infrastructure, Shahzeb Siddiqui
  • Software Deployment at the Facilities, Ryan Adamson
  • E4S User Documentation, Shahzeb Siddiqui
  • Spack Training on Perlmutter, Shahzeb Siddiqui
  • E4S Training, Sameer Shende
  • Q&A
Template Task Graph: a Task Programming Paradigm for Irregular Applications
Sep 15 @ 1:00 pm – 2:30 pm


The PaRSEC team will highlight the Template Task Graph (TTG) programming paradigm: the concepts, benefits, and requirements of the programming approach, as well as the practical aspects needed to start using TTG on various platforms to write portable task-based applications. The team will provide direct support during the tutorial, as well as through GitHub and a mailing list afterward.

What are Template Task Graphs?

Template Task Graphs have been developed to enable a straightforward expression of task parallelism for algorithms working on irregular and unbalanced data sets. The TTG Application Programming Interface employs C++ templates to build an abstract representation of the task graph and schedule it on distributed resources. It offers a scalable and efficient API for porting complex applications onto task-based runtime systems, gaining access to asynchronous progress, computation/communication overlap, and efficient use of all computing resources available on the target system. In this tutorial, we will introduce TTG and its main concepts and features through a variety of applications, ranging from well-known regular examples to irregular and data-dependent ones.

The tutorial, which features many hands-on examples, covers how to install TTG on the ECP platforms and other environments, how to integrate TTG into your application using CMake, how to express task-based, data-dependent algorithms for irregular datasets using TTG, and how to integrate these task-based algorithms into existing applications.
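The CMake integration step can be sketched roughly as follows; this is a hypothetical fragment, and the package and exported target names (shown here as `ttg` and `ttg-parsec`) should be verified against the TTG documentation for your installation:

```cmake
# Hypothetical CMakeLists.txt fragment linking an application against TTG.
find_package(ttg REQUIRED)

add_executable(my_app my_app.cc)
# Link against the PaRSEC backend; a MADNESS backend target may also exist.
target_link_libraries(my_app PRIVATE ttg-parsec)
```

The tutorial itself walks through the supported backends and platform-specific configuration in detail.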

How to Attend:

The tutorial is available to everyone, and participants from any background are welcome to attend. A basic knowledge of C++ and templates will be helpful for participants who wish to try the hands-on.

Presenters will show in-depth demos during the tutorial and can provide support, both during and after the tutorial, with setup and usage on supported architectures.

No-cost registration is required; see “Register” above.


Thomas Herault, Joseph Schuchart (University of Tennessee, Knoxville)