Apr
15
Thu
HPC System and Software Testing via Buildtest
Apr 15 all-day

This talk was held on April 15, 2021 as part of the 2021 ECP Annual Meeting.

An HPC environment is a tightly coupled system: a cluster of nodes and accelerators connected by a high-speed interconnect, a parallel file system, multiple storage tiers, a job scheduler, and a software stack that users rely on to run their workflows. Because these components are highly interdependent, it is essential to test the HPC system and its software stack regularly. There has been significant progress in software build frameworks (Spack, EasyBuild) for installing software packages on HPC systems; however, there is little consensus on the testing front.

In this talk, we presented buildtest (https://buildtest.readthedocs.io/en/devel/index.html), an acceptance testing framework for HPC systems. In buildtest, tests are written in YAML files called ‘buildspecs’, which buildtest processes into shell scripts. These tests can be run locally or through a job scheduler (Slurm, LSF, or Cobalt). Buildtest defines a rich YAML structure for buildspecs in JSON Schema, which it uses to validate them. Currently, buildtest supports two major schema types, script and compiler, for writing shell and Python scripts as well as single-source compilation tests.
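As an illustration, a minimal buildspec using the script schema might look like the sketch below; the executor name and exact schema fields vary by site and buildtest version, so treat the details as an assumption rather than a definitive recipe:

```yaml
version: "1.0"
buildspecs:
  hello_world:
    type: script                 # use the script schema
    executor: generic.local.bash # assumed executor; names are defined in buildtest's site configuration
    description: "trivial hello-world acceptance test"
    run: echo "hello world"
```

A buildspec like this would typically be validated and run with `buildtest build -b hello.yml`, which generates and executes the corresponding shell script.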

In this talk, we covered the core framework and its features, and showed how to write tests (i.e., buildspecs) using the script and compiler schemas. In addition, we presented a summary of the Cori test suite (https://github.com/buildtesters/buildtest-cori), which includes real tests for the Cori system at NERSC.

In January 2021, we deployed the Spack E4S 20.10 stack (https://docs.nersc.gov/applications/e4s/) on Cori for the NERSC user community. As part of this initiative, we test the E4S stack via the E4S test suite (https://github.com/E4S-Project/testsuite) using buildtest with GitLab scheduled pipelines. We concluded the talk with a brief demo of buildtest and pointers to additional resources for getting started.

Apr
16
Fri
Using Spack to Accelerate Developer Workflows Tutorial
Apr 16 all-day

This tutorial was held on April 16, 2021 as part of the 2021 ECP Annual Meeting.

Spack is an open source tool for HPC package management that simplifies building, installing, developing, and sharing HPC software stacks. It is the official deployment and distribution tool for ECP, and it allows ECP developers to easily leverage each other’s work. Spack continues to grow in popularity among end users, HPC developers, and the world’s largest HPC centers. It provides a powerful and flexible dependency model, a simple Python syntax for writing package build recipes, and a repository of over 5,000 community-maintained packages. The modern scientific software stack is complex and spans C, C++, Fortran, Python, and R; Spack can help reduce the integration burden and allow developers to spend more time on science and less on the drudgery of deployment.

This tutorial builds significantly on past Spack tutorials, with a stronger focus on developer workflows. We covered the traditional topics of installation, package authorship, and Spack’s dependency model. We went in depth on Spack environments and configuration, and gave examples of how Spack can be used to bootstrap a developer environment and to develop multiple packages concurrently. Finally, we demonstrated how `spack external find` and Spack build caches (binary packages) can accelerate development and CI workflows. Participants can expect to come away from this tutorial with new skills, even if they have participated in Spack tutorials in the past.
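As a hedged sketch of the kind of developer workflow covered (the package and environment names here are hypothetical placeholders), a session combining these features might look like:

```console
# create and activate an environment to sandbox this work
spack env create mydev
spack env activate mydev

# add a package and mark it for development from local source
spack add mypackage@develop
spack develop mypackage@develop

# detect preinstalled system tools instead of rebuilding them
spack external find cmake

# build the environment; binaries come from a build cache when one is configured
spack install
```

The environment keeps the concretized specs, develop checkouts, and configuration together, so the same setup can be reproduced in CI.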

Apr
19
Mon
Timemory ECP Tutorial
Apr 19 @ 12:00 pm – 3:00 pm

Software monitoring

Have you ever written a multi-level logging abstraction for your project? Created an error checking system? Written a high-level timer + label abstraction? Have you then added additional abstractions for logging data values and/or recording the memory usage? Did you add or want to add support for exporting these labels to external profilers like VTune, Nsight, TAU, etc.? Do you need to support flushing this data intermittently? If your answer to any of these questions is yes, this is the right tutorial for you.

Logging, error-checking, and high-level timekeeping abstractions are a staple of HPC applications. As projects grow in complexity and user base, developers often end up having to provide these abstractions because such capabilities are generally viewed as necessary for debugging, validation, and ensuring optimal performance. Timemory aims to simplify monitoring the state and performance of your application so that the relevant debugging, logging, and performance data can be trivially enabled or disabled in a consistent and portable manner.
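As a minimal sketch of that idea, assuming timemory’s Python bindings are installed, a decorator can toggle a set of measurements around a function without changing its body (the function itself is a hypothetical example):

```python
import timemory
from timemory.util import marker

# collect wall-clock time and peak resident set size for this call;
# changing (or emptying) the component list changes what is collected,
# without touching the function body
@marker(["wall_clock", "peak_rss"])
def compute():
    return sum(i * i for i in range(1_000_000))

if __name__ == "__main__":
    compute()
    timemory.finalize()  # flush and write the collected results
```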

Why timemory?

Timemory is designed as a toolkit for implementing profiling, debugging, and logging solutions, as well as providing a holistic profiling solution. For example:

  • If you would like to keep all your current abstractions and only want type-safe handles for invoking groups of them in bulk, timemory can provide that.
  • If you would like to simplify aggregating data from different MPI/UPC++ ranks, timemory can provide that.
  • If you only want to add support for exporting to JSON/XML/etc., timemory can provide that.
  • If you want to create a new command-line tool that combines different measurements, timemory provides the components to build it easily.
  • If you want a holistic solution that you can easily extend or restrict, timemory can provide that.

What is timemory?

Timemory is a multi-purpose C++ toolkit and suite of C/C++/Fortran/Python tools for performance analysis, optimization studies, logging, and debugging. The primary objective of timemory is to create a universal instrumentation framework that streamlines building software monitoring interfaces and tools by coupling the inversion-of-control programming principle with C++ template metaprogramming. The original intent of the toolkit design was specific to performance analysis; however, it was later realized that the design allowed debugging and logging abstractions to co-exist seamlessly with the performance analysis abstractions.
The design allows developers to construct production-quality implementations that couple application-specific software monitoring requirements with third-party tools and libraries. To help ensure this objective is fully realized, timemory provides a number of pre-built implementations: a generic C/C++/Fortran library interface, compiler instrumentation, dynamic instrumentation, integrations with popular frameworks such as MPI, OpenMP, NCCL, and Kokkos, Python bindings, and an extended analogue of the UNIX time command-line tool.
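As a brief, hedged illustration of the C++ toolkit (the component names follow timemory’s documented bundles, but treat the details as a sketch rather than a definitive recipe):

```cpp
#include "timemory/timemory.hpp"

// bundle two measurement types; the template parameters decide,
// at compile time, exactly which data are collected
using bundle_t = tim::component_tuple<tim::component::wall_clock,
                                      tim::component::peak_rss>;

void work()
{
    bundle_t region{"work"};  // labeled measurement region
    region.start();
    // ... application code ...
    region.stop();
}

int main(int argc, char** argv)
{
    tim::timemory_init(argc, argv);  // read settings, set output paths
    work();
    tim::timemory_finalize();        // aggregate and write results
    return 0;
}
```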

Does HPC need another profiling tool?

No. HPC has a surplus of performance analysis tools and APIs: VTune, Nsight, TAU, Caliper, Score-P, Callgrind, LIKWID, Arm MAP, CrayPAT, OpenSpeedShop, ittnotify, NVTX, PAPI, CUPTI, mpiP, MPI_T, OMPT, gperftools, ROC-profiler, ROC-tracer, and innumerable application-specific abstractions that provide anything from basic timekeeping and memory-usage tracking to wrappers and callbacks for the aforementioned APIs. We designed timemory as a way to easily integrate and maintain exactly the set of measurements, tools, and features you want to support, with an interface best suited to your application.

Contents of the Tutorial

Below is a preliminary outline of the tutorial, which is divided into two days: the first day covers the front-end tools for C/C++/Fortran/CUDA/Python, and the second day covers how to use the C++ toolkit. The interactive sessions will be held on Mondays, 9:00 AM – 12:00 PM PT (12:00 PM – 3:00 PM ET).

Day 1: Tools and Library (04/19/2021)

Introduction to timemory

  • Motivation
  • Design philosophy and nomenclature
  • Installation

Command-line Tools

  • timemory-avail — information tool
  • timem — UNIX time + more
  • timemory-run — dynamic instrumentation and binary rewriting
  • timemory-plotter — matplotlib plotting of results
  • timemory-roofline — generate the roofline

Library API

  • Compiler instrumentation
  • Extern C interface

Python API

  • Decorators and context-managers
  • Iterating over results in-situ

Python Command-Line Tools

  • timemory-python-profiler — Python function profiler
  • timemory-python-trace — Python line-by-line tracing
  • timemory-line-profiler — classic line-profiler tool extended to collect different metrics

Visualizing and Analyzing Results

  • Converting timemory data to pandas dataframes via Hatchet
  • Manipulating dataframes
  • Visualizing in Jupyter notebooks

Day 2: C++ and Python Toolkit (04/26/2021)

Python

  • Using individual components to build your own tools

C++

  • Creating a new component
  • Using a custom component for timemory-run
  • Designing a customized profiling API for your project
  • Designing a customized debugging/logging interface for your project
    • Wrapping externally defined functions
    • Creating profiling/debugging libraries for your project
    • Inserting measurements/logging/error-checking around C/C++ function calls
    • Auditing incoming arguments and return values
  • Replacing externally defined functions
    • Experimenting with mixed precision without modifying the original source code

How to Attend

  • The lecture series is available to everyone.
  • No-cost registration is required; the meeting password will be sent to registrants.
  • For the exercises, timemory can be installed locally, or registrants may use a provided Docker image.

Presenters

  • Jonathan Madsen
  • Laurie Stephey
  • Muazz Gul Awan
  • Rahulkumar Gayatri

Tutorial Material
Recording – Day 1
Recording – Day 2

Apr
20
Tue
ALCF GPU Hackathon 2021
Apr 20 all-day

Argonne GPU Hackathon 2021

The Argonne GPU Hackathon is a multi-day event designed to help teams of three to six developers accelerate their own codes on GPUs using a programming model or machine learning framework of their choice. Each team is assigned mentors for the duration of the event.

Dates

  • April 20 and 27–29, 2021

Prerequisites

  • Teams are expected to be fluent with the code or project they bring to the event and motivated to make progress during the hackathon.
  • No advanced GPU skills are required, but teams are expected to know the basics of GPU programming and profiling by the time of the event. A collection of GPU lectures, tutorials, and labs is available to all participants at no fee.

See https://www.gpuhackathons.org/index.php/event/argonne-gpu-hackathon-2021 for eligibility and more information.

Apr
26
Mon
Timemory ECP Tutorial
Apr 26 @ 12:00 pm – 3:00 pm

This session is Day 2 of the two-part Timemory ECP Tutorial, covering the C++ and Python toolkit. See the April 19 entry above for the full description, outline, attendance details, presenters, and tutorial materials.

Apr
27
Tue
ALCF GPU Hackathon 2021
Apr 27 all-day

This is a continuation of the multi-day Argonne GPU Hackathon. See the April 20 entry above for the full description, dates, prerequisites, and eligibility information.

Apr
28
Wed
ALCF GPU Hackathon 2021
Apr 28 all-day

This is a continuation of the multi-day Argonne GPU Hackathon. See the April 20 entry above for the full description, dates, prerequisites, and eligibility information.

Apr
29
Thu
ALCF GPU Hackathon 2021
Apr 29 all-day

This is a continuation of the multi-day Argonne GPU Hackathon. See the April 20 entry above for the full description, dates, prerequisites, and eligibility information.

Apr
30
Fri
Webinar: HDF5 Application Tuning (part 2)
Apr 30 @ 12:00 pm – 1:00 pm

HDF5 Application Tuning: There is more than one way to skin a cat(fish)

In this second part of the series, before returning to application tuning in part 3, we take a closer look at HDF5 performance variability. We highlight the main sources of variability, their impact on performance, and considerations for HDF5 container design.

More information about the webinar as well as presentation materials can be found here.

May
12
Wed
Automated Fortran–C++ Bindings for Large-Scale Scientific Applications
May 12 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE computing facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The May webinar is titled Automated Fortran–C++ Bindings for Large-Scale Scientific Applications and will be presented by Seth Johnson (Oak Ridge National Laboratory). The webinar will take place on Wednesday, May 12, 2021, at 1:00 pm ET.

Abstract:

Although many active scientific codes use modern Fortran, most contemporary scientific software libraries are implemented in C and C++. Providing their numerical, algorithmic, or data management features to Fortran codes requires writing and maintaining substantial amounts of glue code. In the same vein, some projects are actively moving key kernels from Fortran toward C++ to support performance portability models and other rapidly developing, dynamic programming paradigms. How can a project smoothly connect existing Fortran code to new internal C++ kernels or external C++ libraries? The webinar will introduce SWIG-Fortran, which provides a flexible solution that includes support for performant data transfers, MPI, and direct translation of C++ features to Fortran interfaces.
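As a rough sketch of the approach (the module name and function below are hypothetical), a SWIG interface file describes the C++ API once, and SWIG-Fortran generates both the C wrapper and a matching Fortran module from it:

```swig
/* example.i: a minimal SWIG-Fortran interface file */
%module example

%{
#include "example.h"  /* hypothetical header declaring the function below */
%}

/* declaration exposed to Fortran; SWIG-Fortran emits a Fortran module
   procedure with this signature plus the C glue code behind it */
double dot(const double* x, const double* y, int n);
```

Running something like `swig -fortran -c++ example.i` would then produce the wrapper sources to compile and link alongside the original library.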