May 27 (Wed)
ALCF/ECP UPC++ Webinar
May 27 @ 12:00 pm – 3:00 pm

UPC++: An Asynchronous RMA/RPC Library for Distributed C++ Applications

UPC++ is a C++ library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. The UPC++ API offers low-overhead one-sided RMA communication and Remote Procedure Calls (RPC), along with futures and promises. These constructs enable the programmer to express dependencies between asynchronous computations and data movement. UPC++ supports the implementation of simple, regular data structures as well as more elaborate distributed data structures where communication is fine-grained, irregular, or both. The library’s support for asynchrony enables the application to aggressively overlap and schedule communication and computation to reduce wait times.
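
As a flavor of the programming style covered in the webinar, the following minimal sketch (illustrative only; it assumes a standard UPC++ installation) issues a one-sided rput and chains its completion future to an RPC back to rank 0:

    #include <upcxx/upcxx.hpp>
    #include <iostream>

    int main() {
        upcxx::init();
        int me = upcxx::rank_me(), n = upcxx::rank_n();

        // Rank 0 allocates an array in the shared segment; all ranks obtain
        // the global pointer through a broadcast, which returns a future.
        upcxx::global_ptr<int> gp =
            (me == 0) ? upcxx::new_array<int>(n) : nullptr;
        gp = upcxx::broadcast(gp, 0).wait();

        // One-sided RMA put of this rank's id into its slot on rank 0,
        // then a Remote Procedure Call on rank 0 once the put completes.
        upcxx::rput(me, gp + me)
            .then([=]() {
                return upcxx::rpc(0, [](int who) {
                    std::cout << "rank " << who << " finished its rput\n";
                }, me);
            })
            .wait();

        upcxx::barrier();
        upcxx::finalize();
        return 0;
    }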

UPC++ is highly portable and runs on platforms from laptops to supercomputers, with native implementations for HPC interconnects. As a C++ library, it interoperates smoothly with existing numerical libraries and on-node programming models (e.g., OpenMP, CUDA).

In this webinar, hosted by DOE’s Exascale Computing Project and the ALCF, we will introduce basic concepts and advanced optimization techniques of UPC++. We will discuss the UPC++ memory and execution models and walk through basic algorithm implementations. We will also look at irregular applications and show how they can take advantage of UPC++ features to optimize their performance.

This training requires registration, so please click the “Tickets” link above to register.

Jun 2 (Tue)
Preparing applications for Aurora using the Intel DPC++ Compatibility Tool
Jun 2 @ 12:00 pm – 1:00 pm

Abstract

The Intel DPC++ Compatibility Tool is designed to assist developers in migrating existing CUDA code to the newly developed DPC++ language. The presentation began by briefly introducing the tool and the workflow for migrating either a single source file or a larger codebase. The presenters then focused on the successful migration of a large kernel in the NWChemEx application, describing in detail each of the critical changes made in the process.
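
For orientation, the kind of transformation the tool automates can be pictured with a hand-written sketch (not taken from the NWChemEx migration discussed in the talk, and not actual tool output): a simple CUDA kernel launch becomes a DPC++/SYCL queue submission, here using Unified Shared Memory for brevity:

    #include <sycl/sycl.hpp>

    // Original CUDA version, for comparison (illustrative):
    //   __global__ void scale(float *x, float a, int n) {
    //       int i = blockIdx.x * blockDim.x + threadIdx.x;
    //       if (i < n) x[i] *= a;
    //   }
    //   scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);

    // A DPC++/SYCL-style equivalent after migration (hand-written sketch):
    void scale(sycl::queue &q, float *x, float a, int n) {
        q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            x[i] *= a;
        }).wait();
    }

    int main() {
        sycl::queue q;                                   // default device
        const int n = 1 << 20;
        float *x = sycl::malloc_shared<float>(n, q);     // USM allocation
        for (int i = 0; i < n; ++i) x[i] = 1.0f;
        scale(q, x, 2.0f, n);
        sycl::free(x, q);
        return 0;
    }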

Organizers

  • Haritha Siddabathuni Som (ALCF)
  • Ray Loy (ALCF)
  • Yasaman Ghadar (ALCF)

Jun 5 (Fri)
An Introduction to HDF5 in HPC Environments: Supporting Materials Webinar
Jun 5 all-day

In this presentation, we introduce the concept and practices of data management based on HDF5. Our main goal is to let users with no previous HDF5 experience be productive in an HPC environment as quickly as possible. As a secondary goal, we want them to be aware of the resources that will let them take their mastery of HDF5 to the next level. Attendees with a working knowledge of C/C++, Fortran, or Python, plus basic MPI programming, will get the most out of this introduction.

We have organized this presentation into five sections. We begin with a few motivating examples and heuristics for mapping between ideas and their manifestations in storage structures. We will mention viable solutions that do not use HDF5, but point out their “atomistic” character as opposed to HDF5’s holistic approach. We then show the fastest known path, in terms of both user effort and run time, to transform in-memory structures into bytes in storage. Having seen HDF5 in action, we take a step back to reflect on our initial problem set and what HDF5 has to offer. We then make the transition into “proper” HPC with parallel HDF5. We will discuss the inevitable challenges of an environment in which there are many more moving parts above and below the HDF5 library. It’s all about finding balance, and we will present a few proven techniques that no user of HDF5 should be without.
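
As a taste of that “fastest known path,” a minimal serial example using the HDF5 C API (callable from C++) might look like the following; the file name, dataset name, and data are purely illustrative:

    #include <hdf5.h>
    #include <vector>

    int main() {
        std::vector<double> temps(100, 21.5);     // in-memory data to store

        // Create a file, a 1-D dataspace, and a dataset; write; clean up.
        hid_t file  = H5Fcreate("data.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hsize_t dims[1] = {temps.size()};
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dset  = H5Dcreate(file, "/temperature", H5T_NATIVE_DOUBLE, space,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT,
                 temps.data());

        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }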

In the last section of this presentation, we will survey the supporting ecosystem around HDF5 and preview the intermediate topics that will be covered in a future event.

Jun 17 (Wed)
SYCL – Introduction and Best Practices
Jun 17 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE computing facilities ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The June webinar is titled SYCL – Introduction and Best Practices and will be presented by Thomas Applencourt (Argonne National Laboratory). The webinar has been rescheduled: it will now take place on Wednesday, June 17, 2020 at 1:00 pm ET.

Abstract:

SYCL is a single-source heterogeneous programming model based on standard C++. It uses C++ templates and lambda functions for host and device code. SYCL builds on OpenCL’s underlying concepts of portability and efficiency, which enable code to target heterogeneous processors, while being less verbose than OpenCL. Single-source programming allows the host and kernel code for an application to be contained in the same source file, in a type-safe way and with the simplicity of a cross-platform asynchronous task graph. The webinar provided an overview of SYCL concepts, compilation, and runtime; no prior knowledge of OpenCL was required. The presenter reviewed the core concepts of SYCL and walked through several code examples to highlight its key features and illustrate best practices. SYCL is hardware agnostic by design and offers the potential to be portable across many of DOE’s largest machines.
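
For readers who want a concrete picture ahead of the slides, a minimal sketch in the buffer/accessor style (assuming a SYCL 2020-capable compiler such as DPC++) illustrates the asynchronous task graph mentioned above: two kernels touching the same buffer are automatically ordered, and the result is copied back to the host when the buffer goes out of scope.

    #include <sycl/sycl.hpp>
    #include <vector>
    #include <iostream>

    int main() {
        constexpr size_t N = 1024;
        std::vector<int> data(N, 1);
        sycl::queue q;                            // default device

        {
            // The buffer tracks data dependencies; the runtime builds the task graph.
            sycl::buffer<int, 1> buf(data.data(), sycl::range<1>(N));

            // Kernel 1: double every element.
            q.submit([&](sycl::handler &h) {
                sycl::accessor a{buf, h, sycl::read_write};
                h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) { a[i] *= 2; });
            });

            // Kernel 2: add 3; it is ordered after kernel 1 automatically
            // because both kernels access the same buffer.
            q.submit([&](sycl::handler &h) {
                sycl::accessor a{buf, h, sycl::read_write};
                h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) { a[i] += 3; });
            });
        }   // buffer destruction waits and copies the results back to data

        std::cout << data[0] << "\n";             // prints 5
        return 0;
    }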

Jun 24 (Wed)
OpenMP Offload Capabilities in the oneAPI HPC Toolkit
Jun 24 @ 12:00 pm – 1:00 pm

Abstract

OpenMP provides portable, performant, and productive parallel programming interfaces for applications on a wide range of platforms and is one of the programming models offered in the oneAPI HPC Toolkit. This talk presented the key capabilities of the C/C++/Fortran compilers in oneAPI, especially those for exploiting the Intel Xe GPUs that will power Aurora, the ALCF’s forthcoming exascale system. Use cases of HPC applications from the Aurora Early Science Program were discussed.
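
As a rough illustration of the offload style in question (not drawn from the Aurora applications discussed), a combined OpenMP target construct in C++ might look like this:

    #include <cstdio>

    int main() {
        const int n = 1 << 20;
        double *x = new double[n];
        for (int i = 0; i < n; ++i) x[i] = 1.0;

        double sum = 0.0;
        // Map the array to the device, distribute the loop across the GPU's
        // teams and threads, and reduce the partial sums back on the host.
        #pragma omp target teams distribute parallel for map(to: x[0:n]) reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += 2.0 * x[i];

        std::printf("sum = %f\n", sum);
        delete[] x;
        return 0;
    }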

Organizers

  • Ray Loy (ALCF)
  • Yasaman Ghadar (ALCF)

Jun 26 (Fri)
Parallel I/O with HDF5 and Performance Tuning Techniques
Jun 26 @ 12:00 pm – 1:00 pm

This webinar is designed for users who have had exposure to HDF5 and MPI I/O and would like to learn about doing parallel I/O with the HDF5 library. Our main goal is to make users aware of how to avoid poor I/O performance when using the parallel HDF5 library and to equip them with the tools to investigate performance.

In the first part of this presentation, we will cover the design of the parallel HDF5 library and its application programming model, and demonstrate the library’s capabilities. We will then give an overview of the effects of parallel file systems on HDF5 performance and discuss tools useful for performance investigations. We will use examples from well-known codes and use cases from HPC science applications to demonstrate these tools, along with HDF5 tuning techniques such as collective metadata I/O, data aggregation, parallel compression, and other HDF5 tuning parameters.
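
For orientation, the tuning knobs above are exposed through HDF5 property lists; a hedged sketch of enabling the MPI-IO driver, collective metadata I/O, and collective raw-data transfers (assuming an HDF5 build with parallel support) looks roughly like this:

    #include <hdf5.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        // File access property list: MPI-IO driver plus collective metadata I/O.
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        H5Pset_all_coll_metadata_ops(fapl, 1);    // collective metadata reads
        H5Pset_coll_metadata_write(fapl, 1);      // collective metadata writes

        hid_t file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        // Dataset transfer property list: request collective raw-data I/O.
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        // ... create dataspaces and datasets, select per-rank hyperslabs, and
        //     call H5Dwrite(dset, type, memspace, filespace, dxpl, buffer) ...

        H5Pclose(dxpl);
        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }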

Jun 30 (Tue)
Strategies for Working Remotely Panel Discussion – Virtual Onboarding and Mentoring
Jun 30 @ 3:00 pm – 4:15 pm

In response to the COVID-19 pandemic and the need for many to transition to unplanned remote work, the IDEAS-ECP Productivity project has launched the panel series Strategies for Working Remotely, which explores important topics in this area. The next panel discussion in the series was titled “Virtual Onboarding and Mentoring”.

Abstract: As we head into the summer months, student internship programs are underway, albeit virtually, as many of us are now working remotely in response to COVID-19 social distancing practices. Several laboratories have already onboarded interns and new team members to work remotely with geographically dispersed teams. What are some lessons learned and best practices about onboarding new hires that we can take away from this experience? In the fourth installment of this IDEAS-ECP panel discussion series, we brought together several staff members from DOE laboratories, who spoke about their experiences in onboarding and mentoring new hires virtually. Topics included challenges, lessons learned, unforeseen benefits, and opportunities arising from this experience. Panelists made brief introductory comments, followed by open discussion.

Panelists:

  • Helen Cademartori (Lawrence Berkeley National Laboratory)
  • Marcey Kelley (Lawrence Livermore National Laboratory)
  • Jay Lofstead (Sandia National Laboratories)
  • Beth McCormick (Lawrence Livermore National Laboratory)
  • Raj Sankaran (Argonne National Laboratory)

Moderators:

  • Ashley Barker, ORNL
  • Rebecca Hartman-Baker, LBNL
  • Elaine Raybourn, SNL

Jul 15 (Wed)
ALCF/ECP CMake Workshop
Jul 15 – Jul 17 all-day

The Exascale Computing Project (ECP) is partnering with Kitware and the Argonne Leadership Computing Facility to offer a three-day workshop on CMake from July 15-17, 2020.

Open to all ECP project members, this workshop is designed to help attendees advance their use of CMake on ALCF computing resources, including the upcoming exascale system, Aurora. The event will help exascale code developers learn how to resolve build issues outside of their control and will provide guidance on writing a build system generator capable of seamlessly configuring for multiple unique architectures with a variety of compilers. The three-day workshop will be held online, and connection information will be provided to registered attendees.

To see the full agenda and/or to register, click the Tickets link above.

What’s new in Spack?
Jul 15 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE computing facilities ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The July webinar was titled What’s new in Spack?, and was presented by Todd Gamblin (Lawrence Livermore National Laboratory). The webinar took place on Wednesday, July 15, 2020 at 1:00 pm ET.

Abstract:

Spack is a package manager for scientific computing, with a rapidly growing open source community. With over 500 contributors from academia, industry, and government laboratories, Spack has a wide range of use cases, from small-scale development on laptops and clusters, to software release management for the U.S. Exascale Computing Project, to user software deployment on 6 of the top 10 supercomputer sites in the world.

Spack isn’t just for facilities, though! As a package manager, Spack is in a powerful position to impact DevOps and daily software development workflows. Spack has virtual environments that enable the “manifest and lock” model popularized by more mainstream dependency management tools. New releases of Spack include direct support for creating containers and GitLab CI pipelines for building environments. This webinar covered new features as well as the near- and long-term roadmap for Spack.

Jul 24 (Fri)
Kokkos Online Class Series
Jul 24 @ 12:00 pm – 1:00 pm

1st Kokkos Lecture Series July-September

The Kokkos team will provide its first Kokkos Lecture Series, in which attendees learn everything necessary to start using Kokkos to write performance-portable code. The series will consist of a 2-hour online lecture every Friday and exercises as homework. The team will provide support via GitHub and Slack throughout the training.

What is Kokkos?

Kokkos is a C++ programming model for performance portability, developed by a team spanning some of the major HPC facilities in the world. It allows developers to implement their applications in a single-source fashion, with hardware-vendor-agnostic programming patterns. Implemented as a C++ template metaprogramming library, Kokkos can be used with the primary toolchains on any HPC platform. The model is used by many HPC applications both within and outside the US, and it is the primary programming model for Sandia National Laboratories’ efforts to make its engineering and science codes ready for exascale. More than 100 projects now use Kokkos to obtain performance portability.

The tutorial will teach attendees the basics of Kokkos programming through a step-by-step sequence of lectures and hands-on exercises. Fundamental concerns of performance-portable programming will be explained. By the end of the training, attendees will have learned how to dispatch parallel work with Kokkos, perform parallel reductions, manage data, identify and manage data layout issues, and expose hierarchical parallelism. Attendees will also learn about advanced topics such as using SIMD vector types, tasking, and integrating Kokkos with MPI. Furthermore, the Kokkos Lecture Series will cover the use of Kokkos Tools to profile and tune applications, as well as leveraging the Kokkos Kernels math library to access performance-portable linear algebra operations. The material used during the training will be available online, including the exercises and their solutions. Support hours will be offered to answer questions and help with the exercises, including access to cloud instances with GPUs for doing the exercises (we may need to limit attendee numbers for those depending on demand).
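
As a small preview of the first modules (parallel dispatch, reductions, and Views), a minimal hedged sketch of a Kokkos program might look like this:

    #include <Kokkos_Core.hpp>
    #include <cstdio>

    int main(int argc, char *argv[]) {
        Kokkos::initialize(argc, argv);
        {
            const int n = 1000000;
            // A View is Kokkos' portable multidimensional array; its memory
            // space and layout are chosen to suit the default execution space.
            Kokkos::View<double*> x("x", n);

            // Parallel dispatch: fill the view on the default device.
            Kokkos::parallel_for("fill", n, KOKKOS_LAMBDA(const int i) {
                x(i) = 1.0 * i;
            });

            // Parallel reduction: sum the elements into a host-side scalar.
            double sum = 0.0;
            Kokkos::parallel_reduce("sum", n, KOKKOS_LAMBDA(const int i, double &lsum) {
                lsum += x(i);
            }, sum);

            std::printf("sum = %f\n", sum);
        }
        Kokkos::finalize();
        return 0;
    }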

Contents of the Tutorial

This is a preliminary outline of the training. We are keeping a ninth day in reserve in case of schedule slippage. The lectures will be held on Fridays, 10:00-12:00 MT (12:00-14:00 ET; 9:00-11:00 PT).

Module 1: Introduction 07/17/2020

  • Introduction
  • How to build
  • Data parallel execution patterns

Module 2: Views and Spaces 07/24/2020

  • Views
  • Memory Space and Execution Spaces
  • Memory access patterns (layouts)

Module 3: Data Structures and MDRange 07/31/2020

  • Subview
  • MDRange
  • Dual View
  • Atomics
  • Scatter View

Module 4: Hierarchical Parallelism 08/07/2020

  • Hierarchical parallelism
  • Scratch Space

Module 5: Streams, Tasking and SIMD 08/14/2020

  • Stream Integration
  • Tasking
  • SIMD

Module 6: MPI and PGAS 08/21/2020

  • MPI
  • PGAS

Module 7: Tools 08/28/2020

  • Profiling
  • Tuning
  • Static Analysis

Module 8: Kokkos Kernels 09/04/2020

  • BLAS
  • Sparse BLAS

Backup Day: 09/11/2020

How to Attend

  • The lecture series is available to everyone
  • No-cost registration is required; the meeting password will be sent to registrants.
  • For the exercises, access to an NVIDIA or AMD GPU system with an up-to-date software stack is recommended.

For updates and questions visit: https://github.com/kokkos/kokkos-tutorials/issues/38