Parallel I/O with HDF5: Overview, Tuning, and New Features
Mar 13 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE computing facilities ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled Parallel I/O with HDF5: Overview, Tuning, and New Features, and will be presented by Quincey Koziol (NERSC). The webinar will take place on Wednesday, March 13, 2019, at 1:00 pm ET.


HDF5 is a data model, file format, and I/O library that has become a de facto standard for HPC applications seeking scalable I/O and for storing and managing big data from computer modeling, large physics experiments, and observations. This webinar will introduce the HDF5 library, with a focus on parallel I/O and performance-tuning options. It will also give an overview of the latest performance and productivity enhancement features being developed as part of the DOE Exascale Computing Project (ECP) ExaHDF5 effort, and will present optimizations used to improve the I/O performance of ECP applications.
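As a taste of what the webinar covers, below is a minimal sketch of parallel HDF5 output, in which each MPI rank writes its own hyperslab of one shared dataset through the MPI-IO file driver. It assumes an HDF5 build with parallel (MPI) support; the file name example.h5, the dataset name values, and the sizes are illustrative, not taken from the webinar.

```cpp
// Minimal parallel HDF5 sketch: every rank writes a contiguous slice
// of a shared 1-D dataset. Assumes HDF5 built with --enable-parallel.
#include <hdf5.h>
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nprocs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const hsize_t local_n = 1024;  // elements owned by this rank (illustrative)
    std::vector<double> data(local_n, static_cast<double>(rank));

    // Open the file collectively with the MPI-IO file driver.
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    H5Pclose(fapl);

    // One shared dataset; each rank selects its own hyperslab of it.
    hsize_t global_n = local_n * static_cast<hsize_t>(nprocs);
    hid_t filespace = H5Screate_simple(1, &global_n, nullptr);
    hid_t dset = H5Dcreate(file, "values", H5T_NATIVE_DOUBLE, filespace,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    hsize_t offset = local_n * static_cast<hsize_t>(rank);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, nullptr,
                        &local_n, nullptr);
    hid_t memspace = H5Screate_simple(1, &local_n, nullptr);

    // Collective write: all ranks participate in a single I/O operation.
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, data.data());

    H5Pclose(dxpl);
    H5Sclose(memspace);
    H5Sclose(filespace);
    H5Dclose(dset);
    H5Fclose(file);
    MPI_Finalize();
    return 0;
}
```

Requesting collective transfer (H5FD_MPIO_COLLECTIVE) lets the MPI-IO layer aggregate requests across ranks, which is one of the main tuning knobs for parallel HDF5 performance.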

Performance Portability with Kokkos Bootcamp March 2019 @ Oakland Convention Center
Mar 26 @ 8:30 am – Mar 29 @ 12:30 pm

We are pleased to announce that we are hosting the next Performance Portability with Kokkos Bootcamp on March 26-29, 2019, at the Oakland Convention Center in Oakland, CA. This workshop is intended to teach new Kokkos users how to get started and to help existing Kokkos users further improve their codes. The training will cover the minimum topics required to get your application started with Kokkos, and Kokkos experts will be on hand to help more advanced users.

What is Kokkos?
Kokkos is a programming model and library for writing performance-portable code in C++. It includes abstractions for on-node parallel execution and data layout, which are mapped at compile time to a device's architecture for best performance. It uses standard C++ in the same spirit as libraries such as Thrust and Threading Building Blocks.
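To give a flavor of the model, below is a minimal sketch of a dot product written with Kokkos; the kernel labels, view names, and sizes are illustrative. The same parallel_for/parallel_reduce source compiles to whichever backend (e.g., OpenMP, CUDA) is enabled at build time.

```cpp
// Minimal Kokkos sketch: initialize two arrays in parallel, then reduce.
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char** argv) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1000000;
        // Views are multidimensional arrays whose memory layout is chosen
        // at compile time to suit the enabled execution space.
        Kokkos::View<double*> x("x", n);
        Kokkos::View<double*> y("y", n);

        // Parallel initialization; runs on the default execution space.
        Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
            x(i) = 1.0;
            y(i) = 2.0;
        });

        // Parallel reduction mapped to the backend's native strategy.
        double dot = 0.0;
        Kokkos::parallel_reduce("dot", n,
            KOKKOS_LAMBDA(const int i, double& sum) {
                sum += x(i) * y(i);
            }, dot);

        std::printf("dot = %f\n", dot);
    }  // Views are destroyed here, before finalize.
    Kokkos::finalize();
    return 0;
}
```

Because the layout lives in the View type rather than in the kernel, Kokkos can pick a layout appropriate to each memory space without any change to the kernel source.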

Who should attend?
Anyone who has a C++ application, or would like to create C++ Kokkos kernels that hook into an application, and wants a single source code to run well on multiple platforms. We also encourage developers to bring applications that already use Kokkos, since Kokkos experts will be available to help with more advanced use cases. Although we strongly suggest teams of two (or more) per application, please do not hesitate to apply if you are a single developer who wants to attend this event.

What happens at the event?
We will have Kokkos experts on hand to help you with your application. This event is both a tutorial and a playground: a place to experiment with integrating Kokkos into your application and to optimize existing Kokkos applications.

What happens after the event?
Attending this event will help us build a relationship with your team that we hope to continue after you return home to carry on your work. We plan to host regular office hours to answer your team's questions in the initial stages and to help your team continue to make significant progress.

How should I prepare?
After you sign up, we will contact you to discuss your application. If you are new to Kokkos, we can help you prepare a kernel for the event. If you have an existing Kokkos application, we would like to understand your needs before the event. This preparation will help you get the most out of your time with the Kokkos experts.

How do I apply?
Click the blue “Tickets” button above to register. The last day to register for this event is March 19, 2019.

If you have any questions, please contact one of the following organizers:

  • Graham Lopez
  • Galen Shipman
  • Christian Trott
  • Geoff Womeldorff