Oct
17
Wed
Open Source Best Practices: From Continuous Integration to Static Linters
Oct 17 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled Open Source Best Practices: From Continuous Integration to Static Linters, and will be presented by Daniel Smith and Ben Pritchard (members of the NSF-funded Molecular Sciences Software Institute, or MolSSI). The webinar will take place on Wednesday, October 17, 2018 at 1:00 pm ET.

Abstract:

This webinar will continue the discussion of open source software (OSS) opportunities within the scientific ecosystem, covering the many cloud and local services available to open source projects free of charge. The services to be discussed include continuous integration, code coverage, and static analysis. The presenters will demonstrate the usefulness of these tools and how a small up-front time investment pays off in long-term benefits. These services and ideas are agnostic to programming language and HPC application, and should be useful to anyone interested in tools that ease the burden of software maintenance.
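
As a small illustration (not taken from the webinar) of what these services automate: the snippet below is the kind of self-checking test a continuous integration service such as Travis CI or GitHub Actions can build and run on every commit. The same program can be compiled with coverage instrumentation (for example, GCC's --coverage flag, reported through gcov/lcov) and checked by a static linter such as clang-tidy or cppcheck without any changes to the source.

    // test_mean.cpp -- hypothetical example; a non-zero exit code fails the CI build
    #include <cassert>
    #include <cstdio>

    // Function under test.
    double mean(const double* data, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i) sum += data[i];
        return sum / n;
    }

    int main() {
        const double data[] = {1.0, 2.0, 3.0};
        assert(mean(data, 3) == 2.0);  // exact in binary floating point
        std::puts("all tests passed");
        return 0;
    }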

Dec
5
Wed
Introduction to Software Licensing
Dec 5 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled Introduction to Software Licensing, and will be presented by David Bernholdt (Oak Ridge National Laboratory). The webinar will take place on Wednesday, December 5, 2018 at 1:00 pm ET.

Abstract:

Software licensing and related matters of intellectual property can often seem confusing or hopelessly complicated, especially when many present their opinions as dogma. This presentation takes a different approach: getting you to think about software licensing from the standpoint of what you want others to be able to do (or not do) with your software. We will start by developing a common understanding of the terminology used around software licenses. Then we will consider various scenarios of what you might want to accomplish with a software license, and what to look for in the license. We will also discuss some pragmatic issues around actually applying a license to your software.

Jan
23
Wed
Quantitatively Assessing Performance Portability with Roofline
Jan 23 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled Quantitatively Assessing Performance Portability with Roofline, and will be presented by John Pennycook (Intel), Charlene Yang (Lawrence Berkeley National Laboratory) and Jack Deslippe (Lawrence Berkeley National Laboratory). The webinar will take place on Wednesday, January 23, 2019 at 1:00 pm ET.

Abstract:

Wouldn’t it be great if we could port a code to a new high-performance architecture without substantially changing it, yet achieve a level of performance similar to hand-optimized code? This webinar will frame the discussion around ‘performance portability’: why it is important and desirable, and how to quantitatively measure it. The webinar will start with background on how the concept of performance portability came about and on past attempts to define and quantify it. The speaker will then introduce a simple yet powerful metric and an empirical methodology to quantitatively assess a code’s performance portability across multiple platforms. The methodology uses the Roofline performance model to measure an ‘architectural efficiency’ term in the metric proposed by Pennycook et al. The speaker will then dive into a few nuances of this methodology, for example, how and why empirical ceilings should be used for performance bounds, how to accurately account for complex instructions such as divides, how to model strided memory accesses, and how to select the appropriate Roofline ceilings and application performance points so that the performance portability analysis is not erroneously skewed. We will also show results of measuring performance portability with this metric and methodology on two modern architectures, Intel Xeon Phi and NVIDIA V100 GPUs.
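
For reference, the two formulas behind this discussion, written here in commonly used notation rather than the presenters' own: the Roofline bound on attainable performance and the performance portability metric of Pennycook et al.

    % Roofline: attainable performance of a kernel with arithmetic intensity I
    % (flops/byte) on a machine with peak compute P_peak and peak bandwidth B_peak.
    P_{\text{attainable}}(I) = \min\!\left( P_{\text{peak}},\; I \times B_{\text{peak}} \right)

    % Performance portability (Pennycook et al.): harmonic mean of the per-platform
    % efficiencies e_i(a,p) of application a solving problem p over platform set H,
    % and zero if any platform in H is unsupported.
    \mathrm{PP}(a, p, H) =
      \begin{cases}
        \dfrac{|H|}{\sum_{i \in H} \dfrac{1}{e_i(a, p)}} & \text{if } a \text{ is supported on every } i \in H \\[1ex]
        0 & \text{otherwise}
      \end{cases}

Here e_i(a,p) can be either an application efficiency (relative to the best observed performance) or the Roofline-based architectural efficiency (relative to a hardware ceiling) that the abstract refers to.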

Feb
13
Wed
Containers in HPC
Feb 13 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled Containers in HPC, and will be presented by Shane Canon (LBNL). The webinar will take place on Wednesday, February 13, 2019 at 1:00 pm ET.

Abstract:

Containers have gained adoption in the HPC and scientific computing space through specialized runtimes like Shifter, Singularity, and Charliecloud. Containers enable reproducible, shareable, portable execution of applications. In this webinar, we will give a brief introduction to building images and running containers on HPC systems. We will also discuss some best practices to ensure containers can take full advantage of HPC systems.

Mar
13
Wed
Parallel I/O with HDF5: Overview, Tuning, and New Features
Mar 13 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar, titled Parallel I/O with HDF5: Overview, Tuning, and New Features, was presented by Quincey Koziol (NERSC). The webinar took place on Wednesday, March 13, 2019 at 1:00 pm ET.

Abstract:

HDF5 is a data model, file format, and I/O library that has become a de facto standard for HPC applications to achieve scalable I/O and for storing and managing big data from computer modeling, large physics experiments and observations. This webinar gave an introduction to using the HDF5 library, with a focus on parallel I/O and performance tuning options. The webinar also presented an overview of the latest performance and productivity enhancement features being developed as part of the DOE’s Exascale Computing Project (ECP) ExaHDF5 effort, and optimizations used in improving I/O performance of ECP applications.
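
As a minimal sketch (not taken from the webinar) of the parallel I/O path described above, the program below opens one shared HDF5 file through the MPI-IO driver and has every rank write its own element of a dataset collectively. It assumes a parallel build of HDF5 and a compile line along the lines of mpicxx write_ranks.cpp -lhdf5; the file and dataset names are made up for the example.

    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // File access property list that routes HDF5 I/O through MPI-IO.
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("demo.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        // One-dimensional dataset with one integer per rank.
        hsize_t dims[1] = {static_cast<hsize_t>(size)};
        hid_t filespace = H5Screate_simple(1, dims, NULL);
        hid_t dset = H5Dcreate(file, "ranks", H5T_NATIVE_INT, filespace,
                               H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        // Each rank selects its own element of the file space (a hyperslab).
        hsize_t offset[1] = {static_cast<hsize_t>(rank)}, count[1] = {1};
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, count, NULL);
        hid_t memspace = H5Screate_simple(1, count, NULL);

        // Collective transfers generally perform best at scale.
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        int value = rank;
        H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, dxpl, &value);

        H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
        H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }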

Apr
10
Wed
Testing Fortran Software with pFUnit
Apr 10 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled Testing Fortran Software with pFUnit, and will be presented by Thomas Clune (NASA). The webinar will take place on Wednesday, April 10, 2019 at 1:00 pm ET.

Abstract:

Over the past two decades, the emergence of highly effective software testing frameworks has greatly simplified the development and use of unit tests and has led to new software development paradigms such as test driven development (TDD).  However, technical computing introduces a number of unique testing challenges, including distributed parallelism and numerical accuracy.  This webinar will begin with a basic introduction to the use of pFUnit to develop tests for MPI+Fortran software and then present some of the new capabilities in the latest release.  We will also discuss some specialized methodologies for testing numerical algorithms and speculate about future framework capabilities that may improve our ability to test at exascale.
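
pFUnit itself is a Fortran framework, so the sketch below is deliberately not pFUnit code; it is a generic, hand-rolled C++ illustration of the two challenges mentioned above: a test that must run under MPI, and a numerical result that is checked against a tolerance rather than for exact equality. Frameworks such as pFUnit automate exactly this kind of boilerplate.

    #include <mpi.h>
    #include <cmath>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each rank integrates part of f(x) = x over [0, 1] by the midpoint rule.
        const int n = 1000;
        double local = 0.0;
        for (int i = rank; i < n; i += size) {
            double x = (i + 0.5) / n;
            local += x / n;
        }
        double total = 0.0;
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        // Check against the analytic value 0.5 with a tolerance, not exact equality.
        const bool ok = std::fabs(total - 0.5) < 1e-9;
        if (rank == 0) std::printf("%s\n", ok ? "PASS" : "FAIL");

        MPI_Finalize();
        return ok ? 0 : 1;
    }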

May
8
Wed
So, You Want to be Agile? Strategies for Introducing Agility into Your Scientific Software Project
May 8 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled So, You Want to be Agile? Strategies for Introducing Agility into Your Scientific Software Project, and will be presented by Michael Heroux (Sandia National Laboratories). The webinar will take place on Wednesday, May 8, 2019 at 1:00 pm ET.

Abstract:

Scientific software team cultures have natural consistencies with agile practices. Discovery-driven development, a focus on regular delivery of results, in-person discussions within and across research teams, and a focus on long-term sustainable research programs are commonplace dynamics on computational science teams that develop software. These dynamics are also particular expressions of core agile principles.

Many scientific software teams have already assimilated industry best practices in some aspects of their work. The advent of open software development platforms such as GitHub and GitLab has accelerated awareness and adoption, as have numerous online resources that enable a motivated person to continue learning new ideas and approaches. Even so, we propose that a healthy team habit is continued exploration and improvement of software practices, processes, and skills.

In this webinar, we discuss a few agile practices and strategies that are readily adapted and adopted by scientific software teams. In addition, we describe an attitude and strategy for continual process improvement that enables computational science teams to deliver science results while dedicating a slice of time to improving software practices on the way to delivering those results.

Jun
7
Fri
Webinar: Introduction to AMD GPU Programming with HIP
Jun 7 @ 1:00 pm – 3:00 pm

AMD Research presented a webinar titled “Introduction to AMD GPU Programming with HIP” on June 7th. HIP is a C++ runtime API that allows developers to write portable code to run on AMD and NVIDIA GPUs. It is an interface that uses the underlying Radeon Open Compute (ROCm) or CUDA platform installed on a system. The API is similar to CUDA, so porting existing codes from CUDA to HIP should be fairly straightforward in most cases. In addition, HIP provides porting tools that can be used to help port CUDA codes to the HIP layer, with no overhead compared to the original CUDA application. HIP is not intended to be a drop-in replacement for CUDA, and this webinar included guidance on the manual coding and performance tuning work needed to complete the port.

Key features include:

  • HIP is a thin layer and has little or no performance impact over coding directly in CUDA.
  • HIP allows coding in a single-source C++ programming language including features such as templates, C++11 lambdas, classes, namespaces, and more.
  • The “hipify” tools automatically convert source code from CUDA to HIP.
  • Developers can specialize for the platform (CUDA or HIP) to tune for performance or handle tricky cases.
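
Below is a minimal HIP vector-addition sketch, not taken from the webinar materials, showing how closely the API mirrors CUDA (hipMalloc/hipMemcpy in place of cudaMalloc/cudaMemcpy, and hipLaunchKernelGGL for the kernel launch). It should compile with hipcc on a ROCm system or, through the CUDA backend, for NVIDIA GPUs.

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Element-wise addition kernel; __global__ works exactly as in CUDA.
    __global__ void vec_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

        float *da, *db, *dc;
        hipMalloc(reinterpret_cast<void**>(&da), n * sizeof(float));
        hipMalloc(reinterpret_cast<void**>(&db), n * sizeof(float));
        hipMalloc(reinterpret_cast<void**>(&dc), n * sizeof(float));
        hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

        // Launch: grid/block sizes, dynamic shared memory, stream, then kernel args.
        hipLaunchKernelGGL(vec_add, dim3((n + 255) / 256), dim3(256), 0, 0,
                           da, db, dc, n);

        hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("c[0] = %f\n", hc[0]);  // expect 3.0

        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }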

Please see “Presentation Materials” below for links to the video, presentation slides, and Q&A from the event.

Jun
12
Wed
Modern C++ for High-Performance Computing
Jun 12 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled Modern C++ for High-Performance Computing, and will be presented by Andrew Lumsdaine (Pacific Northwest National Laboratory & University of Washington). The webinar will take place on Wednesday, June 12, 2019 at 1:00 pm ET.

Abstract:

Since its creation by Bjarne Stroustrup in the early 1980s, C++ has steadily evolved to become a multi-paradigm programming language that fully supports the needs of modern programmers. Because C++ had its roots in the C programming language, conventional wisdom (and longstanding practice) had been to use C++ in a dichotomous fashion: abstractions for productivity with escape to C for performance. However, C++ today is best viewed holistically rather than as an extension of C or even of earlier versions of C++. In this webinar I will give a tour of features from modern C++ relevant to HPC, along with guidelines for their use, and demonstrate that C++ can offer productivity and elegance while sacrificing nothing in performance.
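
As one small example of the style of modern C++ the abstract alludes to (an illustration under assumptions, not the presenter's material): C++17 parallel algorithms let a lambda-based kernel run in parallel, and possibly vectorized, with no explicit threading code. With GCC, the parallel execution policies typically require linking against TBB (-ltbb).

    #include <algorithm>
    #include <execution>
    #include <numeric>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<double> x(1'000'000, 1.5), y(1'000'000, 2.0);

        // y = a*x + y, expressed as a standard algorithm with a parallel,
        // vectorization-friendly execution policy instead of a raw loop.
        std::transform(std::execution::par_unseq, x.begin(), x.end(), y.begin(),
                       y.begin(),
                       [a = 3.0](double xi, double yi) { return a * xi + yi; });

        // Parallel reduction from <numeric>.
        double sum = std::reduce(std::execution::par_unseq, y.begin(), y.end(), 0.0);
        std::printf("sum = %f\n", sum);
        return 0;
    }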

Jun
16
Sun
Advanced MPI Tutorial at ISC High Performance 2019
Jun 16 @ 9:00 am – 6:00 pm

Advanced MPI Tutorial

  • When: Sunday, June 16, 9am – 6pm
  • Where: Applaus, Messe Frankfurt Tor Ost (East Gate) Hall 3, Frankfurt am Main, Germany
  • Presenters: Pavan Balaji (Argonne National Laboratory), Torsten Hoefler (D-INFK ETH Zurich), Antonio Pena (Barcelona Supercomputing Center) and Yanfei Guo (Argonne National Laboratory)

The ECP will host an advanced Message Passing Interface (MPI) tutorial on June 16. This tutorial will cover many of the new features introduced in MPI-3. The tutorial is offered as part of ISC High Performance 2019. Registration is open to everyone through the ISC registration page.

The Message Passing Interface (MPI) has been the de facto standard for parallel programming for nearly two decades now. However, a vast majority of applications only rely on basic MPI-1 features without taking advantage of the rich set of functionalities the rest of the standard provides. Further, with the advent of MPI-3 (released in September 2012), a vast number of new features have been introduced in MPI, including efficient one-sided communication, support for external tools, non-blocking collective operations, and improved support for topology-aware data movement. The upcoming MPI-4 standard aims at introducing further improvements in a number of areas. This is an advanced-level tutorial that will provide an overview of various powerful features in MPI, especially in MPI-2 and MPI-3, and will present a brief preview of what is being planned for MPI-4.
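
A brief sketch (not part of the tutorial materials) of two of the MPI-3 features mentioned above: a non-blocking collective that can be overlapped with computation, and one-sided communication through an RMA window.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // MPI-3 non-blocking collective: start the reduction, overlap it with
        // independent work, then wait for completion.
        int local = rank, global = 0;
        MPI_Request req;
        MPI_Iallreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);
        // ... independent computation could overlap with the reduction here ...
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        if (rank == 0) std::printf("sum of ranks = %d\n", global);

        // MPI-3 one-sided communication: expose a window and put a value into a
        // neighbor's memory without a matching receive.
        int* win_buf;
        MPI_Win win;
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD,
                         &win_buf, &win);
        *win_buf = -1;
        MPI_Win_fence(0, win);
        int target = (rank + 1) % size;
        MPI_Put(&rank, 1, MPI_INT, target, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);
        std::printf("rank %d received %d\n", rank, *win_buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }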