Introduction to High-Performance Parallel Distributed Computing using Chapel, UPC++ and Coarray Fortran

Schedule:

This two-day tutorial will run July 26-27, 12:00pm-3:25pm ET.

Abstract:

Many HPC system users rely on scripting languages such as Python to prototype computations, coordinate large runs, and analyze the resulting data. Python serves these purposes well, but it often falls short when the amount of data and computation must be scaled up to fully exploit an HPC system's resources. In this tutorial, we show how example computations such as heat diffusion, k-mer counting, file processing, and distributed maps can be written to efficiently use distributed computing resources in the Chapel, UPC++, and Coarray Fortran parallel programming models. The tutorial is accessible to users with little or no parallel programming experience, and everyone is welcome. A partial differential equation problem will be demonstrated in all three programming models, along with performance and scaling results on large-scale systems. Attendees will learn how to compile and run the example programs and will have opportunities to experiment with different parameters and code alternatives while asking questions and sharing their own observations. Come join us to learn about some productive and performant parallel programming models!
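For a flavor of the hands-on material, the following is a minimal, illustrative sketch of a 1D heat diffusion kernel written in Chapel. It is not taken from the tutorial's actual materials: it assumes a recent Chapel release that provides the blockDist.createDomain factory in the BlockDist module, and the grid size, step count, and diffusion coefficient are arbitrary illustrative values. The tutorial versions in Chapel, UPC++, and Coarray Fortran will differ in detail.

  use BlockDist;

  config const n = 1_000_000;   // number of interior grid points (illustrative)
  config const numSteps = 100;  // number of time steps (illustrative)
  config const alpha = 0.25;    // diffusion coefficient (illustrative value)

  // Block-distribute the grid (plus two boundary points) across all locales.
  const Dom      = blockDist.createDomain({0..n+1});
  const Interior = blockDist.createDomain({1..n});

  var u, uNew: [Dom] real;
  u[0] = 1.0;                   // fixed boundary condition on the left edge

  for 1..numSteps {
    // Each forall iteration runs in parallel across the cores of every locale.
    forall i in Interior do
      uNew[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1]);
    u[1..n] = uNew[1..n];       // copy the updated interior back into u
  }

  writeln("value at midpoint after ", numSteps, " steps: ", u[n/2]);

Because the arrays are declared over a block-distributed domain, the same forall loop uses the cores of a single laptop or the nodes of a cluster without source changes; this kind of distribution-agnostic code is one of the productivity points the tutorial highlights.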

Current OLCF users with access to Frontier will be able to use a reservation on Frontier to work through the examples. Current NERSC users will be able to use Perlmutter. Training accounts on Perlmutter are available for participants who do not have access to Frontier and are not NERSC users. The examples will also be available in a Docker container and a cloud-based virtual desktop environment so that any attendee can access them.

Keywords:

  • Basic and introductory topics for expanding broader engagement
  • Software engineering for portable performance and scalability
  • Parallel programming methods, models, languages and environments
  • Clusters and distributed systems