ALCF/ECP CMake Workshop
The Exascale Computing Project (ECP) is partnering with Kitware and the Argonne Leadership Computing Facility to offer a three-day workshop on CMake from July 15-17, 2020.
Open to all ECP project members, this workshop is designed to help attendees advance their use of CMake on ALCF computing resources, including the upcoming exascale system, Aurora. The event will help exascale code developers learn to resolve issues outside of their control and will provide guidance on writing a build-system generator capable of seamlessly configuring for multiple unique architectures with a variety of compilers. The three-day workshop will be held online; connection information will be provided to registered attendees.
To see the full agenda and/or to register, click the Tickets link above.
The IDEAS Productivity project, in partnership with the DOE computing facilities (ALCF, OLCF, and NERSC) and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which began in 2016.
As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The July webinar, What's new in Spack?, was presented by Todd Gamblin (Lawrence Livermore National Laboratory) and took place on Wednesday, July 15, 2020, at 1:00 pm ET.
Abstract:
Spack is a package manager for scientific computing, with a rapidly growing open source community. With over 500 contributors from academia, industry, and government laboratories, Spack has a wide range of use cases, from small-scale development on laptops and clusters, to software release management for the U.S. Exascale Computing Project, to user software deployment on 6 of the top 10 supercomputer sites in the world.
Spack isn’t just for facilities, though! As a package manager, Spack is in a powerful position to impact DevOps and daily software development workflows. Spack has virtual environments that enable the “manifest and lock” model popularized by more mainstream dependency management tools. New releases of Spack include direct support for creating containers and GitLab CI pipelines for building environments. This webinar covered new features as well as the near- and long-term roadmap for Spack.
1st Kokkos Lecture Series July-September
The Kokkos team will offer its first Kokkos Lecture Series, in which attendees learn everything necessary to start using Kokkos to write performance-portable code. The series will consist of a two-hour online lecture every Friday, with exercises as homework. The team will provide support via GitHub and Slack throughout the training.
What is Kokkos?
Kokkos is a C++ programming model for performance portability developed by a team spanning several of the world's major HPC facilities. It allows developers to implement their applications in a single-source fashion, using hardware-vendor-agnostic programming patterns. Implemented as a C++ template metaprogramming library, Kokkos can be used with the primary toolchains on all major HPC platforms. The model is used by many HPC applications both within and outside the US, and it is the primary programming model in Sandia National Laboratories' effort to make its engineering and science codes ready for exascale. More than 100 projects currently use Kokkos to achieve performance portability.
The tutorial will teach attendees the basics of Kokkos programming through a step-by-step sequence of lectures and hands-on exercises. Fundamental concerns of performance-portable programming will be explained. By the end of the training, attendees will have learned how to dispatch parallel work with Kokkos, perform parallel reductions, manage data, identify and manage data layout issues, and expose hierarchical parallelism. Attendees will also learn about advanced topics such as using SIMD vector types, tasking, and integrating Kokkos with MPI. Furthermore, the Kokkos Lecture Series will cover using Kokkos Tools to profile and tune applications, as well as leveraging the KokkosKernels math library for performance-portable linear algebra operations. The training material, including the exercises and their solutions, will be available online. Support hours will be offered to answer questions and help with the exercises, including access to cloud instances with GPUs for the exercises (attendee numbers for those may be limited depending on demand).
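The two basic patterns the lectures begin with, parallel dispatch and parallel reduction, can be sketched conceptually as follows. This is a plain-Python analogue for illustration only, not the Kokkos C++ API; in real Kokkos code these are the C++ `parallel_for` and `parallel_reduce` constructs, which dispatch the loop body to a backend such as OpenMP, CUDA, or HIP.

```python
# Conceptual analogue of Kokkos' parallel dispatch and reduction patterns,
# written in plain Python for illustration (real Kokkos code is C++).
from concurrent.futures import ThreadPoolExecutor

def parallel_for(n, body):
    # Apply `body` to every index in [0, n); iteration order is unspecified,
    # so the body must not depend on any particular ordering.
    with ThreadPoolExecutor() as pool:
        list(pool.map(body, range(n)))

def parallel_reduce(n, contribution):
    # Combine per-index contributions with a sum reduction.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(contribution, range(n)))

squares = [0] * 8
parallel_for(8, lambda i: squares.__setitem__(i, i * i))
total = parallel_reduce(8, lambda i: i * i)  # 0 + 1 + 4 + ... + 49 = 140
```

The lectures develop the real C++ equivalents of these patterns, where data lives in Kokkos Views rather than Python lists and the reduction type and execution space are chosen explicitly.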
Contents of the Tutorial
This is a preliminary outline of the training. We are keeping a ninth day in reserve in case of schedule slippage. The lectures will be held on Fridays, 10:00-12:00 MT (12:00-14:00 ET; 9:00-11:00 PT).
Module 1: Introduction 07/17/2020
- Introduction
- How to build
- Data parallel execution patterns
Module 2: Views and Spaces 07/24/2020
- Views
- Memory Space and Execution Spaces
- Memory access patterns (layouts)
Module 3: Data Structures and MDRange 07/31/2020
- Subview
- MDRange
- Dual View
- Atomics
- Scatter View
Module 4: Hierarchical Parallelism 08/07/2020
- Hierarchical parallelism
- Scratch Space
Module 5: Streams, Tasking and SIMD 08/14/2020
- Stream Integration
- Tasking
- SIMD
Module 6: MPI and PGAS 08/21/2020
- MPI
- PGAS
Module 7: Tools 08/28/2020
- Profiling
- Tuning
- Static Analysis
Module 8: Kokkos Kernels 09/04/2020
- BLAS
- Sparse BLAS
Backup Day: 09/11/2020
How to Attend
- The lecture series is open to everyone.
- Registration is free but required; the meeting password will be sent to registrants.
- For the exercises, access to an NVIDIA or AMD GPU system with an up-to-date software stack is recommended.
For updates and questions visit: https://github.com/kokkos/kokkos-tutorials/issues/38
ATPESC is an intensive two-week training on the key skills, approaches, and tools to design, implement, and execute Computational Science and Engineering (CSE) applications on current and next-generation supercomputers.
PROGRAM CURRICULUM
Renowned computer scientists and high-performance computing (HPC) experts from U.S. national laboratories, universities, and industry serve as lecturers and guide hands-on training sessions.
ATPESC participants will be granted access to U.S. Department of Energy (DOE) Office of Science User Facilities, which are home to some of the world’s most powerful supercomputers, including upcoming exascale systems.
The core curriculum includes:
- Computer architectures and predicted evolution.
- Numerical algorithms and mathematical software.
- Approaches to building community codes for HPC systems.
- Data analysis, visualization, I/O, and methodologies and tools for Big Data applications.
- Performance measurement and debugging tools.
- Machine Learning and Data Science.
COST
There are no fees to participate. Domestic airfare, meals, and lodging are provided.
ELIGIBILITY
Doctoral students, postdocs, and computational scientists are encouraged to submit applications. Visit the website for eligibility details.
APPLICATION
The program provides advanced training to 70 participants.
Qualified applicants must have:
- Substantial experience in MPI and/or OpenMP programming,
- Used at least one HPC system for a complex application, and
- Plans to conduct CSE research on large-scale computers.
The call for applications for ATPESC 2020 is now open. Applications are due March 2, 2020.
SPONSORS
ATPESC is funded by the Exascale Computing Project, a collaborative effort of the DOE Office of Science’s Advanced Scientific Computing Research Program and the National Nuclear Security Administration.
TO APPLY: extremecomputingtraining.anl.gov
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.
A Study of HACC-IO Benchmarks
The HDF Group intern Chen Wang presented a study of the HACC-IO benchmarks. The HDF Group completed a study of I/O access patterns in several ECP applications (FLASH, NWChem, Chombo, QMCPack, and HACC-IO) using the Recorder tool, which has been added to Spack. The results of the study are described in a white paper. The paper walks through the steps of analyzing and tuning the HACC-IO benchmarks and examines the impact of different access patterns, stripe settings, and HDF5 metadata. It also compares the five benchmarks on two parallel file systems, Lustre and GPFS, and shows that HDF5, with proper optimizations, can match pure MPI-IO implementations.
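As a rough illustration of what "access pattern" means here, the following pure-Python sketch (hypothetical, not code from HACC-IO or Recorder) contrasts a contiguous layout, where each rank's data occupies a single extent in the file, with an interleaved layout, where ranks write fixed-size chunks in round-robin order — a strided pattern whose performance depends strongly on stripe settings on a file system like Lustre.

```python
# Hypothetical sketch of two file layouts for the same per-rank data
# (plain Python for illustration; not code from HACC-IO or Recorder).

def contiguous_layout(blocks):
    # Rank r's block occupies one contiguous extent at offset r * block_size.
    return b"".join(blocks)

def interleaved_layout(blocks, chunk):
    # Ranks take turns writing `chunk`-byte pieces: a strided access pattern.
    out = bytearray()
    for off in range(0, len(blocks[0]), chunk):
        for block in blocks:
            out += block[off:off + chunk]
    return bytes(out)

blocks = [bytes([r]) * 8 for r in range(4)]  # 4 "ranks", 8 bytes each
flat = contiguous_layout(blocks)
strided = interleaved_layout(blocks, chunk=2)
```

Both layouts hold exactly the same bytes, but they generate very different request streams at the file-system level, which is the kind of difference the study measures.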
More information about the webinar as well as presentation materials can be found here.
The August webinar in the Best Practices for HPC Software Developers series is titled Colormapping Strategies for Large Multivariate Data in Scientific Applications and will be presented by Francesca Samsel (Texas Advanced Computing Center). The webinar will take place on Wednesday, August 12, 2020, at 1:00 pm ET.
Abstract:
For scientific visualizations to effectively convey the insights of computationally driven research, and to better engage the public in science, they must effectively and affectively facilitate the exploration of information. The presenter and her team employ a transdisciplinary approach that draws on artistic color theory, perceptual science, the visualization community, and domain scientists to move beyond basic default colormaps. While color has always been used and studied as a component of scientific data visualization, its full potential for discovery and communication of scientific data remains untapped.
The webinar will discuss how effective color use can reveal structures, relationships, and hierarchies among variables within a visualization, as well as practical strategies and workflows for tailoring color application to the goals of the visualization. The presenter's work is documented and freely available at SciVisColor.org, a hub for research and resources related to color in scientific visualization. SciVisColor provides tools and strategies that allow scientists to use color to better understand and communicate their data. Users can explore and download colormaps and color sets, and use ColorMoves, an interactive interface for applying color in scientific visualization.
The webinar will introduce concepts that can help developers make design decisions when writing simulation codes, to make better use of scientific visualization tools and visualize results more effectively.
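One core mechanism behind tailored colormaps, piecewise-linear interpolation between hand-placed control points, can be sketched in a few lines of Python. This is a simplified illustration only, not the SciVisColor or ColorMoves implementation; the control-point values below are made up for the example.

```python
# Minimal sketch of a colormap as piecewise-linear interpolation between
# RGB control points (illustration only; not SciVisColor/ColorMoves code).

def sample(cmap, v):
    # cmap: list of (position, (r, g, b)) control points, sorted by position.
    # v: a scalar data value normalized to [0, 1].
    for (p0, c0), (p1, c1) in zip(cmap, cmap[1:]):
        if p0 <= v <= p1:
            t = (v - p0) / (p1 - p0)
            return tuple(a + (b - a) * t for a, b in zip(c0, c1))
    return cmap[-1][1]  # clamp values above the last control point

# A diverging map: blue through near-white to red, with the neutral point
# at 0.5 so sign changes in the data stand out visually.
diverging = [
    (0.0, (0.2, 0.3, 0.8)),
    (0.5, (0.95, 0.95, 0.95)),
    (1.0, (0.8, 0.2, 0.2)),
]
```

Moving or recoloring the control points is exactly the kind of interactive tailoring the webinar discusses: placing the neutral point, or adding extra control points, to align color contrast with the structures of interest in the data.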