Jul
29
Mon
CMake Training @ National Energy Research Scientific Computing Center (NERSC)
Jul 29 @ 9:00 am – Jul 30 @ 5:00 pm

ECP is partnering with Kitware and NERSC to offer an introductory 1.5-day CMake training class at NERSC on July 29-30. Registration is now closed for this event.

The training class will be held in Room 3101 of Wang Hall at Lawrence Berkeley Lab. The tentative agenda for the training is given below. The training targets a deeper understanding of CMake: it seeks to help ECP developers learn how to resolve issues outside of their control, and how to write a build system generator capable of seamlessly configuring for multiple unique architectures with a variety of compilers.
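A build system of the sort described, one that configures cleanly across architectures and compilers, typically starts from a small, portable CMakeLists.txt. The sketch below is illustrative only; the project name, source path, and option name are hypothetical:

```cmake
cmake_minimum_required(VERSION 3.14)
project(MyApp LANGUAGES CXX)  # hypothetical project

# Keep machine-specific choices out of this file: the compiler and flags
# come from the command line or a facility-provided toolchain file.
option(MYAPP_ENABLE_OPENMP "Build with OpenMP support" ON)

add_executable(myapp src/main.cpp)
target_compile_features(myapp PRIVATE cxx_std_14)

if(MYAPP_ENABLE_OPENMP)
  find_package(OpenMP)
  if(OpenMP_CXX_FOUND)
    target_link_libraries(myapp PRIVATE OpenMP::OpenMP_CXX)
  endif()
endif()
```

Because nothing machine-specific is hard-coded, the same source tree can be configured per platform, e.g. `cmake -B build -DCMAKE_CXX_COMPILER=<compiler>`.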

For questions please send an email to [email protected].

We are looking forward to seeing you in Berkeley.

Aug
14
Wed
Software Management Plans in Research Projects
Aug 14 @ 9:00 am – 10:00 am

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), offers a monthly webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar in the series was titled Software Management Plans in Research Projects, and was presented by Shoaib Sufi (Software Sustainability Institute). The webinar took place on Wednesday, August 14, 2019 at 12:00 pm ET. (One hour earlier than the usual time.)

Abstract:

Software is a necessary by-product of research. Software in this context can range from small shell scripts to complex and layered software ecosystems. Dealing with software as a first-class citizen at the time of grant formulation is aided by the development of a Software Management Plan (SMP). An SMP can help to formalize a set of structures and goals that ensure your software is accessible and reusable in the short, medium, and long term. SMPs aim to become for software what Data Management Plans (DMPs) have become for research data (DMPs are mandatory for National Science Foundation grants). This webinar takes you through the questions you should consider when developing a Software Management Plan, how to manage the implementation of the plan, and some of the current motivation driving discussion in this area of research management.

Sep
4
Wed
Machine Learning with Tensorflow, Horovod and PyTorch on HPC
Sep 4 @ 12:00 pm – 1:00 pm

Abstract

Running efficient and scalable deep learning applications on leadership computing systems, including future exascale supercomputers, requires good use of popular deep learning frameworks such as TensorFlow, Horovod, and PyTorch. In this ESP webinar, we covered the basics of when you should use these frameworks, how to build and deploy models on HPC systems, and how to get good performance. Deep learning workloads on HPC also require care when scaling to multi-node jobs, and HPC systems offer opportunities to perform hyperparameter searches as well. The presenters discussed some techniques for profiling deep learning workloads on HPC systems and how to resolve bottlenecks.
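The hyperparameter-search opportunity mentioned above comes from the fact that each grid point is independent, so many configurations can be evaluated concurrently. The sketch below uses only the Python standard library and a stand-in objective function (a real search would launch training runs, typically one per node or batch job):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def objective(params):
    """Stand-in for a training run that returns a validation loss (hypothetical)."""
    lr, batch_size = params
    return (lr - 0.01) ** 2 + (batch_size - 64) ** 2 / 1e4

def grid_search():
    # Every grid point is independent, so workers evaluate them in parallel;
    # on an HPC system each evaluation would typically be its own job or node.
    grid = list(product([0.001, 0.01, 0.1], [32, 64, 128]))
    with ThreadPoolExecutor(max_workers=4) as pool:
        losses = list(pool.map(objective, grid))
    best_loss, best_params = min(zip(losses, grid))
    return best_params

if __name__ == "__main__":
    print(grid_search())  # best (learning rate, batch size) on this grid
```

The same fan-out/reduce structure carries over when the executor is replaced by a job scheduler or a library such as a distributed task queue.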

Organizers

  • Haritha Siddabathuni Som (ALCF)
  • Ray Loy (ALCF)
  • Yasaman Ghadar (ALCF)

Webinar materials

Sep
11
Wed
Discovering and Addressing Social Challenges in the Evolution of Scientific Software Projects
Sep 11 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar in the series was titled Discovering and Addressing Social Challenges in the Evolution of Scientific Software Projects, and was presented by Rene Gassmoeller (UC Davis). The webinar took place on Wednesday, September 11, 2019 at 1:00 pm ET.

Abstract:

In recent years scientific software projects have increasingly incorporated state-of-the-art technical best practices like continuous integration into their development cycle. However, many projects still struggle to create and maintain an active and welcoming user/developer community, and there exists little documentation on what makes a scientific software community successful. In this webinar I will introduce, drawing on my work as a Better Scientific Software Fellow, typical social challenges and potential solutions that arise during the evolution of a scientific software project. Aimed at current and prospective software maintainers and community leaders, the webinar will cover topics such as building and maintaining a welcoming community atmosphere, overcoming skepticism of sharing science and software, mediating between users working on conflicting topics or publications, and providing credit and growth opportunities for community members. Finally, I hope to initiate a conversation among project and community leaders about what makes communities successful so that we can learn from each other and improve scientific software development together.

Oct
16
Wed
Tools and Techniques for Floating-Point Analysis
Oct 16 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar is titled Tools and Techniques for Floating-Point Analysis, and will be presented by Ignacio Laguna (Lawrence Livermore National Laboratory). The webinar will take place on Wednesday, October 16, 2019 at 1:00 pm ET.

Abstract:

Scientific software is central to the practice of research computing. While software is widely used in many science and engineering disciplines to simulate real-world phenomena, developing accurate and reliable scientific software is notoriously difficult. One of the most serious difficulties comes from dealing with floating-point arithmetic to perform numerical computations. Round-off errors occur and accumulate at all levels of computation, while compiler optimizations and low-precision arithmetic can significantly affect the final computational results. With accelerators such as GPUs dominating high-performance computing systems, computational scientists are faced with even bigger challenges, given that ensuring numerical reproducibility in these systems poses a very difficult problem. This webinar provides highlights from a half-day tutorial discussing tools that are available today to analyze floating-point scientific software. We focus on tools that allow programmers to get insight about how different aspects of floating-point arithmetic affect their code and how to fix potential bugs.
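The round-off accumulation described above is easy to reproduce. The generic illustration below (not one of the tutorial's tools) sums the same value a million times three ways: naive accumulation drifts, while compensated (Kahan) summation and Python's accurate `math.fsum` stay on the exact answer:

```python
import math

def naive_sum(values):
    # Straightforward accumulation: each += rounds, and the errors pile up.
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    """Compensated summation: carry the low-order bits lost at each step."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

values = [0.1] * 1_000_000
print(naive_sum(values))  # visibly drifts away from 100000.0
print(kahan_sum(values))  # recovers full precision
print(math.fsum(values))  # accurate reference summation
```

Tools like those covered in the tutorial automate exactly this kind of diagnosis on real codes, where the accumulation pattern is far less obvious.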

Nov
1
Fri
ECP/NERSC UPC++ Tutorial @ Lawrence Berkeley National Laboratory, Shyh Wang Hall, Bldg 59, Room 59-3101
Nov 1 @ 9:00 am – 2:00 pm

Registration is now open for the one day ECP/NERSC UPC++ tutorial.

UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. UPC++ provides mechanisms for low-overhead one-sided communication, moving computation to data through remote-procedure calls, and expressing dependencies between asynchronous computations and data movement. It is particularly well-suited for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces are designed to be composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds.

In this tutorial we will introduce basic concepts and advanced optimization techniques of UPC++. We will discuss the UPC++ memory and execution models and walk through implementing basic algorithms in UPC++. We will also look at irregular applications and how to take advantage of UPC++ features to optimize their performance.

This event can be attended on-site at NERSC or remotely via Zoom. The remote connection information will be provided to registrants closer to the event. Registration is required, and space is limited, so please register as soon as possible. Registration closes when the limit is reached or on October 18, 2019.

Dec
3
Tue
A Roadmap for SYCL/DPC++ on Aurora
Dec 3 @ 12:00 pm – 1:00 pm

Abstract

This talk introduced SYCL as a programming model for Aurora, the upcoming Argonne exascale machine. SYCL is a single-source heterogeneous programming model based on standard C++. It uses C++ templates and lambda functions for host and device code. SYCL builds on the portability and efficiency concepts underlying OpenCL, which enable code for heterogeneous processors, but it is less verbose than OpenCL. Single-source programming enables the host and kernel code for an application to be contained in the same source file, in a type-safe way and with the simplicity of a cross-platform asynchronous task graph. The presenters provided an overview of SYCL concepts, compilation, and runtime; no prior knowledge of OpenCL was required. Once the core concepts of SYCL were reviewed, the presenters walked through several code examples to highlight the key features of SYCL. SYCL is hardware agnostic by design and offers the potential to be portable across many of DOE’s largest machines.

Organizers

  • Haritha Siddabathuni Som (ALCF)
  • Ray Loy (ALCF)
  • Yasaman Ghadar (ALCF)

Presentation materials

Dec
11
Wed
Building Community through xSDK Software Policies
Dec 11 @ 1:00 pm – 2:00 pm

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. The next webinar in the series was titled Building Community through xSDK Software Policies, and was presented by Ulrike Meier Yang (Lawrence Livermore National Laboratory) and Piotr Luszczek (The University of Tennessee, Knoxville). The webinar took place on Wednesday, December 11, 2019.

Abstract:

The development of increasingly complex computer architectures and software ecosystems continues. Applications that incorporate multiphysics modeling as well as the coupling of simulation and data analytics increasingly require the combined use of software packages developed by diverse, independent teams throughout the HPC community. The Extreme-scale Scientific Software Development Kit (xSDK) is being developed to provide coordinated infrastructure for independent mathematical libraries to support the productive and efficient development of high-quality applications. This webinar discussed the development and impact of xSDK community policies, which constitute an integral part of the project and have been defined to achieve improved code quality and compatibility across xSDK member packages and a sustainable software ecosystem.

Dec
16
Mon
ECP/NERSC UPC++ Tutorial @ Lawrence Berkeley National Laboratory, Shyh Wang Hall, Bldg 59, Room 59-3101
Dec 16 @ 9:00 am – 2:00 pm

This event was a repeat of the tutorial delivered on November 1, restoring the hands-on component that had been omitted due to uncertainty surrounding the power outage at NERSC.

UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. UPC++ provides mechanisms for low-overhead one-sided communication, moving computation to data through remote-procedure calls, and expressing dependencies between asynchronous computations and data movement. It is particularly well-suited for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces are designed to be composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds.

In this tutorial we introduced basic concepts and advanced optimization techniques of UPC++. We discussed the UPC++ memory and execution models and walked through implementing basic algorithms in UPC++. We also discussed irregular applications and how to take advantage of UPC++ features to optimize their performance. The tutorial included hands-on exercises with basic UPC++ constructs. Registrants were given access to run their UPC++ exercises on NERSC’s Cori (currently the #14 fastest computer in the world).

Jan
14
Tue
Kokkos Bootcamp / Training @ Buffalo Thunder Resort, Santa Fe, NM
Jan 14 – Jan 17 all-day

We are pleased to announce that we are hosting the next Performance Portability with Kokkos Bootcamp January 14-17, 2020, at the Buffalo Thunder Resort in Santa Fe, NM. This training is intended to teach new Kokkos users how to get started and to help existing Kokkos users further improve their codes. The training will cover the minimum topics required to get your application started with Kokkos, and Kokkos experts will be on hand to help more advanced users.

A room block for this event has been reserved for January 13, 2020 – January 17, 2020 at the Hilton Santa Fe Buffalo Thunder.  The deadline to book a room within the room block is January 1, 2020.

See tickets to register or to get more information.