Mike Heroux is a Senior Scientist at Sandia National Laboratories, Director of Software Technology for the US DOE Exascale Computing Project (ECP), and Scientist in Residence at St. John’s University, MN. His research interests include all aspects of scalable scientific and engineering software for new and emerging parallel computing architectures. He is the founder of the Trilinos scientific libraries, the Kokkos performance portability library, the Mantevo miniapps, and the HPCG Benchmark projects, and presently leads the Extreme-scale Scientific Software Stack (E4S) project in DOE, a curated collection of HPC software targeting leadership platforms.
Shahzeb Siddiqui is an HPC Consultant/Software Integration Specialist at NERSC, Lawrence Berkeley National Laboratory. He is part of the User Engagement Team, where he is responsible for engaging with the NERSC user community through user support tickets, user outreach, training, and documentation. Shahzeb is part of the Exascale Computing Project (ECP) Software Deployment (SD) group, where he is responsible for building the Extreme-scale Scientific Software Stack (E4S) at the DOE facilities. Shahzeb has experience installing and managing large software stacks, cluster managers (Bright Cluster Manager, Cobbler), configuration management (Ansible), GPFS, Slurm, and LSF. He is an experienced developer, DevOps engineer, and system administrator, and is involved in several open-source projects. Shahzeb started his career in High Performance Computing (HPC) in 2012 at King Abdullah University of Science and Technology (KAUST) while pursuing his Master’s degree. His focus in HPC includes parallel programming, performance tuning, containers (Singularity, Docker), Linux system administration, software deployment and testing, and scheduler optimization. Shahzeb has held multiple HPC roles at Dassault Systèmes, Pfizer, Penn State, and IBM. Prior to 2012, he was a software engineer holding multiple roles at Global Science & Technology, Northrop Grumman, and Penn State.
Sameer Shende has helped develop the TAU Performance System, the Program Database Toolkit (PDT), the Extreme-scale Scientific Software Stack (E4S) [https://e4s.io] and the HPCLinux distro. His research interests include tools and techniques for performance instrumentation, measurement, analysis, runtime systems, HPC container runtimes, and compiler optimizations. He serves as a Research Associate Professor and the Director of the Performance Research Laboratory at the University of Oregon, and as the President and Director of ParaTools, Inc. and ParaTools, SAS.
Ryan Adamson leads the HPC Security and Information Engineering Group at the Oak Ridge Leadership Computing Facility (OLCF). His group is responsible for delivering highly scalable and reliable security services and telemetry platforms to the high-performance computing resources and staff at the OLCF. He also leads the Software Deployment at Facilities (SD) area of the Exascale Computing Project, whose mission is to ensure that the Application Development (AD) and Software Technology (ST) products funded by ECP are buildable, testable, and available for use by ECP at the Office of Science supercomputing facilities. Major components of this work include developing supercomputing-specific enhancements to continuous integration tools such as the GitLab server and runners. Other efforts include using Spack to install the scientific software included in E4S, along with CI/CD pipelines that automatically produce build artifacts that users at facilities can pull into their own from-source builds.
Katie Antypas is the NERSC Division Deputy and leads the Data Department at NERSC. As the Data Department Head at the National Energy Research Scientific Computing (NERSC) Center, she has oversight of the Data Science Engagement, Data and Analytics Services, Storage Systems, and Infrastructure Services groups. Katie is the Director of the Hardware and Integration area of the Exascale Computing Project. She is also the co-PI on an ASCR-funded research project, ScienceSearch: Enabling Automated Metadata through Machine Learning. Katie has expertise in system architectures, parallel I/O, application performance, and user science requirements. From 2012 to 2017, Katie led the NERSC-8 system procurement, resulting in the deployment of the Cori system (named after Nobel Laureate Gerty Cori). The Cray XC system features 9,300 Intel Knights Landing processors, each with over 60 cores, 4 hardware threads per core, and a 512-bit vector unit width. It is crucial that users exploit both threading and SIMD vectorization to achieve high performance on Cori. Additionally, the Knights Landing architecture features high-bandwidth on-package memory that is significantly faster than DRAM. The Cori system also features the Cray Aries interconnect, a 28 PB Lustre-based file system, and a “burst buffer” layer of NVRAM that sits between the compute node memory and the file system to accelerate I/O. Cori debuted as #6 on the Top500 list. From 2010 to 2013, Katie was the group leader for User Services at NERSC, a team of consultants who work directly with scientists to help them apply NERSC resources effectively to their research and to optimize applications. Prior to becoming the Group Leader of USG, Katie was a consultant in the group from 2006 to 2010. She was the co-implementation team lead on the Hopper system.
Hopper was NERSC’s first petaflop system, a Cray XE6 with over 150,000 compute cores which delivered more than 3 million computing hours to scientists each day. Before coming to NERSC, Katie worked at the ASC Flash Center at the University of Chicago as a parallel programmer developing the FLASH code, a parallel adaptive mesh refinement astrophysics application. She also spent 2 years as a management consultant at Cambridge Strategic Management Group building financial models and conducting market research. She has a Master’s degree in Computer Science from the University of Chicago and a Bachelor’s degree in Physics from Wellesley College.
Richard Gerber is NERSC’s Senior Science Advisor and Head of the HPC Department. Richard has been involved with leading-edge High Performance Computing systems for 30 years, using early Cray vector systems at the National Center for Supercomputing Applications (NCSA), the Connection Machine while a National Research Council Postdoctoral Fellow at NASA Ames Research Center, and many generations of distributed-memory parallel computers as a staff member at NERSC since 1996. He holds a B.S. in Physics from the University of Florida, and M.S. and Ph.D. degrees in Physics from the University of Illinois at Urbana-Champaign, where he specialized in computational astrophysics. At NERSC, he has been at the forefront of providing scalable HPC consulting services to NERSC’s users, gathering HPC needs from scientific communities and getting them implemented at the center. As HPC Department Head he oversees the Advanced Technologies, Application Performance, and User Engagement Groups at NERSC. As Senior Science Advisor he coordinates science outreach and engagement efforts and helps communicate the value of scientific computing and science in general. Richard is keenly interested in the automatic collection of HPC performance and runtime data and in making it available on the web to help users monitor, debug, and optimize their applications. Richard was the Deputy Project Lead on the “NERSC 7” procurement of the NERSC Edison Cray XC30 system and leads the Application Performance and User Support effort for the “NERSC 9” Perlmutter system. His research specialty is using N-body and Smoothed Particle Hydrodynamics simulations to study colliding galaxies, “ring” galaxies in particular. He has a broad interest in science, particularly in the physical sciences.