Tools and Techniques for Floating-Point Analysis

The IDEAS Productivity project, in partnership with the DOE Computing Facilities of the ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), has resumed the webinar series on Best Practices for HPC Software Developers, which we began in 2016.

As part of this series, we offer one-hour webinars on topics in scientific software development and high-performance computing, approximately once a month. This webinar, titled Tools and Techniques for Floating-Point Analysis, was presented by Ignacio Laguna (Lawrence Livermore National Laboratory) and took place on Wednesday, October 16, 2019, at 1:00 pm ET.


Scientific software is central to the practice of research computing. While software is widely used in many science and engineering disciplines to simulate real-world phenomena, developing accurate and reliable scientific software is notoriously difficult. One of the most serious difficulties comes from dealing with floating-point arithmetic to perform numerical computations. Round-off errors occur and accumulate at all levels of computation, while compiler optimizations and low-precision arithmetic can significantly affect the final computational results. With accelerators such as GPUs dominating high-performance computing systems, computational scientists face even greater challenges, since ensuring numerical reproducibility on these systems is very difficult. This webinar provided highlights from a half-day tutorial discussing tools that are available today to analyze floating-point scientific software. We focused on tools that allow programmers to gain insight into how different aspects of floating-point arithmetic affect their code and how to fix potential bugs.