By Coury Turczyn
The fundamental properties of materials—how they form, what their structures look like, and the ways they interact—have a direct impact on our daily lives. From the smartphone in your pocket to the airline flight you booked with it, the tools we use in the modern world are designed around our understanding of how their materials function.
After 6 years of development by a team of researchers supported by the US Department of Energy’s (DOE’s) Exascale Computing Project (ECP), the open-source simulation code QMCPACK has been redesigned to run on everything from laptop PCs to high-performance computing systems. But its ability to model the electronic structures of materials with new levels of speed and accuracy will truly be unleashed on the DOE’s latest GPU-accelerated exascale supercomputers, which will allow scientists to quicken the pace of discovery in developing more efficient, lower-cost technologies for sustainable energy sources and beyond.
“What is special about QMCPACK is that it can provide a capability that simply doesn’t exist right now. There is a real need to have a methodology and tools that can give us very reliable predictions and explanations of material properties,” said team leader Paul Kent, a computational nanoscience researcher at the DOE’s Oak Ridge National Laboratory (ORNL). “What we are really after is a method in which we can run longer simulations to steadily improve their accuracy—and we’re taking a big step toward that on this project.”
Advancing Materials Science Research
Discovering new material properties, or even whole new materials, has usually meant laboratory experiments that test for electrical, mechanical, and thermal properties, among others. But this is often a slow and laborious process, especially considering the vast number of candidate materials, and it has traditionally taken many years to develop materials suitable for deployment. To speed the process, computational scientists have turned to ab initio quantum mechanics simulations, which require a minimum of experimental input—typically only the material’s composition. This strategy can yield more accurate predictions of physical properties at the atomic level, at the cost of potentially expensive compute time.
By redesigning and optimizing QMCPACK to run effectively on GPUs (graphics processing units) from AMD, Intel, and NVIDIA, Kent’s team aims to put the code’s ab initio simulations into hyperdrive. The power of exascale supercomputers such as ORNL’s Frontier—currently the fastest in the world, with a theoretical peak of 2 exaflops, or 2 quintillion calculations per second—will allow scientists to simulate much larger atomic systems drawn from a much wider range of elements, roughly 50× faster than before.
“To get a good description of a material, we must be able to simulate hundreds or thousands of atoms and therefore thousands, maybe ten thousand, electrons. To do that, we need the computing power of Frontier and beyond,” Kent said.
QMCPACK uses quantum mechanics—the physics of how matter and energy interact on an atomic level—to calculate the electronic structures of molecular systems, solving the Schrödinger equation with a class of stochastic methods called quantum Monte Carlo. Its models can involve far fewer approximations than other simulation approaches, and consequently its results are more reliable.
“Despite all the angst around how to interpret quantum mechanics, for our purposes, it is a very practical and useful method,” Kent said. “For example, we could analyze an electrode material in a battery or a catalyst, or new metal alloys, and learn both the fundamentals of how they are working and gain insight into how they might be improved. The methods can therefore help us improve a lot of technology, materials, and chemistry that surround our daily lives.”
Finding the Right Tools to Do the Job
The QMCPACK team—composed of researchers from ORNL, Lawrence Livermore, Sandia, and Argonne national laboratories as well as North Carolina State University—faced an early challenge in choosing efficient programming tools. The choice of compiler has always been an essential question in redesigning QMCPACK to successfully run on DOE’s upcoming exascale supercomputers, but 6 years ago, few of the potential options were mature enough for the job. Compilers convert the programming languages used to create the software to optimized instructions that the CPUs and GPUs can understand and execute.
The team’s choices ultimately worked, but the compilers required their own redesigns first. In particular, implementations of OpenMP—a popular API used to specify high-level parallelism and exploit both CPUs and GPUs for faster compute times—needed to mature and be optimized. Likewise, the open-source LLVM compiler, which contains an implementation of OpenMP, had to be tuned. Fortunately, the ECP has a team that focuses on exactly that mission: SOLLVE.
“We don’t write compilers, so we have been working closely with SOLLVE and the vendors to improve them by providing a lot of pointed and persistent feedback. We found gaps, bugs, and performance issues with just about every single compiler—be it open-source or vendor supplied,” Kent said. “The good news is that these problems are mostly fixed, and once fixed, they are fixed for everyone in the ecosystem. For example, recently, it became possible to start science production on NVIDIA GPU-powered systems with the new QMCPACK implementation and LLVM.”
QMCPACK at Work
As the principal investigator of the Center for Predictive Simulation of Functional Materials, which is funded by the DOE’s Office of Basic Energy Sciences, Kent is in a good position to make early science runs of QMCPACK on Frontier. He is currently examining a range of metal oxides and 2D nanomaterials that are expected to show novel properties. The project includes experimentalists who are conducting validation experiments to parallel the predictions.
“These are materials for which we know that increased accuracy is needed, and we’re now able to see the speed and performance increase from moving those calculations onto Frontier,” Kent said. “The main thing we’re looking to do in the beginning of the exascale era with this methodology is to start doing materials with elements from across the whole periodic table.”
In particular, Kent is interested in investigating quantum materials, which exhibit novel behaviors that could be harnessed in low-power electronics, quantum computers, or new ways of generating, storing, and transporting energy.
“We work closely with our experimentalist colleagues, and we want to give them a list of the top three or five materials to work on that we think will have the properties we’re all after,” Kent said. “And we want to have the most confidence that we can in those findings so we can proceed much more quickly and efficiently to really speed up the process of discovery.”
This research is part of the DOE-led Exascale Computing Initiative (ECI), a partnership between DOE’s Office of Science and the National Nuclear Security Administration. The Exascale Computing Project (ECP), launched in 2016, brings together research, development, and deployment activities as part of a capable exascale computing ecosystem to ensure an enduring exascale computing capability for the nation.