Robustly Delivering Highly Accurate Computer Simulations of Complex Materials

By Scott Gibson

The theory of quantum mechanics underlies explorations of the behavior of matter and energy in the atomic and subatomic realms. Computer simulations based on quantum mechanics are consequently essential in designing, optimizing, and understanding the properties of materials that have, for example, unusual magnetic or electrical properties. Such materials would have potential for use in highly energy-efficient electrical systems and faster, more capable electronic devices that could vastly improve our quality of life.

Quantum mechanics-based simulation methods render robust data by describing materials in a truly first-principles manner: they calculate the electronic structure from the most basic terms and thus allow predictive study of materials systems without reference to experiment, unless researchers choose to add parameters. The quantum Monte Carlo (QMC) family of these approaches can deliver the most highly accurate calculations of complex materials without biasing the results toward any particular property of interest.
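
To give a flavor of how these stochastic first-principles calculations work, the sketch below estimates the ground-state energy of a toy one-dimensional harmonic oscillator with a simple variational Monte Carlo loop. The trial wavefunction, the parameter alpha, and the sample counts are invented for illustration and have nothing to do with QMCPACK's production algorithms; the point is only that the answer emerges as a statistical average with a quotable error bar.

    # Minimal variational Monte Carlo sketch (illustrative only, not QMCPACK code).
    # Toy system: 1D harmonic oscillator with trial wavefunction psi(x) = exp(-alpha*x^2).
    import numpy as np

    rng = np.random.default_rng(42)
    alpha = 0.45    # hypothetical variational parameter (exact ground state uses 0.5)

    def local_energy(x):
        # E_L(x) = -(1/2) psi''(x)/psi(x) + (1/2) x^2 for the trial wavefunction above
        return alpha + x**2 * (0.5 - 2.0 * alpha**2)

    def log_psi_squared(x):
        # log |psi(x)|^2, the probability density sampled by the Metropolis walk
        return -2.0 * alpha * x**2

    x, energies = 0.0, []
    for step in range(200_000):
        trial = x + rng.normal(scale=1.0)                 # propose a move
        if np.log(rng.random()) < log_psi_squared(trial) - log_psi_squared(x):
            x = trial                                     # Metropolis accept/reject
        if step >= 10_000:                                # discard equilibration steps
            energies.append(local_energy(x))

    energies = np.array(energies)
    mean = energies.mean()
    # naive statistical error bar (real codes also account for autocorrelation)
    error = energies.std(ddof=1) / np.sqrt(len(energies))
    print(f"E = {mean:.4f} +/- {error:.4f}   (exact ground-state energy: 0.5)")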

A subproject of the US Department of Energy’s Exascale Computing Project (ECP) is developing QMC software named QMCPACK to find, predict, and control materials and their properties at the quantum level. The ultimate aim is to achieve unprecedented, systematically improvable accuracy by leveraging the memory and compute capabilities of the forthcoming exascale computing systems.

Greater Accuracy, Versatility, and Performance

One of the primary objectives of QMCPACK is to reduce errors in calculations so that predictions concerning complex materials can be made with greater assurance.

“We would like to be able to tell our colleagues in experimentation that we have confidence that a certain short list of materials is going to have all the properties that we think they will,” said Paul Kent of Oak Ridge National Laboratory and principal investigator of QMCPACK. “Many ways of cross-checking calculations with experimental data exist today, but we’d like to go further and make predictions where there aren’t experiments yet, such as a new material or where taking a measurement is difficult—for example, in conditions of high pressure or under an intense magnetic field.”

The methods the QMCPACK team is developing are fully atomistic and material specific. That is, they can treat every atom in a material (silver, carbon, cerium, or oxygen, for example), in contrast with more simplified lattice-model calculations in which the full details of the atoms are not included.

The team’s current activities are restricted to simpler, bulk-like materials, but exascale computing is expected to greatly widen the range of possibilities.

“At exascale, not only the increase in compute power but also important changes in the memory on the machines will enable us to explore material defects and interfaces, more-complex materials, and many different elements.” —Paul Kent, QMCPACK principal investigator

With the software engineering, design, and computational aspects of delivering the science as its main focus, the project plans to improve QMCPACK’s performance by at least 50x. In a design study that used a mini-app version of the software and incorporated new algorithms, the team achieved a 37x improvement on the pre-exascale Summit supercomputer relative to the earlier Titan system.
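
The 50x target and the 37x proof of concept rest on restructuring the algorithms, not merely porting them. One widely used idea in this space, shown below purely as a hedged illustration (the actual QMCPACK redesign is far more involved and written in C++), is to process a whole batch of Monte Carlo walkers in one large operation so the work maps well onto wide, GPU-style hardware, rather than looping over walkers one at a time. All names and sizes here are invented for the sketch.

    # Illustrative batching sketch (not QMCPACK code): evaluate a quantity for
    # many Monte Carlo walkers at once instead of one walker at a time.
    import numpy as np

    n_walkers, n_particles = 1024, 64
    rng = np.random.default_rng(0)
    positions = rng.normal(size=(n_walkers, n_particles))   # one row per walker

    def energy_per_walker_loop(pos):
        # walker-by-walker loop: many tiny operations, poor accelerator utilization
        return np.array([0.5 * np.sum(p**2) for p in pos])

    def energy_batched(pos):
        # one batched call over all walkers: a single large, hardware-friendly operation
        return 0.5 * np.sum(pos**2, axis=1)

    assert np.allclose(energy_per_walker_loop(positions), energy_batched(positions))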

One Robust Code

“We’re taking the lessons we’ve learned from developing the mini app and this proof of concept, the 37x, to update the design of the main application to support this high efficiency, high performance for a range of problem sizes,” Kent said. “What’s crucial for us is that we can move to a single version of the code with no internal forks, to have one source supporting all architectures. We will use all the lessons we’ve learned with experimentation to create one version where everything will work everywhere—then it’s just a matter of how fast. Moreover, in the future we will be able to optimize. But at least we won’t have a gap in the feature matrix, and the student who is running QMCPACK will always have all features work.”
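 
The sketch below is a toy illustration of the one-source, all-architectures goal Kent describes: the science routine is written once against a small array interface, and a back end is chosen at run time (a CPU NumPy fallback here, the CuPy GPU library if it happens to be installed). QMCPACK itself is C++ and uses different portability mechanisms; only the design intent is mirrored, and all names below are invented.

    # Toy "single source, multiple back ends" sketch (not QMCPACK's mechanism).
    import numpy as np

    try:
        import cupy as xp        # GPU back end, if a CUDA stack is available
        backend = "GPU (CuPy)"
    except ImportError:
        xp = np                  # CPU fallback: every feature still works, just slower
        backend = "CPU (NumPy)"

    def kinetic_term(gradients):
        # the same source line runs on either back end; only the array module differs
        return 0.5 * xp.sum(gradients**2, axis=-1)

    grads = xp.asarray(np.random.default_rng(1).normal(size=(8, 3)))
    print(backend, kinetic_term(grads))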

As an open-source and openly developed product, QMCPACK is improving with the help of many contributors. The QMCPACK team recently published the master citation paper for the software; the publication has 48 authors with a variety of affiliations.

“Developing these large science codes is an enormous effort,” Kent said. “QMCPACK has contributors from ECP researchers, but it also has many past developers. For example, a great deal of development was done for the Knights Landing processor on Argonne’s Theta supercomputer with Intel. This doubled the performance on all CPU-like architectures.”

A Synergistic Team

The QMCPACK project’s collaborative team draws talent from Argonne, Lawrence Livermore, Oak Ridge, and Sandia National Laboratories, and it benefits from collaborations with Intel and NVIDIA. The staff is divided nearly equally between scientific domain specialists and people focused on the software engineering and computer science aspects.

“Bringing all of this expertise together through ECP is what has allowed us to perform the design study, reach the 37x, and improve the architecture,” Kent said. “All the materials we work with have to be doped, which means incorporating additional elements in them. We can’t run those simulations on Titan but are beginning to do so on Summit with improvements we have made as part of our ECP project. We are really looking forward to the opportunities that will open up when the exascale systems are available.”

Researchers

Oak Ridge National Laboratory

Richard Archibald
Peter Doak
Paul Kent (PI)
Jaron Krogel
Graham Lopez
Erica Valentine (project admin)

Sandia National Laboratories

Raymond Clay
Luke Shulenburger
Joshua Townsend
Christian Trott

Lawrence Livermore National Laboratory

Miguel A. Morales-Silva
Fionn Malone
Alfredo Correa

Argonne National Laboratory

Anouar Benali
Mark Dewing
Ye Luo
Nichols Romero
Hyeondeok Shin

Intel

Jeongnim Kim

NVIDIA

Jeff Larkin