Today, Oct. 18, is Exascale Day. This date was not chosen by chance; 10/18 pays homage to the power of exascale computing, in which systems are capable of performing 10¹⁸ calculations per second.
Though Exascale Day has been officially recognized for three years now, this year is particularly special: the Frontier system at the Department of Energy's Oak Ridge National Laboratory has broken the exascale barrier. I serve there as associate laboratory director for computing and computational sciences and as director of DOE's Exascale Computing Project, or ECP.
Specifically, Frontier, an HPE Cray EX system powered by AMD processors, achieved 1.1 exaflops on the LINPACK benchmark, a result announced at the ISC High Performance Conference in May.
Frontier is seven times more powerful than its predecessor, Summit, previously the world's fastest machine, which also resides at ORNL. Not only that, but it ranks first on the Green500, a tally of the world's top supercomputers in terms of energy efficiency.
Such massive enhancements in speed and efficiency are possible thanks to advancements in traditional computing hardware, ECP’s targeting of critical R&D investments for our U.S. vendor partners, and the incorporation of graphics processing units, or GPUs. When combined with traditional CPUs, GPUs excel at performing the repetitive, parallel calculations necessary for the simulations used to solve the most complex scientific problems.
ORNL worked with its vendor partners to help pioneer this CPU/GPU hybrid architecture a decade ago on its Titan system, paving the way for one of computing’s most spectacular achievements in Frontier.
Exascale supercomputers have been a dream of mine – and of virtually everyone involved in high-performance computing – for more than a decade. The reason is simple: these systems hold the potential to propel humanity to new heights by revealing nature’s workings at the most minute, and most massive, scales.
Frontier and future exascale systems will allow researchers to simulate, and by extension rapidly and efficiently design, materials necessary for the most extreme environments, such as the inside of fusion reactors where temperatures can reach 100 million degrees Celsius; model complex chemical processes involving thousands of atoms for the engineering of chemical catalysts critical to a wide spectrum of industrial processes; and unravel the secrets of the cosmos via sophisticated, large-scale models of cosmic structure formation, including questions around dark matter and dark energy.
Exascale computing will bring plenty of unforeseen benefits as well. Just as the Space Race ushered in new technologies that led to such modern miracles as GPS and CT scans, the cutting-edge innovation required to stand up exascale systems will fortuitously advance a wide range of research and development and accelerate the evolution of consumer devices.
These numerous benefits will ensure continued global technological leadership for the U.S. It’s this leadership across the computing spectrum that makes Frontier possible and available for scientific research.
Machines of this magnitude require an entire computing ecosystem to take advantage of such massive hardware. While ORNL’s industrial partners were building Frontier, an army of computer scientists, engineers and scientific researchers were designing the necessary applications and systems software.
After all, what good is a computer the size of Frontier if it isn't useful and affordable to operate?
These efforts were largely the mission of the ECP, which is tasked with accelerating delivery of an exascale computing ecosystem so researchers can harness the power of Frontier to model and simulate critical scientific and national security challenges. That ecosystem consists of the exascale software stack and selected applications that will take full advantage of exascale-level computation. It also encompasses a wealth of knowledge accumulated over the past six years on the co-design and integration necessary to exploit new architectures, as well as the massive number of connected processing nodes such as those in Frontier.
ECP will wrap up in 2024, and Frontier and the forthcoming exascale systems at Argonne and Lawrence Livermore national laboratories stand as testaments to its success.
I am fortunate to be part of such a remarkable effort, along with more than 1,000 others who worked tirelessly across the national labs, industry and academia to make this dream come true.
Our collaborative mission is the result of a public-private partnership model that allowed us to break the petascale barrier nearly 15 years ago and the terascale barrier 25 years ago. It's a symbiotic model that has continuously propelled America forward and, as Frontier and future systems clearly demonstrate, is more critical than ever.
Frontier will go into full operation early next year and is being prepared to run a large suite of applications. But even as we look forward to realizing its potential, we also know science never sleeps.
The same people who made Frontier possible are designing the systems that will follow. No one knows quite what the future holds, but whether it’s quantum accelerators, AI-driven applications, coupled-facility workflows, or even the next great scale barrier in computing, the zettascale, Frontier and other upcoming DOE systems have laid the groundwork.
So, for today, let’s take a moment to appreciate those who made the monumental achievement of exascale possible and ponder the possibilities of the next great era in computing.