By Lawrence Bernard
Researchers supported by the US Department of Energy’s (DOE’s) Exascale Computing Project (ECP) have created software to help wind farms more efficiently produce clean energy and reduce reliance on fossil fuels, thereby helping to minimize global warming.
Those giant, wind-driven blades spinning atop 300-foot towers are a marvel of engineering. As they capture wind energy and convert it into electricity, the blades appear to rotate effortlessly, sweeping through the air in an even, methodical way to provide green power.
In reality, those blades must manage a host of physics issues to work well. The terrain, wind speed, weather conditions, and atmospheric turbulence all impact the efficiency of a wind turbine. But what if you could model all these factors on a computer and design better and more efficient turbines without the extensive time and cost required to make traditional, physical prototypes?
Enter exascale computing. With this level of computing power, researchers and engineers can model operational control, optimal siting, and reliable integration within the power grid for wind energy at scales that were simply intractable in the pre-exascale era. Pre-exascale computers lack the computational throughput necessary to model interacting turbines in a wind farm at full scale and in a reasonable time frame, according to Michael Sprague, chief wind computational scientist at DOE’s National Renewable Energy Laboratory (NREL) in Golden, Colorado.
Sprague is Principal Investigator (PI) of ExaWind, the ECP subproject that aims to create predictive-simulation codes for wind turbines and wind farms.
“We need to have models to predict how a turbine is going to respond if you want it to be successful,” Sprague said.
Wind turbines are the largest rotating machines in the world. Offshore wind turbines can have blades more than 100 yards long with a 240-yard rotor generating 12 megawatts of power, which is enough to power a small town. The airflow boundary layer around the blades is extremely thin, only a few microns thick. The wind farms themselves can span well over 10 miles.
“If you want to predict the fluid dynamics of all this, you need to resolve this giant difference in scale. That’s why we need supercomputing to solve these grand-challenge simulations,” Sprague said.
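The span of scales Sprague describes can be made concrete with some back-of-the-envelope arithmetic. The specific values below are illustrative, taken from the figures quoted above (a boundary layer of a few microns, a farm spanning over 10 miles):

```python
# Back-of-the-envelope estimate of the range of length scales a
# wind-farm simulation must resolve, using figures from the article.

boundary_layer_m = 5e-6        # blade boundary layer: a few microns
farm_span_m = 10 * 1609.34     # a wind farm spanning well over 10 miles

# Ratio of the largest resolved scale to the smallest.
scale_ratio = farm_span_m / boundary_layer_m
print(f"Length scales span a factor of about {scale_ratio:.1e}")
```

Resolving roughly nine orders of magnitude in length scale within one fluid-dynamics problem is what pushes these simulations beyond ordinary computing resources.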
Updated Codes for the Exascale Era
Sprague and the ExaWind team established a new suite of software to leverage the first exascale computers as they tackle the wind-energy problem. To run their models on Oak Ridge National Laboratory’s (ORNL’s) Frontier, the world’s first exascale computer, the team is coupling two computational fluid dynamics codes, adapting data structures and algorithms to run on GPUs, leveraging the Kokkos and AMReX abstraction layers for GPU performance portability, introducing the right equations, and improving the linear-system solvers.
So far, the team has had success on ORNL’s Summit, which also boasts a hybrid hardware architecture (i.e., CPUs + GPUs), as well as on Crusher, the Frontier test and development system, as it prepares its codes for Frontier.
By capitalizing on exascale, ExaWind will create a virtual test environment and demonstrate that the code can simulate a real system. Furthermore, this new platform will enable scientists to perform the highest-fidelity simulations of a wind farm ever conducted, making it the gold standard for developing optimized models.
“You need confidence that the results are an accurate representation of reality,” Sprague said. “This is very powerful and can lead to new ideas for how to optimize these systems. With exascale computing, we will be able to simulate how the system will perform without having to build one in the field.”
Scaling Challenge Led to Improvements
One of the biggest challenges was how to scale a simulation problem with billions of equations to the millions of computational cores on Frontier and obtain the shortest possible time to solution. For perspective, perfect linear scaling would be running a given problem on twice as many cores and cutting the time to solution in half (i.e., a 2× increase in processors resulting in a 2× speedup).
However, complex codes rarely scale perfectly, and there is usually a limit to how much a given model can be distributed over a supercomputer. The team has been improving ExaWind scaling by profiling hot spots and tuning algorithms and solver settings.
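One standard way to see why perfect scaling breaks down is Amdahl's law: if even a small fraction of the work is inherently serial, speedup saturates no matter how many cores are added. The serial fraction below is a hypothetical number for illustration, not an ExaWind measurement:

```python
# Sketch of why codes rarely scale perfectly: Amdahl's law for a
# hypothetical solver in which 0.1% of the work cannot be parallelized.
# The serial fraction is illustrative, not an ExaWind figure.

def speedup(cores: int, serial_fraction: float) -> float:
    """Amdahl's-law speedup relative to one core."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

serial = 0.001  # 0.1% serial work
for cores in (1_000, 10_000, 100_000, 1_000_000):
    s = speedup(cores, serial)
    efficiency = s / cores
    print(f"{cores:>9} cores: speedup {s:8.0f}, efficiency {efficiency:.2%}")
```

With a 0.1% serial fraction, speedup stalls near 1,000× regardless of core count, which is why profiling hot spots and shrinking the serial and communication-bound portions of the code matters so much at Frontier's scale.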
Additionally, Sprague is no stranger to another kind of scaling. As a rock-climbing enthusiast and competitive-climbing referee, he has ascended some of the highest peaks in Colorado. He earned a bachelor’s degree from the University of Wisconsin–Madison and a master’s degree and doctorate from the University of Colorado at Boulder—all in mechanical engineering.
ExaWind has more than 50 collaborators and is a close partnership between NREL, ORNL, and Sandia National Laboratories, which is home to ExaWind co-PI Paul Crozier.
The ExaWind application codes perform well on the Summit supercomputer, and the team is readying them for their ultimate target—Frontier. Simulating a wind farm with 15 turbines that produce 5 megawatts each would require a model with at least 20 billion grid points, and Frontier is the obvious choice for a model this complex.
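A rough sizing exercise shows why a 20-billion-grid-point model points at an exascale machine. The number of variables stored per grid point and the solver-workspace multiplier below are illustrative assumptions, not ExaWind figures:

```python
# Rough memory estimate for a 20-billion-grid-point wind-farm model.
# Variables per point and the workspace factor are illustrative
# assumptions, not ExaWind figures.

grid_points = 20e9
vars_per_point = 10       # e.g., velocity components, pressure, turbulence
bytes_per_var = 8         # double precision

state_tb = grid_points * vars_per_point * bytes_per_var / 1e12
print(f"One copy of the solution state: ~{state_tb:.1f} TB")

# Implicit solvers typically hold many working arrays (residuals,
# Krylov vectors, preconditioner data), multiplying the footprint.
workspace_factor = 20     # illustrative
print(f"With solver workspace: ~{state_tb * workspace_factor:.0f} TB")
```

Tens of terabytes of working memory, refreshed every time step over many thousands of steps, is a workload that only a leadership-class system like Frontier can hold and advance in a reasonable time frame.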
With support from the DOE Wind Energy Technologies Office, the team is validating ExaWind by simulating a real 2-megawatt turbine with an 80-meter rotor, and the researchers will be able to compare the data from the model against real field measurements.
“We have to move away from fossil fuels and make renewable energy economically more appealing than fossil fuels and create wind power without subsidy,” Sprague said. “The complexity of the system is huge, and we need these models to find the pathways to extract clean energy from the wind.”
This research is part of the DOE-led Exascale Computing Initiative (ECI), a partnership between DOE’s Office of Science and the National Nuclear Security Administration. This research is also funded by the DOE EERE Wind Energy Technologies Office. The Exascale Computing Project (ECP), launched in 2016, brings together research, development, and deployment activities as part of a capable exascale computing ecosystem to ensure an enduring exascale computing capability for the nation.