Optimizing the North American Power Grid to Improve Reliability and Support Grid Decarbonization

Exascale Computing Project · Episode 83
Slaven Peles, Pacific Northwest National Laboratory

Hello, and welcome. This is where we explore the efforts of the Department of Energy’s Exascale Computing Project (ECP)—from the development challenges and achievements to the ultimate expected impact of exascale computing on society.

ECP has an Application Development project called ExaSGD. The name is shorthand for Optimizing Stochastic Grid Dynamics at Exascale. The word “stochastic” means that results are given in terms of probabilities instead of exact predictions—weather forecasting may be the best example. ExaSGD aims to optimize North American power grid planning, operation, and control and improve reliability and efficiency.

Joining us is the project’s principal investigator, Slaven Peles of Pacific Northwest National Laboratory. His expertise is in computational challenges arising in power systems and similar engineered systems.

Our topics: a high-level clarification of what the term power grid means, why computer simulations are so important for power systems, why large computations are needed for the power grid, and more.

Interview Transcript

Gibson: Before we unpack what ExaSGD is doing, what is meant by the term power grid? I think many people have only a vague idea.

Peles: Of course. In layman’s terms, the power grid is the entire infrastructure that is in between power plants that generate electricity and your home.

Gibson: How big and complex is the power grid?

Peles: So, when we talk about the power grid, what we typically mean is the North American transmission grid. This is what our project focuses on. The North American transmission grid has over 100,000 miles of transmission lines, and roughly 500 different companies manage different aspects of it. Arguably, it is one of the most complex engineering systems ever built.

Gibson: Why are computer simulations so important with respect to power systems?

Peles: One reason why simulations and computer analysis are so important for power systems is that the power grid is a system that has been in development for the last 120 years. The first large-scale transmission line was built in 1895 and connected the hydroelectric power plant at Niagara Falls with the city of Buffalo, New York, and ever since then new transmission lines and new power plants have been built and the grid has expanded. This is a very different feat of engineering than, say, building an aircraft or a jet engine.

When we build a new car or a new aircraft, we can build a prototype and test it, and crash it if needed, just to see how it behaves in extreme situations. We have far fewer opportunities to experiment with the power grid because our critical resources depend on the power supply. That is why computer simulations and computer analysis are extremely important for power systems.

One of the critical aspects of operating the national power grid in the US is meeting demand with equal supply. We cannot store electricity at scale once we generate it, so whatever we generate needs to be consumed. Now, you can never have exactly the same supply and demand, of course, because that is difficult to manage and predict.

But the way power systems have operated so far is that large generation plants have huge inertia, so if demand suddenly increases, the large generators slow down a little. That slowdown is minuscule, a fraction of a percent change in frequency, almost unnoticeable, but it gives grid operators sufficient time to ramp up supply to meet the new demand. The same thing happens if demand drops: the generators start speeding up a little, which gives power systems operators sufficient time to slow them down and take adequate action.
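The mechanics behind that inertia effect are captured by the classic swing equation, which ties the rate of frequency change to the power imbalance. Here is a minimal Python sketch; the inertia constant, imbalance, and time step are invented, textbook-style values, not project data.

```python
# Illustrative sketch of the per-unit swing equation behind the inertia
# effect described above. Every value is an assumed, textbook-style number,
# not data from ExaSGD.
#
#   2H * d(f/f0)/dt = P_mech - P_elec   (powers in per unit)

f0 = 60.0            # nominal grid frequency, Hz
H = 5.0              # inertia constant, s (typical for a large thermal unit)
imbalance = -0.01    # sudden 1% demand increase => 1% generation deficit

f, dt = f0, 0.1
for _ in range(50):                        # simulate 5 seconds
    f += imbalance * f0 / (2.0 * H) * dt   # frequency falls with the deficit
print(f"frequency after 5 s: {f:.2f} Hz")  # ~59.70 Hz, about a 0.5% dip
```

With a low-inertia fleet (small H), the same imbalance drives the frequency down several times faster, which is exactly why response times shrink as renewables grow.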

Now, renewables such as wind and solar have far less inertia than large power plants, and that means demand and response changes have to happen on a much faster timescale. Furthermore, we cannot control the wind or the sun the way we can control thermal or hydroelectric power plants. So there is a big factor of uncertainty there.

So if we really want to decarbonize the power grid, we have to bring in new technologies that will allow us to control power systems and make sure that supply and demand are always balanced.

Gibson: Why are large computations needed for the power grid?

Peles: Typically, grid operators look at different contingencies, meaning whatever can go wrong with the power grid. They have very well-developed processes for analyzing these contingencies. First, they screen out contingencies that are not credible—in other words, contingencies that will not cause any significant damage—and then they identify somewhere in the neighborhood of 100 or fewer credible contingencies that require further analysis.
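That screening step can be pictured with a toy N-1 analysis: remove one component at a time, re-solve a simplified (DC) power flow, and flag any outage that overloads a line. The 3-bus network, reactances, and limits below are invented for illustration and are not ExaSGD code.

```python
import numpy as np

# Toy N-1 contingency screen on a made-up 3-bus network using a DC power
# flow. Purely illustrative; all numbers are assumptions.
lines = [(0, 1, 0.1), (0, 2, 0.1), (1, 2, 0.1)]  # (from, to, reactance pu)
limit = 0.8                                      # line thermal limit, pu
p = np.array([-0.6, -0.4])                       # loads at buses 1 and 2
slack = 0                                        # bus 0 balances the system

def dc_flows(active):
    """Solve a DC power flow over the active lines; return line flows."""
    B = np.zeros((3, 3))
    for i, j, x in active:
        b = 1.0 / x
        B[[i, j], [i, j]] += b      # diagonal susceptance terms
        B[i, j] -= b
        B[j, i] -= b
    Br = np.delete(np.delete(B, slack, 0), slack, 1)  # remove slack bus
    theta = np.zeros(3)
    theta[1:] = np.linalg.solve(Br, p)                # bus voltage angles
    return [(i, j, (theta[i] - theta[j]) / x) for i, j, x in active]

for out in lines:                                 # one outage at a time
    for i, j, f in dc_flows([ln for ln in lines if ln is not out]):
        if abs(f) > limit:                        # credible contingency
            print(f"losing line {out[:2]} overloads {i}-{j} at {f:+.2f} pu")
```

Real screening works the same way in spirit, but over tens of thousands of components and with far more detailed models.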

Now, this entire methodology was developed under the assumption that you have huge inertia in power plants and that you will have sufficient time to respond to these contingencies. As more renewables are added to the power grid, first, response times become much shorter, and second, outcomes become less predictable because of changing weather, so there are more things that can go wrong. The expectation is that we will need contingency analysis that includes different weather scenarios and many more things that could possibly go wrong.

What we are developing within our project, ExaSGD, is the capability to look at thousands of different contingencies and thousands of different weather scenarios at the same time. With that computational capability, we can provide a very dramatic change in how the grid is operated and how short-term planning is done for the power grid.
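Structurally, that computation is a Cartesian product of weather scenarios and contingencies, and each pair can be analyzed independently, which is why it maps so naturally onto massively parallel hardware. A schematic sketch follows; the evaluate function is a hypothetical stand-in, not the project's actual solver.

```python
from itertools import product
from multiprocessing import Pool

def evaluate(case):
    """Hypothetical stand-in for one security analysis of a
    (weather scenario, contingency) pair."""
    scenario, contingency = case
    return scenario, contingency, "secure"   # placeholder result

scenarios = range(1000)        # e.g., sampled wind/solar forecasts
contingencies = range(1000)    # e.g., credible component outages

if __name__ == "__main__":
    with Pool() as pool:       # a million independent solves, in parallel
        results = pool.map(evaluate, product(scenarios, contingencies))
```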

One misconception about exascale computing in general is that it is something that requires a massive computational facility. It is true that for the ExaSGD project we are looking at extreme-size cases and trying to scale up our computations and push the limits of what is possible. But for day-to-day use, even a subset of the hardware we are using in the Exascale Computing Project will provide dramatic improvements over the current state of the art. In our particular case, it would allow grid operators and planners to do analyses that are way beyond what they can do now with more traditional computational technology.

Gibson: What are some of the possible disruptions to the power grid that can occur?

Peles: There are quite a few things that can go wrong. We often talk about cyberattacks. We talk about natural disasters. But one thing I would like to mention is climate change and its effects on power systems and power grid operation.

For example, last year we recorded the largest number of hurricanes ever, and starting in 2016 we have had five consecutive above-average hurricane seasons. Last year we also had unprecedented forest fires in California. And this winter, in February, we had, again, an unprecedented snowstorm and freezing temperatures in Texas. So the impact of climate change is very hard to overestimate.

We are seeing weather patterns that we haven’t seen before, and this seems likely only to get worse. We need to address that with better planning and better operation. But we also need to address it by decarbonizing the grid and removing the root cause of climate change.

Gibson: Your team models and simulates possible power grid disruptions. What level of fidelity are you aiming for, and what is involved in developing the computer models?

Peles: What we’re really aiming at is a much higher fidelity level than what is typically used today. In particular, we want to capture all the uncertainties due to changing weather patterns.

We want to be able to test how the grid would operate under different weather scenarios, and we want to do it fast so that we can have robust day-ahead planning for the power grid, for example.

We also want to add ramping constraints to our power systems analysis. Ramping constraints are essentially how fast you can ramp up electricity output from a given generator when demand suddenly increases. This is another thing that has not traditionally been used in power systems analysis.
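As an invented, concrete example of what a ramping constraint looks like inside an optimization, here is a tiny three-period economic dispatch in Python: the cheap generator can move only 10 MW between periods, so the solver must bring the expensive, fast unit online when demand jumps. All numbers are illustrative, not project data.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal ramp-constrained economic dispatch sketch (numbers invented).
# Generator 0 is cheap but slow to ramp; generator 1 is expensive but fast.
T, G = 3, 2
cost = [20.0, 50.0]            # $/MWh
cap = [200.0, 80.0]            # MW capacity
ramp = [10.0, 60.0]            # max MW change between periods
demand = [90.0, 130.0, 100.0]  # MW per period

c = np.tile(cost, T)           # variables g[t, i], flattened period-major

# supply must equal demand in every period
A_eq = np.zeros((T, T * G))
for t in range(T):
    A_eq[t, t * G:(t + 1) * G] = 1.0

# ramping: -ramp[i] <= g[t+1, i] - g[t, i] <= ramp[i]
A_ub, b_ub = [], []
for t in range(T - 1):
    for i in range(G):
        row = np.zeros(T * G)
        row[(t + 1) * G + i], row[t * G + i] = 1.0, -1.0
        A_ub.append(row);  b_ub.append(ramp[i])   # up-ramp limit
        A_ub.append(-row); b_ub.append(ramp[i])   # down-ramp limit

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=demand,
              bounds=[(0, cap[i]) for _ in range(T) for i in range(G)])
print(res.x.reshape(T, G))  # the ramp limit, not capacity, forces gen 1 on
```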

Gibson: What is the biggest computational challenge that ExaSGD faces, and how is the team tackling it?

Peles: This is an excellent question. Most of the numerical algorithms and methods that are used for power system analysis are sequential by their nature, so one instruction follows the other, whereas the new hardware that has been developed within the Exascale Computing Project is massively parallel.

We as humans tend to think in sequential terms. We are not very good at multitasking—the exception, of course, is my son, who can text his friends and talk to me at the same time. But, in general, we think of our algorithms as one instruction after another. Now, we have to change that mindset and structure our work around several computations in parallel, or a large number of computations in parallel, in order to take advantage of the new computational hardware.

Power systems has not been a traditional high-performance computing discipline like, for example, molecular dynamics or computational fluid dynamics. So in our project we actually had to come up with new algorithms that are tailored for massively parallel architectures, as opposed to the traditional ones that were designed for sequential execution.
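A miniature of that shift in mindset: solving many small independent linear systems (say, one per contingency) one after another, versus posing them as a single batched operation that parallel hardware can process at once. The sizes here are arbitrary illustration.

```python
import numpy as np

# Sequential vs. batched solves, in miniature; sizes are arbitrary.
rng = np.random.default_rng(0)
A = rng.standard_normal((10_000, 8, 8))   # 10,000 independent 8x8 systems
b = rng.standard_normal((10_000, 8))

# sequential mindset: one solve after the other
x_seq = np.stack([np.linalg.solve(A[k], b[k]) for k in range(len(A))])

# parallel mindset: one batched call over the leading dimension, the kind
# of formulation a GPU library can execute concurrently
x_bat = np.linalg.solve(A, b[..., None])[..., 0]

assert np.allclose(x_seq, x_bat)          # same answers, different mindset
```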

Gibson: What is your view of the importance of the Exascale Computing Project?

Peles: I think the Exascale Computing Project in general is a once-in-a-lifetime opportunity to contribute to a transformational change in computational technology overall. And I have to say, I have been blessed to be given this opportunity to participate in the Exascale Project and to work with all of the wonderful people on my team, and also to be able to talk and interact with all the computational science experts that are participating in our sister Exascale Computing Project [Application Integration at Facilities—Frontier]. This is an amazing feat of science, and the Exascale Computing Project brought all the best minds in computational science under the same roof.

Gibson: What is the ultimate hoped-for outcome of your efforts with ExaSGD?

Peles: I would love to see what we develop here used by independent system operators, adopted by software vendors, and put to work in day-to-day power grid operation. Right now, we are trying to demonstrate that these computations are possible, that we can scale them up to unprecedented levels, and that we can provide information that is critically needed for power systems operators to operate a new grid with a large number of renewable sources.

Gibson: All right. Well, Slaven Peles, thank you very much for joining us on ECP’s podcast.

Peles: Thank you. My pleasure.

Related Link

ExaSGD project description