Exploiting Exascale Computing to Make Earthquake Simulation Codes More Powerful

Exascale Computing Project · Episode 76: Exploiting Exascale Computing to Make Earthquake Simulation Codes More Powerful


By Scott Gibson

David McCallen, Lawrence Berkeley National Laboratory (Credit: Lawrence Berkeley National Laboratory)

Historically, engineers and scientists have studied past earthquakes to try to predict the ground motions of future ones. But in the last decade or so, interest in using physics-based models and high-performance computing to simulate earthquake processes on large computers has grown exponentially.

Scientists and engineers want to simulate what can be referred to as the full end-to-end earthquake process. This means understanding what takes place from the time of the earthquake rupture through the damaging effects of the seismic waves on bridges and buildings. Our guest for this episode is an expert in earthquake computer modeling and simulation.

David McCallen is professor and director of the Center for Civil Engineering Earthquake Research in the Department of Civil and Environmental Engineering at the University of Nevada, Reno. And he is a senior scientist at Lawrence Berkeley National Laboratory. For the US Department of Energy’s Exascale Computing Project, he leads a subproject called Earthquake Sim, or EQSIM.

Man-made structures vibrate at high frequencies that have been tough to capture in ground motion simulations. That mismatch has been a barrier to conducting end-to-end earthquake simulations. However, with the help of the coming exascale computing platforms, the EQSIM project aims to push past that barrier to help inform the design of bridges and buildings that will be more resilient when the next major earthquake hits.

Our topics: the shortcomings of the traditional approach to earthquake prediction, what’s involved in simulating earthquake processes from end to end and why the simulations are important, how the EQSIM project is pioneering more effective earthquake simulations, and what EQSIM has accomplished and where it’s headed.

Interview Transcript

McCallen: I thought I would begin the discussion today by talking a little bit about how we've done things historically in terms of estimating earthquake ground motions and earthquake hazard and risk. Then I will transition into what we're doing in the EQSIM project, which is a transformational, very different approach focused on reducing the uncertainties in earthquake ground motion and infrastructure response predictions, as well as giving us more insight into earthquake phenomena. Both of those objectives are part of what we're doing.

Historically, if you think back about what has been done up to this point in time, to come up with an estimate of future earthquake motions, engineers and scientists have really looked toward the past. By that I mean it is an empirically based method, and we have looked at past earthquakes and recordings of earthquakes to try to predict the future earthquake ground motions that we would expect from large future earthquakes. This has really got some significant limitations, as you might expect.

Typically, we don’t have historical earthquake measurements for every potential future earthquake location, so we have to rely on measurements that were made elsewhere. There’s something called the ergodic assumption, where we assume that observations of earthquake motions at some location—maybe around the world—are relevant to the sites that we’re concerned about for future earthquakes.

That’s a bit of a stretch, and that assumption only allows you to get so far in terms of predicting future earthquake motions. And there are still a lot of uncertainties. For example—and we’ll talk about this in a few minutes—we’re looking at ground motions in the San Francisco Bay area. As part of our EQSIM project and case study, we’re actually simulating large earthquakes on the Hayward Fault, which runs along the eastern San Francisco Bay.

The last large earthquake on the Hayward Fault occurred in 1868, and as you can imagine, there were no seismic instruments around in those days, so we have no historical observations whatsoever of earthquake ground motions from the Hayward Fault, at least for a large event. So, we have to rely on data that was recorded maybe in other parts of California, in Japan, in Taiwan, all around the world, and try to extrapolate that data to predict the ground motions for, say, a Hayward Fault earthquake. That certainly is not optimal, and it's fraught with uncertainties.

In the last decade or so, maybe decade and a half, there's been increasing interest in utilizing high-performance computing and physics-based models to actually simulate the earthquake processes on a large computer: the earthquake rupture, which initially releases the energy at the fault; the propagation of the energy released from that fault through the earth; and then, finally, how that energy arrives at the site of a particular piece of infrastructure and how those seismic waves interact with that infrastructure to damage it. It's a very, very complex process that really crosses the disciplines of earth science, geotechnical engineering, and structural engineering.

Interest in applying high-performance computing to modeling that full end-to-end process, with all of its complexities, has been growing, I would say exponentially, for two reasons.

Number one: to give us more fundamental insight into earthquake phenomena, how each step of that process unfolds, and the physics of that process. So the first reason is simply deeper understanding.

Second, we actually want to be able to better quantify earthquake ground motions and infrastructure response at a regional scale. And when I say regional scale, think of something like the entire San Francisco Bay area, stretching all the way from San Francisco down to Silicon Valley and east into the East Bay. We'd really like to better understand the complex distribution of earthquake ground motions and damage for a region like that.

And, so, the EQSIM project, our ECP application development project, is really focused on the ambitious task of modeling all the way from the fault rupture and the initiation of energy at the fault, through the propagation of the waves through the earth, to, finally, how those waves interact with the infrastructure to shake that infrastructure. It's all a very, very complex process.

Even though interest in being able to do this has been growing exponentially, the complications of actually accomplishing it have been a tremendous barrier. Just to slip into some technical jargon for a moment: historically, scientists and engineers have been able to simulate earthquakes at regional scale only up to about 1 or 2 hertz, or 1 or 2 cycles per second, in the vibration of the ground motions. We can represent regional-scale motions at low frequency, very long wavelengths, and that's been accomplished; that's been done for about the last decade.

The real challenge in completing this end-to-end simulation computationally is to be able to simulate those ground motions at much higher frequency, up to maybe 10 hertz. The reason is that our infrastructure, our bridges and our buildings and so forth, those types of engineered, man-made systems, can have frequency content and vibrations all the way up to 5 or 10 hertz. That's 5 or 10 vibrations per second, very stiff. And so there's really a mismatch between what we've been able to do in ground motion simulations historically, 1 or 2 hertz as limited by our high-performance computing capabilities, and our need to simulate those ground motions at frequencies relevant to structures.

The EQSIM project is all about pushing the envelope on the high-performance computing and allowing us to match up, or sync up, the frequency resolution of our ground motion simulations with the natural frequencies of vibration of infrastructure. So, we want to go, again, from maybe 1 or 2 hertz to 10 hertz, and that doesn't sound dramatic until you recognize that the computational effort to do these ground motion simulations varies as frequency to the fourth power. So if you double your frequency resolution, it's 16x more computational effort to do that computation. It's an extremely steep performance curve that we have to climb in order to be able to do these end-to-end simulations.
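To make that scaling concrete, here is a minimal back-of-the-envelope sketch, not EQSIM code, that assumes only the frequency-to-the-fourth-power cost scaling described above (which arises, roughly, because the computational grid must be refined in three spatial dimensions and the time step shortened as the target frequency goes up). The 2-hertz baseline is the historical capability mentioned in the interview.

```python
# Back-of-the-envelope sketch of how regional ground-motion simulation cost
# grows with the maximum resolved frequency, assuming cost ~ frequency**4
# (finer grid in three spatial dimensions plus a smaller time step).

def relative_cost(f_target_hz: float, f_baseline_hz: float = 2.0) -> float:
    """Computational effort at f_target_hz relative to a baseline frequency."""
    return (f_target_hz / f_baseline_hz) ** 4

if __name__ == "__main__":
    for f_hz in (2.0, 4.0, 5.0, 10.0):
        print(f"{f_hz:5.1f} Hz -> {relative_cost(f_hz):7.1f}x the 2 Hz effort")
    # Doubling the resolved frequency costs 2**4 = 16x more, and going from
    # 2 Hz to 10 Hz costs 5**4 = 625x more.
```

Under this scaling, going from 2 hertz to 10 hertz is a factor of five in frequency but roughly 625x in computational effort, which is why exascale-class machines matter for these simulations.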

So, how are we doing that? We’re really doing that in three ways.

Number one: We are improving the algorithms and the sophistication of existing codes for ground motion simulation, and we’re working with the SW4 code that was originally developed at Livermore Lab. And we’ve made a number of optimizations to that code to achieve speedup.

Number two: We are translating and porting that code to the latest high-performance computers, the leadership computers, GPU-based machines like the Summit machine at Oak Ridge National Laboratory. Both of those activities, the algorithmic advances and the transition to the leadership computers along with all the work needed to run well on those machines, have really given us tremendous speedup. We have gone from being able to simulate the Bay Area at 2 hertz to now, in our latest simulations on Summit, being able to achieve 10-hertz simulations. That's not yet with quite the resolution of geologic structure that we'd like; that's going to have to wait for the exascale platforms that are coming. But, nevertheless, we have achieved, even to date in this project, a significant speedup.

And then, finally, once we get the ground motions, linking those ground motions to the infrastructure in a rigorous way is very, very important. Because historically there has generally been a lack of understanding of the complex three-dimensional incident waves impinging on infrastructure, engineers have made simplifying assumptions about what those incident waveforms look like. When we're modeling end to end, we don't have to invoke that simplification. We can rigorously look at the very complex three-dimensional waves arriving and impinging at a soil-structure site, model them explicitly, and couple our geophysics models to our local engineering and structural models so that we rigorously represent the complexity and richness of those 3D incident waveforms. That is really how we complete the end-to-end simulation in the EQSIM framework. And we have some animations that show quite nicely the effects of these complex interactions between the ground motions and the infrastructure response.
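To illustrate the general idea of feeding simulated ground motions into a structural response calculation, here is a deliberately simplified sketch. It is not the EQSIM coupling, which passes full three-dimensional wavefields into detailed engineering models; it simply drives a single-degree-of-freedom oscillator, a textbook stand-in for a structure with a given natural frequency and damping ratio, with a ground acceleration record, using the standard Newmark average-acceleration time integrator. The 5-hertz structure, 5 percent damping, and the synthetic input record are illustrative assumptions.

```python
import numpy as np

def sdof_response(ground_accel, dt, freq_hz=5.0, damping=0.05):
    """Relative displacement history of a damped single-degree-of-freedom
    oscillator whose base is driven by a ground acceleration record (m/s^2).
    Newmark average-acceleration scheme (beta = 1/4, gamma = 1/2), unit mass."""
    omega = 2.0 * np.pi * freq_hz                # natural circular frequency
    k, c = omega**2, 2.0 * damping * omega       # stiffness and damping per unit mass
    beta, gamma = 0.25, 0.5

    n = len(ground_accel)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = -ground_accel[0]                      # equilibrium at t = 0 with u = v = 0

    k_eff = k + gamma / (beta * dt) * c + 1.0 / (beta * dt**2)
    for i in range(n - 1):
        p_eff = (-ground_accel[i + 1]
                 + (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (0.5 / beta - 1.0) * a[i])
                 + c * (gamma / (beta * dt) * u[i] + (gamma / beta - 1.0) * v[i]
                        + dt * (gamma / (2.0 * beta) - 1.0) * a[i]))
        u[i + 1] = p_eff / k_eff
        v[i + 1] = (gamma / (beta * dt) * (u[i + 1] - u[i])
                    + (1.0 - gamma / beta) * v[i]
                    + dt * (1.0 - gamma / (2.0 * beta)) * a[i])
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
    return u

if __name__ == "__main__":
    # Toy input: a decaying 5 Hz sinusoid standing in for a simulated record.
    dt = 0.005
    t = np.arange(0.0, 20.0, dt)
    accel = 0.3 * 9.81 * np.sin(2.0 * np.pi * 5.0 * t) * np.exp(-0.2 * t)
    peak = np.abs(sdof_response(accel, dt, freq_hz=5.0)).max()
    print(f"Peak relative displacement: {peak:.4f} m")
```

The point of the sketch is simply that a structural model responds to whatever waveform it is given; if the incident motion is oversimplified, the computed response inherits that simplification, which is what the rigorous end-to-end coupling avoids.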

So, what’s next?

We have developed this end-to-end workflow. We have advanced our algorithms to include really efficient mesh refinement that adapts to the geologic properties of the earth, and we have rigorously coupled, and demonstrated the coupling between, ground motion and infrastructure response. We are extremely excited about what we've achieved so far and about where we're going with the opportunities that will arrive with the exascale platforms.
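As a side note on why geology-adapted mesh refinement pays off, a widely used rule of thumb in seismic wave simulation is that the grid spacing must resolve the shortest wavelength, roughly the local minimum shear-wave velocity divided by the maximum frequency, with several grid points per wavelength. The sketch below uses that generic rule with illustrative velocities and an assumed points-per-wavelength value; it is not taken from SW4 or EQSIM.

```python
# Rule-of-thumb grid spacing for resolving seismic waves: the shortest
# wavelength is vs_min / f_max, and the mesh needs several grid points per
# wavelength. Soft, slow near-surface materials therefore demand a much finer
# mesh than stiff, fast rock at depth, which is what geology-adapted mesh
# refinement exploits. Velocities and points-per-wavelength are illustrative.

def grid_spacing_m(vs_m_per_s: float, f_max_hz: float,
                   points_per_wavelength: int = 8) -> float:
    """Largest grid spacing (meters) that resolves f_max_hz in a material
    with shear-wave velocity vs_m_per_s."""
    return vs_m_per_s / (points_per_wavelength * f_max_hz)

if __name__ == "__main__":
    for name, vs in [("soft sedimentary basin", 500.0), ("stiff rock at depth", 3000.0)]:
        for f_max in (2.0, 10.0):
            h = grid_spacing_m(vs, f_max)
            print(f"{name:22s} f_max = {f_max:4.1f} Hz -> grid spacing ~ {h:6.1f} m")
```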

We got a tremendous boost in going to the Summit platform. Our figure of merit in EQSIM development went from a factor of 66 prior to jumping on Summit to 189 once we jumped on Summit, a tremendous boost. And so we are anticipating a similar boost as we move on to the exascale platforms. That is going to be tremendously enabling for earthquake simulation, and the ability to simulate these end-to-end processes on a computer in compute times of maybe 5 to 6 hours per earthquake simulation is going to be transformational. That matters because we don't know a priori exactly how the fault is going to rupture, whether it's going to rupture north to south or south to north, so we have to run a number of realizations and explore the parameter space in order to do an efficient earthquake hazard and risk assessment. And so exascale is going to be tremendously enabling and transformational for doing this type of work.
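To give a feel for what exploring that parameter space can involve, here is a hypothetical sketch of a rupture-scenario sweep. The scenario axes, the counts, and the six-hour-per-run figure are assumptions for illustration (the run time echoes the 5-to-6-hour figure mentioned above); none of it describes an actual EQSIM campaign.

```python
from itertools import product

# Hypothetical rupture-scenario sweep for a regional hazard assessment.
# Each axis varies something we do not know a priori about a future event.
hypocenters = ["north", "central", "south"]                      # where the rupture nucleates
directions = ["north-to-south", "south-to-north", "bilateral"]   # rupture propagation direction
magnitudes = [6.5, 6.8, 7.0]                                     # candidate event sizes
hours_per_run = 6.0                                              # assumed end-to-end wall-clock time

scenarios = list(product(hypocenters, directions, magnitudes))
total_hours = len(scenarios) * hours_per_run
print(f"{len(scenarios)} rupture realizations x {hours_per_run:.0f} h each "
      f"= {total_hours:.0f} hours of machine time if run back to back.")
```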

Although our focus is performance, performance, performance of these codes, and getting to the plateau we need to reach to really effectively couple geophysics and engineering simulations, the ultimate endgame is one of societal risk and safety. Earthquakes are a tremendous worldwide hazard, with thousands of people killed in an average year. There are certainly hotspots in the US. We've been lucky; we've had a relatively quiescent period, but there will be large earthquakes in the San Francisco Bay area, in Washington state, and in the Cascadia subduction zone in the future. And the types of codes that we're developing will ultimately, once we achieve the simulation performance we need, help inform how to better design for and better account for all this complexity in earthquake science and earthquake engineering, and really help design more efficient and more effective structures and infrastructure that will be resilient when the next major earthquake hits.

Related Link

Listen to our previous podcast interview with David McCallen on the EQSIM project (06/11/18).