Exascale Holds the Key to Generating Realistic and Accurate Scenarios of Future Earthquakes

A conversation with David McCallen of Lawrence Berkeley National Laboratory

 

One hundred and fifty years after the last major earthquake struck California’s Hayward Fault, a fault that runs all the way from north of Berkeley and Oakland down to Hayward, we present a conversation with ECP researcher David McCallen, Critical Infrastructure Program Leader at Lawrence Berkeley National Laboratory. In this podcast interview, David discusses how computer simulation has become an essential component of earthquake design for major infrastructure, and the work of his team to develop more realistic simulations of earthquake scenarios for more accurate hazard assessment and a better understanding of societal impact. This is an edited transcript.

 

Lawrence Berkeley National Laboratory is located in the hills above the University of California, Berkeley, a quarter of a mile from the Hayward Fault Zone, considered one of the most dangerous geologic fault zones in the world. According to the State of California Department of Conservation, many scientists and geologists agree that this area, which runs under a densely populated portion of California, is overdue for a large earthquake. Credit: Berkeley Lab


David, the topic of earthquakes is one that always gets the public’s attention, but in recent months the conversation has been elevated, particularly with the focus and visibility of earthquake hazards in California at the top of many news cycles. We’ve always had earthquakes as a potential disaster on this planet, so why is the conversation getting so much attention now, and what is the motivation for increasing our understanding of earthquake hazards and risks?

I don’t know if it was your intent, but I think you’ve actually, through your question, put your finger on one of the challenges we face in addressing earthquake hazard and risk. So let me explain that a bit, because I really believe there is a public policy issue surrounding earthquakes that is challenging and that we have to wrestle with.

If you look worldwide, I think everybody knows that there are many, many damaging earthquakes around the globe. It seems like you hear about one almost every week. And if you look at the statistics, you can see that there are approximately 1,500 magnitude-5-to-6 earthquakes per year worldwide, which is a lot. There are approximately 130 magnitude-6-to-7 earthquakes around the world per year, and on average there are even about 15 very large earthquakes of magnitude 7 to 8 per year. And in an extreme year, when we have very, very large earthquakes, up to 300,000 people have been killed. So clearly it is a major international societal challenge and a major threat to life and property.

However, if you think about any specific location where one might live, the chance of an earthquake occurring on any given day or even in any given year is relatively remote, because earthquakes tend to occur at a specific location relatively infrequently, which is great. So the point is, although people are aware of earthquakes in a general, global sense, they often don’t think about what could happen to them in the near term, and that’s a big challenge. It’s very hard to socialize the risk and generate focus, public attention, and action when it comes to earthquakes. But I think there are two things that tend to focus the attention of policymakers, and this sort of gets to the second part of your question.

One, when a major earthquake occurs, for example the great 2011 Tohoku earthquake in Japan, people these days, with all of the media attention and the web and so forth, really see the potential extent of the tremendous damage. And so I think that brings awareness and gets people thinking about it. So that’s the first thing I would say.

Second, and I think this is very important and germane to the discussion we’re having today, the science and engineering community continually attempts to raise awareness through the communication and socialization of major studies or new findings about earthquakes. So I think what you’re referring to in California right now is a result of some recent scientific discoveries and discussion of major earthquakes. Let me speak to those for a moment to give a good example of this.

And I’m sitting today at Lawrence Berkeley National Laboratory, which happens to be in the hills right above the University of California, Berkeley campus, and I am literally a quarter to a half a mile from the Hayward fault, a very major earthquake fault that runs along the entire east side of the San Francisco Bay Area. If you’re familiar with the area, it runs all the way from north of Berkeley and Oakland down to Hayward, so it’s a very long fault and certainly capable of generating magnitude-7-type earthquakes.

And there’s been new scientific understanding over the last decade that has really given us insight into the Hayward fault and the risks posed by that fault. By way of example, if you look at the last five major earthquakes that have occurred on the Hayward fault, the average interval between those earthquakes has been 140 years. So in other words, on average, there is a major earthquake on the Hayward fault every 140 years.

Well, if you look at the data, the last major Hayward fault earthquake was in October 1868, not in the lifetime of anyone alive today. And this October, it turns out, 2018 will be the 150th anniversary of that quake. So if you juxtapose those two numbers, a major earthquake on average every 140 years and 150 years since the last one, that’s not a good thing. This type of scientific understanding and evidence is really helpful in informing the public about the kinds of hazards and risks we could be subjected to from major earthquakes, and it’s generating a lot of publicity and a major discussion of the Hayward fault, which I think is what you’re referring to when you talk about the renewed interest in earthquakes in California.

You bring up a good point here. I typically associate earthquakes with California, first and foremost. But I think what I’m hearing you saying—maybe you could expand on this—is this seems to be relevant to many other regions of the US as well.

Yeah, that’s certainly true, and I think your impression is very much in line with what most people in the US think. You know, California for good reason is really synonymous with large earthquakes and earthquake risk, and of course everyone, I think, is aware of or has heard of the great San Francisco earthquake of 1906 and the fires that almost destroyed the city of San Francisco. If you think of that very prominent historical event, as well as the earthquakes in California that are not always large but very frequent, then you begin to see why people think of California as earthquake country, and they’re right. However, there are other areas of the United States that have tremendous earthquake risk as well, and people may not be as aware of those.

And I would point to the Pacific Northwest, where the Cascadia subduction zone is capable of generating very, very large earthquakes and tsunamis because of the characteristics of that fault. The Cascadia subduction zone extends all the way from Canada down through Washington and Oregon to the northern coast of California. That fault is capable of generating tremendously large earthquakes; we know that from looking at the history. So if you think of those images from the great Tohoku earthquake in 2011, the Cascadia subduction zone could produce something akin to that. It is a tremendously hazardous area to worry about.

And not everybody knows this, but one of the largest earthquakes ever observed and recorded in the US occurred in 1811, in the Midwest at New Madrid, Missouri.

So even though people think of California, earthquake hazards really are distributed across certain hot spots throughout the United States. It’s a relevant problem from east to west in certain areas.

That’s very interesting. I don’t remember ever reading about the 1811 earthquake. So, David, it sounds like much of the earthquake modeling being done today, from what I’ve seen, is already running on supercomputers. Why is exascale so important to advancing this area of research?

Yeah, that’s a very good question, so let me try to explain. Computer simulation has really become an essential and core component of earthquake design for major infrastructure, and I’m talking now about buildings, bridges, and nuclear power plants. Almost all major buildings and bridges are now designed through advanced computer simulation: a model of the entire structure is created on the computer so we can understand how that structure behaves. And this is crucial because, while we can test pieces and subsystems of infrastructure in the laboratory on the earthquake shake tables available at universities, we really can’t shake an entire bridge or building system with an earthquake motion to proof test it, if you will. So over the last four or five decades, the engineering community has built advanced computer capabilities to essentially do a virtual proof test on the computer, subjecting that building or bridge to earthquake motion and assuring that it will hold up. So computer simulation on the engineering side has really become core.
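To make that idea concrete, here is a minimal sketch of the kind of calculation those structural models perform, reduced to a single-degree-of-freedom oscillator driven by a ground acceleration record and integrated with the Newmark average-acceleration method. The natural frequency, damping ratio, and synthetic ground motion below are illustrative assumptions, not values from the project; production engineering models represent an entire structure with many thousands of degrees of freedom.

```python
import numpy as np

def sdof_response(ag, dt, f_n=2.0, zeta=0.05, m=1.0):
    """Relative displacement history of a single-degree-of-freedom structure
    with natural frequency f_n (Hz) and damping ratio zeta, subjected to the
    ground acceleration record ag (m/s^2) sampled every dt seconds.
    Integrated with the Newmark average-acceleration method."""
    k = m * (2.0 * np.pi * f_n) ** 2          # stiffness implied by the natural frequency
    c = 2.0 * zeta * np.sqrt(k * m)           # viscous damping coefficient
    beta, gamma = 0.25, 0.5                   # average-acceleration parameters

    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (-m * ag[0] - c * v[0] - k * u[0]) / m

    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)   # constant for a linear system
    for i in range(n - 1):
        p_eff = (-m * ag[i + 1]
                 + m * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt)
                        + (0.5 / beta - 1.0) * a[i])
                 + c * (gamma / (beta * dt) * u[i] + (gamma / beta - 1.0) * v[i]
                        + dt * (0.5 * gamma / beta - 1.0) * a[i]))
        u[i + 1] = p_eff / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u

# Illustrative run: a synthetic, decaying 2 Hz ground motion pulse.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = 0.3 * 9.81 * np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-0.5 * t)
u = sdof_response(ag, dt)
print(f"peak relative displacement: {np.max(np.abs(u)):.4f} m")
```

The “virtual proof test” described above is this same loop carried out over every degree of freedom of a detailed structural model, for many candidate ground motions.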

However, there’s another area where computer simulation has tremendous potential that has not yet been tapped, I would argue, and that is really the focus of our work: using high-performance computing to understand and quantify, better than we have done historically, the ground motions we can expect from future earthquakes. After all, to understand the structural response, you need to know how the ground is moving and how that ground is shaking the structures. High-performance computing is really in its infancy in being applied to that particular problem, and that’s what we’re so excited about with the exascale developments.

We know from observations of actual earthquakes that ground motions are highly variable and are the result of complex processes. The ground motions at any particular site depend on how the fault that causes the earthquake ruptures, as well as on how the seismic waves propagate through the earth and arrive at that site. So the problem tends to be what engineers would call a very site-specific problem: the ground motion at any point is very specific to that site.

So historically, what have we done? We have looked at records of ground motions from past earthquakes and tried to use those motions to predict the motions of future earthquakes. But there’s a challenge there: precisely because those motions are so site-specific, it’s hard to use data from some other location to understand the motions at your site. And that is exactly why we want to use high-performance computing to model these complex earthquake processes and develop truly site-specific estimates that let us better quantify the ground motion.

However, you know, the big challenge, one of the big hurdles to date, has been the tremendous computational horsepower required to do regional-scale ground motion simulation. Even today’s biggest computers can’t get us there in terms of the frequency resolution that we’d like.

So our team and others in the community are extremely excited because with exascale computing, we’re entering into an era where, for the first time, we are going to have the computational resources available so we can do these types of very, very large-scale regional simulations, and it’s really something that we’re quite excited about.

Let’s take a look, maybe a bit closer if you would, at your specific research as part of the Exascale Computing Project, and maybe you could tell us a little bit more about your collaborators and partners in this effort.

Sure, happy to, and let me focus on that for a moment. Our exascale application development, the software we are developing to be ready for exascale-level computers, is really focused on a framework for simulating both earthquake hazard, which is the future ground motion, and earthquake risk, which is the response of the infrastructure to those ground motions.

And to accomplish that, we are really coupling two technologies. We are coupling an advanced regional-scale geophysics model, now under development, that will be able to model the fault rupture process and the subsequent propagation of seismic waves at the scale in question, and then we are taking those motions and coupling them to structural models of the type I mentioned earlier, the kind the engineering community uses.
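As a rough illustration of that hand-off, the sketch below mocks up the coupling pattern: surface motions extracted from a regional simulation are looped over site by site and fed into a structural evaluation. The site names, the synthetic motions, and the oscillator-based demand proxy (much simpler than the structural sketch above) are all invented for illustration and stand in for the project’s actual geophysics output and engineering models.

```python
import numpy as np

def peak_drift_proxy(ag, dt, f_n=2.0, zeta=0.05):
    """Crude structural demand proxy: peak relative displacement of a damped
    oscillator driven by ground acceleration ag, integrated with a simple
    semi-implicit Euler scheme (adequate for small dt)."""
    wn = 2.0 * np.pi * f_n
    u = v = peak = 0.0
    for a_g in ag:
        a = -a_g - 2.0 * zeta * wn * v - wn ** 2 * u   # relative acceleration
        v += a * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak

# Mocked-up "regional simulation output": surface motions at two hypothetical sites.
dt = 0.005
t = np.arange(0.0, 20.0, dt)
surface_motions = {
    "bridge_A":   0.4 * 9.81 * np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.3 * t),
    "hospital_B": 0.2 * 9.81 * np.sin(2.0 * np.pi * 3.0 * t) * np.exp(-0.2 * t),
}

# The hand-off: loop over infrastructure sites, feed each extracted ground
# motion into the structural evaluation, and collect the demand measures.
for site, ag in surface_motions.items():
    pga_g = np.max(np.abs(ag)) / 9.81
    drift = peak_drift_proxy(ag, dt)
    print(f"{site}: PGA = {pga_g:.2f} g, peak drift proxy = {drift:.3f} m")
```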

And so really our objective is to have what I would refer to as an end-to-end simulation that goes all the way from the rupture of the fault to the final response of a particular piece of infrastructure. This is really a multidisciplinary problem, so the team we’ve put together for our exascale application development includes computational scientists, because when you’re using these emerging, very sophisticated computers, you really need people with a lot of expertise in how to make those computers operate most efficiently. So we have computational scientists as well as computational mechanics engineers and earth scientists who are developing the necessary physics models that feed this framework.

We have a broad, integrated team, with participants from Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and the University of California. We are developing a numerical test bed for our co-development: an extremely detailed, regional-scale model of the entire San Francisco Bay Area that will include the Hayward fault we spoke about earlier. To achieve the resolution we need, this is going to require dividing the region into hundreds of billions of individual segments on the computer and solving literally hundreds of billions of mathematical equations over the duration of a simulated earthquake, which is on the order of one, two, or more minutes.

And so it is a tremendously big computational task that we have ahead of us.
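A simple back-of-the-envelope calculation shows where numbers of that size come from. The sketch below estimates the grid spacing and total point count needed to resolve a given peak frequency in a regional domain using a points-per-wavelength rule of thumb; the domain dimensions, minimum shear-wave velocity, and points-per-wavelength value are illustrative assumptions, not the project’s actual parameters.

```python
# Back-of-the-envelope estimate of how many grid points a regional
# wave-propagation model needs. All numbers below are illustrative
# assumptions, not the project's actual parameters.

def grid_points(lx_km, ly_km, lz_km, vs_min_mps, f_max_hz, points_per_wavelength=8):
    """Grid spacing (m) and total point count needed to resolve f_max_hz."""
    h = vs_min_mps / (points_per_wavelength * f_max_hz)   # required grid spacing in meters
    counts = [dim_km * 1000.0 / h for dim_km in (lx_km, ly_km, lz_km)]
    return h, counts[0] * counts[1] * counts[2]

for f in (2.0, 5.0, 10.0):
    h, n = grid_points(120.0, 80.0, 30.0, vs_min_mps=500.0, f_max_hz=f)
    print(f"f_max = {f:4.1f} Hz -> spacing ~{h:5.1f} m, ~{n:.2e} grid points")
```

With these illustrative inputs, pushing the resolved frequency from 2 Hz toward 5 or 10 Hz takes the point count from billions into the hundreds of billions and beyond, which is the scale described above.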

So it sounds like you’ve made a really great case for why exascale is needed in this area of research, but could you say a little bit more about how the exascale computing project itself has made a difference in your research effort?

Yeah, so let me tell you the specific things we’re going to be able to do as a result of exascale capabilities that have just not been possible up to this point. We are relying on exascale computing to push the frequency resolution of our ground motion model. And by frequency, if you’re not familiar with that term, you can just think of the number of vibrations per second that we model. It’s a measure of how fast the ground is vibrating, if you will.

Traditional simulations that we have been able to compute at regional scale, for example on our San Francisco Bay Area model, have been on the order of maybe one or two hertz at most. So we can replicate or simulate ground motions that vibrate back and forth one or two times per second; the computational capability has really limited us to that. However, to be relevant for structural evaluation, those ground motions really need to be resolved up to maybe five or ten hertz, which is much higher. The reason is that structures tend to vibrate naturally in the range of five to ten hertz. So you can see there’s a mismatch between what we’ve been able to do up to now and what we’d like to be able to do to really understand the response of the structure.

One of the major thrusts of our project is to push our regional-scale geophysics model to a much higher frequency resolution. To give some sense of how big a lift that is, consider the computational requirements: because doubling the resolved frequency means halving the grid spacing in all three spatial dimensions and roughly halving the time step as well, a doubling of frequency results in about 16 times more computational effort. So going from two hertz to five or ten hertz is a huge effort, and it is certainly an exascale-level problem. But it’s also why exascale is so enabling, because for the very first time we’re going to be able to resolve ground motions in the frequency range of most interest to infrastructure, and that is, in fact, what’s most valuable.
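That fourth-power scaling is easy to see in numbers. The small computation below compares the cost of a few target frequencies against a nominal 2 Hz baseline; the baseline and targets are simply the round numbers mentioned above.

```python
# Cost grows roughly as frequency**4: refining the grid in three spatial
# dimensions and shortening the time step each scale linearly with frequency.
f_ref = 2.0                                  # nominal frequency of today's regional runs (Hz)
for f_target in (4.0, 5.0, 10.0):
    factor = (f_target / f_ref) ** 4
    print(f"{f_ref:.0f} Hz -> {f_target:.0f} Hz : ~{factor:,.0f}x more compute")
# prints roughly 16x, 39x, and 625x respectively
```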

In addition, as we move forward in this project, we are going to reach a point where our ability to compute is going to be better than our ability to characterize the subsurface geology based on existing data. So a second very important computational piece of our project is what’s called full waveform inversion. We will use the frequently occurring small earthquakes we have in California all the time, magnitude 2, magnitude 3, look at the measured response from those earthquakes, and use our computational models to do an inversion that actually improves our geologic models, so that those models will be more accurate and better for simulating very large earthquakes. That’s a second very computationally intensive piece that exascale is really enabling. This effort just couldn’t move forward without exascale. This is a really exciting time for us.
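For readers unfamiliar with inversion, here is a deliberately tiny sketch of the underlying loop: simulate data from a candidate earth model, compare it with what was recorded, and update the model to reduce the misfit. This toy uses a linear travel-time forward model and plain gradient descent; real full waveform inversion matches entire seismograms and computes gradients with adjoint methods, which is part of what makes it so computationally demanding. All quantities below are synthetic.

```python
import numpy as np

# Toy inversion loop: recover layer slownesses from "recorded" travel times.
# The forward model here is linear (travel time = path length x slowness);
# the simulate/compare/update structure mirrors full waveform inversion.
rng = np.random.default_rng(0)

n_layers, n_obs = 6, 40
true_slowness = 1.0 / np.linspace(800.0, 3000.0, n_layers)   # s/m, slower near the surface
G = rng.uniform(0.0, 500.0, size=(n_obs, n_layers))          # ray path length in each layer (m)
t_obs = G @ true_slowness + rng.normal(0.0, 1e-3, n_obs)     # noisy "recordings"

model = np.full(n_layers, 1.0 / 1500.0)   # starting earth model (uniform 1500 m/s)
step = 1e-8                               # fixed gradient-descent step size
for _ in range(5000):
    t_sim = G @ model                     # forward simulation with the current model
    residual = t_sim - t_obs              # misfit against the recordings
    model -= step * (G.T @ residual)      # gradient of 0.5 * ||residual||^2

print("true velocities (m/s):     ", np.round(1.0 / true_slowness))
print("recovered velocities (m/s):", np.round(1.0 / model))
```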

So as you look ahead, David, what could be the tangible, practical outcomes of this work in terms of protecting against these major risks?

Yeah, so I’ll mention two overarching things and then a DOE-centric objective. First of all, there are two major long-term, high-level objectives of our work.

One is simply to add new insight into our understanding of earthquake processes. Earthquake data is hard to get and extremely sparse. If you look around the world at magnitude-7-or-greater earthquakes, we have only 80 or 90 actual ground motion records in the very near field of those earthquakes, and by the very near field I mean sites within 10 kilometers. That’s not a lot of data. We’re doing simulations right now that will give us literally tens of thousands of synthetic records representing the ground motion in the near field, so just the ability to have more data, even synthetic data, will give us more insight into the variability and types of earthquake motions. So understanding is number one.

Second, we really want to reduce the uncertainties in our predictions of ground motion and infrastructure response. I mentioned early on that ground motion tends to be very, very site-specific, dictated by the physics of the site and the propagation of the waves to the site. So empirically based methods, that is, methods based on historical records, are always going to have significant limitations and significant uncertainty. We want to drive those uncertainties as low as we can, and so the second big thing is really driving down the uncertainties in the ground motion through these large-scale simulations.

And then finally, a DOE issue. DOE, interestingly enough, is one of the biggest owners of mission-critical, unique facilities in the United States. The DOE infrastructure has been estimated to be worth somewhere between $5 billion and $10 billion, so DOE has a lot of unique facilities with seismic hazard and risk problems. These technologies will be relevant both to DOE facilities and to facilities critical to DOE missions, such as energy systems. So I think there are three potentially very beneficial outcomes of this work.

So let me just say that we essentially want an end-to-end computational capability that allows us to generate realistic and accurate scenarios of future earthquakes on high-performance computers. Looking ahead, the great opportunity for exascale to positively impact the economics and the life safety of many of these systems in many places across the US is really a critical outcome of this project.

Related Links

Article: Assessing Regional Earthquake Risk and Hazards in the Age of Exascale

EQSIM project: High-Performance, Multidisciplinary Simulations for Regional-Scale Earthquake Hazard/Risk Assessments
