Getting Computing Luminary Jack Dongarra’s Perspective on the Exascale Computing Project

Exascale Computing Project · Episode 105: Computing Luminary Jack Dongarra’s Perspective on the Exascale Computing Project

By Scott Gibson

Hi. Welcome to the Let’s Talk Exascale podcast from the US Department of Energy’s Exascale Computing Project. I’m your host, Scott Gibson. I’m joined in this episode by Jack Dongarra, a computing pioneer. Our discussion took place on June 19.

He is an R&D staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory.

I will touch upon some highlights of his exceptional career.

Jack was recently elected to the National Academy of Sciences, or NAS, for his distinguished and continuing achievement in original research.

In 2022, he received the A.M. Turing Award from the Association for Computing Machinery, or ACM. That honor recognized his pioneering contributions to numerical algorithms and libraries that enabled high-performance computational software to keep pace with exponential hardware improvements for over four decades.

Jack is professor emeritus at the University of Tennessee, Knoxville, where he recently retired as founding director of UT’s Innovative Computing Laboratory, or ICL, an ECP collaborator.

Along with his roles at ORNL and UT, he has served as a Turing Fellow at the University of Manchester in the United Kingdom since 2007.

He earned a bachelor’s degree in mathematics from Chicago State University, a master’s in computer science from the Illinois Institute of Technology, and a doctorate in applied mathematics from the University of New Mexico.

Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory.

Jack is a fellow of the ACM, the Institute of Electrical and Electronics Engineers, the Society for Industrial and Applied Mathematics, the American Association for the Advancement of Science, the International Supercomputing Conference, and the International Engineering and Technology Institute. Additionally, he has garnered multiple honors from those organizations. He is also a member of the National Academy of Engineering and a foreign member of the UK's Royal Society.

[Scott] So first of all, thanks for joining me. Thanks for being on the program. The end of the Exascale Computing Project is in sight, with the technical work wrapping up in December of this year. This has been quite a journey. And ECP teams have developed a software ecosystem for exascale. They’ve provided scientists with very versatile tools. Will you share your perspective on how the project has progressed over its lifetime? And please tell us what you’ve observed from the vantage point afforded by the participation of the Innovative Computing Laboratory at the University of Tennessee.

[Jack] Sure. Let me first say thanks for the opportunity to be on the show here. Let me just say that the end of the Exascale Computing Project is really both a success and a huge risk. The project has delivered great capabilities to the Department of Energy, both in terms of human and technical accomplishments. Now, however, the DOE is highly vulnerable to losing the knowledge and skill of this trained staff, as future funding is unclear.

So ECP is ending, and there's no follow-on project for the roughly 1,000 people, 800 at the DOE labs and around 200 at universities, who have been engaged in ECP. And it's really been a terrific project from the standpoint of application people, algorithm designers, and software people working together on a common vision of reaching exascale. The hardware vendors have been involved in that as well.

Today, without funding, those 1,000 people are really uncertain about their future, and that uncertainty generates great anxiety among lab staff. When I talk to people at the labs, I sense that anxiety, particularly among junior researchers, many of whom have built almost their entire careers on this project, which has been going for 7 years. And we don't have a follow-on project at a scale that would be able to use their talents.

In some sense, we've not brought this project to an end in a very satisfying way. The project is ending; we've delivered exascale machines. We have applications running on at least one of those machines today and showing very impressive results. But, you know, the follow-on isn't there.

In 2019, the DOE, under its Advanced Scientific Computing Research organization, put together a set of workshops and town halls that were meant to address AI for science, energy, and security. Those were well attended, and reports were written that discussed the challenges and how to overcome some of them. Then, I guess, what happened next was COVID. The pandemic slowed everything down, the effort didn't get as much traction as it probably should have, and in some sense we haven't recovered from that.

There’s a great deal of effort going on behind the scenes. Many colleagues are trying to work with DOE and Congress to put together a plan. I know Rick Stevens has put together a plan for AI for science, energy, and security. But that’s something that’s going to take time before funds can be appropriated, and the program actually put together. The unfortunate part is that the exascale computing program is about to end, and there’s no follow-on project at that scale that would be able to engage those people; so that’s really the crisis.

One thousand people have been devoted to putting together the ECP program. And that's about to end, with about 6 months left before the program hits the wall. With that uncertainty, and with many other opportunities for people with the talents that ECP has assembled, I'm sure they will, unfortunately, find jobs in other areas. The cloud vendors are seeking just this kind of talent to move them forward.

So it's been a great, great success. It's also been very challenging, and we always like challenging problems. I think we've put in place solutions for many of the issues that we had, and we see great promise for the future in using those exascale machines and applications. The unfortunate part is that we don't have a way to retain the talent, the cadre of well-educated, well-trained scientists who could carry on with the program and with scientific computing for DOE.

[Scott] You’ve said that pursuing exascale computing capability is all about pushing back the boundaries of science to understand things that we could not before. In what ways do you believe ECP has put the right sophisticated tools in place to reach that objective?

[Jack] One of the nice things about working on this project is that adequate funding was there to develop applications and software; of course, we can always use more. But in this case, a substantial amount of funding was put in place to target 21 applications. The whole point of ECP is the science, and those 21 applications were identified. They're largely energy related: wind energy, carbon capture, nuclear energy, photon science, chemistry, QCD, astrophysics, and the list goes on. The exascale computers were put in place to help meet the challenges of those applications and push back the boundaries of science for them.

So part of the money went for the applications; another sizeable amount of money went for the algorithms and software. The software stack for ECP has, I think, 84 projects in it, and they cover a whole range of things: core capabilities needed to run on those exascale machines, compiler support, numerical libraries, and tools and technologies, all building toward the software development stack that's been put in place. They deal with many of the major components used in applications, such as visualization, minimizing communication, and checkpointing, and they provide a larger ecosystem for exascale.

Those 84 projects are being worked on and they’re coming to conclusion. They’re being worked on at the labs and the universities, trying to, again, meet the challenge of developing components that will run at a reasonable rate at scale on those exascale machines.

You know, it's really been a pleasure working with colleagues in different areas to put those tools together. My group in Tennessee is working on six components of that software stack. We're working on a numerical library for dense linear algebra called SLATE and a set of numerical routines for GPUs called MAGMA. We're working on iterative solvers in a project called Ginkgo, on performance tools called PAPI, and on a programming aid called PaRSEC that helps you effectively use the large amount of parallelism in these machines. And we've been working on Open MPI for a long time, providing the basic fabric on which all of these applications and software run on those exascale machines.
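To give a sense of what "the basic fabric" means in practice, here is a minimal sketch (not taken from ECP or ICL code; the file name and data values are invented for illustration) of the kind of MPI communication pattern that an implementation such as Open MPI carries underneath higher-level libraries and applications: each rank computes a local partial result, and a collective operation combines them across the whole machine.

```c
/* Minimal sketch: a distributed dot product combined with MPI_Allreduce.
 * Illustrative only; compile with something like: mpicc dot.c -o dot */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank owns a slice of two (here trivially constant) vectors. */
    const int n_local = 1000;
    double local = 0.0;
    for (int i = 0; i < n_local; i++)
        local += 1.0 * 2.0;   /* stands in for x[i] * y[i] */

    /* Combine the partial sums from every rank into a global result. */
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product across %d ranks: %f\n", size, global);

    MPI_Finalize();
    return 0;
}
```

Libraries such as SLATE, Ginkgo, and PaRSEC layer their own distribution and scheduling logic on top of primitives like this one.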

So, it's been an engaging project for the last 7 years, and one that I think has produced many very worthwhile components. It's been very rewarding to see the application scientists and the software developers have adequate resources to really invest in the work, and then to see those tools picked up in applications and drive them to much higher levels of performance than we had on the previous generation of machines. It's something I would consider a highlight of my career: working with the DOE, putting software together, and putting it in place so that the applications can use it effectively.

[Scott] That’s saying a lot … a highlight from your career. Has the Department of Energy ever done anything like ECP before? To my knowledge, they haven’t.

[Jack] Yeah, this is really something of a first in some sense. They've done things, of course, at a smaller level, but this is the first at such a broad level. Basically, the whole Exascale Computing Initiative, or ECI, was to develop these three exascale machines [Frontier, Aurora, and El Capitan] and then put in place the applications, the algorithms, and the software. The ECP part of that is the $1.8 billion that was devoted to those areas. The whole ECI was about $4 billion over the 7 years, and the rest went to purchasing and putting in place the hardware that can be used to solve those very challenging science problems.

This is the first time in my career that I've been engaged in a project with 1,000 people working toward the one goal of developing tools and applications for those science problems, putting in place the hardware that can effectively deal with them, and putting in place a whole software stack that can be used across those applications. So it really was, and is, a great project. There are many accomplishments, and the unfortunate part is that there's nothing to follow on.

[Scott] You mentioned the science being the focus of the work—that’s what it’s all about. With respect to what you just said, is there more you could say about the uniqueness of ECP in terms of the magnitude of its accomplishments and its importance to science?

[Jack] Well, again, getting 1,000 people on the same page, working toward the common goal of developing those applications and putting in place the infrastructure needed to run them on the very sophisticated set of hardware being deployed, has really been a great thing to watch.

Going to an annual meeting for ECP is another engaging thing. Here we have a room full of 400, 500, or 600 people talking about the applications, the hardware, the software, and the issues involved in getting everything to work correctly. It was a very energizing experience for me and for the team at Tennessee working on these various software components. It's something we haven't experienced before at this level in developing high-performance computing technologies.

[Scott] Jack, what sorts of initiatives or actions would help carry ECP's legacy into the future properly? We've talked a lot about the need to do that. Do you have ideas or suggestions?

[Jack] Well, I think the project that was supposed to be the follow-on, AI for science, energy, and security, has the potential to be at the right level to make a considerable impact on science problems and to drive forward a lot of the technology that we've put in place. It's a question of getting the right level of funding in place so that project can get initiated.

So I think AI is going to have a tremendous impact. It is already having an impact on science. Going forward, we see it as complementing how we do physics-based simulation, complementing the more traditional ways of doing things. It's going to provide us with better, more effective solutions in a shorter time. And, you know, we see the fruits of that already being reported. I see that as really the next major phase in how we address these major challenges in advanced computing.

[Scott] Yeah, AI and machine learning and deep neural networks and all those sorts of things have really permeated the field in a big way. Are there other aspects of ECP or exascale in general that you feel like we should discuss that we’d be remiss in not mentioning?

[Jack] Well, I guess the one thing we can say is that in the future we should really engage in co-design. Co-design is where we get the hardware people together with the application people, the algorithms people, and the software people to help design hardware that can effectively meet some of the challenges that we have. We talk about that, but we really haven't done it; we haven't done it in the ECP project.

And I think business as usual will not really work going forward. We really need to think about end-to-end co-design, developing hardware that can effectively match the kinds of problems we have today. When we take a look at the performance on these exascale machines, the applications are really capturing just a small percentage of the theoretical peak. And that's in some sense because of the way those machines are architected: taking commodity processors, putting them together with an interconnect, and then using them to drive exascale was not really the most effective way.

If we take a look at the cloud providers, the hyperscalers, those guys are designing their own hardware. They're designing hardware that is specific to the problems they have; they're not relying on commodity processors. They're designing hardware that matches their application needs. We should be taking a step back and doing exactly that kind of thing in the scientific area: looking at what we can do and designing hardware that better matches the applications we're going to be dealing with in the next round.

If we put the hardware together in that way, we can see very effective use of it going forward, and we won't have this problem of getting just a few percent return on our investment in that hardware. I think end-to-end co-design is something we need to do. We also need to prototype hardware, and do it at a scale that makes sense, so that we can put hardware together and see whether it's the kind of hardware that's required to help us solve our application problems.

I can remember a time when we had many, many exploratory hardware projects going on at universities and even at some labs, probing the architecture space to see what the right match would be. I think we need to go back and do some of that today. We have the ability to design chiplets that could be used to address certain aspects of our application problems. We should be looking at that and trying to understand how we can effectively use that kind of technology in building the next high-performance machines.

[Scott] In terms of the social dynamics, or the ways of putting this sort of environment in place, what do you think could or should happen to create the kind of collaborative co-design you were just describing?

[Jack] First, it is crucial to secure adequate funding for the task at hand, and we must find a way to reorient our approach. The first step is to adopt the mindset that investing in a program focused on designing hardware specifically tailored to application needs is essential, rather than relying solely on readily available commercial products. Currently, the prevailing model for constructing exascale machines involves allocating a certain amount of funding, setting a target performance, and building a machine that meets that specific benchmark. Unfortunately, this approach often relies on measuring performance with the Linpack benchmark. Consequently, a machine designed to achieve a Linpack target may not perform optimally on various real-world applications.

The Linpack benchmark solves a dense system of linear equations and is dominated by matrix-matrix multiplication, which is not the core operation in many of our applications. Therefore, it is essential to develop architectures that prioritize the fundamental operations relevant to our specific applications and maximize their efficiency. This involves experimenting with innovative architectures that aim to alleviate the challenges associated with memory usage. Data movement currently incurs substantial costs, and one potential solution is to design machines that handle data transfer efficiently or bring the computation closer to the data itself. By addressing this deficiency, we can effectively accommodate the needs of our applications.
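To make the mismatch concrete, here is a back-of-the-envelope, roofline-style comparison (illustrative numbers of my own, not figures from the interview) of the arithmetic intensity of the Linpack kernel versus a sparse matrix-vector product of the kind many simulation codes depend on:

```latex
% Illustrative arithmetic-intensity comparison (rough, order-of-magnitude numbers).

% Dense LU factorization, the kernel behind the Linpack (HPL) benchmark,
% performs about (2/3) n^3 flops on roughly 8 n^2 bytes of matrix data:
\[
  I_{\mathrm{HPL}} \approx \frac{\tfrac{2}{3}\, n^{3}}{8\, n^{2}}
                 = \frac{n}{12} \ \text{flops per byte, growing with } n .
\]

% A sparse matrix-vector product, typical of many simulation codes, does
% about 2 flops per stored nonzero while moving roughly 12 bytes per
% nonzero (an 8-byte value plus a 4-byte column index), before even
% counting the vector traffic:
\[
  I_{\mathrm{SpMV}} \lesssim \frac{2}{12} \approx 0.17 \ \text{flops per byte.}
\]
```

On machines whose balance point is on the order of ten or more flops per byte of memory bandwidth, the first kernel can run near peak while the second is limited by bandwidth to a small fraction of peak, which is the gap that the end-to-end co-design argument is aimed at.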

[Scott] Fantastic. Anything else to add?

[Jack] I think that’s it. I’ll get off my soapbox.

[Scott] Well, thanks so much for being on ECP’s podcast.

[Jack] It was my pleasure, and I look forward to doing it again in the future.

Related Links

  • Exascale Computing Project research
  • SLATE numerical library for linear algebra
  • MAGMA numerical routines for GPUs
  • Ginkgo iterative solvers
  • PAPI performance tools
  • PaRSEC programming aids for the effective use of abundant parallel processing
  • Innovative Computing Laboratory at the University of Tennessee


Scott Gibson is a communications professional who has been creating content about high-performance computing for over a decade.