Supporting Scientific Discovery and Data Analysis in the Exascale Era

Exascale Computing Project · Episode 75: Supporting Scientific Discovery and Data Analysis in the Exascale Era

 

By Scott Gibson

James Ahrens, senior scientist in the Applied Computer Science Group at Los Alamos National Laboratory

The Data and Visualization portfolio in the US Department of Energy’s (DOE) Exascale Computing Project (ECP) is delivering data management software necessary to store, save state, share, and facilitate the analysis of exascale data. The portfolio aims to provide scalable analytics and visualization software that effectively supports scientific discovery and the understanding of massive data. This work will support Energy Department mission-critical codes on the coming exascale computing platforms.

The leader of ECP’s Data and Visualization portfolio, Jim Ahrens, a senior scientist in the Applied Computer Science Group at Los Alamos National Laboratory, is the guest on this episode of the Let’s Talk Exascale podcast.

Our topics: A summary of the challenges the portfolio teams face, the project’s mission, a high-level way of thinking about the portfolio, the existing packages that are the starting point for the portfolio, breakdowns of the goals (storage, services, and visualization), the new storage technology from Intel called DAOS, and how the broader high-performance computing community can gain value from the project’s hard work right now.

Interview Transcript

Gibson: We are pleased to have Jim Ahrens of Los Alamos National Laboratory to talk with us about the Exascale Computing Project Data and Visualization portfolio. Hey, Jim, thank you for being on the podcast.

Ahrens: Thanks, Scott. Glad to be here.

Gibson: We want to start with context. The ECP Data and Visualization portfolio supports achieving exascale by addressing data and visualization challenges. How would you summarize those challenges?

Ahrens: A key part of any computing system is your data. It’s the legacy of your results. It’s the thing you analyze, and it’s what’s preserved after your computation. These days we can compute much faster than we can save and store data. Specifically, exascale system concurrency is expected to grow by five or six orders of magnitude, yet system memory and I/O [input/output] bandwidth are only expected to grow by one or two orders of magnitude. That discrepancy is front and center for us, and we need to figure out methods to address it.

Gibson: What’s the mission of the data and visualization portfolio, and how do the subproject teams in the portfolio plan to accomplish it?

Ahrens: The mission is to deliver software. One of the great things about exascale is that it puts software front and center, and so we will have solutions for our ECP application teams in the 2023 timeframe, when the exascale systems are delivered.

One way I think about this portfolio is as a set of interfaces. Here’s the data and then [we apply] services, what you can do with it [the data], like store it, do a checkpoint restart with it, do some analysis. And so that informs the projects in the portfolio. Each project has that notion of, ‘OK, give me the data, and I will perform a service on it.’

Gibson: The portfolio is extensive. Can you give us a high-level way of thinking about it?

Ahrens: As I was saying, with these interfaces and services, what we’d like to do is describe the interface in a declarative sense. What I mean by declarative is that you specify what service you want performed, not how to perform it.

For example, for compression you might want to reduce the size of your scientific data on storage, but you might want to provide a user-defined level of precision. And this is, for example, what the SZ project does. It says, “Compress my floating point scientific data but maintain a particular precision level so I can do analysis afterwards.”

Gibson: The Data and Visualization portfolio started with certain existing software packages. Will you tell us about those?

Ahrens: Sure. In the portfolio, there are storage [projects], checkpoint restart packages, and post-processing visualization tools. So, in terms of the storage tools, we have all the major players like ADIOS, MPI-IO, HDF5, and PnetCDF. For checkpoint restart, we’re building on things like SCR as the starting point [to produce VeloC]. And for [in situ visualization and analysis as well as] post-processing visualization tools, we are building on ParaView and VisIt. All these tools are a great starting point for us.

Gibson: The goals of the Data and Visualization portfolio involve storage, services, and visualization. Let’s examine each of the goals individually, beginning with storage.

Ahrens: Storage is a key part of the ECP portfolio. It provides services, including the ability to store, write, and archive your data, that kind of idea. Sometimes it also provides checkpoint restart capability.

With the teams behind the top DOE HPC storage tools (ADIOS, MPI-IO, HDF5, and PnetCDF), we’re really in a good spot to evolve the software with more functionality, as well as to integrate it with applications and make sure it works on our new exascale architectures.

As an example, let me talk a little bit about HDF5, though all the projects are doing this kind of work. The HDF5 team worked with a particular ECP application, in this case an astrophysics application. They were able to look at the code and do some analysis with an I/O characterization tool called Darshan, which is supported by the DataLib project.

[With] the Darshan tool, [the HDF5 team] basically looks at the I/O of a particular astrophysics code, in this case one called Athena, and identifies what’s going on. What they found was a collection of small writes, which are typically really slow on rotating storage. You do not want to do a bunch of small writes; you really want one large read or write of a big block of data. So, they were able to take those many small writes and put them together into one large block, and that improved their I/O performance. I/O had been taking 40 percent of the execution time, and by overlapping that write with computation in an asynchronous way and then writing it out, they were able to basically make the I/O, as it appeared to the simulation, go away. That gave them much, much faster performance of their application code and significantly reduced the CPU hours used, because you’re not waiting for the I/O system to complete.
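
[Editor’s note: To make the small-write aggregation pattern concrete, the following sketch (not the actual Athena or HDF5-team code; the file and dataset names are invented) buffers many small records in memory and issues a single large HDF5 dataset write instead of thousands of tiny ones.]

```cpp
// Sketch: aggregate many small records in memory and write them to HDF5
// as one large, contiguous dataset write instead of many tiny writes.
// File and dataset names are made up for illustration.
#include <hdf5.h>
#include <vector>

int main()
{
    const hsize_t n_records = 100000;              // many small records
    std::vector<double> buffer;
    buffer.reserve(n_records);

    // Aggregate: accumulate the small records in memory first.
    for (hsize_t i = 0; i < n_records; ++i)
        buffer.push_back(static_cast<double>(i));  // stand-in for simulation output

    // One large write: a single H5Dwrite call moves the whole block.
    hid_t file  = H5Fcreate("aggregated.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, &n_records, nullptr);
    hid_t dset  = H5Dcreate2(file, "records", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, buffer.data());

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```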

Options that all these storage projects offer include asynchrony, that is, doing asynchronous I/O; topology awareness, meaning understanding the network and storage resources available to them; and an interface layer. In this particular case, for HDF5, there’s something called the virtual object layer, or VOL, which allows you to map the HDF5 API to different HPC storage systems. By having different driver-based back ends, you can greatly improve performance. So, from an application’s perspective, you don’t see any difference [in the application’s HDF5 I/O interfaces], but, meanwhile, there are engineers working on the back end making your I/O more performant over time.

Gibson: Interesting solutions. I understand you have a new storage technology, DAOS from Intel.

Ahrens: Yep, you bet. That’s coming for the exascale machines. DAOS is out there now; it’s an open-source project from Intel, and it stands for Distributed Asynchronous Object Store. In the past, we had what I call more disk-drive-based solutions, where you have a POSIX interface, which basically means you expect synchronous behavior and you write big blocks of data. What DAOS tries to do (and you see solutions like this across the architectures) is use burst buffer technology, where you have really fast memory, like a cache, to read and write from and then bleed that data out to the larger storage system, which might be rotating storage, but the user doesn’t see that. What they see is the fast writes and reads from this NVRAM, and, meanwhile, they can keep going. So, they’re able to, say, dump their data out and write to the DAOS system. DAOS organizes the data so that it writes really quickly, and then, while the simulation continues, DAOS makes sure everything is stored safely in the background.
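
[Editor’s note: As a rough illustration of the burst-buffer pattern described above, here is a generic sketch; it does not use the real DAOS API, and names such as bb_write and slow_tier.bin are invented. The application “writes” into a fast in-memory staging area and returns immediately, while a background thread drains the staged data to a slower, durable tier.]

```cpp
// Generic sketch of the burst-buffer idea (not the DAOS API): fast,
// non-blocking writes into a staging buffer, drained to slow storage
// by a background thread while the "simulation" keeps computing.
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex              mtx;
std::condition_variable cv;
std::vector<double>     staging;          // stands in for fast NVRAM
bool                    done = false;

// Application-facing write: copy into the fast tier and return immediately.
void bb_write(const std::vector<double>& data)
{
    std::lock_guard<std::mutex> lock(mtx);
    staging.insert(staging.end(), data.begin(), data.end());
    cv.notify_one();
}

// Background drain: flush staged data to the slow, durable tier.
void drain()
{
    std::ofstream slow("slow_tier.bin", std::ios::binary);
    std::unique_lock<std::mutex> lock(mtx);
    while (!done || !staging.empty()) {
        cv.wait(lock, [] { return done || !staging.empty(); });
        std::vector<double> batch;
        batch.swap(staging);              // take ownership of the staged data
        lock.unlock();                    // let the app keep writing meanwhile
        slow.write(reinterpret_cast<const char*>(batch.data()),
                   static_cast<std::streamsize>(batch.size() * sizeof(double)));
        lock.lock();
    }
}

int main()
{
    std::thread drainer(drain);
    std::vector<double> step(1024, 0.0);  // one time step's worth of data
    for (int s = 0; s < 8; ++s)
        bb_write(step);                   // returns right away; drain overlaps
    {
        std::lock_guard<std::mutex> lock(mtx);
        done = true;                      // signal shutdown once all steps are staged
    }
    cv.notify_one();
    drainer.join();
    return 0;
}
```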

Gibson: Again, we’re talking about the ECP Data and Visualization portfolio goals. We just covered storage, so, Jim, let’s look at services.

Ahrens: I think services are sort of the future, right? Essentially, what you really want to express in all these things (and data has a particular advantage here) is what the user wants done, not how to do it, perhaps under some constraints. A great example of this is something like scientific data compression.

We have two data compression projects in the portfolio. One is ZFP, and one is SZ. For SZ in particular, I’ll give you a highlight. They were able to work with a cosmology application, ExaSky, and its PI, Salman Habib. Salman basically said, ‘I’d like to compress my data by a factor of 5, but my analysis requires accuracy up to 10⁻³. Can you do that?’ So, the team set about reducing the I/O overhead by shrinking the data that has to be stored and read while keeping the accuracy that Salman requested. Now Salman’s data dumps are five times smaller, so he’s able to save more data and do more science and yet still have it be of the accuracy that he needs. So, that was compression. Let me talk about some other pieces in the portfolio.
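
[Editor’s note: To give a flavor of what such an error-bounded, declarative compression request looks like in code, here is a minimal sketch using ZFP’s fixed-accuracy mode (the portfolio’s other compressor); SZ exposes an analogous absolute-error-bound option. The 1e-3 tolerance mirrors the accuracy requirement mentioned above, and the array contents are placeholders.]

```cpp
// Sketch: error-bounded compression with ZFP's fixed-accuracy mode.
// The caller states *what* it needs (keep absolute error below 1e-3),
// not *how* the compressor should achieve it.
#include <zfp.h>
#include <cstdlib>
#include <vector>

int main()
{
    const size_t nx = 64, ny = 64, nz = 64;
    std::vector<double> data(nx * ny * nz, 1.0);    // placeholder field

    // Describe the uncompressed array.
    zfp_field* field = zfp_field_3d(data.data(), zfp_type_double, nx, ny, nz);

    // Request a compressed representation with absolute error <= 1e-3.
    zfp_stream* zfp = zfp_stream_open(nullptr);
    zfp_stream_set_accuracy(zfp, 1e-3);

    // Allocate a buffer large enough for the worst case and attach it.
    size_t bufsize = zfp_stream_maximum_size(zfp, field);
    void* buffer = std::malloc(bufsize);
    bitstream* stream = stream_open(buffer, bufsize);
    zfp_stream_set_bit_stream(zfp, stream);
    zfp_stream_rewind(zfp);

    size_t compressed_bytes = zfp_compress(zfp, field);  // 0 means failure

    zfp_field_free(field);
    zfp_stream_close(zfp);
    stream_close(stream);
    std::free(buffer);
    return compressed_bytes == 0 ? 1 : 0;
}
```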

There’s also a checkpoint restart project called VeloC. VeloC works in the same way I was describing with the asynchronous I/O. What it does is take checkpoints. Typically, you run your application for a while and reach a point in the simulation where you want to save everything, so that if the machine died, you could go back and start things up again. VeloC takes that data dump and saves it out, but in an asynchronous manner. The application can keep going, and, if the machine were to die at some intermediate point, you can still restore your state and get it back with the VeloC system.
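
[Editor’s note: The checkpoint/restart pattern VeloC provides can be sketched generically. The helpers below (ckpt_save, ckpt_restore) are hypothetical stand-ins, not the VeloC API, and the sketch saves synchronously for brevity, whereas VeloC performs this asynchronously and across multiple storage levels.]

```cpp
// Generic checkpoint/restart sketch with hypothetical helpers (not the
// VeloC API): save the protected state periodically, and on startup try
// to restore the most recent checkpoint before computing further.
#include <cstdio>
#include <vector>

const char* CKPT_FILE = "state.ckpt";   // hypothetical checkpoint location

// Save the protected state (here, one vector plus the step counter).
void ckpt_save(int step, const std::vector<double>& state)
{
    if (FILE* f = std::fopen(CKPT_FILE, "wb")) {
        std::fwrite(&step, sizeof step, 1, f);
        std::fwrite(state.data(), sizeof(double), state.size(), f);
        std::fclose(f);
    }
}

// Try to restore; returns true and fills in the state if a checkpoint exists.
// Assumes `state` is already sized to match what was saved.
bool ckpt_restore(int& step, std::vector<double>& state)
{
    FILE* f = std::fopen(CKPT_FILE, "rb");
    if (!f) return false;
    std::fread(&step, sizeof step, 1, f);
    std::fread(state.data(), sizeof(double), state.size(), f);
    std::fclose(f);
    return true;
}

int main()
{
    std::vector<double> field(1 << 20, 0.0);   // the simulation state
    int step = 0;

    // On startup, resume from the last checkpoint if one is available.
    if (ckpt_restore(step, field))
        std::printf("restarted from step %d\n", step);

    for (; step < 100; ++step) {
        // ... advance the simulation one step here ...
        if (step % 10 == 0)
            ckpt_save(step, field);             // periodic checkpoint
    }
    return 0;
}
```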

Gibson: OK. Let’s cover the visualization goal.

Ahrens: In visualization, going back to the original challenge I described, this is the place where you see most directly that you can generate a lot more data than you can save out, and that causes a real problem for your analysis. So, the idea was to move the analysis onto the machine and do it in situ. In other words, run your analysis as the data is generated, do the data reduction on the machine as the data is generated, and save out the important data.

In the ALPINE project, we’re working on algorithms that do that, like sampling, topology analysis, and compression, doing it on the fly based on features that you care about. You might, for example, use some statistical analysis to tell you what interesting things are going on while you’re doing your run. There’s also infrastructure work for ParaView, VisIt, and a new in situ infrastructure called Ascent, which lets you express, in your simulation code, which algorithms the in situ analysis should run, how to run them, and the control flow that you need to get a good in situ workflow going. One key problem that comes up is that all this work needs to happen in an automated fashion. Traditionally, years ago, what we were successful at was a user-driven post-processing workflow, where the user would interactively make decisions about what they wanted to analyze. What in situ analysis is pushing us toward is really understanding an automated workflow, and we see things like machine learning really helping us make some of those decisions.
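
[Editor’s note: For a sense of what expressing those in situ commands from a simulation code looks like, here is a minimal sketch based on Ascent’s documented actions interface; the mesh comes from Conduit’s built-in “braid” example generator rather than a real simulation, and the scene and file names are illustrative.]

```cpp
// Minimal in situ sketch with Ascent: the simulation publishes its mesh
// (here, Conduit's built-in "braid" example stands in for real data) and
// a declarative list of actions tells Ascent what to render each step.
#include <ascent.hpp>
#include <conduit_blueprint.hpp>

int main()
{
    conduit::Node mesh;
    conduit::blueprint::mesh::examples::braid("hexs", 33, 33, 33, mesh);

    ascent::Ascent a;
    a.open();
    a.publish(mesh);                                 // hand the in-memory data to Ascent

    conduit::Node actions;
    conduit::Node& add = actions.append();
    add["action"] = "add_scenes";                    // declare *what* to produce...
    add["scenes/s1/plots/p1/type"]  = "pseudocolor";
    add["scenes/s1/plots/p1/field"] = "braid";
    add["scenes/s1/image_prefix"]   = "out_in_situ";

    a.execute(actions);                              // ...Ascent decides how to do it
    a.close();
    return 0;
}
```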

In addition, there are other projects in the portfolio. One is about keeping some of that interactivity: that’s the Cinema project. The idea here is to save as much data as you can while you’re running, and in the Cinema project, the idea is to save imagery. You can imagine how many images you could fit into an exabyte. So, instead of saving a single time dump from a simulation, you might save many, many images, and that gives you a sense of interactivity. You do need to decide what to save while you’re working in situ, but you get quite a broad range of things to look at in a post-processing sense. So, that’s the Cinema project.

All the visualization projects depend on the Visualization Toolkit [VTK]. VTK did not have good threading or GPU performance, so VTK-m is a project to really look at the algorithms and data structures and get great threaded and GPU performance. That project has been working hard and is focused on the new exascale machines and on understanding what those architectures look like. It’s based on a data-parallel abstraction, and we’re getting great performance out of it. It’s a good long-term play for the visualization tools, so that they can work in situ with the applications, on the GPU or on the CPU, doing analysis while the simulation is running.
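
[Editor’s note: As a hedged illustration of that data-parallel abstraction, the sketch below defines a per-element worklet in the style of VTK-m’s WorkletMapField; VTK-m can then schedule the same functor on CPU threads or a GPU, depending on which device adapters are built in. The operation and names are simplified examples, not code from the VTK-m project.]

```cpp
// Sketch of VTK-m's data-parallel worklet model: define the per-element
// operation once and let VTK-m schedule it on whatever device is available
// (serial, threads, CUDA, ...).
#include <vtkm/cont/ArrayHandle.h>
#include <vtkm/cont/Invoker.h>
#include <vtkm/worklet/WorkletMapField.h>
#include <vector>

struct Square : vtkm::worklet::WorkletMapField
{
    using ControlSignature   = void(FieldIn, FieldOut);
    using ExecutionSignature = void(_1, _2);

    template <typename T>
    VTKM_EXEC void operator()(const T& in, T& out) const
    {
        out = in * in;                      // per-element, data-parallel work
    }
};

int main()
{
    std::vector<vtkm::Float64> values(1024, 2.0);
    auto input = vtkm::cont::make_ArrayHandle(values, vtkm::CopyFlag::On);
    vtkm::cont::ArrayHandle<vtkm::Float64> output;

    vtkm::cont::Invoker invoke;
    invoke(Square{}, input, output);        // VTK-m picks the execution device
    return 0;
}
```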

Gibson: That’s a nice overview of ECP’s Data and Visualization portfolio. During the course of ECP’s existence, the portfolio has gone through a research and development phase. Now the teams are meeting what are referred to as their key performance parameters, or KPPs—specifically, what are called KPP3s—and they are hardening functionality on the pre-exascale machines. The final phase will be deployment on the exascale machines. However, in the meantime, the HPC community can get value right now from all the very hard work the teams have done so far. How can the public access the software for their use?

Ahrens: All the software from the [ECP] Software Technology [portfolio], which includes Data and Visualization, is being made available through something called E4S.io [http://e4s.io], so that’s a website you can go to. Specifically, Spack packages are there, containers, a whole bunch of options to download the latest and greatest exascale software. One of the things that people worry about, and in particular Sameer Shende at the University of Oregon, who leads the E4S project, is how things work together. So, what are the dependencies between the builds, and do they work together? And not only do they work together, but do they work together on various architectures? So, you should be able to go to that site, download containers of exascale software, and go ahead and use them.

One project that I’ve been involved in is the Pantheon project, which looks to bring both applications and software technology together on ECP machines. We take open-source, ECP-based simulations, run analysis software like ALPINE and Cinema on them, and build and run those combinations on different machines. You have all the parts of the exascale portfolio coming together, being tested and run, and I think that’s really a key feature of the Exascale Computing Project: not just that we’re running on particular machines and showing things, but that we’re really distributing that ability to the community. So, I think there’s going to be a lot of benefit to the community from these packages. Of course, the Spack project plays a big role in that. You express your dependencies, and it allows you to build and make things work together and to understand where the problems are when you’re putting multiple packages together.

Finally, I’d like to conclude for Data and Vis. You know, when we started this project, we had this really big challenge of worrying about data reduction and how we were going to get on the exascale machines. I think in all the areas, in our storage, in our services, and in our visualization and analysis, we’re being successful. We’re taking some of the R&D ideas that we developed during the first few years of the project and working on productizing them, developing the functionality that we’re going to need, and then working with applications to apply that functionality and make sure it’s going to work on ECP machines. And, as I said before, I think the community overall really benefits from that, and I’m excited and interested in community involvement in the software. So, go out and take a look at it and let us know what you think. And let us know how it helps your project.

Gibson: Thank you, Jim, for sharing all of this great information with our audience.

Ahrens: OK. Great. Thanks, Scott. And just a shout-out: if you want a written version of what’s going on in ECP, I encourage you to take a look at the CAR, the Capabilities Assessment Report, which is available on the exascaleproject.org site.

Related Links

The US DOE Exascale Computing Project Announces the Availability of the Extreme-scale Scientific Software Stack (E4S) v1.2

The Extreme-scale Scientific Software Stack (E4S): A New Resource for Computational and Data Science Research