ECP’s WarpX Team Successfully Models Promising Laser Plasma Accelerator Technology

Exascale Computing Project · Episode 99: ECP’s WarpX Team Successfully Models Promising Laser Plasma Accelerator Technology

By Scott Gibson

The ECP WarpX team, clockwise from top left: Jean-Luc Vay, Lawrence Berkeley National Laboratory (Berkeley Lab); Axel Huebl, Berkeley Lab; Henri Vincenti, Commissariat a l’Energie Atomique, France (CEA); and Luca Fedeli, CEA.

This time around on the Let’s Talk Exascale podcast, behind the microphones are researchers from WarpX, a subproject of the Department of Energy’s Exascale Computing Project. WarpX is a finalist for the 2022 ACM Gordon Bell Prize, which is widely considered the highest prize in high-performance computing.

We’ll hear how the close and lengthy collaboration between a team at Lawrence Berkeley National Laboratory and the French Alternative Energies and Atomic Energy Commission, known as CEA, provided a synergy that’s propelled WarpX to make exciting contributions to high-performance computing. Joining us from Berkeley Lab are Jean-Luc Vay and Axel Huebl, and from CEA, Henri Vincenti and Luca Fedeli.

Additionally, the power of that collaboration was coupled with ECP’s “team of teams” ethos to tap into the expertise of groups that are advancing methods for homing in on areas of particular interest in simulations and for managing massive data sets to maximally exploit the latest computing technologies for societal benefit.

We cover the following topics:

  • The aim and history of WarpX
  • The journey that led to the team’s becoming a Gordon Bell finalist
  • Science runs on some of the world’s largest supercomputers and what the research revealed
  • Major innovations to the WarpX code
  • Accomplishments of the WarpX simulations
  • A perspective on ECP’s “team of teams” via WarpX
  • The composition of the WarpX team
  • Shout-outs for team member contributions
  • What’s next for WarpX
  • Closing thoughts

Transcript

[Scott] This is Let’s Talk Exascale. I’m your host, Scott Gibson, from the Department of Energy’s Exascale Computing Project.

A particle accelerator is a machine that uses electromagnetic fields to propel a beam of elementary particles, such as electrons or protons, to very high speeds and energies.

These machines are applied in many areas of fundamental research, having directly contributed to many Nobel Prizes in physics and, through x-ray science, in chemistry and medicine. Particle accelerators are also used for radiotherapy to treat cancer, produce short-lived medical isotopes, perform medical and pharmaceutical research, fabricate microcircuits, and irradiate food, a technology that improves the safety and extends the shelf life of food by reducing or eliminating microorganisms and insects.

Particle accelerators are even important to maintaining national security, in that they’re used for cargo inspection, nuclear stockpile stewardship, and materials characterization.

A barrier to progress in accelerator application development is the size and cost of the next-generation machines. Alternative and sustainable techniques are very much needed. This is where the idea for compact accelerators enters the picture.

Among the compact candidates, so-called plasma-based particle accelerators could be game-changers. However, developing the devices critically depends on high-performance, high-fidelity computer modeling and simulation. Researchers could use that tool to explore unanswered questions concerning the physics of the transport and acceleration of particle beams in long chains of plasma channels.

The arrival of exascale computing presents an opportunity to tap into the speed afforded by the exascale systems to capture the full complexity of acceleration processes that develop over a large range of space and time scales in plasma-based particle accelerators.

Researchers within ECP have been addressing that need by developing an application called WarpX.

WarpX is a highly parallel and highly optimized application that can run on GPUs and multi-core CPUs. It includes the capability to balance computational workloads across the biggest supercomputers. Additionally, WarpX is a multi-platform code that runs on Linux, macOS, and Windows.

Jean-Luc Vay is head of the accelerator modeling program at Lawrence Berkeley National Laboratory and principal investigator of ECP’s WarpX.

[Jean-Luc] So the goal of the ECP WarpX project is to use some of the most impactful large-scale tools of science, supercomputers, to develop the next generation of another essential large-scale tool of science, particle accelerators. And incidentally, particle accelerators are facing a dilemma similar to the one supercomputers faced with CPUs in recent decades, where scaling up the technology is reaching its limits in terms of size, cost, and power consumption.

So just as the answer for supercomputers has been the so-called compute Cambrian explosion, with a more diverse ecosystem of microprocessors, the particle accelerator community is exploring new ways of accelerating particles much more efficiently. And one technology that is very promising is called plasma acceleration, where one uses a plasma, also known as the fourth state of matter, in which atoms are ionized, to create the conditions for electric fields that are many times larger than with conventional technologies. Now, the catch is that plasmas are notoriously difficult to control and very complex. And this is why it requires high-performance computing and exascale, because it really needs the largest and fastest supercomputers.

[Scott] Did the WarpX application exist before ECP?

[Jean-Luc] Well, yes and no. Yes, in the sense that many of the methods that we implemented in WarpX were already developed in several codes that we wrote over the years, and that includes the predecessor of WarpX, which was called Warp. But the answer is no in the sense that WarpX was written from scratch in C++ using a portability layer provided by the AMReX library, on which WarpX is built, so that it has a single source code that can be compiled for various flavors of CPUs and GPUs. By contrast, its predecessor, Warp, was a mix of Python and Fortran and ran only on CPUs.
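
To make this single-source idea concrete, here is a minimal sketch in the spirit of the AMReX approach that WarpX builds on. The function name, field names, and the toy finite-difference update are illustrative assumptions rather than WarpX’s actual kernels, and the update assumes one guard cell in z.

    #include <AMReX.H>
    #include <AMReX_MultiFab.H>
    #include <AMReX_MFIter.H>
    #include <AMReX_GpuLaunch.H>

    // Toy field update written once against AMReX; the same source compiles to a
    // CUDA, HIP, or SYCL kernel on GPUs and to an ordinary (optionally
    // OpenMP-threaded) loop on CPUs.
    void advance_field (amrex::MultiFab& Ex, amrex::MultiFab const& By,
                        amrex::Real dt_over_dz)
    {
        // Loop over the grid patches ("boxes") owned by this MPI rank.
        for (amrex::MFIter mfi(Ex, amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi)
        {
            const amrex::Box& bx = mfi.tilebox();
            auto const& ex = Ex.array(mfi);
            auto const& by = By.const_array(mfi);

            // Single-source kernel body; amrex::ParallelFor picks the backend.
            amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
            {
                // Illustrative update only; assumes By carries a guard cell in z.
                ex(i,j,k) += dt_over_dz * (by(i,j,k) - by(i,j,k-1));
            });
        }
    }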

[Scott] The work of the WarpX team has earned them a spot as one of six finalists for the 2022 ACM Gordon Bell Prize, which is widely considered the highest prize in high-performance computing. The winner will be announced at the SC22 supercomputing conference in Dallas November 13–18. WarpX was one of two finalists to use Oak Ridge National Laboratory’s Frontier exascale supercomputer. The paper that outlines the science runs of the research is titled “Pushing the Frontier in the Design of Laser-Based Electron Accelerators with Groundbreaking Mesh-Refined Particle-In-Cell Simulations on Exascale-Class Supercomputers.” Henri Vincenti, who leads a French research team collaborating with WarpX, described the journey that led to the project’s selection as a Gordon Bell finalist.

[Henri] So the actual Gordon Bell team is mainly the result of a long-standing collaboration between Jean-Luc’s team at Berkeley and my team at the Commissariat a l’Energie Atomique (CEA) in France. Actually, this collaboration was initiated five years ago, when I founded my own group at CEA right after finishing my postdoc in Jean-Luc’s group, so after leaving Berkeley.

Since then, both of our teams have worked hand in hand on the co-development of the WarpX code and on the use of WarpX on the largest machines to address tough physical problems that we encounter in high-field science and accelerator physics. More recently, the team was joined by other industrial and academic partners: the RIKEN Center for Computational Science in Japan, GENCI (Grand Equipement National de Calcul Intensif) in France, and the industrial partners Atos and Arm. These partners helped us a lot in porting the code to the Arm architecture on the Fugaku supercomputer.

And I would also like to emphasize that there is a huge synergy between the two teams, Jean-Luc’s team and mine, regarding the modeling of the science case that we propose in our Gordon Bell submission. The Gordon Bell science case relies on a novel physical concept called a plasma mirror injector, and this plasma mirror injector has the potential to remove a major limitation of laser plasma accelerators and enable their use in promising applications such as ultrahigh-dose-rate radiotherapy in medicine.

And actually, the successful modeling of the whole physical system, the plasma mirror injector and the subsequent laser plasma accelerator stage, was possible thanks to the combined expertise of both teams: the Berkeley side are experts in accelerator modeling, and plasma mirror modeling is a core expertise at CEA.

[Scott] The previously mentioned paper revolves around kinetic plasma simulations. The supercomputers used for the efforts explained in the paper were Frontier at the Oak Ridge Leadership Computing Facility (OLCF), which is the world’s first exascale system; Fugaku at RIKEN in Japan; Summit, also at OLCF; and Perlmutter at the National Energy Research Scientific Computing Center, or NERSC. So, what did the science runs on the machines entail and what information did they yield?

[Henri] I’ll start by giving a little bit of context on the science case that was performed as part of our Gordon Bell submission. The science case is related to the laser plasma accelerators that Jean-Luc already introduced, which can be obtained by focusing a high-power femtosecond laser onto a gas jet. These laser plasma accelerators offer the prospect of building much cheaper and much more compact accelerators than conventional accelerator technology. However, a major limitation of these accelerators is the low charge they can currently deliver at high energies, which limits their range of applications. The reason behind this limitation is that everything turns extreme with laser plasma accelerators.

Their accelerating structure, which is made of plasma at high laser intensity, can be as small as a few microns, and therefore it requires a high-precision injector: you need an injector that is capable of injecting a lot of electrons on the micron scale. So far, the simplest way that has been found to inject these electrons into the laser plasma accelerator structure is to pick them directly from the accelerating plasma medium, that is, the ambient medium, which is initially made of gas, and inject them into the accelerating structure.

Using this kind of technique, you can in principle achieve highly localized injection into the accelerating structure, but the problem is that the injected charge is inevitably limited by the low charge density made available by the gas. To overcome this limitation and level up the charge injected into laser plasma accelerators by up to an order of magnitude, we recently proposed a novel concept using a hybrid solid-gas target, which is made up of a solid part followed by a gas part. The solid part offers orders of magnitude more charge than the gas does.

When illuminated by the laser, the solid part turns into a high-density plasma called a plasma mirror, which has the ability to specularly reflect the incident laser light. Upon reflection of the incident laser, high-charge electron bunches can be ejected from the plasma mirror surface, and these high-charge bunches can then be trapped in the laser plasma accelerator that is driven in the gas part of the target. So whereas this is simple in principle, as Luca will now explain, the big challenge has been to robustly model this target in simulation, and this has been a real challenge from a computational point of view.

[Luca] Yeah, well, the scheme that Henri has outlined is quite a challenge from the computational point of view. To be fair, the core idea of the numerical method that we use to simulate it is not that difficult. We deal with a plasma, which is a collection of charged particles interacting with each other. To simulate that, we model the electromagnetic field on a grid, and then, since there are far too many particles in the plasma to simulate all of them, we use macroparticles, each one representing many real particles.

These particles move according to the fields; they generate currents, and these currents are used to evolve the fields. This may sound simple, but in our case it becomes very challenging due to, among other things, the size of the simulation. In order to model what happens at the surface of the solid target, we need to resolve very small spatial scales, but the acceleration process takes place over a much longer distance.
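
As a rough illustration of the loop Luca describes (gather, push, deposit, field solve), here is a stripped-down, one-dimensional, non-relativistic sketch in plain C++. The structure and function names are hypothetical, the field update is a schematic normalized stand-in for a Maxwell solve, and everything that makes a production code like WarpX hard (3D electromagnetics, relativistic pushers, higher-order shape factors, boundary conditions, parallelism) is left out.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // A macroparticle stands in for many real particles.
    struct Macroparticle { double x, v, q_over_m, weight; };

    struct Grid {
        double dx;
        std::vector<double> E;   // electric field on the grid
        std::vector<double> J;   // current density deposited by the particles
    };

    // One particle-in-cell step on a periodic 1D domain.
    void pic_step (std::vector<Macroparticle>& plasma, Grid& grid, double dt)
    {
        const double L = grid.dx * static_cast<double>(grid.E.size());
        std::fill(grid.J.begin(), grid.J.end(), 0.0);

        for (auto& p : plasma) {
            // 1. Gather: interpolate the grid field to the particle position
            //    (nearest grid point here; real codes use higher-order shapes).
            std::size_t i = static_cast<std::size_t>(p.x / grid.dx) % grid.E.size();

            // 2. Push: advance velocity and position from the local field.
            p.v += p.q_over_m * grid.E[i] * dt;
            p.x += p.v * dt;
            p.x  = std::fmod(p.x, L);
            if (p.x < 0.0) p.x += L;          // periodic wrap

            // 3. Deposit: accumulate this macroparticle's current on the grid.
            std::size_t j = static_cast<std::size_t>(p.x / grid.dx) % grid.J.size();
            grid.J[j] += p.weight * p.v;
        }

        // 4. Field solve: evolve the fields from the deposited currents.
        //    Schematic normalized Ampere-law update; a real code solves the
        //    full Maxwell equations here.
        for (std::size_t i = 0; i < grid.E.size(); ++i) {
            grid.E[i] -= grid.J[i] * dt;
        }
    }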

And so we end up with a huge simulation with tens of billions or more macroparticles, and we need special numerical methods, such as mesh refinement and load balancing, to deal with that. I’m sure we’ll go into more detail later. And even with these sophisticated methods, we need top supercomputers to model this scenario. But we need the results of these simulations: we need them to provide insights into the acceleration process and to provide guidance for our experimental activities, since we collaborate with experimental teams.

In practice, what we do is model a certain scenario: we select the parameters, we look at the acceleration process and at the properties of the accelerated electrons, such as how many electrons we have accelerated, and then we try to understand what went wrong and what we can do to improve the results.

[Scott] That was Luca Fedeli. The team implemented major innovations in the WarpX particle-in-cell, or PIC, code: a parallelization strategy for performance portability, a groundbreaking mesh refinement capability, and an efficient load-balancing strategy. Axel Huebl, a research software engineer at Berkeley Lab and WarpX team member, provided some details.

[Axel] That’s really the exciting part, and it gets down to the meat of our Gordon Bell submission. So for performance portability, our approach is really to have a single source code developed against the code’s performance portability layer and to improve this performance portability layer where needed.

So if you look specifically at the landscape of US exascale machines, the US has converged quite strongly on GPU-based machines for its first exascale systems. So what we implemented in WarpX is that we have three backends, for Intel, AMD, and Nvidia GPUs, and the latter two are now on the floor and we can use them. And that’s what we demonstrated in this paper.

We went to the top 7 machines and ran on as many as we could get our hands on. And now the interesting part comes in if you look internationally. Because at exascale or pre-exascale, we also have the Fugaku machine, which is also a reduced-instruction-set machine but is fully based on Arm CPUs. And that’s quite different, because that’s not a target we have in ECP.

So what was really interesting for us was to develop with our partners, for the Gordon Bell paper, a highly tuned version of WarpX that actually circumvents our portability layer and shows what next-level performance we can get out of manually tuning our code, and then to compare against that. When we compare that back to our current implementation, it already shows us how we can improve our portability layer to also be really efficient on this machine. And in that sense, it’s not only a really large run and a great exercise, but it’s really also inspiring and gives us feedback for future developments.

Then the things that we present here really all go together. Mesh refinement is a capability that is really unique to Warp and WarpX. Mesh refinement means that you simulate parts of a simulation at different resolutions. For some science cases, this can be done rather well and is rather well understood: for example, if you’re solving a Poisson equation every time step, that’s a global operation, and that’s pretty well covered in the literature.

The thing that we’re doing here is solving electromagnetic particle-in-cell. And the challenge comes from its nature: we have to track radiative effects that persist over different time steps of the simulation. Electromagnetic waves, for example, travel independently of their sources over our simulation time.

And when we start to refine parts of the simulation, we have to be extremely careful that we don’t introduce any issues, for example having different dispersion on a different refinement patch, or creating artifacts between sources, traveling waves, and so on. That’s really a method that Jean-Luc has pioneered, already with Warp and now with WarpX, that we continue to push on the numerics side, and then we use the performance portability layer as we implement these individual patches to be efficient on GPUs.

Now, this can be a huge win for these plasma densities, as Luca already alluded to, because we have these parts where we have to have high resolution, maybe 10% of the simulation, and then we want to propagate the acceleration over really long distances, really far and really stably, for the accelerator.
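
As a rough illustration of the refinement idea only, not of WarpX’s or AMReX’s actual scheme, the sketch below overlays a ratio-2 patch on a toy one-dimensional node-centered field and fills the fine values by linear interpolation from the coarse nodes. The hard part Axel describes, keeping wave propagation and numerical dispersion consistent across the patch boundary over many electromagnetic time steps, is exactly what this toy leaves out.

    #include <cstddef>
    #include <vector>

    // Toy 1D node-centered field: "coarse" holds values at coarse grid nodes.
    // A refinement patch covers coarse nodes [lo, hi] at refinement ratio 2.
    struct RefinedPatch {
        std::size_t lo, hi;          // patch extent in coarse-node indices
        std::vector<double> fine;    // 2*(hi - lo) + 1 fine-node values
    };

    // Fill a new fine patch by linear interpolation from the coarse field.
    // This is only the "create a refined patch" step; an electromagnetic
    // mesh-refinement scheme must also exchange data at the patch boundary
    // every time step without introducing reflections or mismatched numerical
    // dispersion, which is not shown here.
    RefinedPatch make_patch (const std::vector<double>& coarse,
                             std::size_t lo, std::size_t hi)
    {
        RefinedPatch p{lo, hi, std::vector<double>(2 * (hi - lo) + 1)};
        for (std::size_t ic = lo; ic < hi; ++ic) {
            p.fine[2 * (ic - lo)]     = coarse[ic];                          // coincident node
            p.fine[2 * (ic - lo) + 1] = 0.5 * (coarse[ic] + coarse[ic + 1]); // midpoint
        }
        p.fine.back() = coarse[hi];   // last fine node coincides with coarse node hi
        return p;
    }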

Now, this all ties together with the last part, which is load balancing. And load balancing on GPUs is relatively tricky on its own already, because for load balancing you need multiple parts. The first thing you need is a cost estimate, right? How much time are you spending? Over the last year we already did research on that, and we had a great paper at PASC21 on how we can either directly measure costs on the GPU or find great surrogates for them to estimate the cost of whatever we want to load balance. And I will go into that in a bit more detail in a second.

Then we need to estimate what outcome we would get from rebalancing, what the optimum is that you could get out, and so we, for example, develop performance models to estimate that. Then we switch to a concrete distribution mapping: do we want to do a space-filling curve? Do we want to just cut everything equally and distribute it independent of cluster geometry and network connections? And then we actually do the load balancing, including mesh refinement, where we not only have to handle the spatial distribution but also the communication between different refinement levels of higher and lower resolution. And that makes it a really interesting challenge.
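
To give a flavor of the distribution-mapping step only, since this is not AMReX’s or WarpX’s actual strategy, here is a tiny greedy balancer: it takes a measured or surrogate cost per grid box and always hands the next most expensive box to the currently least loaded rank. Real strategies such as space-filling curves also account for locality and communication, which this toy ignores.

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <numeric>
    #include <queue>
    #include <utility>
    #include <vector>

    // Given a measured (or surrogate) cost per grid box, assign each box to an
    // MPI rank so the maximum per-rank cost stays low. Greedy "largest box to
    // the least loaded rank" heuristic; returns a box -> rank mapping.
    std::vector<int> balance (const std::vector<double>& box_costs, int nranks)
    {
        std::vector<std::size_t> order(box_costs.size());
        std::iota(order.begin(), order.end(), 0);
        std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
            return box_costs[a] > box_costs[b];   // most expensive boxes first
        });

        // Min-heap of (accumulated cost, rank) so we can always pick the least
        // loaded rank in O(log nranks).
        using Load = std::pair<double, int>;
        std::priority_queue<Load, std::vector<Load>, std::greater<Load>> ranks;
        for (int r = 0; r < nranks; ++r) ranks.push({0.0, r});

        std::vector<int> box_to_rank(box_costs.size());
        for (std::size_t b : order) {
            auto [load, r] = ranks.top();
            ranks.pop();
            box_to_rank[b] = r;
            ranks.push({load + box_costs[b], r});
        }
        return box_to_rank;
    }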

And you see, it all ties together: you have to have the performance portability layer, you have to have the implementation of mesh refinement, and all of that has to be load balanced to actually make sense and get you faster. That’s what we’re demonstrating in this paper. And yeah, that’s a really exciting part. There are so many interesting research topics that we could already address for this paper, and we showed that we get a speedup, but we have way more ideas for how we can go forward.

So for example, what’s really challenging is to estimate what your cost actually is, right? Is your cost runtime, or is your cost also a metric that includes the memory on your GPU that you want to take into account? Because you cannot arbitrarily pack fast-running operations that take a lot of memory onto one node; it will just run out of memory. So there are actually multiple dimensions in there. There’s also the dimension of the critical path of the parallel application that you load balance. The critical path means the slowest-running operation in a sequential part of an algorithm that you can actually distribute and parallelize. So that’s something we also have to improve cost functions for, and multiple things in that direction. So yeah, it’s a really fun topic. And I’m really excited that the Gordon Bell submission could bring all three of these topics together to see a real-world benefit on a really challenging science problem.

[Scott] The mesh-refined PIC code enabled 3D simulations of laser–matter interactions that so far have been out of reach of standard codes. But what exactly did these simulations accomplish?

[Luca] I would say multiple things. One thing is that these simulations provide us with predictions for the properties of the accelerated electrons under certain conditions, and we will be able to use these predictions to design upcoming experiments and to compare them with experimental results. Another point is that the results are quite encouraging: they show us that you can accelerate substantial amounts of charge, and this is very important for what we would like to accomplish.

Henri mentioned radiotherapy and radiobiology before. One of the applications that we have in mind is to use these electron sources to perform radiobiology experiments at ultrahigh dose rates. It is known in the medical literature that if you give a certain dose of radiation to a biological sample, it matters whether you do that very quickly or very slowly, and this could one day be very interesting for cancer treatment. We are far from that, but we would like to provide the tools to study these effects, and having a substantial amount of charge is necessary to be able to do that.

And finally, these simulations have validated mesh refinement algorithms at a very large scale. In the future, we are planning to acquire computing-time allocations on large machines, and now we know that, thanks to mesh-refined PIC codes running at scale, we will be able to perform large parametric scans and parametric investigations of the scenarios that we are interested in, including these electron acceleration schemes.

[Scott] WarpX works closely with the ECP AMReX co-design center and collaborates with ECP software technology teams. This is in keeping with ECP’s “team of teams” ethos, or character.

[Axel] Oh yes, absolutely. Besides the science case, which is really exciting to us, I think one of the most enjoyable aspects of my work is that we actually have these close, great collaborations. In ECP, just to give the bigger picture, we have a central mission: to deliver applications for exascale machines that work as a whole. That means we cannot just go and make, for example, a small test case that somehow uses the machine. We actually want to deliver the whole functionality, starting from initialization of the simulation, going to running it, up to analysis at the end, visualization, whatever we need to get science results out of it. And this is why ECP is really giving us a whole team of people that we can work with.

So let me describe three aspects of that collaboration. First of all, you already heard about AMReX, which is a co-design center [and library] in ECP. We have, for example, shared milestones with them. We work in an integrated way, with shared positions that are embedded in our team. They help us, but we also push our challenges back to AMReX, and we contribute solutions that we develop for WarpX so they can be shared with a larger ecosystem at a lower level of the software stack, basically.

Then we have the software technologies that we just mentioned, and we collaborate actively with everyone we need in order to implement our algorithms. A good example is that we integrate with the mathematical stack of ECP: FFTs for our advanced field solvers and linear algebra for advanced geometries. And this goes down to vendor libraries; for example, vendors provide optimized on-node acceleration for FFTs, which you can then parallelize on top of.

And we have diagnostics. That’s very important for us at the end, to really distill scientific data out of our runs. We interact, for example, with the ADIOS team and the HDF5 team, but also with data reduction teams, for example on compressors, lossy compressors; all of this is very interesting to us. Because the moment that we run at exascale, we suddenly create 10 petabytes of output per simulation, so you really want to rethink how you do your simulations and how to get data out of them.

There are also, at times, conflicting constraints, for example when you design the application in a silo without talking to the next step in your science case. If your whole science case is to run a simulation that has perfect load balancing and then creates output, the output, for example, would not be ideal if we split all our data randomly over the whole cluster and then have to somehow collect it back together again in post-processing or in situ processing. So what we do, for example, is inform our load-balancing strategy by the needs of I/O, because we cannot think about the whole science run by just looking at the application runtime; we have to consider the whole thing. That’s a really fun aspect, and it ties back to what we discussed earlier.

Then there is visualization and in situ visualization. We have a great collaboration going on there that helps us, instead of storing 3D data at high resolution, directly create multiple camera views into our simulations and visualize them on the fly, which is extremely insightful and extremely helpful.

Then we go up to the packaging stack. We work, for example, with the Spack team to package WarpX with all the dependencies that we might want to use, for example the visualization tools that I mentioned before, so they can all be picked up and deployed by the E4S team at supercomputing centers. In the end we can, for example, have whole modules on HPC systems, and we are packaged through other packages as well.

So that is the software technology interaction, and all of what I am describing here actually follows a philosophy in ECP that we could describe as a “team of teams” approach. This is something we are also happy to contribute to through our involvement with the so-called IDEAS-ECP team, where we exchange across teams on productivity goals and approaches, on teamwork, and also on evolving best practices by writing and sharing about them, providing feedback, and adopting methods from other teams.

[Scott] And what is the composition of the WarpX team?

[Axel] So the core team in ECP is centered around Lawrence Berkeley, Lawrence Livermore, and SLAC, which are people that we already worked with on the Warp development as well. That is the ECP core team. Then we have a huge set of contributors and co-contributors that are just as valuable.

We have already introduced the long-standing collaboration we have with LIDYL at CEA in France, where Luca and Henri are working. For example, they have been improving our code for machines that we mentioned, like Fugaku, which are not in ECP scope, but also adding improvements on all levels generally, and working on adding physical effects, like QED effects, that are not directly on our agenda but are very important to bring into a PIC code.

Then we have collaborators from Germany, for example at DESY in Hamburg and at HZDR in Dresden, where we collaborate, for example, on I/O methods and on advanced spin-offs from WarpX for specific use cases. Then we have contributors from CERN, for example, that use us for the modeling of cavities. And we have contributors from industry that say: “Okay, because the code is open source and we like the methods that you have, we will contribute to this new development, because we actually use WarpX on the cloud for our modeling. We want to use AWS cloud GPUs. And if we had this small feature that we need here, or this specific feature, then it’s useful for us.” So we have contributors from Modern Electron and Intense Computing that contribute to our development, and our HPC development benefits indirectly again, because we can use their developments as well.

Now, the people themselves are as diverse as the teams I just described a second ago. At our core, of course, we have computational physicists who thrive in using the code. But our teams work very, very tightly integrated with applied mathematicians developing advanced numerics, as well as with computer scientists and research software engineers.

[Scott] Any shout-outs to recognize especially notable contributions?

[Jean-Luc] Well, just like many top sports teams that have their superstars, whether in basketball, soccer, or other sports, the success is first and foremost a result of teamwork. And for WarpX specifically, the success is due to the complementarity of an interdisciplinary team, as Axel already outlined a minute ago, with outstanding skills and talents in a few key areas.

The first area is supercomputer programming and use, which of course is essential for exascale, and then applied mathematics and numerical analysis. And very importantly, and key to this team, is the innovation in algorithms and in plasma-based accelerator physics. The combination of all of this is essential.

And the other key ingredient, of course, is the motivation animating the team. One example of this motivation is the nonstop 8- to 12-hour marathon sessions on Zoom and Slack that occurred when computing on Frontier and Fugaku for the Gordon Bell submission. It was exhausting but also exhilarating. It was really a lot of fun.

[Scott] What’s next for WarpX?

[Jean-Luc] Our main focus right now within the Exascale Computing Project is to do plasma accelerator research with WarpX in the context of developing plasma accelerator technology to the level where it can be considered for actual designs of future colliders, the so-called atom smashers, for high-energy physics. That’s really the main focus on the ECP side.

[Henri] Regarding our plans for WarpX at CEA, our aim is to develop a new generation of extreme-intensity lasers that can provide intensities much larger than what present laser technology can deliver. For this, we are engineering a new kind of intensity booster based on plasma mirrors that has the potential to intensify current high-power lasers by more than three orders of magnitude.

And this makes accessible a whole new kind of experiment, where we would like to explore the interaction of these boosted lasers, which we call Doppler-boosted lasers, with matter or even with the quantum vacuum. This will give rise to a new fundamental regime of strong-field quantum electrodynamics that has so far been out of reach of even the largest particle accelerators. So this represents an immense fundamental prospect, and we plan to study this interaction by developing new modeling tools in WarpX.

[Axel] One of the things that we are doing now to leverage the developments from WarpX is to upgrade more parts of the accelerator modeling codes that we maintain. WarpX specifically, as we mentioned, focuses in ECP on the modeling of laser-wakefield acceleration for colliders, but a whole particle accelerator, of course, has many more components. So, for example, we have to model beam dynamics and transport, and in the end also the interaction, for example, between multiple beams in a collider.

So what we’re doing is leveraging the technology and using it for other codes. We take AMReX and WarpX routines and update, for example, codes that can take large steps, which are good for tracking beam dynamics but don’t have to be sliced into the small steps with radiation effects that WarpX can handle. And at the same time, we also open up the ecosystem that we’re developing, in the sense that we can couple WarpX and its spin-off codes so that they can be tightly integrated with AI and ML.

[Scott] And a few final thoughts.

[Axel] Looking ahead, the PIC method that WarpX implements is really general purpose, and the code is open source, implementing a relativistic electromagnetic solver and related solvers. And with the growing international user and developer base that we have from labs, academia, and industry, this really has a lot of applications ahead.

So what’s really exciting for us is to see that people are already picking up our work, and we foster this growing number of applications. These include, for example, other types of particle accelerators than plasma-based ones, but also things like laboratory or space plasma physics, which is very close to the modeling that we can do. And fusion as well: fusion devices can be modeled with WarpX, and we have users doing that. Or things like thermionics in industry, and much more.

So, just going full circle, the method that particle-in-cell is based on for wave propagation, for example, was originally used to model antennas. And there’s already one spin-off of WarpX that is using an evolved version of WarpX to study microelectronics and to build new silicon devices in the future.

[Scott] Thanks to Jean-Luc Vay, Henri Vincenti, Luca Fedeli, and Axel Huebl of ECP’s WarpX for being on Let’s Talk Exascale.

And thank you for listening. Visit exascaleproject.org. Subscribe to ECP’s YouTube channel—our handle is Exascale Computing Project. Additionally, follow ECP on Twitter @exascaleproject.

The Exascale Computing Project is a US Department of Energy multi-lab collaboration to develop a capable and enduring exascale ecosystem for the nation.

Related Links

WarpX

Henri Vincenti’s group at Commissariat a l’Energie Atomique, France

AMReX

ADIOS

HDF5

“Pushing the Frontier in the Design of Laser-Based Electron Accelerators with Groundbreaking Mesh-Refined Particle-In-Cell Simulations on Exascale-Class Supercomputers”


Scott Gibson is a communications professional who has been creating content about high-performance computing for over a decade.