Scaling up Clean Fossil Fuel Combustion Technology for Industrial Use

Exascale Computing Project · Episode 71: Scaling up Clean Fossil Fuel Combustion Technology for Industrial Use

By Scott Gibson

Disharmony between humanity’s desires for both economic prosperity and a healthy environment has been growing since the dawn of the industrial revolution. Today, with 80 percent of the world’s energy derived from the burning of fossil fuels, societies continue to face the challenge of finding affordable and reliable ways to curb man-made carbon dioxide (CO2) emissions.

Madhava Syamlal (left) and Jordan Musser of the National Energy Technology Laboratory and the MFiX-Exa project

Although promising carbon-capture and storage technologies exist in the laboratory setting, the high cost of scaling up the designs for commercialization, coupled with the need for greater confidence in the likelihood of success, stands in the way of progress. But exascale computing simulations could help researchers clear those hurdles, as explained by our guests Madhava Syamlal and Jordan Musser of the National Energy Technology Laboratory (NETL) in the latest episode of the Let’s Talk Exascale podcast from the US Department of Energy’s (DOE’s) Exascale Computing Project (ECP). Syamlal is lead principal investigator (PI) and Musser is co-PI of an ECP subproject called MFiX-Exa, which plans to scale up lab-based clean fossil fuel combustion technologies for industrial use.

Topics discussed: carbon capture and storage defined and state-of-the-art technologies described, promising lab-based carbon-capture systems, impediments to scaling up the lab designs for industrial use, problems that exascale computing simulations can help solve, the MFiX-Exa objective, the project’s use of computational fluid dynamics, its ECP challenge problem, how exascale computing will help manage the computational work, and the enduring legacy the MFiX-Exa project hopes to leave in its wake.

Interview Transcript

Gibson: First, some basics. What are carbon capture and storage technologies?

Syamlal: To understand carbon capture, I think we need to talk a little bit about energy. It is intricately related to energy. As you know, the prosperity of any society depends upon the amount of energy it consumes. Worldwide, 80 percent of that energy comes from what are called fossil fuels. Those are coal, natural gas, and oil. So, 80 percent of the energy that we use—for example, the gasoline that we use in our cars or the electricity that we turn on in our houses—80 percent of that comes from fossil fuels. One unique feature of fossil fuels is that they contain carbon and hydrogen, and, typically, these fuels are burned, or combusted, in air, and when you do that, the hydrogen will essentially become water, so there is no issue there. But carbon, when it is burned, becomes carbon dioxide. And, so far, what we have been doing is to let that carbon dioxide into the atmosphere. As we all know, the amount of atmospheric carbon dioxide has been increasing since the industrial revolution, and that causes global warming.

One of the things that DOE is very interested in is to decrease the carbon dioxide emissions from various energy sources. In particular at our lab, we look at power generation. We look at a variety of things, but power generation is our number-one focus. And the reason is simple. In the US, about 35 percent of the CO2 emissions come from power plants, and these are fairly large sources of carbon dioxide. What we want to do is to separate that carbon dioxide from the power plant emissions. Typically, those emissions contain maybe about 13 percent carbon dioxide, and the rest is nitrogen. So, before we can store the carbon dioxide away, we want to separate it. That’s called carbon capture.

Essentially, you separate carbon dioxide from a mixture of carbon dioxide and nitrogen. Then you can use the carbon dioxide for some purposes like enhanced oil recovery or making some building materials. Things like that. But still only a small percentage of the carbon dioxide that we generate can be used in that manner. Primarily, what we need to do is store it. So, that’s why it’s called carbon capture and storage.

One of the main ways of storing is deep underground. We can compress carbon dioxide into a dense liquid and then can pump that maybe a mile or two underground into what are called saline aquifers. And once you do that, it’s fairly stable. It can stay there for hundreds or thousands of years without coming back to the atmosphere. So, in a nutshell, that’s the carbon capture and storage process.

Gibson: How would you describe the state of the art in carbon capture and storage technologies?

Syamlal: There are different types of carbon capture technologies that we work on at our lab and that are studied around the world in various research labs. There are three primary technologies. The first is pre-combustion capture: the fuel is combusted, or burned, to generate energy, and if the carbon dioxide can be separated before that happens, that’s called pre-combustion capture. There are some advantages to doing that. The second technology is post-combustion capture. Essentially, you burn the fuel, and then separate CO2 from a CO2–nitrogen mixture. And a third technology is called oxy-combustion. In that technology, you separate oxygen from air, and the fuel is combusted in that oxygen, which then doesn’t require us to separate CO2. So, those are the three technologies.

The post-combustion technology has been around for a long time. There are these materials called amines. You can dissolve CO2 in amines and then separate it out from the power plant emissions. It is a very mature technology. The other two, pre-combustion and oxy-combustion, are still being researched. The issue with all these technologies is the cost.

If you want to separate carbon dioxide from the power plant emissions, it will require some expenditure of energy, so, essentially, that increases the cost of electricity. That’s why at DOE and at other research labs, we have been doing research to decrease the cost of carbon capture. A rough estimate would be that today some of the technologies require $60 per ton of carbon dioxide captured. And from thermodynamic calculations, we know that the extra energy that is required can be decreased. There is the possibility of decreasing that cost to $40 per ton, so that’s one target, and eventually decreasing it to $30 per ton. On the carbon-capture front, that’s kind of the state of the art. On the storage front, the storage technologies are very well known. There are some technical issues, but cost is not the biggest issue.

The other issue is being able to understand what happens to the CO2 that is put underground, to make sure that we are able to track it and, if there is a leak, detect it very effectively and take action to decrease the leakage. So, those are the areas of research in the storage technology.

Gibson: Will you explain the carbon-capture system designs that are available in the lab but are not yet scaled up for industrial use?

Syamlal: Within post-combustion capture, there are several technologies being developed in the lab. One technology is based on solvents. Solvents are liquids that can dissolve carbon dioxide, say, at a certain temperature or pressure. And if you increase the temperature or decrease the pressure, then the carbon dioxide gets released. That technology is actually already in the commercial space. The research area is in coming up with better solvents.

The second technology is called sorbents. These are solid particles which can, again, absorb CO2 at a certain temperature and pressure. And when those conditions are changed by increasing the temperature or decreasing the pressure, you can release the absorbed CO2. So, in that way, the CO2 can be separated from the other gas, say, nitrogen. This technology is quite promising. It’s still under research. It’s not yet commercialized.

Another technology is membranes. These are membranes that will separate CO2 from nitrogen. You flow the mixture of CO2 and nitrogen on one side—it’s like filter paper, say, a coffee filter—and through the other side comes, essentially, CO2. So, that’s how that is separated. Again, with this technology, we are working in our lab to come up with better materials for that separation. But it is not commercialized.

The fourth one, which is the one we’re especially interested in for exascale computing, is called chemical looping. In this technology, the fuel is not burned in air. It is burned with an oxygen-carrying material. So, it doesn’t require any carbon dioxide separation. It provides lots of advantages in terms of better efficiency, and also the temperature is fairly low, so it doesn’t produce pollutants like nitrogen oxides. And, also, this technology has fewer materials challenges. But it does have disadvantages—solids circulation is an issue that needs to be dealt with. This technology is also not commercialized. It has been proven at pilot scale, and smaller versions of these sorts of devices have been installed at power plants and tested. But, again, this technology has not gone into large commercial deployment.

Gibson: Syam, what are the impediments to scaling up the lab designs for industrial use? The cost and the risks involved?

Syamlal: The main impediment is cost. If you want to go to a large pilot scale, we are talking several tens of millions of dollars. If you want to go to a commercial scale, that could be on the order of a billion dollars. So, the cost is the primary thing that prevents us from taking a lab-scale design to a commercial scale. But even if the funding is there, having confidence in a scaled-up design of a reactor is also important. For someone to invest a large amount of capital in these sorts of new technologies, they need to have a lot of confidence that the scaled-up power plant will work as designed. So, the typical approach is to take small steps.

Usually, going from a lab scale to a commercial scale might take over five or six steps. At each step, you essentially increase the size of, say, the power rating of that plant and demonstrate that the plant works at that scale and then, based on the data, you go to the next level. So, the risk is a major factor. And I’ll also say the time. I think when you want to take multiple steps, each step could take, say, five years. So, all those things delay taking a technology from the lab scale to commercial scale.

Gibson: How could exascale computing capability enable the scale-up and overcome those problems?

Syamlal: Exascale computing can help in multiple ways. Let me talk about a large lab-scale device that we have at NETL. It’s a 50-kilowatt chemical looping reactor. As this experiment is performed, one of the first things where modeling can help—and not specifically exascale, but very detailed computational fluid dynamics modeling—is in troubleshooting. For example, in our lab, the cyclone—those are devices that separate solids from a gas stream—stopped working. And then they wanted to find out, what’s the problem and how do you fix it? And to help with that troubleshooting, we use modeling. We use computational fluid dynamics modeling to come up with better cyclone designs and to help that experiment progress. So, that’s one way in which computational modeling helps.

The second way is that even when we have a reactor at that scale, say, 50-kilowatt scale, and we are able to perform these experiments and these reactors are heavily instrumented, even then there is a lot of information that you simply cannot collect because of the complexity of the device and the complexity of the flow. The multiphase flows are typically opaque. They are corrosive. They are erosive. It’s very hard to make measurements. And the flow is very complex. So, that’s one place exascale computing can shed some light.

To give you an example, one of the issues in this chemical looping reactor is attrition. Attrition is the process in which you have these solid particles. They slowly break up. That breakup is not something we desire. It’s not designed. But it happens anyway because the solids are moving around in the reactor, and those small particles get lost in the exhaust stream. That costs money. You’re losing money because of attrition. And, in fact, we have done economic analysis of chemical looping processes and are sure that attrition is one of the main things that we need to control to make this process cost-effective.

In the lab, in the 50-kilowatt lab reactor, we can measure the attrition rate. We can collect the fine particles that are produced, and we can say that there is this rate of attrition happening in our reactor. What we cannot determine is where that’s happening, how that’s happening, and what we can do to improve it. That is beyond the experimental capabilities. When we do the exascale computations—because we have such detailed information about the flow of gas and solids—we can, in fact, go back and look at the various processes and identify which part of the reactor causes the attrition. And, based on that information, we can take corrective action. So, that’s one thing that we can do with the pilot scale.

The third way in which exascale computing can help is when we want to go from, say, 50 kilowatt to, let’s say, the next stage, say, 1 megawatt. So, it’s a larger reactor. Once we have a model that is validated for the 50-kilowatt scale, we can do simulations at this 1 megawatt, and we’ll be able to tell how well this scale-up will perform even before we build the scaled-up reactor. And in fact, we can decide what geometric changes in the reactor would be beneficial for ensuring that those reactors will work well. These are things that we have done in the past for other reactors. So, those are the three ways in which exascale computing will help accelerate the scale-up.

Gibson: Jordan, what does the MFiX-Exa project plan to achieve? We’re talking the big-picture vista, the 30,000-ft view.

Musser: I think a big-picture view of what we’re trying to do with the MFiX-Exa project is to develop a toolset that would allow engineers and scientists to better design and scale up large gas–solid chemical reactors, whether it be for the energy industry or even something like the pharmaceutical industry. We’re building upon the legacy of MFiX, which has been around since the mid-80s. This is a toolset that allows us to use computer modeling to determine, to some degree, what’s going on inside of gas–solid flow reactors.

If we think about these systems: we have a fluid, usually a gas, and we have solids, and there are different ways within a computer that we can model these systems. And what we’re trying to do with MFiX-Exa is go from traditionally larger, coarser models that give us a general understanding of what’s happening inside these reactors to a higher-fidelity model that provides far more detailed information on both spatial and temporal scales as to how particles and gas are interacting with one another as well as with the reactors themselves.

Gibson: What is computational fluid dynamics, CFD, and what is its role in the MFiX-Exa project?

Musser: I think that sounds like a very easy question to answer, but I think if you were to ask ten different scientists what CFD is, you’d probably get about twelve different answers. For me, the answer is going to be that we take a reactor we’re interested in looking at and divide it up into smaller volumes. And within each one of those smaller volumes, we use a set of governing equations to determine how the fluid behaves within it. So, we have how much fluid comes in and out of the cell and what the temperature of the fluid is. And, in our case, we also couple this to a solids model. For MFiX-Exa, this is the discrete element model [DEM]. For each particle in the system, we represent that particle uniquely.

So, throughout time and space, we know the position, trajectory—and, in the case of a reacting flow, the temperature and composition. You have this large reactor, which may have a complex geometry, and we use some form of discretization of that physical space to solve for the fluid. That discretization is constant through time—the volumes that represent the fluid are fixed—but the evolution of the fluid is transient. And then the particles themselves are moving through space and time. So, we may have regions in our system where there are few to no particles, or we may have regions where there are high concentrations of solids, to the point where we have the process of fluidization, where bubbles are occurring within the solids phase and a dense mixing region occurs.

MFiX-Exa is kind of the combination of both CFD, where we’re solving for the fluid, and the discrete element model, where we’re tracking individual particles.
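
For readers who want a concrete picture of this coupling, here is a minimal sketch in Python. It is only an illustration under assumed toy values (grid size, cell size, particle count, and the drag-like relaxation constant are all made up), not MFiX-Exa’s actual implementation: a fixed fluid grid plus individually tracked particles that move through it.

```python
# Minimal CFD-DEM sketch (toy values, not MFiX-Exa code): a fixed fluid grid
# plus individually tracked particles that move through it.
import numpy as np

nx, ny, nz = 16, 16, 64              # fluid grid cells (the grid is fixed in time)
dx = 0.01                            # assumed cell size [m]
n_particles = 10_000                 # toy particle count

# Per-cell fluid state, evolving in time.
fluid_vel = np.zeros((nx, ny, nz, 3))
fluid_vel[..., 2] = 0.5              # assume the gas flows upward at 0.5 m/s

# Per-particle state: each particle is represented uniquely (DEM).
rng = np.random.default_rng(0)
pos = rng.random((n_particles, 3)) * [nx * dx, ny * dx, nz * dx]
vel = np.zeros((n_particles, 3))

dt = 1e-4                            # seconds per step
for step in range(100):
    # Find the fluid cell each particle currently occupies (the coupling point).
    cells = np.clip((pos / dx).astype(int), 0, [nx - 1, ny - 1, nz - 1])
    gas_here = fluid_vel[cells[:, 0], cells[:, 1], cells[:, 2]]
    # Toy drag-like relaxation toward the local gas velocity (not a real drag law).
    vel += 50.0 * (gas_here - vel) * dt
    pos += vel * dt                  # particles move; the fluid grid does not
```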

Gibson: Will you give us a summary of the challenge problem that MFiX-Exa is tackling?

Musser: Sure. As Syam had mentioned, we’re looking at a chemical looping reactor [CLR]. And for the one we’re looking at, we’re using a solid oxygen carrier in lieu of bringing in, say, air for the combustion of the fuel. If we can kind of visually think about the reactor itself, a very crude approximation of it would be kind of like an elongated doughnut. So, we’ve taken a doughnut and stretched it in space. And solids are going to continuously move around that doughnut in a very slow fashion.

If we kind of think of the left side of the doughnut, that’s going to be our fuel reactor, and that’s where solid particles that have oxygen bound to them will react with the fuel, creating our pure CO2 stream as well as some water vapor. And then solids will pass out the bottom and around the doughnut into the air reactor, where we introduce fresh air, hot air, that will then reoxygenate our solids particles. And as that occurs, it will move up through the riser and down through, again, a cyclone, where the solids are spun out and re-entered into that fuel reactor, and the gas will exit so that we can continue this full-loop process.

To kind of classify this problem, we have a complex geometry where we have a fuel reactor and an air reactor, crossovers, cyclones, and various dip legs within the system. So, there’s a fair amount of geometric complexity with that. To obtain good results from our system, we typically like to have a spatial resolution for that fluid grid of somewhere around two particle diameters. We’re looking at a solids inventory of around five billion particles. To model this, we’re going to be tracking the individual positions in time and space for about five billion particles and have around several hundred million fluid cells in order to accurately represent the geometry of the CLR.
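
As a rough back-of-envelope check of those figures—using assumed values for the particle diameter and solids packing, which are not project specifications—one can relate a cell size of about two particle diameters to the particle and cell counts quoted above:

```python
# Back-of-envelope check with assumed numbers (not project specifications):
# how many particles fit in a fluid cell ~two particle diameters wide, and
# roughly how many cells would pair with ~5 billion particles in dense regions?
import math

d_p = 150e-6               # assumed particle diameter [m]
phi_s = 0.45               # assumed solids volume fraction in dense regions
n_particles = 5e9          # particle inventory from the challenge problem

cell_edge = 2 * d_p
particles_per_cell = phi_s * cell_edge**3 / (math.pi / 6 * d_p**3)   # ~7
n_cells = n_particles / particles_per_cell

print(f"~{particles_per_cell:.1f} particles per dense cell")
print(f"~{n_cells:.1e} cells")   # ~7e8, i.e., several hundred million
# Dilute regions and any mesh refinement would change this, so treat it only
# as a sanity check on the orders of magnitude quoted above.
```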

As the particles are moving around, we have the oxidation and reduction reactions that occur, and those mechanisms will be transferring both mass and energy between the two phases. As the particles give off that oxygen—it’s an endothermic reaction, so they’ll slightly cool—we’ll see that temperature drop. And then as they take up oxygen in the air reactor, they’ll release a fair amount of heat during that oxidation process. There’s a lot of energy transfer that we need to account for in that system. Then, additionally, in multiphase flows—gas–solids flows, in particular—we have a momentum transfer term as well.

The fluid will see the particles through the presence of a volume fraction. We kind of draw a box in space and assume there’s a certain amount of solids in there. Those solids occupy a particular fraction of the volume of that box, and that volume fraction will impact how the fluid behaves. Then, additionally, we have the drag force that occurs between the particles and the fluid. As the fluid moves past the particles, or the particles move past the fluid, we will induce motion in one or the other as a result. So, there’s a lot of interplay between the gas and the solids, and then there’s this complex geometry with multiple inlets and outlets, which account for the introduction of fresh species and the release of spent species in the system.
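
The two coupling terms just described—the solids volume fraction a fluid cell sees and the gas–particle drag force—can be illustrated with a simple sketch. Real CFD-DEM codes such as MFiX-Exa use richer drag correlations; the particle diameter and gas viscosity below are assumed values, and the Stokes form is used only because it is the simplest standard drag law.

```python
# Hedged sketch of the two coupling terms: the solids volume fraction a cell
# sees and a simple Stokes drag force (illustrative only).
import math
import numpy as np

d_p = 150e-6                         # assumed particle diameter [m]
mu_g = 1.8e-5                        # gas viscosity [Pa*s], roughly air
cell_volume = (2 * d_p) ** 3         # cell ~two particle diameters on a side

def solids_volume_fraction(n_particles_in_cell):
    """Fraction of the cell's volume occupied by particles."""
    particle_volume = math.pi / 6 * d_p ** 3
    return n_particles_in_cell * particle_volume / cell_volume

def stokes_drag(u_gas, u_particle):
    """Low-Reynolds-number drag force [N] the gas exerts on one particle."""
    slip = np.asarray(u_gas) - np.asarray(u_particle)
    return 3 * math.pi * mu_g * d_p * slip

print(solids_volume_fraction(7))                       # ~0.46, a fairly dense cell
print(stokes_drag([0.5, 0.0, 0.0], [0.0, 0.0, 0.0]))   # force pushes the particle in +x
```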

Gibson: How will exascale computing manage the process you just described?

Musser: Sure. Obviously, five billion particles is quite a few to keep track of, along with their interactions with one another. In DEM, we resolve individual collisions between a particle and its neighbors, so there’s a lot of computational work in first detecting those collisions and then actually accounting for them in the model itself. So, in general, there’s just a lot of computational work. Additionally, one thing that typically gets set aside in these types of systems is that there is no steady state per se. These are large, transient reactors. So, in order to get information that’s useful to a design engineer, we don’t just need, say, a couple of seconds of physical time modeled. We need that, and we need the initial conditions and boundary conditions that define our problem to fully mature and allow that system to actually represent what we would see in a physical laboratory space.
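
A common way DEM codes keep that collision detection tractable is a cell-binning neighbor search, so that only nearby particles are ever checked against each other. The sketch below is a generic illustration of that idea with made-up particle counts and sizes, not MFiX-Exa’s own neighbor-list machinery:

```python
# Generic cell-binning neighbor search, the usual way DEM codes avoid checking
# all particle pairs for collisions (an illustration, not MFiX-Exa's scheme).
from collections import defaultdict
import numpy as np

def find_contacts(pos, diameter):
    """Return index pairs of equal-size particles whose surfaces overlap."""
    bin_size = diameter                      # bins as wide as one particle
    bins = defaultdict(list)
    for i, p in enumerate(pos):
        bins[tuple((p // bin_size).astype(int))].append(i)

    contacts = []
    for (bx, by, bz), members in bins.items():
        # Only particles in this bin or the 26 neighboring bins can touch.
        candidates = []
        for ox in (-1, 0, 1):
            for oy in (-1, 0, 1):
                for oz in (-1, 0, 1):
                    candidates += bins.get((bx + ox, by + oy, bz + oz), [])
        for i in members:
            for j in candidates:
                if j > i and np.linalg.norm(pos[i] - pos[j]) < diameter:
                    contacts.append((i, j))
    return contacts

rng = np.random.default_rng(0)
pos = rng.random((2_000, 3)) * 0.05          # toy cloud of particles in a 5 cm box
print(len(find_contacts(pos, 0.003)), "contacts detected")
```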

So for exascale, not only do we have fairly large problems in and of themselves from a computational perspective, but we have a long timeframe represented by very small advances in time as we march from the start to the end of our simulations. For example, an individual time step may be on the order of 0.0001 seconds. So, if we need to accumulate ten minutes of physical time modeled, we need a very fast model that can account for that. And we’re hoping exascale can help take these large problems and accelerate time to solution to something that allows these systems to be used in a design cycle.
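
The arithmetic behind that concern is straightforward: at a time step on the order of 0.0001 seconds, ten minutes of physical time amounts to millions of solver steps, each of which has to advance the full particle inventory and fluid grid.

```python
# Simple arithmetic behind the time-to-solution concern described above.
dt = 1e-4                      # seconds per time step (order of magnitude from the text)
physical_time = 10 * 60        # ten minutes of modeled physical time, in seconds
n_steps = physical_time / dt
print(f"{n_steps:,.0f} time steps")   # 6,000,000 steps, each updating billions of particles
```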

Syamlal: Yeah, Scott, I’ll add maybe one more thing to what Jordan just said. The challenge problem we are interested in is solving a problem with five billion particles. Compared to the state of the art when we started this project, that is roughly a thousand-fold increase in the number of particles that can be simulated. So, the exascale computer is helping us get to that stage, partly. The other part is that the Exascale Computing Project, ECP, helped us take a legacy code to the point where it is now able to solve a very large problem—one that we believe will be state of the art when we are able to do it.

Gibson: Your remarks make for a nice segue to the next question. What do you believe will be the enduring legacy of the MFiX-Exa project?

Syamlal: The legacy will be that the MFiX-Exa that we develop through this project, we will eventually make open source, and it will become available to a large number of users. The classic MFiX, the existing code, already has a fairly large user base. So, MFiX-Exa will help that user base, and our expectation is that by the time MFiX-Exa is released, the computer architectures available out there will be more GPU-based compared to the CPU-based architectures that a typical MFiX user uses today. It will greatly benefit them because it’s a code optimized for modern computer architectures. And, also, I would add that as we were starting this project, one of our colleagues, one of the co-PIs, her group did a survey of industrial companies that use CFD-DEM codes. One of the things that the users said was that this increase in capability will help them solve a new class of problems of industrial relevance. These are companies from chemicals, petroleum, pharmaceutical, agriculture, energy. A large number of different industries will benefit from this increase in capability.

Musser: I would like to see, to some extent, the legacy of this project be the greater adoption of large-scale modeling in the design of gas–solids systems moving forward. There are a lot of industries that rely heavily on computer modeling for input to the design process—think of car crash modeling or airplane modeling. I would like to see this project advance the state of the art to the degree that when industry partners are looking at designing large-scale gas–solids systems—whichever industry it may be—they’re able to use these types of models to better facilitate the design and improve the time-to-market capability of these new systems.

Gibson: Thank you both for joining us and sharing the insights.