It takes an exascale supercomputer to drive carbon capture

Here on Earth, we bury our problems and simulate our way out of them later


Over the course of four decades, global carbon dioxide emissions have increased by 90 percent, and it goes without saying, especially this summer week, that the impact is keenly felt.

It would be convenient to bury these facts and the CO2 somewhere out of sight, which is exactly what carbon capture efforts aim to do.

The idea behind carbon capture is to grab CO2 at the point of emission, transport it to a facility, and sequester it underground. The goal, of course, is to keep it from entering the atmosphere, but the process itself comes with some nasty byproducts, which also need to be stored and managed.

There are a number of methods depending on the type of reactor, but one of the most promising is still on the horizon. It could be revolutionary because it eliminates nitrous oxide and other byproducts of the capture reactions. The trouble is that doing this at meaningful scale has been difficult: as the size of a reactor goes up, so too does the complexity of the problem.

The irony is that it takes CO2-generating supercomputer powerhouses to start cracking the CO2 capture problem. In this case, it's America's first exascale supercomputer, the 21-megawatt Frontier at Oak Ridge National Laboratory, although, to be fair, that system is one of the few hydro-powered HPC giants.

Jordan Musser, a scientist at the National Energy Technology Laboratory (NETL) in the US, is leading an effort to use the entire Frontier supercomputer later this year to model the feasibility of moving clean carbon capture from a small-scale lab experiment to something much larger.

Outside of the NETL group's work, there are only about 20 projects queued up that can gobble most of the cores on the exascale machine. The code modeling the new approach to carbon capture has to track billions of particles individually to simulate gas-solid interactions over defined time scales, which, as one might imagine, is more than a little computationally intensive.
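
For a sense of what tracking billions of particles individually involves, the snippet below is a deliberately tiny sketch of the kind of explicit time-stepping loop gas-solid solvers use. It is a toy illustration with an invented drag model and made-up numbers, not NETL's MFIX-Exa code.

```python
# Toy illustration of individually tracked particles in a gas flow.
# Not NETL's MFIX-Exa: the drag model, response time, and field values
# below are assumptions made purely for this sketch.
import numpy as np

def step_particles(pos, vel, gas_vel, dt, tau_p=0.05, g=9.81):
    """Advance every tracked particle by one explicit time step.

    pos, vel : (N, 3) arrays, one row per particle
    gas_vel  : (N, 3) local gas velocity seen by each particle
    tau_p    : assumed drag relaxation time, in seconds
    """
    accel = (gas_vel - vel) / tau_p   # linear drag toward the gas velocity
    accel[:, 2] -= g                  # gravity acts on the vertical axis
    vel = vel + dt * accel
    pos = pos + dt * vel
    return pos, vel

rng = np.random.default_rng(0)
n = 1_000_000                         # a million here; real reactors need billions
pos = rng.uniform(0.0, 1.0, size=(n, 3))
vel = np.zeros((n, 3))
gas_vel = np.tile([0.0, 0.0, 2.0], (n, 1))   # uniform upward gas flow

for _ in range(10):                   # a handful of 1 ms steps
    pos, vel = step_particles(pos, vel, gas_vel, dt=1e-3)
```

Production codes also resolve particle-particle collisions and couple the particles back to the gas-phase flow solution every step, which is where most of the computational cost lives.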

"We are using a metal oxide to provide oxygen for the reaction so there's no nitrogen available, therefore when the reaction occurs with the fossil energy source, there's no nitrous oxide or other byproduct produced. Further, the only resulting gases are carbon dioxide and water vapor so it's possible to condense water vapor and get a pure CO2 stream for use or storage," Musser explained.

NETL's small experimental carbon capture system using this approach is already functional, but "as you make the reactors bigger, the particle sizes remain the same but it changes all the flow conditions. You get different mixing behaviors, different amounts of contact between gas and solid, so this changes the overall performance of the unit," he added. Changes therefore have to be made to the geometry and flow behavior to get the right amount of mixing for heat transfer, chemical reactions, and other processes that have to fit into a particular window of time.
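
A rough back-of-the-envelope calculation shows why. The reactor dimensions, particle size, and packing fraction below are assumptions picked for illustration, not NETL's figures, but they give a feel for how fast the particle count grows when the hardware gets bigger and the particles don't.

```python
# Rough scaling illustration: particle size stays fixed while the reactor
# grows, so the number of particles to track grows with reactor volume.
# All numbers here are assumed for the sake of the sketch.
import math

def particle_count(reactor_diameter_m, reactor_height_m,
                   particle_diameter_m=200e-6, solids_fraction=0.3):
    reactor_volume = math.pi * (reactor_diameter_m / 2) ** 2 * reactor_height_m
    particle_volume = math.pi / 6 * particle_diameter_m ** 3
    return solids_fraction * reactor_volume / particle_volume

lab = particle_count(0.1, 1.0)      # bench-scale column
pilot = particle_count(1.0, 10.0)   # pilot-scale unit, 10x the linear size

print(f"lab scale   : {lab:.1e} particles")
print(f"pilot scale : {pilot:.1e} particles ({pilot / lab:.0f}x more)")
```

With those assumed numbers, the bench unit already holds a few hundred million particles and the pilot unit holds hundreds of billions, and because the mixing and gas-solid contact change with the geometry, the larger unit has to be simulated at its own scale rather than extrapolated from the lab run.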

"The advantage of having exascale capabilities is we can look at larger systems in much higher resolution," Musser said. "With limited computing we'd do coarsening of approximations of the system. Now, we can look at mid-to-large-scale units, which takes us into the demo pilot range for these to provide insight into operational conditions or potential problems."

Just as with the jump from a small experimental device to a much larger one, scaling the simulation isn't linear. Being able to consume most or all of the compute on an exascale machine is far from trivial: Musser and his team had to completely rewrite the physical models from their legacy MFIX code, port them to GPUs, and test them out.

The code "allows us to look inside the corrosive environment of these reactors and see how the process is behaving," Musser said. "An extension of the legacy MFIX primarily used for lab-scale devices will allow a ramp-up of problem size, speed, and accuracy on exascale computers, like Frontier, over the next decade." ®
