How much should you utilise a DAC plant?

By Neil Hacker & Ryan Anderson


A utilisation rate is basically the percentage of time you are actually doing something. For example, with a DAC plant a 100% utilisation rate would suggest you are actively using energy 24/7 to pull CO₂ out of the sky/separate it from your sorbent, whereas a 50% utilisation rate would suggest you are only actively using energy for half of the day in order to remove CO₂.¹

¹ This doesn't actually mean you are only able to create a stream of removed CO₂ for 50% of the day, just that you don't need to actively be putting energy into your system for more than 50% of the day to remove CO₂.

Some systems will work much better with very high utilisation rates. For example, systems that rely on very high temperatures or pressures may find that, unless they run at high utilisation rates, the cost and time of getting back up to operating conditions become pretty much prohibitive.

Other systems, like Parallel Carbon, could use energy for 50% of the day to electrolyse water, but then they could shut off their energy inputs for the other 50% of the day and just drip feed the acid they’ve created into their carbonate mixture to separate out CO₂.

Why does this matter? It matters because only needing power for, say, 50% of the day can radically change how you are able to generate that power compared with needing power 100% of the day. If you only need power 50% of the day you can get there basically with renewables like solar or wind. If you need power up to around 70% of the time you can add a couple of hours of battery energy storage at little additional cost to cover that last bit. If you need power for more than, say, 80% of the day you start needing either large-scale battery storage, clean firm solutions like geothermal², or the grid (I'm sure you can tell why that would not be a great option from a life cycle analysis standpoint).

² Even something like geothermal has drawbacks if you need very high utilisation rates, mainly because converting it into electricity, which even temperature-swing sorbent DAC needs for 10–25% of its power demand, ends up costing around 3x more per MWh than intermittent renewables.

Let's have a look at an example to see how different energy costs can change what the optimal utilisation rate is.

Model

In this example we have a power schedule where the cost per MWh differs depending on how much utilisation we're getting. I'm going to start with the default schedule below, but you can either drag the values around or use the sidebar to try some other pre-set options. Note that the prices in this chart denote the average cost of all MWh at a certain utilisation rate, not the incremental cost increase.


Power scenario 1: This schedule uses only renewables and then off-grid battery storage; this approach gets very, very expensive as you try to get to full utilisation.

Power scenario 2: This schedule uses renewables at $40/MWh until 65% utilisation, then gets electricity from the grid at $60/MWh. The reason it slopes upward rather than becoming a new flat line is that these costs represent the average over all tons removed at a given utilisation. (This actually undersells the true cost, because you would be removing less net CO₂ if you connect to the grid.)

Power scenario 3: This schedule demonstrates the effect of a constant price at every utilisation level.

Next suppose we have a DAC plant that costs $ to build, has an annual removal capacity of tons, requires MWh to remove a ton of CO₂, and has a capital recovery factor³ of %. (You can either customise these values or pick from the default options to the right.)

³ You can find out more about capital recovery factors in another post of mine here.

From these values we can calculate the cost of power, which we get by multiplying power intensity by the cost per MWh (at whichever utilisation rate we are looking at).

\( \text{Power cost} = \text{Power intensity} \times \text{Cost per MWh} \)

We can also calculate the cost of the CRF. To do this we get the project cost, multiply it by the CRF value and then divide this by the number of tons we're removing at a specific utilisation rate.

\( \text{CRF cost} = \dfrac{\text{Project cost} \times \text{CRF}}{\text{Project capacity} \times \text{Utilisation rate}} \)

Adding these values then gives us our total cost.

\( \text{Total cost} = \text{Power cost} + \text{CRF cost} \)
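To make the mechanics concrete, here is a minimal sketch of this calculation in Python. Everything in it, the power_price_per_mwh schedule and the default plant parameters, is an illustrative placeholder I've made up, not the actual defaults from the interactive model:

```python
# Illustrative sketch of the cost model above.
# All numbers are placeholders, not the interactive defaults from the post.

def power_price_per_mwh(utilisation):
    """Average $/MWh across all MWh used at a given utilisation rate (0-1).
    Placeholder schedule: cheap renewables up to 65%, rising steeply after."""
    if utilisation <= 0.65:
        return 40.0
    return 40.0 + 300.0 * (utilisation - 0.65)

def cost_per_ton(utilisation,
                 project_cost=100_000_000,   # $ CAPEX (placeholder)
                 capacity=1_000_000,         # t/yr at 100% utilisation (placeholder)
                 crf=0.12,                   # capital recovery factor (placeholder)
                 power_intensity=2.0):       # MWh per ton removed (placeholder)
    """Levelized cost per ton removed at a given utilisation rate."""
    power_cost = power_intensity * power_price_per_mwh(utilisation)
    crf_cost = (project_cost * crf) / (capacity * utilisation)
    return power_cost + crf_cost

# Find the utilisation rate with the lowest levelized cost per ton.
grid = [u / 100 for u in range(5, 101)]
best = min(grid, key=cost_per_ton)
print(f"Cheapest cost/t at {best:.0%} utilisation: ${cost_per_ton(best):.0f}/t")
```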

We can see these values plotted below (total cost/t, CRF cost/t, and power cost/t for each plant).
If you have kept the default power costs and are using plant 1, you will probably see that the lowest cost per ton is very close to 100% utilisation. This is basically because you have so little capacity over which to recover your large CAPEX that you just need to churn out tons.

However, if you select plant 2 you will see that in this case the lowest levelized cost per ton is actually around the 55% utilisation mark. What does this mean? It means that if you, as the seller, want to be able to charge the lowest price per ton (while still achieving some expected rate of return), you only want to supply energy to your plant 55% of the time.

Theoretical consequences

This outcome has some pretty big consequences for both buyers and sellers. From the seller's point of view, if they want to be most competitive and able to charge the lowest price, it would benefit them to lower their utilisation rate into the range with the lowest levelized DAC cost, i.e. well below 100%.

The consequence for buyers, though, is I think much more profound. To see why, it is worth taking a moment to restate what the goal of buyers in the CDR sector actually is. For some it is meeting annual net-zero targets, but as a group it is to maximise the speed with which we can scale up CDR capacity.

Even with under-utilisation, if this leads to lower costs for the buyer it might be the preferable option, as it could allow the whole sector to scale more quickly. Consider plant 2 again. If it were operating at 50% utilisation it would be selling 500,000 tons at $80/t, putting the buyer on the hook for $40,000,000 a year. If instead it were operating at 100% utilisation it would be selling 1,000,000 tons at $180/t, putting the annual cost to buyers at $180,000,000, or 4.5x more.
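Spelled out: \( 500{,}000\,\text{t} \times \$80/\text{t} = \$40\text{M/yr} \), versus \( 1{,}000{,}000\,\text{t} \times \$180/\text{t} = \$180\text{M/yr} \), and \( 180/40 = 4.5 \).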

This difference is incredibly important because in many ways that money is just wasted. You're not buying any more ability to scale capacity with that money: we've already established that, from the seller's point of view, these two options give the exact same expected rate of return, so neither one is any more preferable.

You're therefore not buying any more learning with your money either, which could help the technology get cheaper in the future. While we might think of learning rates as a product of cumulative tons removed, this isn't really true. Once you've built the DAC modules/plant, it doesn't really matter whether it runs at 50% utilisation or 100%; the "learning", so to speak, has already been achieved by making the thing itself.

What this means is that the money you'd spend on one plant running at 100% utilisation could instead pay for four or more plants all running at 50% utilisation. If you use your money in this way you get much, much more leverage out of it in speeding up the overall increase in CDR capacity. This could be rephrased from the buyer's point of view to say that what they really want to do is pay the absolute minimum amount required to allow a DAC plant to hit the expected rate of return it needs to raise follow-on financing for new plants. The model above would suggest that this could be achieved with over 4x less money if a plant is able to run at lower utilisation rates versus needing to run almost all of the time.

Practical consequences

What this means in practice is that companies that can take advantage of intermittent energy supply rather than running at very high utilisation rates have a huge advantage as you try to scale up the industry.

There's a reason we don't really care about the utilisation of any given plant in any given year: until you reach real scale, the output it produces doesn't really have any value except insofar as it helps the company learn how to make the technology cheaper. It's not like solar, which would be making useful kWh all the time that we might want to harness. No, in this case all we care about is getting more capacity installed, and if the model above has much to say on the matter, the best way to do that is to find DAC technologies that can run at lower utilisation rates.

Taking these ideas to their extremes

Let's take another quick look at the chart from above, this time only showing the total cost per ton. What would be best from the buyer's point of view is to pick a point on this line where the area under that point is smallest; in other words, we're looking for the area of the box below that point⁴, for each level of utilisation.

⁴ Note this box is not necessarily the lowest cost to the buyer; it is just illustrative of the area we are looking for.

One other consequence of this type of thinking is that even if the lowest levelized cost of DAC occurs at 100% utilisation (i.e. Plant 2 and Power scenario 3), it might still be better to have the plant run below full utilisation, as this can be much cheaper for the buyer while the seller is indifferent.

One quick way to get this area for all values on the chart is to multiply the levelized cost of DAC per ton by the utilisation rate, which gives us this chart. I've called the y axis 'normalised' because it doesn't show the actual total cost to remove tons at given utilisation levels, but it has the same shape as the total costs you would get if you did that multiplication. What you will find is that for virtually every pairing of plant stats with power costs, this line is lowest at the lowest level of utilisation.
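In code terms, reusing the cost_per_ton sketch from the model section above (still with placeholder numbers rather than the interactive defaults), the quantity being plotted is roughly:

```python
# "Normalised" buyer spend: levelized cost per ton multiplied by utilisation.
# Reuses cost_per_ton() from the earlier sketch; all inputs remain placeholders.

def normalised_buyer_spend(utilisation):
    """Proportional to the buyer's annual spend at a given utilisation rate
    (actual spend = cost_per_ton(u) * capacity * u; capacity is a constant factor)."""
    return cost_per_ton(utilisation) * utilisation

grid = [u / 100 for u in range(5, 101)]
cheapest = min(grid, key=normalised_buyer_spend)
print(f"Buyer spend (per unit of capacity) is lowest at {cheapest:.0%} utilisation")
```

With this placeholder schedule the minimum sits at the lowest utilisation level checked, because the CRF charge per unit of capacity is fixed and the power spend only grows as utilisation rises.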

What does this suggest? It suggests that if the buyer is primarily concerned with paying the least amount of money to a DAC plant so that it hits some needed rate of return to allow follow on scale ups then they should basically have it produce no units of carbon removal at all and just pay the fixed CAPEX costs.

You can kind of understand why this is the case: CDR (for voluntary markets) is basically doing something that has no direct economic value, and once the plant is built it provides very little in the way of additional learning effects. But it still seems like the above is pretty divorced from what we might initially have thought of as the best path forward.

I don't want to stretch an analogy too much, but this is similar to a number of probability games where how you maximise the expected return on any given play and how you maximise the expected growth of wealth over time end up being different strategies. It's worth seriously asking ourselves when we actually want to be pulling CO₂ out of the atmosphere. The above would suggest that doing so now imposes a very large tradeoff on how quickly we can scale the sector as a whole. Of course you eventually want to turn on the taps, but given that the entire sector will likely have very little impact in terms of how much it can remove until decades from now, there is a case to be made that tons today is the wrong metric to be maximising.

If you take the ideas above to their extremes, it would suggest buyers basically instantly pay the CAPEX costs for CDR plants that would be able to⁵ pull CO₂ out of the sky and then have them sit idle while the company simply makes more plants that will similarly sit idle.

⁵ This point of knowing a company can properly pull tons out ends up being quite important.

What is the best level of utilisation?

There are broadly 3 zones of utilisation we might now want to consider. Zone 1 is low utilisation levels. Here a buyer would be paying a very high cost per ton but the overall amount they have to actually spend per year is the lowest.

Zone 2 is around whichever utilisation level leads to the cheapest cost per ton.

Zone 3 is around 100% utilisation.

Sometimes zones 2 and 3 coincide: for example, if you have Power scenario 1 and Plant 1, both of these zones will be at the very top end of utilisation. If, however, you stick with Power scenario 1 and look at Plant 2, you will see that zone 2 is much further left, at utilisation levels around 55%. If you look at Plant 3, zone 2 will have moved right, to around 85% utilisation.

Each zone has something to be said in its favour. Zone 1 allows the buyer to pay the least while still having the same benefits on scaling capacity as any other utilisation level. Zone 2 minimises the actual cost per ton buyers would be paying and allows the buyers to pay less in absolute terms than zone 3 while having the same scaling benefits. Zone 3 maximises the actual number of tons being removed in any given year, which some types of market dynamics might incentivise.

There is a question of why we might prefer one of these zones to the others. I'm going to start with the premise that zone 1 is best. The reasoning is that the logic of the section above, that the company gets the same benefit but the buyer pays far, far less, is a pretty compelling starting point. If we look at the graph above again for plant 2, paying for tons at 10% utilisation costs the buyer about half as much per year as paying for tons at 50% utilisation, and something like 8 times less than at 100% utilisation. These numbers seem large enough that we'd need a convincing reason not to follow them.

What might make us want to be in one of the higher zones?

Market incentives
If you have lots of fragmented buyers they may care more about meeting individual targets and so not have the same optimisation as a single buyer would have. At the end of the day many companies would be buying carbon removal to count against their emissions and they might want to, you know, actually have those tons removed every year.

A single buyer might be better able to internalise the approach that scales capacity the fastest, but the more buyers want annual targets hit, the less we can focus on this. I can't lie, this tension is a tricky one: more buying power helps accelerate capacity increases, but a lot of that buying power is likely to want yearly results, which, as we've seen above, might be inefficient for the long-run growth rate. You probably want more money, but I'm not exactly sure how to balance these forces. This is probably going to be a really strong driving force that is difficult to deal with. If a lot of the eventual money comes from buyers purchasing CDR for compliance reasons, they will likely care about actually getting a certain number of tons removed every year.

This would almost certainly push you well out of zone 1 utilisation rates. One question to ask, though, is whether these market forces would also push you out of zone 2 (where it sits at lower utilisation than zone 3) and actually create an incentive for close to full utilisation even if that means higher prices per ton. I'm not actually sure what the answer is; I expect it depends on the marginal abatement costs companies face, so it is pretty difficult to predict at the moment.

Another pragmatic reason pushing us towards higher utilisation levels is that being in zone 1 would look pretty weird from a PR standpoint. "US government pays weremovecarbon $100m to make some DAC units and then not really use them"… not ideal. While operating in zone 1 really might make your money go furthest, it's also kind of strange and might be a hard sell. [I think one of the reasons it seems so strange is that with most other technologies that have scaled we did want to be producing stuff in the interim, i.e. you probably did want your solar cell to be producing energy. CDR has no direct economic value, so you spend on it only insofar as it helps scale our ability to do it. If that means actually buying fewer tons in a given year than possible, that might well be alright.]

You also probably do want to consider the incentives you're putting on companies to actually get better at making their DAC plants. Having them compete to reduce the actual cost per ton buyers are paying is a really simple, really legible thing to try to maximise (or I guess minimise). Buyers can easily see the progress companies are making, and everyone can be pretty certain that if companies are selling x tons a year, and that is getting cheaper, they really do have the ability to remove that many tons.

If we operate in zone 1 this is less true though. In this area of low utilisation you focus much more on scaling capacity vs demonstrating just how much CDR you can do in a given year. Firstly this might make it a bit more opaque for the market to know how well companies are doing. You basically have to look at CAPEX spend, capacity, and the assumed CRF of a project and work out what that would imply the cost curve is for the plant at different levels of utilisation. This is clearly more effort than just letting the plant remove the CO₂ and then sell the CO₂ at the lowest price it can.

Secondly, even if you care mainly about scaling up capacity until we have the ability to remove meaningful amounts of CO₂, at some point you will want to, you know, actually start removing tons of carbon. It's somewhat unlikely that a company could build a plant that couldn't actually function well at full capacity while pretending that it could, but if market forces aren't pushing companies to run at higher capacities, checking this becomes more complex.

Sellers of CDR also have to believe that, even if they're being paid enough at low utilisation rates now, when the time does come, probably a few decades away, to actually start pulling meaningful amounts of CO₂ out of the air, there will be enough money there to pay for it. I'm somewhat conflicted about what this suggests. On the one hand, going for lower utilisation rates allows buyers to use their money more efficiently, which might leave them with more when the time comes to purchase serious amounts of actual removal. On the other hand, if you aren't buying large amounts of tons now, then when that switch does occur buyers might, as we've seen above, need to spend anywhere from 2-10x more per year. An ecosystem that grows demand for actual tons removed alongside demand for capacity scaling, I think, makes it clearer whether we are on track to pay for enough CDR at truly large scales.

The points above would all push you away from wanting to be in zone 1 and make you more likely to want to be in something like zone 2, and quite possibly into zone 3 territory.

Learning effects
One of the big assumptions made above is that a company will get the same level of learning at any utilisation level. If this is the case then, as we saw above, there is a strong incentive not to run the plant near full capacity, as this just adds costs for the buyer without the seller getting any better at producing the DAC plant itself. Put another way: once you've spent the minimum amount needed to get the full level of learning possible, any more money is better spent on growing capacity rather than paying for actual tons removed.

There is some reason to think this is mostly true: a lot of the actual "learning", so to speak, does tend to come from deploying a technology rather than from operating it. This would make us favour lower utilisation ranges, as it suggests the extra money really might not be helping to make CDR any cheaper.


★ Just want to draw attention to these next two paragraphs as they talk about the actual market stage we're currently in.
The extent to which this is true, or matters, will depend a lot on the actual scale of CDR going on. At very small levels you probably could learn a lot about how to make your tech better by running it at very high utilisation levels and so you probably want to take advantage of that effect as better learning rates dwarf pretty much everything in the long run.

Also, at small scales your zone 2 and zone 3 are likely to overlap. In other words, because your CAPEX is probably really expensive relative to your capacity, you are better off making as many tons as you can to amortize that CAPEX spend. In practical terms, this means that if market forces push for the cheapest cost per ton, then, at the moment at least, you won't see much difference in how any DAC plant should operate: they should all be going full steam ahead. It is only at larger scales that this behaviour should potentially change.

Similarly, once your capacity gets very, very large, i.e. large enough to start making a noticeable difference to global CO₂ levels, you probably want to… start doing that? At a certain point we will care less about scaling capacity alone and start caring about actually removing CO₂, and once we roughly reach that point, higher utilisation rates are probably the way to go.

Wider ecosystem
Another reason why the above may not work so well is that it considers a DAC plant in isolation, but what if it uses a different company to actually sequester the CO₂? You would now have to pay that company money to...not sequester stuff(?), and what if they have different cost structures?

I think where this really comes in as an issue is with the power generation itself, things like solar, wind, and batteries. If you are operating around zone 1 then you won’t actually have much of a demand for these things because you won’t actually be removing many tons. However, we can expect that at some point you will want to start removing lots of tons and what happens then?

Let's say you get down to 1.5 MWh/t for removal; if we get to the stage of having the capacity to remove 1 Gt/yr, this would take 1,500 TWh per year. This is about 10 times larger than the worldwide solar output of 156 TWh in 2020. At certain scales, if you are only deploying DAC plants but not actually drawing the energy to run them, you risk not having enough supporting energy infrastructure available when you eventually do want to scale up removals.
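For reference, the arithmetic behind that figure is \( 1\,\text{Gt/yr} \times 1.5\,\text{MWh/t} = 1.5 \times 10^{9}\,\text{MWh/yr} = 1{,}500\,\text{TWh/yr} \).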

I think this is one of the biggest reasons to worry about operating in zone 1. Given the sheer amount of energy DAC will potentially need, it seems much better to grow its capacity alongside solar at at least somewhat similar speeds. You don't get much benefit from having scaled the number of DAC plants to enormous levels if there isn't enough energy to power them.

Different cost structures
The above is all good analysis if (1) you have to pay for energy and (2) you can only use your CO₂ in ways that have no inherent economic value, i.e. shoving it underground. If either of these isn't true then things change a bit.

If we take assumption (1), what happens if you have a DAC facility that can still operate at lower levels of utilisation and that has a source of power that costs ≈$0? Well, let's not just imagine it, let's have a look. If you use this power schedule you will find that it sets the cost of power at $0 all the way up to 30% utilisation; after this point it follows power scenario 1. You can see from the graph below that if you didn't have to pay for power at all up to the 30% utilisation mark, then from the buyer's side they are indifferent to buying any level of tons pretty much all the way up to that level. In this setting you would probably put a hard floor on utilisation at whatever level you can access basically free energy. This isn't all just complete fantasy either. It's hard to get good figures on this, but it might be that above 5% of solar energy in the States is being curtailed. If you have DAC modules that can get going at a moment's notice, this is an economical way to provide instant energy demand, which can actually help the grid decarbonise faster by making solar projects better investments. (You might need to compete against bitcoin mining though.)
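As a rough sketch of that scenario, here is a variant of the earlier placeholder model assuming free power up to 30% utilisation. (It simplifies things by applying the full schedule price to all MWh above 30% rather than averaging free and paid MWh, and all numbers remain made-up placeholders.)

```python
# Variant of the earlier placeholder schedule: curtailed (free) power up to 30%
# utilisation, then the placeholder schedule from the earlier sketch applies.

def power_price_free_tranche(utilisation):
    if utilisation <= 0.30:
        return 0.0
    return power_price_per_mwh(utilisation)  # placeholder schedule from above

def cost_per_ton_free_power(utilisation):
    power_cost = 2.0 * power_price_free_tranche(utilisation)      # placeholder MWh/t
    crf_cost = (100_000_000 * 0.12) / (1_000_000 * utilisation)   # placeholder CAPEX, CRF, capacity
    return power_cost + crf_cost

# Buyer spend (per unit of capacity) is flat up to ~30% utilisation, since
# cost_per_ton * utilisation reduces to the constant CRF charge in that range.
for u in (0.10, 0.20, 0.30, 0.40):
    print(f"{u:.0%}: buyer spend proportional to {cost_per_ton_free_power(u) * u:.1f}")
```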

The second assumption we could break is that you have no economic use for your captured CO₂. If this is the case then buyers are basically just paying for the climate benefit, and as we discussed above, if they mainly care about quickly scaling CDR infrastructure because of that climate benefit, they might actually want DAC plants closer to the zone 1 range.

If however you are selling your CO₂ as an input and so not relying on the voluntary markets then you will probably want to churn out as much of it as possible and the buyers also don’t mind paying for as much as you can produce because, well, it’s actually useful to them right now for its own sake. This would likely push you much closer to the zone 3 type utilisation rates.

Where does this leave us?

There are two countervailing forces we've talked about above. On the one hand, lower utilisation rates can lead to a much more efficient use of capital as you try to scale up DAC; on the other hand, there are lots of institutional forces that would push you closer to full utilisation.

Where we end up will be determined by a number of things, firstly the actual makeup of who is buying CDR matters a lot. If you have something closer to a single buyer who can take the long view you might well see yourself closer to the lower end. If you have a whole mishmash of people trying to buy CDR they will probably want more immediate results in terms of buying actual tons removed to count towards their emission goals, which would put you at the higher end.

A second reason that might push you away from the very low utilisation rates is that they are just kind of more complicated to deal with. If you are in the zone 2 kind of region it’s pretty clear that you just buy actual tons from firms at the lowest price going. If you’re in the zone 1 area you start needing to be more sophisticated in how you link your purchases with the effects you want to see. This isn’t to say it’s prohibitively difficult, just that the closer things get to an open market the more pressure there will be towards the simpler option.


Where I think I come out on this is that you really want to try to support technologies that would be capable of operating at the utilisation rates with the lowest cost per ton (zone 2). In terms of the relative pull of more efficient capital deployment at rates a bit lower than this and higher utilisation rates potentially being able to bring in more buyers I’m not fully sure where I end up… which is not super satisfying I know.

A lot of this comes down to the fact these situations all seem very far away. At the moment we can kind of think of tons removed and DAC modules interchangeably as all of the volumes are so small, as volumes increase this distinction comes to matter more.

At the very least I hope this challenges people to think about what it is we actually want to maximise and what the most efficient way of achieving that is. We tend to talk a lot about maximising tons removed, which makes sense if you only think about 100% utilisation situations, but if you go beyond those you may realise, to the extent we get to decide on the implementation strategy for buying CDR, that a different strategy is preferable.