Alternative Geologies: Trump’s “America First Energy Plan”

Also published at Resilience.org.

Donald Trump’s official Energy Plan envisions cheap fossil fuel, profitable fossil fuel and abundant fossil fuel. The evidence shows that from now on, only two of those three goals can be met – briefly – at any one time.

While many of the Trump administration’s “alternative facts” have been roundly and rightly ridiculed, the myths in the America First Energy Plan are still widely accepted and promoted by mainstream media.

The dream of a great America which is energy independent, an America in which oil companies make money and pay taxes, and an America in which gas is still cheap, is fondly nurtured by the major business media and by many politicians of both parties.

The America First Energy Plan expresses this dream clearly:

The Trump Administration is committed to energy policies that lower costs for hardworking Americans and maximize the use of American resources, freeing us from dependence on foreign oil.

And further:

Sound energy policy begins with the recognition that we have vast untapped domestic energy reserves right here in America. The Trump Administration will embrace the shale oil and gas revolution to bring jobs and prosperity to millions of Americans. … We will use the revenues from energy production to rebuild our roads, schools, bridges and public infrastructure. Less expensive energy will be a big boost to American agriculture, as well.
– www.whitehouse.gov/america-first-energy

This dream harkens back to a time when fossil fuel energy was indeed plentiful and cheap, when profitable oil companies did pay taxes to fund public infrastructure, and the US was energy independent – that is, when Donald Trump was still a boy who had not yet managed a single company into bankruptcy.

To add to the “flashback to the ’50s” mood, Trump’s plan doesn’t mention renewable energy, solar power, or wind turbines – it’s all fossil fuel all the way.

Nostalgia for energy independence

Let’s look at the “energy independence” myth in context. It has been more than 50 years since the US produced as much oil as it consumed.

Here’s a graph of US oil consumption and production since 1966. (Figures are from the BP Statistical Review of World Energy, via ycharts.com.)

Gap between US oil consumption and production – from stats on ycharts.com

Even at the height of the fracking boom in 2014, according to BP’s figures Americans were burning 7 million barrels per day more oil than was being produced domestically. (Note: the US Energy Information Administration shows net oil imports at about 5 million barrels/day in 2014 – still a big chunk of consumption.)

OK, so the US hasn’t been “energy independent” in oil for generations, and is not close to that goal now.

But if Americans Drill, Baby, Drill, isn’t it possible that great new fields could be discovered?

Well … oil companies in the US and around the world ramped up their exploration programs dramatically during the past 40 years – and came up with very little new oil, and what they did find was very expensive.

It’s difficult to find estimates of actual new oil discoveries in the US – though it’s easy to find news of one imaginary discovery.

When I google “new oil discoveries in US”, most of the top links go to articles with totally bogus headlines, in totally mainstream media, from November 2016.

For example:

CNN: “Mammoth Texas oil discovery biggest ever in USA”

USA Today: “Largest oil deposit ever found in U.S. discovered in Texas”

The Guardian: “Huge deposit of untapped oil could be largest ever discovered in US”

Business Insider: “The largest oil deposit ever found in America was just discovered in Texas”

All these stories are based on a November 15, 2016 announcement by the United States Geological Survey – but the USGS claim was a far cry from the oil gushers conjured up in mass-media headlines.

The USGS wasn’t talking about a new oil field, but about one that has been drilled and tapped for decades. It merely estimated that there might be 20 billion more barrels of tight oil (oil trapped in shale) remaining in the field. The USGS announcement further specified that this estimated oil “consists of undiscovered, technically recoverable resources”. (Emphasis in USGS statement). In other words, if and when it is discovered, it will likely be technically possible to extract it, if the cost of extraction is no object.

The dwindling pace of oil discovery

We’ll come back to the issues of “technically recoverable” and “cost of extraction” later. First let’s take a realistic look at the pace of new oil discoveries.

Bloomberg sums it up in an article and graph from August 2016:

Graph from Bloomberg.com

This chart is restricted to “conventional oil” – that is, the oil that can be pumped straight out of the ground, or which comes streaming out under its own pressure once the well is drilled. That’s the kind of oil that fueled the 20th century – but the glory days of discovery ended by the early 1970s.

While it is difficult to find good estimates of ongoing oil exploration expenditures, we do have estimates of “upstream capital spending”. This larger category includes not only the cost of exploration, but the capital outlays needed in developing new discoveries through to production.

Exploration and development costs must be funded by oil companies or by lenders, and the more companies rely on expensive wells such as deep off-shore wells or fracked wells, the less money is available for new exploration.

Over the past 20 years companies have been increasingly reliant on a) fracked oil and gas wells, which suck up huge amounts of capital, and b) exploration in ever-more-difficult environments such as the deep sea, the Arctic, and countries with volatile social situations.

As Julie Wilson of Wood Mackenzie forecast in Sept 2016, “Over the next three years or more, exploration will be smaller, leaner, more efficient and generally lower-risk. The biggest issue exploration has faced recently is the difficulty in commercializing discoveries—turning resources into reserves.”

Do oil companies choose to explore in more difficult environments just because they love a costly challenge? Or is it because their highly skilled geologists believe most of the oil deposits in easier environments have already been tapped?

The following chart from Barclays Global Survey shows the steeply rising trend in upstream capital spending over the past 20 years.

Graph from Energy Fuse Chart of the Week, Sept 30, 2016


Between the two charts above – “Oil Discoveries Lowest Since 1947”, and “Global Upstream Capital Spending” – there is overlap for the years 1985 to 2014. I took the numbers from these charts, averaged them into five-year running averages to smooth out year-to-year volatility, and plotted them together along with global oil production for the same years.

Based on Wood Mackenzie figures for new oil discoveries, Barclays Global Survey figures for upstream capital expenditures, and world oil production figures from the US Energy Information Administration
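For anyone who wants to reproduce the smoothing step described above, here is a minimal sketch of a five-year running average in Python. The sample values are placeholders, not the actual figures read off the charts.

```python
# A minimal sketch of the five-year running average used to smooth the chart data.
# The sample values below are placeholders, not the actual figures from the charts.
def running_average(values, window=5):
    """Centred moving average; the ends use whatever neighbours are available."""
    smoothed = []
    for i in range(len(values)):
        lo = max(0, i - window // 2)
        hi = min(len(values), i + window // 2 + 1)
        chunk = values[lo:hi]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

discoveries = [30, 12, 25, 18, 9, 14, 22, 8, 11, 6]   # hypothetical annual values
print(running_average(discoveries))
```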

This chart highlights the predicament faced by societies reliant on petroleum. It has been decades since we found as much new conventional oil in a year as we burned – so the supplies of cheap oil are being rapidly depleted. The trend has not been changed by the fracking boom in the US – a boom that has drawn on oil resources known for decades, that are costly to extract, and that amounted to only about 5% of world production at its height.

Yet while our natural capital in the form of conventional oil reserves is dwindling, the financial capital at play has risen steeply. In the 10-year period from 2005, upstream capital spending nearly tripled from $200 billion to almost $600 billion, while oil production climbed only about 15% and new conventional oil discoveries averaged out to no significant growth at all.

Is doubling down on this bet a sound business plan for a country? Will prosperity be assured by investing ever greater amounts of financial capital in ever more expensive oil reserves, because the industry simply can’t find significant quantities of cheaper reserves? That fool’s bargain is a good summary of Trump’s all-fossil-fuel “energy independence” plan.

(The Canadian government’s implicit national energy plan is not significantly different, as the Trudeau government continues the previous Harper government’s promotion of tar sands extraction as an essential engine of “growth” in the Canadian economy.)

To jump back from global trends to a specific example, we can consider the previously mentioned “discovery” of 20 billion barrels of unconventional oil in the Permian basin of west Texas. Mainstream media articles exclaimed that this oil was worth $900 billion. As geologist Art Berman points out, that valuation is simply 20 billion barrels times the market price last November of about $45/barrel. But he adds that based on today’s extraction costs for unconventional oil in that field, it would cost $1.4 trillion to get this oil out of the ground. At today’s prices, in other words, each barrel of that oil would represent a $20 loss by the time it got to the surface.
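Berman’s arithmetic is easy to check. The sketch below redoes it; the $50/barrel figure for “today’s price” is my assumption (it is roughly what makes the quoted $20-per-barrel loss work out), not a number from the article.

```python
# Rough check of the Permian "discovery" economics cited above.
barrels = 20e9                   # estimated technically recoverable barrels
november_price = 45              # US$/barrel, market price in November 2016
total_extraction_cost = 1.4e12   # US$, Berman's estimated cost to produce it all

headline_value = barrels * november_price          # ~$900 billion
cost_per_barrel = total_extraction_cost / barrels  # ~$70 per barrel

assumed_current_price = 50       # US$/barrel; an assumption, not from the article
loss_per_barrel = cost_per_barrel - assumed_current_price   # ~$20 per barrel

print(f"headline value: ${headline_value / 1e9:.0f} billion")
print(f"cost: ${cost_per_barrel:.0f}/bbl, implied loss: ${loss_per_barrel:.0f}/bbl")
```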

Two out of three

To close, let’s look again at the three goals of Trump’s America First Energy Plan:
• Abundant fossil fuel
• Profitable fossil fuel
• Cheap fossil fuel

With remaining resources increasingly represented by unconventional oil such as that in the Permian basin of Texas, there is indeed abundant fossil fuel – but it’s very expensive to get. Therefore if oil companies are to remain profitable, oil has to be more expensive – that is, there can be abundant fossil fuel and profitable fossil fuel, but then the fuel cannot be cheap (and the economy will hit the skids). Or there can be abundant fossil fuel at low prices, but oil companies will lose money hand-over-fist (a situation which cannot last long).

It’s a bit harder to imagine, but there can also be fossil fuel which is both profitable to extract and cheap enough for economies to afford – it just won’t be abundant. That would require scaling back production/consumption to the remaining easy-to-extract conventional fossil fuels, and a reduction in overall demand so that those limited supplies aren’t immediately bid out of a comfortable price range. For that reduction in demand to occur, there would have to be some combination of dramatic reduction in energy use per capita and a rapid increase in deployment of renewable energies.

A rapid decrease in demand for oil is anathema to Trumpian fossil-fuel cheerleaders, but it is far more realistic than their own dream of cheap, profitable, abundant fossil fuel forever.
Top photo: composite of Donald Trump in a lake of oil spilled by the Lakeview Gusher, California, 1910. The Lakeview Gusher was the largest on-land oil spill in the US. It occurred in the Midway-Sunset oil field, which was discovered in 1894. In 2006 this field remained California’s largest producing field, though more than 80% of the estimated recoverable reserves had been extracted. (Source: California Department of Conservation, 2009 Annual Report of the State Oil & Gas Supervisor)

Energy From Waste, or Waste From Energy? A look at our local incinerator

Also published at Resilience.org.

Is it an economic proposition to drive up and down streets gathering up bags of plastic fuel for an electricity generator?

Biking along the Lake Ontario shoreline one autumn afternoon, I passed the new and just-barely operational Durham-York Energy Centre and a question popped into mind. If this incinerator produces a lot of electricity, where are all the wires?

The question was prompted in part by the facility’s location right next to the Darlington Nuclear Generating Station. Forests of towers and great streams of high-voltage power lines spread out in three directions from the nuclear station, but there is no obvious visible evidence of major electrical output from the incinerator.

So just how much electricity does the Durham-York Energy Centre produce? Does it produce as much energy as it consumes? In other words, is it accurate to refer to the incinerator as an “energy from waste” facility, or is it a “waste from energy” plant? The first question is easy to answer, the second takes a lot of calculation, and the third is a matter of interpretation.

Before we get into those questions, here’s a bit of background.

The Durham-York Energy Centre is located about an hour’s drive east of Toronto on the shore of Lake Ontario, and was built at a cost of about $300 million. It is designed to take 140,000 tonnes per year of non-recyclable and non-compostable household garbage, burn it, and use the heat to power an electric generator. The garbage comes from the jurisdictions of adjacent regions, Durham and York (which, like so many towns and counties in Ontario, share names with places in England).

The generator powered by the incinerator is rated at 14 megawatts net, while the generators at Darlington Nuclear Station, taken together, are rated at 3500 megawatts net. The incinerator’s generating capacity is thus 1/250th that of the nuclear plant. That explains why there is no dramatically visible connection between the incinerator and the provincial electrical grid.

In other terms, the facility produces surplus power equivalent to the needs of 10,000 homes. Given that Durham and York regions have fast-growing populations – more than 1.6 million at the 2011 census – the power output of this facility is not regionally significant.

A small cluster of transformers is part of the Durham-York Energy Centre.

Energy Return on Energy Invested

But does the facility produce more energy than it uses? That’s not so easy to determine. A full analysis of Energy Return On Energy Invested (EROEI) would require data from many different sources. I decided to approach the question by looking at just one facet of the issue:

Is the energy output of the generator greater than the energy consumed by the trucks which haul the garbage to the site?

Let’s begin with a look at the “fuel” for the incinerator. Initial testing of the facility showed better than expected energy output due to the “high quality of the garbage”, according to Cliff Curtis, commissioner of works for Durham Region (quoted in the Toronto Star). Because most of the paper, cardboard, glass bottles, metal cans, recyclable plastic containers, and organic material is picked up separately and sent to recycling or composting plants, the remaining garbage is primarily plastic film or foam. (Much of this, too, is technically recyclable, but in current market conditions that recycling would be carried out at a financial loss.)

Inflammatory material

If you were lucky enough to grow up in a time and a place where building fires was a common childhood pastime, you know that plastic bags and styrofoam burn readily and create a lot of heat. A moment’s consideration of basic chemistry backs up that observation.

Our common plastics are themselves a highly processed form of petroleum. One of the major characteristics of our industrial civilization is that we have learned how to suck finite resources of oil from the deepest recesses of the earth, process it in highly sophisticated ways, mold it into endlessly versatile – but still cheap! – types of packaging, use the packaging once, and then throw the solidified petroleum into the garbage.

If instead of burying the plastic garbage in a landfill, we burn it, we capture some of the energy content of that original petroleum. There’s a key problem, though. As opposed to a petroleum or gas well, which provides huge quantities of energy in one location, our plastic “fuel” is light-weight and dispersed through every city, town, village and rural area.

The question thus becomes: is it an economic proposition to drive up and down every street gathering up bags of plastic fuel for an electricity generator?

The light, dispersed nature of the cargo has a direct impact on garbage truck design, and therefore on the number of loads it takes to haul a given number of tonnes of garbage.

Because these trucks must navigate narrow residential streets they must have short wheelbases. And because they need to compact the garbage as they go, they have to carry additional heavy machinery to do the compaction. The result is a low payload:

Long-haul trucks and their contents can weigh 80,000 pounds. However, the shorter wheelbase of garbage and recycling trucks results in a much lower legal weight  — usually around 51,000 pounds. Since these trucks weigh about 33,000 pounds empty, they have a legal payload of about nine tons. (Source: How Green Was My Garbage Truck)

By my calculations, residential garbage trucks picking up mostly light packaging will be “full” with a load weighing about 6.8 tonnes. (The appendix to this article lists sources and shows the calculations.)

At 6.8 tonnes per load, it will require over 20,000 garbage truck loads to gather the 140,000 tonnes burned each year by the Durham-York Energy Centre.

How many kilometers will those trucks travel? Working from a detailed study of garbage pickup energy consumption in Hamilton, Ontario, I estimated that in a medium-density area, an average garbage truck route will be about 45 km. Truck fuel economy during the route is very poor, since there is constant stopping and starting plus frequent idling while workers grab and empty the garbage cans.

There is additional traveling from the base depot to the start of each route, from the end of the route to the drop-off point, and back to the depot.

I used the following map to make a conservative estimate of total kilometers.

Google map of York and Durham Region boundaries, with location of incinerator.

Because most of the garbage delivered to the incinerator comes from Durham Region, and the population of both Durham Region and York Region are heavily weighted to their southern and western portions, I picked a spot in Whitby as an “average” starting point. From that circled “X” to the other “X” (the incinerator location) is 30 kilometers. Using that central location as the starting and ending point for trips, I estimated 105 km total for each load. (45 km on the pickup route, 30 km to the incinerator, and 30 km back to the starting point).

Due to their weight and to their frequent stops, garbage trucks get poor fuel economy. I calculated an average of 0.96 liters/kilometer.

The result: our fleet of trucks would haul 20,600 loads per year, travel 2,163,000 kilometers, and burn just over 2 million liters of diesel fuel.

Comparing diesel to electricity

How does the energy content of the diesel fuel compare to the energy output of the incinerator’s generator? Here the calculations are simpler though the numbers get large.

There are 3412 BTUs in a kilowatt-hour of electricity, and about 36,670 BTUs in a liter of diesel fuel.

If the generator produces enough electricity for 10,000 homes, and these homes use the Ontario average of 10,000 kilowatt-hours per year, then the generator’s output is 100,000,000 kWh per year.

Converted to BTUs, the 100,000,000 kWh equal about 341 billion BTUs.

The diesel fuel burned by the garbage trucks, on the other hand, has a total energy content of about 76 billion BTUs.

That answers our initial question: does the incinerator produce more energy than the garbage trucks consume in fuel? Yes it does, by a factor of about 4.5.
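The whole comparison fits in a few lines. Here is a sketch of the same arithmetic using the figures given above (the diesel total comes from the appendix calculations).

```python
# Electricity out vs. truck diesel in, using the figures given above.
BTU_PER_KWH = 3412
BTU_PER_LITER_DIESEL = 36_670

electricity_kwh = 10_000 * 10_000      # 10,000 homes x 10,000 kWh per year
diesel_liters = 2_079_627              # total from the appendix calculations

electricity_btu = electricity_kwh * BTU_PER_KWH        # ~341 billion BTU
diesel_btu = diesel_liters * BTU_PER_LITER_DIESEL      # ~76 billion BTU

print(f"electricity out: {electricity_btu / 1e9:.0f} billion BTU")
print(f"diesel in:       {diesel_btu / 1e9:.0f} billion BTU")
print(f"ratio: {electricity_btu / diesel_btu:.1f}")    # ~4.5
```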

If we had tallied all the energy consumed by this operation, then we could say it had an Energy Return On Energy Invested ratio of about 4.5 – comparable to the bottom end of economically viable fossil fuel extraction operations such as Canadian tar sands mining. But of course we have considered just one energy input, the fuel burned by the trucks.

If we added in the energy required to build and maintain the fleet of garbage trucks, plus an appropriate share of the energy required to maintain our roads (which are greatly impacted by weighty trucks), plus the energy used to build the $300 million incinerator/generator complex, the EROEI would be much lower, perhaps below 1. In other words, there is little or no energy return in the business of driving around picking up household garbage to fuel a generator.

Energy from waste, or waste from energy

Finally, our third question: is this facility best referred to as “Energy From Waste” or “Waste From Energy”?

Looking at the big picture, “Waste From Energy” is the best descriptor. We take highly valuable and finite energy sources in the form of petroleum, consume a lot of that energy to create plastic packaging, ship that packaging to every household via a network of stores, and then use a lot more energy to re-collect the plastic so that we can burn it. The small amount of usable energy we get at the last stage is inconsequential.

From a municipal waste management perspective, however, things might look quite different. In our society people believe they have a god-given right to acquire a steady stream of plastic-packaged goods, and a god-given right to have someone else come and pick up their resulting garbage.

Thus municipal governments are expected to pay for a fleet of garbage trucks, and find some way to dispose of all the garbage. If they can burn that garbage and recapture a modest amount of energy in the form of electricity, isn’t that a better proposition than hauling it to expensive landfill sites which inevitably run short of capacity?

Looked at from within that limited perspective, “Energy From Waste” is a fair description of the process. (Whether incineration is a good idea still depends, of course, on the safety of the emissions from modern garbage incinerators – another controversial issue.)

But if we want to seriously reduce our waste, the place to focus is not the last link in the chain – waste disposal. The big problem is our dependence on a steady stream of products produced from valuable fossil fuels, which cannot practically be re-used or even recycled, but only down-cycled once or twice before they end up as garbage.

Top photo: Durham-York Energy Centre viewed from the southeast.

APPENDIX – Sources and Calculations

Capacity and Fuel Economy of Garbage Trucks

There are many factors which determine the capacity and fuel economy of garbage trucks, including: type of truck (front-loading, rear-loading, trucks with hoists for large containers vs. trucks which are loaded by hand by workers picking up individual bags); type of route (high-density urban areas with large businesses or apartment complexes vs. low-density rural areas); and type of garbage (mixed waste including heavy glass, metal and wet organics vs. light but bulky plastics and foam).

Although I sent an email inquiry to Durham Waste Department asking about capacity and route lengths of garbage trucks, I didn’t receive a response. So I looked for published studies which could provide figures that seemed applicable to Durham Region.

A major source was the paper “Fuel consumption estimation for kerbside municipal solid waste (MSW) collection activities”, in Waste Management & Research, 2010, accessed via www.sagepub.com.

This study found that “Within the ‘At route’ stage, on average, the normal garbage truck had to travel approximately 71.9 km in the low-density areas while the route length in high-density areas is approximately 25 km.” Since Durham Region is a mix of older dense urban areas, newer medium-density urban sprawl, and large rural areas, I estimated an average “medium-density area route” of 45 km.

The same study found an average fuel economy of 0.335 liters/kilometer for garbage trucks when they were traveling from depot to the beginning of a route. The authors found that fuel economy in the “At Route” portion (with frequent stops, starts, and idling) was 1.6 L/km for high-density areas, and 2.0 L/km in low-density areas; I split the difference and used 1.8 L/km as the “At Route” fuel consumption.

As to the volumes of trucks and the weight of the garbage, I based my estimates on figures in “The Workhorses of Waste”, published by MSW Management Magazine and WIH Resource Group. This article states: “Rear-end loader capacities range from 11 cubic yards to 31 cubic yards, with 25 cubic yards being typical.”

Since rear-end loader trucks are the ones I usually see in residential neighborhoods, I used 25 cubic yards as the average volume capacity.

The same article discusses the varying weight factors:

The municipal solid waste deposited at a landfill has a density of 550 to over 650 pounds per cubic yard (approximately 20 to 25 pounds per cubic foot). This is the result of compaction within the truck during collection operations as the truck’s hydraulic blades compress waste that has a typical density of 10 to 15 pounds per cubic foot at the curbside. The in-vehicle compaction effort should approximately double the density and half the volume of the collected waste. However, these values are rough averages only and can vary considerably given the irregular and heterogeneous nature of municipal solid waste.

In Durham Region the heavier paper, glass, metal and wet organics are picked up separately and hauled to recycling depots, so it seems reasonable to assume that the remaining garbage hauled to the incinerator would not be at the dense end of the “550 to over 650 pounds per cubic yard” range. I used what seems like a conservative estimate of 600 pounds per cubic yard.

(I am aware that in some cases garbage may be off-loaded at transfer stations, further compacted, and then loaded onto much larger trucks for the next stage of transportation. This would impact the fuel economy per tonne in transportation, but would involve additional fuel in loading and unloading. I would not expect that the overall fuel use would be dramatically different. In any case, I decided to keep the calculations (relatively) simple and so I assumed that one type of truck would pick up all the garbage and deliver it to the final drop-off.)

OK, now the calculations:

Number of truckloads

25 cubic yard load X 600 pounds / cubic yard = 15000 pounds per load

15000 pounds ÷ 2204 lbs per tonne = 6.805 tonnes per load

140,000 tonnes burned by incinerator ÷ 6.805 tonnes per load = 20,570 garbage truck loads

Fuel burned:

45 km per “At Route” portion X 20,570 loads = 925,650 km “At Route”

1.8 L/km fuel consumption “At Route” x 925,650 km = 1,666,170 liters

60 km per load traveling to and from incinerator

60 km x 20,570 loads = 1,234,200 km traveling

0.335 L/km travelling fuel consumption X 1,234,200 km = 413,457 liters

1,666,170 liters + 413,457 liters = 2,079,627 liters total fuel used by garbage trucks

As a check on the reasonableness of this estimate, I calculated the average fuel economy from the above figures:

20,570 loads x 105 km per load = 2,159,850 km per year

2,079,627 liters fuel ÷ 2,159,850 km = 0.9629 L/km

This compares closely with a figure published by the Washington Post, which said municipal garbage trucks get just 2-3 mpg. The middle of that range, 2.5 miles per US gallon, works out to about 0.94 L/km.
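For convenience, the truckload and fuel arithmetic above can also be run as one short script. It uses the same assumptions and reproduces the article’s figures, give or take small rounding differences.

```python
# Consolidated version of the garbage-truck calculations above.
LOAD_VOLUME_YD3 = 25           # cubic yards per rear-loading truck
DENSITY_LB_PER_YD3 = 600       # assumed density of the compacted light packaging
LB_PER_TONNE = 2204
TONNES_PER_YEAR = 140_000      # garbage burned by the incinerator each year

tonnes_per_load = LOAD_VOLUME_YD3 * DENSITY_LB_PER_YD3 / LB_PER_TONNE  # ~6.8 t
loads = TONNES_PER_YEAR / tonnes_per_load                              # ~20,570

ROUTE_KM, HAUL_KM = 45, 60             # at-route km, plus round trip to incinerator
ROUTE_L_PER_KM, HAUL_L_PER_KM = 1.8, 0.335

route_fuel = loads * ROUTE_KM * ROUTE_L_PER_KM    # ~1.67 million liters
haul_fuel = loads * HAUL_KM * HAUL_L_PER_KM       # ~0.41 million liters
total_fuel = route_fuel + haul_fuel               # ~2.08 million liters
total_km = loads * (ROUTE_KM + HAUL_KM)

print(f"loads per year: {loads:,.0f}")
print(f"distance: {total_km:,.0f} km, fuel: {total_fuel:,.0f} L")
print(f"average consumption: {total_fuel / total_km:.2f} L/km")
```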

Electricity output of the generator powered by the incinerator

With a rated output of 14 megawatts, the generator could produce about 123 gigawatt-hours (122,640 megawatt-hours) of electricity per year – if it ran at 100% capacity, every hour of the year. (14,000 kW X 24 hours per day X 365 days = 122,640,000 kWh.) That’s clearly unrealistic.

However, the generator’s operators say it puts out enough electricity for 10,000 homes. The Ontario government says the average residential electricity consumption is 10,000 kWh.

10,000 homes X 10,000 kWh per year = 100,000,000 kWh per year.

This figure represents about 80% of the generator’s maximum possible annual output – a plausible capacity factor – so that’s the figure I used.
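The same check in code form, using the rated output and the consumption figures cited above:

```python
# Rated maximum vs. estimated actual annual output of the incinerator's generator.
rated_kw = 14_000
max_kwh_per_year = rated_kw * 24 * 365    # 122,640,000 kWh if it ran flat out
estimated_kwh = 10_000 * 10_000           # 10,000 homes x 10,000 kWh per year

capacity_factor = estimated_kwh / max_kwh_per_year
print(f"implied capacity factor: {capacity_factor:.0%}")   # ~82%, roughly the 80% used above
```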

Fake news as official policy

Also published at Resilience.org.

Faced with simultaneous disruptions of climate and energy supply, industrial civilization is also hampered by an inadequate understanding of our predicament. That is the central message of Nafeez Mosaddeq Ahmed’s new book Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence.

In the first part of this review, we looked at the climate and energy disruptions that have already begun in the Middle East, as well as the disruptions which we can expect in the next 20 years under a “business as usual” scenario. In this installment we’ll take a closer look at “the perpetual transmission of false and inaccurate knowledge on the origins and dynamics of global crises”.

While a clear understanding of the real roots of economies is a precondition for a coherent response to global crises, Ahmed says this understanding is woefully lacking in mainstream media and mainstream politics.

“The Global Media-Industrial Complex, representing the fragmented self-consciousness of human civilization, has served simply to allow the most powerful vested interests within the prevailing order to perpetuate themselves and their interests ….” (Failing States, Collapsing Systems, page 48)

Other than alluding to powerful self-serving interests in fossil fuels and agribusiness industries, Ahmed doesn’t go into the “how’s” and “why’s” of their influence in media and government.

In the case of misinformation about the connection between fossil fuels and climate change, much of the story is widely known. Many writers have documented the history of financial contributions from fossil fuel interests to groups which contradict the consensus of climate scientists. To take just one example, Inside Climate News revealed that Exxon’s own scientists were keenly aware of the dangers of climate change decades ago, but the corporation’s response was a long campaign of disinformation.

Yet for all its nefarious intent, the fossil fuel industry’s effort has met with mixed success. Nearly every country in the world has, at least officially, agreed that carbon-emissions-caused climate change is an urgent problem. Hundreds of governments, on national, provincial or municipal levels, have made serious efforts to reduce their reliance on fossil fuels. And among climate scientists the consensus has only grown stronger that continued reliance on fossil fuels will result in catastrophic climate effects.

When it comes to continuous economic growth unconstrained by energy limitations, the situation is quite different. Following the consensus opinion in the “science of economics”, nearly all governments are still in thrall to the idea that the economy can and must grow every year, forever, as a precondition to prosperity.

In fact, the belief in the ever-growing economy has short-circuited otherwise well-intentioned efforts to reduce carbon emissions. Western politicians routinely play off “environment” and “economy” as forces that must be balanced, meaning they must take care not to cut carbon emissions too fast, lest economic growth be hindered. To take one example, Canada’s Prime Minister Justin Trudeau claims that expanded production of tar sands bitumen will provide the economic growth necessary to finance the country’s official commitments under the Paris Accord.

As Ahmed notes, “the doctrine of unlimited economic growth is nothing less than a fundamental violation of the laws of physics. In short, it is the stuff of cranks – yet it is nevertheless the ideology that informs policymakers and pundits alike.” (Failing States, Collapsing Systems, page 90)

Why does “the stuff of cranks” still have such hold on the public imagination? Here the work of historian Timothy Mitchell is a valuable complement to Ahmed’s analysis.

Mitchell’s 2011 book Carbon Democracy outlines the way “the economy” became generally understood as something that could be measured mostly, if not solely, by the quantities of money that exchanged hands. A hundred years ago, this was a new and controversial idea:

“In the early decades of the twentieth century, a battle developed among economists, especially in the United States …. One side wanted economics to start from natural resources and flows of energy, the other to organise the discipline around the study of prices and flows of money. The battle was won by the second group ….” (Carbon Democracy, page 131)

A very peculiar circumstance prevailed while this debate raged: energy from petroleum was cheap and getting cheaper. Many influential people, including geologist M. King Hubbert, argued that the oil bonanza would be short-lived in a historical sense, but their arguments didn’t sway corporate and political leaders looking at short-term results.

As a result a new economic orthodoxy took hold by the middle of the 20th century. Petroleum seemed so abundant, Mitchell says, that for most economists “oil could be counted on not to count. It could be consumed as if there were no need to take account of the fact that its supply was not replenishable.”

He elaborates:

“the availability of abundant, low-cost energy allowed economists to abandon earlier concerns with the exhaustion of natural resources and represent material life instead as a system of monetary circulation – a circulation that could expand indefinitely without any problem of physical limits. Economics became a science of money ….” (Carbon Democracy, page 234)

This idea of the infinitely expanding economy – what Ahmed terms “the stuff of cranks” – has been widely accepted for approximately one human life span. The necessity of constant economic growth has been an orthodoxy throughout the formative educations of today’s top political leaders, corporate leaders and media figures, and it continues to hold sway in the “science of economics”.

The transition away from fossil fuel dependence is inevitable, Ahmed says, but the degree of suffering involved will depend on how quickly and how clearly we get on with the task. One key task is “generating new more accurate networks of communication based on transdisciplinary knowledge which is, most importantly, translated into user-friendly multimedia information widely disseminated and accessible by the general public in every continent.” (Failing States, Collapsing Systems, page 92)

That task has been taken up by a small but steadily growing number of researchers, activists, journalists and hands-on practitioners of energy transition. As to our chances of success, Ahmed allows a hint of optimism, and that’s a good note on which to finish:

“The systemic target for such counter-information dissemination, moreover, is eminently achievable. Social science research has demonstrated that the tipping point for minority opinions to become mainstream, majority opinion is 10% of a given population.” (Failing States, Collapsing Systems, page 92)


Top image: M. C. Escher’s ‘Waterfall’ (1961) is a fanciful illustration of a finite source providing energy without end. Accessed from Wikipedia.org.

Fake news, failed states

Also published at Resilience.org.

Many of the violent conflicts raging today can only be understood if we look at the interplay between climate change, the shrinking of cheap energy supplies, and a dominant economic model that refuses to acknowledge physical limits.

That is the message of Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, a thought-provoking new book by Nafeez Mosaddeq Ahmed. Violent conflicts are likely to spread to all continents within the next 30 years, Ahmed says, unless a realistic understanding of economics takes hold at a grass-roots level and at a nation-state policy-making level.

The book is only 94 pages (plus an extensive and valuable bibliography), but the author packs in a coherent theoretical framework as well as lucid case studies of ten countries and regions.

As part of the Springer Briefs In Energy/Energy Analysis series edited by Charles Hall, it is no surprise that Failing States, Collapsing Systems builds on a solid grounding in biophysical economics. The first few chapters are fairly dense, as Ahmed explains his view of global political/economic structures as complex adaptive systems inescapably embedded in biophysical processes.

The adaptive functions of these systems, however, are failing due in part to what we might summarize with four-letter words: “fake news”.

“inaccurate, misleading or partial knowledge bears a particularly central role in cognitive failures pertaining to the most powerful prevailing human political, economic and cultural structures, which is inhibiting the adaptive structural transformation urgently required to avert collapse.” (Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, by Nafeez Mosaddeq Ahmed, Springer, 2017, page 13)

We’ll return to the failures of our public information systems. But first let’s have a quick look at some of the case studies, in which the explanatory value of Ahmed’s complex systems model really comes through.

In discussing the rise of ISIS in the context of the war in Syria and Iraq, Western media tend to focus almost exclusively on political and religious divisions which are shoehorned into a “war on terror” framework. There is also an occasional mention of the early effects of climate change. While not discounting any of these factors, Ahmed says that it is also crucial to look at shrinking supplies of cheap energy.

“Prior to the onset of war, the Syrian state was experiencing declining oil revenues, driven by the peak of its conventional oil production in 1996. Even before the war, the country’s rate of oil production had plummeted by nearly half, from a peak of just under 610,000 barrels per day (bpd) to approximately 385,000 bpd in 2010.” (Failing States, Collapsing Systems, page 48)

Similarly, Yemen’s oil production peaked in 2001, and had dropped more than 75% by 2014.

While these governments tried to cope with climate change effects including water and food shortages, their oil-export-dependent budgets were shrinking. The result was the slashing of basic social service spending when local populations were most in need.

That’s bad enough, but the responses of local and international governments, guided by “inaccurate, misleading or partial knowledge”, make a bad situation worse:

“While the ‘war on terror’ geopolitical crisis-structure constitutes a conventional ‘security’ response to the militarized symptoms of HSD [Human Systems Destabilization] (comprising the increase in regional Islamist militancy), it is failing to slow or even meaningfully address deeper ESD [Environmental System Disruption] processes that have rendered traditional industrialized state power in these countries increasingly untenable. Instead, the three cases emphasized – Syria, Iraq, and Yemen – illustrate that the regional geopolitical instability induced via HSD has itself hindered efforts to respond to deeper ESD processes, generating instability and stagnation across water, energy and food production industries.” (Failing States, Collapsing Systems, page 59)

This pattern – militarized responses to crises that beget more crises – is not new:

“A 2013 RAND Corp analysis examined the frequency of US military interventions from 1940 to 2010 and came to the startling conclusion: not only that the overall frequency of US interventions has increased, but that intervention itself increased the probability of an ensuing cluster of interventions.” (Failing States, Collapsing Systems, page 43)

Ahmed’s discussions of Syria, Iraq, Yemen, Nigeria and Egypt are bolstered by the benefits of hindsight. His examination of Saudi Arabia looks a little way into the future, and what he foresees is sobering.

He discusses studies that show Saudi Arabia’s oil production is likely to peak within as little as ten years. Yet the date of the peak is only one key factor, because the country’s steadily increasing internal demand for energy means there is steadily less oil left for export.
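The squeeze Ahmed describes is simple arithmetic: net exports are production minus domestic consumption, so exports reach zero well before production does. The sketch below uses purely hypothetical numbers (not figures from the book) to illustrate the shape of that decline.

```python
# Hypothetical illustration only: net exports = production - domestic consumption.
# Assumes production declines 2% a year after a peak while domestic use grows 4% a year.
production = 10.0     # million barrels per day at the peak (illustrative)
consumption = 4.0     # million barrels per day of domestic use (illustrative)

for year in range(0, 31, 5):
    net_exports = max(production * 0.98**year - consumption * 1.04**year, 0)
    print(f"year {year:2d}: net exports ~ {net_exports:4.1f} million barrels/day")
# Exports fall to zero while substantial production still remains.
```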

For Saudi Arabia the economic crunch may be severe and rapid: “with net oil revenues declining to zero – potentially within just 15 years – Saudi Arabia’s capacity to finance continued food imports will be in question.” For a population that relies on subsidized imports for 80% of its food, empty government coffers would mean a life-and-death crisis.

But a Saudi Arabia which uses up all its oil internally would have major implications for other countries as well, in particular China and India.

“like India, China faces the problem that as we near 2030, net exports from the Middle East will track toward zero at an accelerating rate. Precisely at the point when India and China’s economic growth is projected to require significantly higher imports of oil from the Middle East, due to their own rising domestic energy consumption requirement, these critical energy sources will become increasingly unavailable on global markets.” (Failing States, Collapsing Systems, page 74)

Petroleum production in Europe has also peaked, while in North America, conventional oil production peaked decades ago, and the recent fossil fuel boomlet has come from expensive, hard-to-extract shale gas, shale oil, and tar sands bitumen. For both Europe and North America, Ahmed forecasts, the time is fast approaching when affordable high-energy fuels are no longer available from Russia or the Middle East. Without successful adaptive responses, the result will be a cascade of collapsing systems:

“Well before 2050, this study suggests, systemic state-failure will have given way to the irreversible demise of neoliberal finance capitalism as we know it.” (Failing States, Collapsing Systems, page 88)

Are such outcomes inescapable? By no means, Ahmed says, but adequate adaptive responses to our developing predicaments are unlikely without a recognition that our economies remain inescapably embedded in biophysical processes. Unfortunately, there are powerful forces working to prevent the type of understanding which could guide us to solutions:

“vested interests in the global fossil fuel and agribusiness system are actively attempting to control information flows to continue to deny full understanding in order to perpetuate their own power and privilege.” (Failing States, Collapsing Systems, page 92)

In the next installment, Fake News as Official Policy, we’ll look at the deep roots of this misinformation and ask what it will take to stem the tide.

Top photo: Flying over the Trans-Arabian Pipeline, 1950. From Wikimedia.org.

More than one way to fall off a cliff

Also published at Resilience.org.

The “energy cliff” is a central concept in ecological economics, and it’s based on a very simple ratio. But for me this principle was a slippery thing to grasp, and I eventually realized some of the most common graphs used to illustrate the Energy Cliff were leaving me with a misleading mental image.

This column takes a closer look at Energy Return on Energy Invested (ERoEI, EROEI or simply EROI) and the Energy Cliff, concluding with the question of how and whether the Energy Cliff might be experienced as a historical phenomenon.

The Energy Cliff as a mathematical function

Below are two frequently used versions of the Energy Cliff graph, based on the pioneering work of Charles Hall. They illustrate the relationship between Energy Return on Energy Invested and the percentage of energy production that is “surplus”, i.e., not needed by the energy sector for its own work and therefore available for use by the rest of society.

Chart accessed via http://www.resilience.org/stories/2016-06-07/let-nature-be-nature

Chart from Tim Morgan, Life After Growth, Kindle edition, locus 980

In each case the EROEI is shown on the horizontal axis with lowest values at the right. The apparent suddenness of the drop-off in surplus energy depends on the relative scales of the axes and maximum value shown for EROEI, but in each case the drop-off becomes nearly perpendicular as EROEI falls below 10 – thus the name “Energy Cliff”.

Simple enough, eh? But after seeing this graph presented in several books and essays, I still found the concept hard to master. I kept asking myself, “How does that work again?” or “Why does energy supply drop off so suddenly?”

The problem, I realized, is that the impression these graphics leave in my mind is at odds with the intent. As these examples show, the “Energy for society” or “Profit energy” dominates the graphic visually, and the “Energy used to procure energy” or “Cost energy” seems like such a small sliver that it couldn’t possibly be that important. Mathematically naïve as that impression may have been, it nevertheless made it difficult for me to retain a clear understanding of the Energy Cliff.

The solution for me was to play with the graph until I felt I understood it clearly, using imagery that reinforced the understanding.

It was most helpful, I found, to present the graph not as an unbroken continuum between the two variables, but as a bar chart showing discrete values of Energy Return on Energy Invested: 1, 2, 3, 4, etc up to 50.

The Energy Cliff as a Bar Chart

Visualizing the numbers this way minimizes the tendency to see the surplus energy, or Net energy output, as one massive block. Just as importantly, it allows me to focus easily on the relationship between specific values of Energy input and Net energy output.

For example, at the far right end of the graph is the ERoEI value 1. This corresponds to a bare break-even scenario. An oil well with this ERoEI would not be worth drilling: we would use up one barrel of oil to drill and operate the well, and it would spit out exactly one barrel in return, leaving us with no surplus energy for our efforts.

An ERoEI of 2 corresponds to a Net energy output of 50%. To return to our Proverbial Oil Corp., we burn one barrel of oil to drill and operate a well, and the well spits out two barrels, leaving us with a net gain of 1 barrel or 50% of the Total energy output.

Our oil wells with ERoEI of 3 give us 3 barrels total for every one we invest, for a net energy gain of 2 barrels or 66.6%, wells with ERoEI of 4 give us a net energy output equal to 75% of their total energy output, wells with ERoEI of 5 give us a net energy output equal to 80% of their total energy output, and so on.

We can also see clearly that the Energy input and Net energy output percentages change very slowly for ERoEI values above 20 – at which point Energy input is 5% and Net energy output is 95% of Total energy output.
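Put as a formula, the relationship described above is simply net output fraction = (ERoEI − 1) / ERoEI. The short sketch below computes it for discrete ERoEI values from 1 to 50; the commented-out lines show one way the bar chart could be drawn, assuming matplotlib is available.

```python
# Net energy output as a share of total output, for discrete EROEI values.
# net fraction = (EROEI - 1) / EROEI, i.e. 1 - 1/EROEI
eroei_values = list(range(1, 51))
net_fractions = [(e - 1) / e for e in eroei_values]

for e in (1, 2, 3, 4, 5, 10, 20, 50):
    print(f"EROEI {e:2d}: energy input {100 / e:5.1f}%, net output {100 * (e - 1) / e:5.1f}%")

# To draw the bar chart (requires matplotlib):
# import matplotlib.pyplot as plt
# plt.bar(eroei_values, net_fractions)
# plt.gca().invert_xaxis()      # lowest EROEI at the right, as in the charts above
# plt.xlabel("EROEI"); plt.ylabel("Net energy output (fraction of total)")
# plt.show()
```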

There is another simple tweak to this chart that can vividly illustrate the sudden drop-off: animation. (And since most of us use supercomputers capable of guiding a moon mission for our morning reading, why not throw in some animation?)

The animated Energy Cliff

By focusing attention on just a narrow range of ERoEI values at a time, this moving bar graph illustrates the fact that Net energy output changes slowly throughout most of the range, and then drops off suddenly and swiftly.

The animated graph relies on the element of time as a key facet of the presentation. That raises the question: can the Energy Cliff chart be read as a function of time?

The Energy Cliff as a historical phenomenon

It is easy to look at the Energy Cliff graphic as a chronological progression, given the convention of viewing timelines with past on the left and future on the right. That would be a mistake – there is no element of time in the chart – but it might be a useful mistake if made consciously.

It’s true that ERoEI rates have been declining slowly for the past 50 years, and many new energy technologies today have ERoEI rates of 10 or lower. And in fact, the Energy Cliff chart is sometimes presented as evidence that an impending energy crisis is mathematically inevitable. While that would be an unwarranted extrapolation from the graph of a simple mathematical relationship, it isn’t hard to cherry-pick data that graphs to a shape similar to the Energy Cliff.

Consider the following table of ERoEI rates over time.

Selected ERoEI rates over time

This table starts with EROEI rates before the industrial age, and finishes with rates that could plausibly represent the collapse of industrial society. When graphed these numbers show a drop-off much like the Energy Cliff, with the addition of a steep slope going up at the outset of industrial civilization. The values are roughly scaled chronologically, to represent the length of time during which very high EROEI prevailed – basically, the 20th century.

Net energy over time – first scenario chart

The numbers cherry-picked for this chart include, crucially, an EROEI for photovoltaic panels in Spain as calculated by Charles Hall and Pedro Prieto, which was the subject of spirited discussion recently on Resilience. At 2.45, this EROEI is far below the level needed to support a highly complex economy. If this number is correct and turns out to be representative of photovoltaics more generally, then the scenario suggested in the above chart is plausible. As high EROEI petroleum sources are depleted, we turn to bottom-of-the-barrel resources like tar sands, and then to solar panels which are even less energy-efficient. Complex industrial society soon collapses, and the vast majority of us must return to the fields.

For a very different picture, we could use the EROEI for solar panel installations presented by Ugo Bardi in Resilience, from a study by Bhandari et al. In this view, photovoltaics in Spain have an EROEI of 11–12, safely out of the drop-off zone of the Energy Cliff. In this scenario we’d have no need for last-ditch fossil fuels from tar sands, solar panels would produce enough surplus energy to create more solar panels and keep industrial society rolling cleanly along, and the Energy Cliff would be a mathematical function but not a historical reality.

Net energy over time – second scenario chart

These two charts are equally over-simplified, ignoring other renewable energy technologies with widely varying EROEI rates, such as hydro-electric generation. It’s unknown how long we might stretch out the dwindling supply of high-EROEI fossil fuels, or whether there will be a collective decision to clamp down on carbon emissions and leave fossil fuels in the ground. And I’m unqualified to make any judgment on whether the Hall/Prieto or the Bhandari assessment of photovoltaics is more realistic.

In presenting these two different charts I merely want to illustrate that while the Energy Cliff graph of a mathematical function is simple and direct, extrapolating from this simple function to forecast historical trends is fraught with uncertainty.

Top graphic: “The Fool” in the Rider-Waite Tarot deck dances gaily at the edge of a precipice.



A renewable energy economy will create more jobs. Is that a good thing?

Also published at Resilience.org.

In a tidal wave of good news stories, infographics and Facebook memes about renewable energy job creation, the implicit, unquestioned assumption is that More Jobs = A Healthier Economy.

A popular Facebook meme, based on the Stanford University Solutions Project, celebrates the claim that in a renewable energy-powered Canada, 40% more people will work in the energy sector.

From the Environment Hamilton Facebook page.


In elaborate info-graphics, the Solutions Project provides comparable claims for all 50 US states and countries around the world – although “assertion-graphic” might be a better term, since the graphics are presented with no footnotes and no clear links to any data that might allow a skeptical mind to evaluate the conclusions.

From The Solutions Project website.

And Naomi Klein, author of This Changes Everything and one of the proponents of The Leap Manifesto, cites the Energy Transition in Germany and notes that 400,000 new jobs have already been created. In her hour-long talk on the CBC Radio Ideas program and podcast, Klein gets at some of the key issues that will determine whether More Energy Jobs = A Good Thing, and we’ll return to this podcast later.

To start, though, let’s look at the issue through the following proposition:

The 20th century fossil-fueled economic growth spurt happened not because the energy industry created many jobs, but because it created very few jobs.

For most of human history, providing energy in the form of food calories was the major human occupation. Even in societies that consumed relatively high amounts of energy via firewood, harvesting and transporting that wood kept a lot of people busy.

But during the 19th and 20th centuries, as the available per capita energy supply in industrialized countries exploded, the proportion of the population employed supplying that energy dropped dramatically.

The result: instead of farming to provide the carbohydrates that feed humans and oxen, or cutting firewood to heat buildings, nearly the whole population has been free to do other activities. Whether we have made good use of this opportunity is debatable, but we’ve had plenty of energy, and nearly our entire labour force, available to run an elaborate manufacturing, consumption and service economy.

Seen from this perspective, the claim that renewable energy will create more jobs might set off alarms.

What’s in a job?

Part of the difficulty is that when we speak of a job, we refer to two (or more) very different things.

A job might mean simply something that has to be done. In this sense of the word, we don’t usually celebrate jobs. If we need to carry all our water in buckets from a well five kilometers from home, there are a lot of jobs in water-carrying – but we would probably welcome having taps right in our kitchens instead. Agriculture employs a lot of people if the only tools are sticks, but with better tools the same amount of food can be raised with fewer people working the fields.

So when we think of a job as the need to do something, we typically think that the fewer jobs the better.

When we celebrate job-creation, on the other hand, we typically mean something quite different – a “job” is an activity that is accompanied by a pay-cheque. Since in our society most of us need to get pay-cheques for most of our lives, job-creation strikes us as a good thing to the extent that pay-cheques are involved.

Here’s the wrinkle with renewable energy job creation: the renewable energy transition will likely create jobs in the sense of adding to the quantity of work that must be done (which we normally try to minimize) and jobs in the sense of providing pay-cheques (which we typically want to maximize). The two types of job-creation are at cross-purposes, and the outcome is uncertain.

Allocation of energy surplus

Widespread prosperity depends not only on what work is done and what surplus is produced, but on how that surplus is allocated and distributed.

In the middle of the 20th century in North America and Europe, only a few people worked in energy supply but they produced a huge surplus. At the same time, the products of surplus energy were distributed in relatively equal fashion, compared to the rising levels of inequality today. The mass consumption economy – a brief anomaly in human history which is ironically referred to as Business As Usual – depended on both conditions being met. There had to be a large surplus of energy produced (or, more accurately, extracted) by a few people, and this surplus energy had to be widely distributed so that most people could participate in a consumer economy.

Naomi Klein gives prominent emphasis to the second of these two conditions. In her CBC Radio Ideas talk, she says

“There’s a group in the US called Movement Generation which has a slogan that I quote a lot, which is that ‘transition is inevitable, but justice is not.’ You can respond to climate change in a way that people putting up solar panels are paid terrible wages. In the US prison inmates are making some of the solar panels that they’re putting up. … There has to be a road map for responding to climate change in an intersectional way, which solves multiple problems at once.”

She cites the German Energy Transition as an encouraging example:

“There are 900 new energy co-operatives that have sprung up in Germany. Two hundred towns and cities in Germany have taken their energy grids back from the private companies that took them over in the 1990s, and they call it ‘energy democracy’. They’re taking back control over their energy, so that the resources stay in the communities and they can use the profits generated from renewable energy to pay for services. They’ve also created 400,000 jobs as part of this transition. So they’re showing how you solve multiple problems at once. Lower emissions create good unionized jobs and generate the revenue we need to fight the logic of austerity at the local level.”

In Klein’s formulation, democratic control of the energy economy is a key to prosperity. Because of this energy democracy, the new jobs are “good unionized jobs” which “fight the logic of austerity”. But is that sustainable in the long run?

As Klein says, in Germany’s “energy democracy” they use “the profits generated from renewable energy to pay for services”. But that presupposes that the renewable energy technologies being used do indeed generate “profits”.

It remains an open question how much profit – how much surplus energy – will be generated from renewable energy development. If renewable energy developments consume nearly as much energy as they produce, then in the long run the energy sector may produce many pay-cheques but they won’t be generous pay-cheques, however egalitarian society might be.

Energy sprawl

Tim Morgan uses the apt phrase “energy sprawl” to describe what happens as we switch to energy technologies with a lower Energy Return on Energy Invested (EROEI).

‘energy sprawl’ … has both physical and economic meanings. In physical terms, the infrastructure required to access energy and deliver it to where it is needed is going to expand exponentially. At the same time, the proportion of GDP absorbed by the energy infrastructure is going to increase as well, which means that the rest of the economy will shrink.” (Life After Growth, Harriman House, 2013, locus 2224)

As Morgan makes clear, energy sprawl is not at all unique to renewable energy transition – it applies equally to non-conventional, bottom-of-the-barrel fossil fuels such as fracked oil and gas, and bitumen extracted from Alberta’s tar sands. There will indeed be more jobs in a renewable resource economy, compared to the glory days of the fossil fuel economy, but there will also be more energy jobs if we cling to fossil fuels.

As energy sprawl proceeds, more of us will work in energy production and distribution, and fewer of us will be free to work at other pursuits. As Klein and the other authors of the Leap Manifesto argue, the higher number of energy jobs might be a net plus for society, if we use energy more wisely AND we allocate surplus more equitably.

But unless our energy technologies provide a good Energy Return On Energy Invested, there will be little surplus to distribute. In other words, there will be lots of new jobs, but few good pay-cheques.
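To make the relationship between EROEI and surplus concrete, here is a minimal sketch – my own illustration, not a figure from Morgan or Klein – of how the share of gross output left over for the rest of society shrinks as EROEI falls. The net fraction is simply 1 − 1/EROEI.

```python
# Illustrative only (not a figure from Morgan or Klein): the fraction of
# gross energy output left over after paying the energy cost of energy.

def net_surplus_fraction(eroei: float) -> float:
    """Net surplus as a share of gross output: 1 - 1/EROEI."""
    return 1 - 1 / eroei

for eroei in (50, 20, 10, 5, 2, 1.2):
    print(f"EROEI {eroei:>4}: {net_surplus_fraction(eroei):.0%} of gross output is surplus")
```

At an EROEI of 50 the energy sector keeps only 2 per cent of what it produces; at an EROEI of 2 it keeps half, and the pay-cheques thin out accordingly.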

Top photo: Canadian author and activist Naomi Klein, photographed by Joe Mabel in October 2015, accessed via Wikimedia Commons


Energy at any cost?

Also published at Resilience.org.

If all else is uncertain, how can growing demand for energy be guaranteed? A review of Vaclav Smil’s Natural Gas.

Near the end of his 2015 book Natural Gas: Fuel for the 21st Century, Vaclav Smil makes two statements which are curious in juxtaposition.

On page 211, he writes:

I will adhere to my steadfast refusal to engage in any long-term forecasting, but I will restate some basic contours of coming development before I review a long array of uncertainties ….”

And in the next paragraph:

Given the scale of existing energy demand and the inevitability of its further growth, it is quite impossible that during the twenty-first century, natural gas could come to occupy such a dominant position in the global primary energy supply as wood did in the preindustrial era or as coal did until the middle of the twentieth century.”

If you think that second statement sounds like a long-term forecast, that makes two of us. But apparently to Smil it is not a forecast to say that the growth of energy demand is inevitable, and it’s not a forecast to state with certainty that natural gas cannot become the dominant energy source during the twenty-first century – these are simply “basic contours of coming development.” Let’s investigate.

An oddly indiscriminate name

Natural Gas is a general survey of the sources and uses of what Smil calls the fuel with “an oddly indiscriminate name”. It begins much as it ends: with a strongly-stated forecast (or “basic contour”, if you prefer) about the scale of natural gas and other fossil fuel usage relative to other energy sources.

why dwell on the resources of a fossil fuel and why extol its advantages at a time when renewable fuels and decentralized electricity generation converting solar radiation and wind are poised to take over the global energy supply. That may be a fashionable narrative – but it is wrong, and there will be no rapid takeover by the new renewables. We are a fossil-fueled civilization, and we will continue to be one for decades to come as the pace of grand energy transition to new forms of energy is inherently slow.” – Vaclav Smil, preface to Natural Gas

And in the next paragraph:

Share of new renewables in the global commercial primary energy supply will keep on increasing, but a more consequential energy transition of the coming decades will be from coal and crude oil to natural gas.”

In support of his view that a transition away from fossil fuel reliance will take at least several decades, Smil looks at major energy source transitions over the past two hundred years. These transitions have indeed been multi-decadal or multi-generational processes.

Obvious absence of any acceleration in successive transitions is significant: moving from coal to oil has been no faster than moving from traditional biofuels to coal – and substituting coal and oil by natural gas has been measurably slower than the two preceding shifts.” – Natural Gas, page 154

It would seem obvious that global trade and communications were far less developed 150 years ago, and that would be one major reason why the transition from traditional biofuels to coal proceeded slowly on a global scale. Smil cites another reason why successive transitions have been so slow:

Scale of the requisite transitions is the main reason why natural gas shares of the TPES [Total Primary Energy System] have been slower to rise: replicating a relative rise needs much more energy in a growing system. … going from 5 to 25% of natural gas required nearly eight times more energy than accomplishing the identical coal-to-oil shift.” – Natural Gas, page 155


Open-pit coal mine in south-east Saskatchewan. June 2014.

Today only – you’ll love our low, low prices!

There is another obvious reason why transitions from coal to oil, and from oil to natural gas, could have been expected to move slowly throughout the last 100 years: there have been abundant supplies of easily accessible, and therefore cheap, coal and oil. When a new energy source was brought online, the result was a further increase in total energy consumption, instead of any rapid shift in the relative share of different sources.

The role of price in influencing demand is easy to ignore when the price is low. But that’s not a condition we can count on for the coming decades.

Return to Smil’s “basic contour” that total energy demand will inevitably rise: it implies that energy prices must remain relatively low, because there is effective demand for a product only to the extent that people can afford to buy it.

Remarkably, however, even as he states confidently that demand must grow, Smil notes the major uncertainty about the investment needed simply to maintain existing levels of supply:

if the first decade of the twenty-first century was a trendsetter, then all fossil energy sources will cost substantially more, both to develop new capacities and to maintain production of established projects at least at today’s levels. … The IEA estimates that between 2014 and 2035, the total investment in energy supply will have to reach just over $40 trillion if the world is to meet the expected demand, with some 60% destined to maintain existing output and 40% to supply the rising requirements. The likelihood of meeting this need will be determined by many other interrelated factors.” – Natural Gas, page 212

What is happening here? Both Smil and the IEA are cognizant of the uncertain effects of rising prices on supply, while graphing demand steadily upward as if price has no effect. This is not how economies function in the real world, of course.

Likewise, we cannot assume that because total energy demand kept rising throughout the twentieth century, it must continue to rise through the twenty-first century. On the contrary, if energy supplies are difficult to access and therefore much more costly, then we should also expect demand to grow much more slowly, to stop growing, or to fall.

Falling demand, in turn, would have a major impact on the possibility of a rapid change in the relative share of demand met by different sources. In very simple terms, if we increased total supply of renewable energy rapidly (as we are doing now), but the total energy demand were dropping rapidly, then the relative share of renewables in the energy market could increase even more rapidly.
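A toy calculation makes the arithmetic plain. The numbers below are arbitrary placeholders, not projections; they show only that the same growth in renewable supply yields a faster-rising share when total demand is falling.

```python
# Arbitrary placeholder numbers – the point is the arithmetic of shares,
# not a forecast of either supply or demand.

renewable_supply = [20, 30, 40]            # grows identically in both cases
demand_rising = [500, 525, 550]
demand_falling = [500, 450, 400]

for label, demand in (("rising demand", demand_rising),
                      ("falling demand", demand_falling)):
    shares = [r / d for r, d in zip(renewable_supply, demand)]
    print(label, [f"{s:.1%}" for s in shares])
```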

Smil’s failure to consider such a scenario (indeed, his peremptory dismissal of the possibility of such a scenario) is one of the major weaknesses of his approach. Acceptance of business-as-usual as a reliable baseline may strike some people as conservative. But there is nothing cautious about ignoring one of the fundamental factors of economics, and nothing safe in assuming that the historically rare condition of abundant cheap energy must somehow continue indefinitely.

In closing, just a few words about the implications of Smil’s work as it relates to the threat of climate change. In Natural Gas, he provides much valuable background on the relative amounts of carbon emissions produced by all of our major energy sources. He explains why natural gas is the best of the fossil fuels in terms of energy output relative to carbon emissions (while noting that leaks of natural gas – methane – could in fact outweigh the savings in carbon emissions). He explains that the carbon intensity of our economies has dropped as we have gradually moved from coal to oil to natural gas.

But he also makes it clear that this relative decarbonisation has been far too slow to stave off the threat of climate change.

If he turns out to be right that total energy demand will keep rising, that there will only be a slow transition from other fossil fuels to natural gas, and that the transition away from all fossil fuels will be slower still, then the chances of avoiding catastrophic climate change will be slim indeed.

Top photo: Oil well in southeast Saskatchewan, with flared gas. June 2014.


How big is that hectare? It depends.

Also published at Resilience.org.

The Pickering Nuclear Generating Station, on the east edge of Canada’s largest city, Toronto, is a good take-off point for a discussion of the strengths and limitations of Vaclav Smil’s power density framework.

The Pickering complex is one of the older nuclear power plants operating in North America. Brought on line in 1971, the plant includes eight CANDU reactors (two of which are now permanently shut down). The complex also includes a single wind turbine, brought online in 2001.

The CANDU reactors are rated, at full power, at about 3100 Megawatts (MW). The wind turbine, which at 117 meters high was one of North America’s largest when it was installed, is rated at 1.8 MW at full power. (Because the nuclear reactor runs at full power for many more hours in a year, the disparity in actual output is even greater than the above figures suggest.)
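To see roughly how much greater, here is a back-of-envelope sketch. The capacity factors are my own assumptions for illustration – roughly 85% for the CANDU units and 25% for an onshore turbine – since only the nameplate ratings are given above.

```python
# Rough sketch of annual output; the capacity factors are assumptions,
# not figures from Ontario Power Generation or from Smil.

HOURS_PER_YEAR = 8760

def annual_output_gwh(nameplate_mw: float, capacity_factor: float) -> float:
    """Annual electricity output in gigawatt-hours."""
    return nameplate_mw * capacity_factor * HOURS_PER_YEAR / 1000

nuclear = annual_output_gwh(3100, 0.85)   # ~23,000 GWh per year
turbine = annual_output_gwh(1.8, 0.25)    # ~4 GWh per year

print(f"nuclear ≈ {nuclear:,.0f} GWh/yr, turbine ≈ {turbine:,.1f} GWh/yr, "
      f"ratio ≈ {nuclear / turbine:,.0f} to 1")
```

On nameplate ratings alone the ratio is about 1,700 to 1; with these plausible capacity factors it comes out closer to 6,000 to 1.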

How do these figures translate to power density, or power per unit of land?

The Pickering nuclear station stands cheek-by-jowl with other industrial sites and with well-used Lake Ontario waterfront parks. With a small land footprint, its power density is likely towards the high end – 7,600 W/m2 – of the range of nuclear generating stations Smil considers in Power Density. Had it been built with a substantial buffer zone, as is the case with many newer nuclear power plants, the power density might only be half as high.

A nuclear power plant, of course, requires a complex fuel supply chain that starts at a uranium mine. To arrive at more realistic power density estimates, Smil considers a range of mining and processing scenarios. When a nuclear station’s output is prorated over all the land used – land for the plant site itself, plus land for mining, processing and spent fuel storage – Smil estimates a power density of about 500 W/m2 in what he considers the most representative, mid-range of several examples.


The Cameco facility in Port Hope, Ontario processes uranium for nuclear reactors. With no significant buffer around the plant, its land area is small and its power density high. Smil calculates its conversion power density at approximately 100,000 W/m2, with the plant running at 50% capacity.

And wind turbines? Smil looks at average outputs from a variety of wind farm sites, and arrives at an estimated power density of about 1 W/m2.

So nuclear power has about 500 times the power density of wind turbines? If only it were that simple.

Inside and outside the boundary

In Power Density, Smil takes care to explain the “boundary problem”: defining what is being included or excluded in an analysis. With wind farms, for example, which land area is used in the calculation? Is it just the area of the turbine’s concrete base, or should it be all the land around and between turbines (in the common scenario of a large cluster of turbines spaced throughout a wind farm)?  There is no obviously correct answer to this question.

On the one hand, land between turbines can be and often is used as pasture or as crop land. On the other hand, access roads may break up the landscape and make some human uses impractical, as well as reducing the viability of the land for species that require larger uninterrupted spaces. Finally, there is considerable controversy about how close to wind turbines people can safely live, leading to buffer zones of varying sizes around turbine sites. Thus in this case the power output side of the quotient is relatively easy to determine, but the land area is not.


Wind turbines line the horizon in Murray County, Minnesota, 2012.

Smil emphasizes the importance of clearly stating the boundary assumptions used in a particular analysis. For the average wind turbine power density of 1 W/m2, he is including the whole land area of a wind farm.

That approach is useful in giving us a sense of how much area would need to be occupied by wind farms to produce the equivalent power of a single nuclear power plant. The mid-range power station cited above (with overall power density of 500 W/m2) takes up about 1360 hectares in the uranium mining-processing-generating station chain. A wind farm of equivalent total power output would sprawl across 680,000 hectares of land, or 6,800 square kilometers, or a square with 82 km per side.
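The arithmetic behind that comparison is simple enough to sketch in a few lines, using only the 500 W/m2 and 1 W/m2 figures above:

```python
import math

# Land area needed to match the nuclear chain's output at wind-farm power density.
nuclear_chain_ha = 1360        # hectares at ~500 W/m2
ratio = 500 / 1                # nuclear chain density / wind farm density

wind_farm_ha = nuclear_chain_ha * ratio            # 680,000 hectares
wind_farm_km2 = wind_farm_ha / 100                 # 6,800 square kilometers
side_km = math.sqrt(wind_farm_km2)                 # ~82 km per side

print(f"{wind_farm_ha:,.0f} ha = {wind_farm_km2:,.0f} km2 ≈ a square {side_km:.0f} km per side")
```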

A wind power evangelist, on the other hand, could argue that the wind farms remain mostly devoted to agriculture, and with the concrete bases of the towers taking only 1% of the wind farm area, the power density should be calculated at 100 W/m2 instead of 1 W/m2.

Similar questions apply in many power density calculations. A hydro transmission corridor takes a broad stripe of countryside, but the area fenced off for the pylons is small. Most land in the corridor may continue to be used for grazing, though many other land uses will be off-limits. So you could use the area of the whole corridor in calculating power density – plus, perhaps, another buffer on each side if you believe that electromagnetic fields near power lines make those areas unsafe for living creatures. Or you could use just the area fenced off directly around the pylons. The respective power densities will vary by orders of magnitude.

If the land area is not simple to quantify when things go right, it is even more difficult when things go wrong. A drilling pad for a fracked shale gas well may occupy only a hectare or two, so during the brief decade or two of the well’s productive life, the power density is quite high. But if fracking water leaks into an aquifer, the gas well may have drastic impacts on a far greater area of land – and that impact may continue even when the fracking boom is history.

The boundary problem is most tangled when resource extraction and consumption effects have uncertain extents in both space and time. As mentioned in the previous installment in this series, sometimes non-renewable energy facilities can be reclaimed for a full range of other uses. But the best-case scenario doesn’t always apply.

In mountain-top removal coal mining, there is a wide area of ecological devastation during the mining. But once the energy extraction drops to 0 and the mining corporation files bankruptcy, how much time will pass before the flattened mountains and filled-in valleys become healthy ecosystems again?

Or take the Pickering Nuclear Generating Station. The plant is scheduled to shut down around 2020, but its operators, Ontario Power Generation, say they will need to allow the interior radioactivity to cool for 15 years before they can begin to dismantle the reactor. By their own estimates the power plant buildings won’t be available for other uses until around 2060. Those placing bets on whether this will all go according to schedule can check back in 45 years.

In the meantime the plant will occupy land but produce no power; should the years of non-production be included in calculating an average power density? If decommissioning fails to make the site safe for a century or more, the overall power density will be paltry indeed.

In summary, Smil’s power density framework helps explain why it has taken high-power-density technologies to fuel our high-energy-consumption society, even for a single century. It helps explain why low-power-density technologies, such as solar and wind power, will not replace our current energy infrastructure or meet our current energy demand for decades, if ever.

But the boundary problem is a window on the inherent limitations of the approach. For the past century our energy has appeared cheap and power densities have appeared high. Perhaps the low cost and the high power density are both due, in significant part, to important externalities that were not included in calculations.

Top photo: Pickering Nuclear Generating Station site, including wind turbine, on the shoreline of Lake Ontario near Toronto.


Timetables of power

Also published at Resilience.org.

For more than three decades, Vaclav Smil has been developing the concepts presented in his 2015 book Power Density: A Key to Understanding Energy Sources and Uses.

The concept is (perhaps deceptively) simple: power density, in Smil’s formulation, is “the quotient of power and land area”. To facilitate comparisons between widely disparate energy technologies, Smil states power density using common units: watts per square meter.

Smil makes clear his belief that it’s important that citizens be numerate as well as literate, and Power Density is heavily salted with numbers. But what is being counted?

Perhaps the greatest advantage of power density is its universal applicability: the rate can be used to evaluate and compare all energy fluxes in nature and in any society. – Vaclav Smil, Power Density, pg 21

A major theme in Smil’s writing is that current renewable energy resources and technologies cannot quickly replace the energy systems that fuel industrial society. He presents convincing evidence that for current world energy demand to be supplied by renewable energies alone, the land area of the energy system would need to increase drastically.

Study of Smil’s figures will be time well spent for students of many energy sources. Whether it’s concentrated solar reflectors, cellulosic ethanol, wood-fueled generators, fracked light oil, natural gas or wind farms, Smil takes a careful look at power densities, and then estimates how much land would be taken up if each of these respective energy sources were to supply a significant fraction of current energy demand.

This consideration of land use goes some way to addressing a vacuum in mainstream contemporary economics. In the opening pages of Power Density, Smil notes that economists used to talk about land, labour and capital as three key factors in production, but in the last century, land dropped out of the theory.

The measurement of power per unit of land is one way to account for use of land in an economic system. As we will discuss later, those units of land may prove difficult to adequately quantify. But first we’ll look at another simple but troublesome issue.

Does the clock tick in seconds or in centuries?

It may not be immediately obvious to English majors or philosophers (I plead guilty), but Smil’s statement of power density – watts per square meter – includes a unit of time. That’s because a watt is itself a rate, defined as a joule per second. So power density equals joules per second per square meter.

There’s nothing sacrosanct about the second as the unit of choice. Power densities could also be calculated if power were stated in joules per millisecond or per megasecond, and with only slightly more difficult mathematical gymnastics, per century or per millennium. That is of course stretching a point, but Smil’s discussion of power density would take on a different flavor if we thought in longer time frames.
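One way to see the embedded time unit is to recover an average power density from a year’s energy output. This is a minimal sketch with arbitrary placeholder numbers, not figures from Smil:

```python
# Watts per square meter = joules per second per square meter, so an annual
# energy total has to be converted back to a per-second rate before dividing
# by land area. The example figures are arbitrary placeholders.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def average_power_density(annual_energy_joules: float, land_area_m2: float) -> float:
    """Average power density in W/m2 from one year's energy output."""
    return annual_energy_joules / SECONDS_PER_YEAR / land_area_m2

# e.g. a site yielding 3.2e13 joules in a year from one hectare (10,000 m2):
print(round(average_power_density(3.2e13, 10_000), 1), "W/m2")
```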

Consider the example with which Smil opens the book. In the early stages of the industrial age, English iron smelting was accomplished with the heat from charcoal, which in turn was made from coppiced beech and oak trees. As pig iron production grew, large areas of land were required solely for charcoal production. This changed in the blink of an eye, in historical terms, with the development of coal mining and the process of coking, which converted coal to nearly 100% pure carbon with an energy content equivalent to that of good charcoal.

As a result, the charcoal from thousands of hectares of hardwood forest could be replaced by coal from a mine site of only a few hectares. Or in Smil’s favored terms,

The overall power density of mid-eighteenth-century English coke production was thus roughly 500 W/m2, approximately 7,000 times higher than the power density of charcoal production. (Power Density, pg 4)

Smil notes rightly that this shift had enormous consequences for the English countryside, English economy and English society. Yet my immediate reaction to this passage was to cry foul – there is a sleight of hand going on.

While the charcoal production figures are based on the amount of wood that a hectare might produce on average each year, in perpetuity, the coal from the mine will dwindle and then run out in a century or two. If we averaged the power densities of the woodlot and the mine over several centuries or millennia, the comparison would look much different.
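A rough sketch of that objection, using the 500 W/m2 coke figure and the roughly 7,000-fold ratio quoted above; the mine’s productive lifetime of 150 years is my own assumption:

```python
# Illustrative only: average power density once years of zero output are
# counted. 150 productive years is an assumed lifetime, not Smil's figure.

def long_run_density(density_while_producing: float,
                     productive_years: float,
                     horizon_years: float) -> float:
    """Power density averaged over a horizon longer than the productive life."""
    return density_while_producing * min(productive_years, horizon_years) / horizon_years

coal_mine_w_m2 = 500            # while producing
charcoal_w_m2 = 500 / 7000      # perpetual woodlot, roughly 0.07 W/m2

for horizon in (100, 500, 2000):
    print(f"over {horizon:>4} years: mine ≈ {long_run_density(coal_mine_w_m2, 150, horizon):6.1f} W/m2, "
          f"woodlot ≈ {charcoal_w_m2:.2f} W/m2")
```

Over a century the mine still looks unbeatable; stretch the horizon to a couple of millennia and its advantage shrinks by more than an order of magnitude, while the woodlot’s figure never changes.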

And that’s a problem throughout Power Density. Smil often grapples with the best way to average power densities over time, but never establishes a rule that works well for all energy sources.


The Toronto Power Generating Station was built in 1906, just upstream from Horseshoe Falls in Niagara Falls, Ontario. It was mothballed in 1974. Photographed in February, 2014.

In discussing photovoltaic generation, he notes that solar radiation varies greatly by hour and month. It would make no sense to calculate the power output of a solar panel solely by the results at noon in mid-summer, just as it would make no sense to run the calculation solely at twilight in mid-winter. It is reasonable to average the power density over a whole year’s time, and that’s what Smil does.

When considering the power density of ethanol from sugar cane, it would be crazy to run the calculation based solely on the month of harvest, so again, the figures Smil uses are annual average outputs. Likewise, wood grown for biomass fuel can be harvested approximately every 20 years, so Smil divides the energy output during a harvest year by 20 to arrive at the power density of this energy source.

Using the year as the averaging unit makes obvious sense for many renewable energy sources, but this method breaks down just as obviously when considering non-renewable sources.

How do you calculate the average annual power density for a coal mine which produces high amounts of power for a hundred years or so, and then produces no power for the rest of time? Or the power density of a fracked gas well whose output will continue only a few decades at most?

The obvious rejoinder to this line of questioning is that when the energy output of a coal mine, for example, ceases, the land use also ceases, and at that point the power density of the coal mine is neither high nor low nor zero; it simply cannot be part of a calculation. As we’ll discuss later in this series, however, there are many cases where reclamations are far from certain, and so a “claim” on the land goes on.

Smil is aware of the transitory nature of fossil fuel sources, of course, and he cites helpful and eye-opening figures for the declining power densities of major oil fields, gas fields and coal mines over the past century. Yet in Power Density, most of the figures presented for non-renewable energy facilities apply for that (relatively brief) period when these facilities are in full production, but they are routinely compared with power densities of renewable energy facilities which could continue indefinitely.

So is it really true that power density is a measure “which can be used to evaluate and compare all energy fluxes in nature and in any society”? Only with some critical qualifications.

In summary, we return to Smil’s oft-emphasized theme, that current renewable resource technologies are no match for the energy demands of our present civilization. He argues convincingly that the power density of consumption on a busy expressway will not be matched by the power density of production of ethanol from corn: it would take a ridiculous and unsustainable area of corn fields to fuel all that high-energy transport. Widening the discussion, he establishes no less convincingly, to my mind, that solar power, wind power, and biofuels are not going to fuel our current high-energy way of life.

Yet if we extend our averaging units to just a century or two, we could calculate just as convincingly that the power densities of non-renewable fuel sources will also fail to support our high-energy society. And since we’re already a century into this game, we might be running out of time.

Top photo: insulators on high-voltage transmission line near Darlington Nuclear Generating Station, Bowmanville, Ontario.


‘Are we there yet?’ The uncertain road to the twenty-first century.

Also published at Resilience.org.

What made the twentieth century such a distinctive period in human history? Are we moving into the future at an ever-increasing speed? What measures provide the most meaningful comparisons of different energy technologies? Is it “conservative” to base forecasts on business-as-usual scenarios?

These questions provide handy lenses for looking at the work of prolific energy science writer Vaclav Smil.

Smil, a professor emeritus at the University of Manitoba, is not likely to publish any best-sellers, but his books are widely read by people looking for data-backed discussion of energy sources and their role in our civilization. While Smil’s seemingly effortless fluency in wide-ranging topics of energy science can be intimidating to non-scientists, many of his books require no more than a good high-school-level knowledge of physics, chemistry and mathematics.

This post is the first in a series on issues raised by Smil. How many posts? Let’s just say, to use a formulation familiar to anyone who reads Smil, that the number of posts in this series will be “in the range of an order of magnitude less” than the number of Smil’s books. (He’s at 37 books and counting.)

The myth of accelerating change

In early 2004, I wrote a newspaper column with the title “Got Any Change?” Some excerpts:

Think back 50 years. If you grew up in North America, people were already travelling in cars, which moved along at about 60 miles per hour. You lived in a house with heat and running water, and you could just flick a switch to turn on the lights. You turned on the TV or radio to get instant news. You could pick up the phone and actually talk to relatives on the other side of the country.

For ease of daily living and communication, things haven’t changed much in the last 50 years for most North Americans.

My grandparents, by contrast, who grew up “when motorcars were still exotic playthings”, really lived through rapid and fundamental changes:

The magic of telephone reached into rural areas, and soon my grandparents adjusted to the even more astonishing development of moving pictures, transmitted to television sets in the living room. The airplane was invented about the time my grandparents were born, but they lived long enough to fly on passenger jets, and they watched the live newscasts as astronauts landed on the moon. (“Got Any Change?”, in the Brighton Independent, January 7, 2004)

As it turns out Smil was working on a similar premise, and developing it with his customary authority and historical rigor. The result was his 2005 book Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact. This was the first Smil book I picked up, and naturally I read it while basking in the warm glow of confirmation bias.

In the course of 300 pages, Smil argues that many world-changing technologies swept the world in the twentieth century, but nearly all of them are directly traceable to scientific advances – both theoretical and applied – during the period 1867 to 1914. There is no other period in world history so far, he says, in which so many scientific discoveries made their way so rapidly into the fabric of everyday life.

Most of [these technical advances] are still with us not just as inconsequential survivors or marginal accoutrements from a bygone age but as the very foundations of modern civilization. Such a profound and abrupt discontinuity with such lasting consequences has no equivalent in history.

For anyone alive in North America today, it’s easy to take these advances for granted, because we have never known a world without them. That’s what makes Smil’s book so valuable. In detail and with clarity, he outlines the development of electrical generators, transformers, transmission systems, and motors; internal combustion engines; new industrial processes that turned steel, aluminum, concrete, and plastics from scarce or unknown products into mass-produced commodities; and the ability to harness the electromagnetic spectrum in ways that made telephone, radio and television commercially feasible within the first few decades of the twentieth century.


The Peter R Cresswell docked at the St. Mary’s Cement plant on Lake Ontario near Bowmanville, Ontario. The plant converts quarried limestone to cement, in kilns fueled by coal and pet coke. Photo from July, 2015.

Energy matters

There is a good deal in Creating the Twentieth Century on increasingly efficient methods of energy conversion. For example, Smil writes that “Typical efficiency of new large stationary steam engines rose from 6–10% during the 1860s to 12–15% after 1900, a 50% efficiency gain, and when small machines were replaced by electric motors, the overall efficiency gain was typically more than fourfold.”

But I found it odd that Creating the Twentieth Century gives little ink to the sources of energy. Smil does note that

for the first time in human history the age was marked by the emergence of high-energy societies whose functioning, be it on mundane or sophisticated levels, became increasingly dependent on incessant supplies of fossil fuels and on rising need for electricity.

Yet there is no substantial examination in this book of the fossil fuel extraction and processing industries, which rapidly became (and remained) among the dominant industries of the twentieth century.

Clearly the new understandings of thermodynamics and electromagnetism, along with new processes for steel and concrete production, were key to the twentieth century as we knew it. But suppose those developments had occurred, but at the same time only a few sizable reservoirs of oil had been discovered, so that petroleum had remained useful but expensive. Would the twentieth century still have happened?

Perhaps we shouldn’t blame Smil for avoiding a counterfactual question about epochal changes a century and more ago. After all, he has devoted a great deal of attention to a more pressing quandary: how might we create a future, with the scientific knowledge that’s accumulated in the past century and a half, while also faced with the need to move beyond fossil fuel dependence? Can we make such a transition, and how long might it take? We’ll move to those issues in the coming installments.

Top photo: Trucks hauling crude oil and frac water near Watford City, North Dakota, June 2014.