Energy and Civilization: a review

Also published at Resilience.org.

If you were to find yourself huddled with a small group of people in a post-crash, post-internet world, hoping to recreate some of the comforts of civilization, you’d do well to have saved a printed copy of Vaclav Smil’s Energy and Civilization: A History.

Smil’s new 550-page magnum opus would help you understand why for most applications a draft horse is a more efficient engine than an ox – but only if you use an effective harness, which the book illustrates well. He could help you decide whether building a canal or a hard-topped road would be a more productive use of your energies. When you were ready to build capstans or block-and-tackle mechanisms for accomplishing heavy tasks, his discussion and his illustrations would be invaluable.

But hold those thoughts of apocalypse for a moment. Smil’s book is not written as a doomer’s handbook, but as a thorough guide to the role of energy conversions in human history to date. Based on his 1994 book Energy in World History, the new book is about 60% longer and includes 40% more illustrations.

Though the initial chapters on prehistory are understandably brief, Smil lays the groundwork with his discussion of the dependency of all living organisms on their ability to acquire enough energy in usable forms.

The earliest humanoids had some distinct advantages and liabilities in this regard. Unlike other primates, humans evolved to walk on two feet all the time, not just occasionally. Ungainly though this “sequence of arrested falls” may be, “human walking costs about 75% less energy than both quadrupedal and bipedal walking in chimpanzees.” (Energy and Civilization, pg 22)

What to do with all that saved energy? Just think:

The human brain claims 20–25% of resting metabolic energy, compared to 8–10% in other primates and just 3–5% in other mammals.” (Energy and Civilization, pg 23)

In his discussion of the earliest agricultures, a recurring theme is brought forward: energy availability is always a limiting factor, but other social factors also come into play throughout history. In one sense, Smil explains, the move from foraging to farming was a step backwards:

Net energy returns of early farming were often inferior to those of earlier or concurrent foraging activities. Compared to foraging, early farming usually required higher human energy inputs – but it could support higher population densities and provide a more reliable food supply.” (Energy and Civilization, pg 42)

The higher population densities allowed a significant number of people to work at tasks not immediately connected to securing daily energy requirements. The result, over many millennia, was the development of new materials, tools and processes.

Smil gives succinct explanations of why the smelting of brass and bronze was less energy-intensive than production of pure copper. Likewise he illustrates why the iron age, with its much higher energy requirements, resulted in widespread deforestation, and why iron production remained very limited until humans learned to exploit coal deposits in recent centuries.

Cooking snails in a pot over an open fire. In Energy and Civilization, Smil covers topics as diverse as the importance of learning to use fire to supply the energy-rich foods humans need; the gradual deployment of better sails which allowed mariners to sail closer to the wind; and the huge boost in information consumption that occurred a century ago due to a sudden drop in the energy cost of printing. Image from Wellcome Images, via Wikimedia Commons.

Energy explosion

The past two hundred years of fossil-fuel-powered civilization takes up the biggest chunk of the book. But the effective use of fossil fuels had to be preceded by many centuries of development in metallurgy, chemistry, understanding of electromagnetism, and a wide array of associated technologies.

While making clear how drastically human civilizations have changed in the last several generations, Smil also takes care to point out that even the most recent energy transitions didn’t take place all at once.

While the railways were taking over long-distance shipments and travel, the horse-drawn transport of goods and people dominated in all rapidly growing cities of Europe and North America.” (Energy and Civilization, pg 185)

Likewise the switches from wood to coal or from coal to oil happened only with long overlaps:

The two common impressions – that the twentieth century was dominated by oil, much as the nineteenth century was dominated by coal – are both wrong: wood was the most important fuel before 1900 and, taken as a whole, the twentieth century was still dominated by coal. My best calculations show coal about 15% ahead of crude oil …” (Energy and Civilization, pg 275)

Smil draws an important lesson for the future from his careful examination of the past:

Every transition to a new form of energy supply has to be powered by the intensive deployment of existing energies and prime movers: the transition from wood to coal had to be energized by human muscles, coal combustion powered the development of oil, and … today’s solar photovoltaic cells and wind turbines are embodiments of fossil energies required to smelt the requisite metals, synthesize the needed plastics, and process other materials requiring high energy inputs.” (Energy and Civilization, pg 230)

A missing chapter

Energy and Civilization is a very ambitious book, covering a wide spread of history and science with clarity. But a significant omission is any discussion of the role of slavery or colonialism in the rise of western Europe.

Smil does note the extensive exploitation of slave energy in ancient construction works and in rowing the warships of the democratic cities of ancient Greece. He carefully calculates the power output needed for these projects, whether supplied by slaves, peasants, or animals.

In his look at recent European economies, Smil also notes the extensive use of physical and child labour that occurred simultaneously with the growth of fossil-fueled industry. For example, he describes the brutal work conditions endured by women and girls who carried coal up long ladders from Scottish coal mines, in the period before effective machinery was developed for this purpose.

But what of the 20 million or more slaves taken from Africa to work in the European colonies of the “New World”? Did the collected energies of all these unwilling participants play no notable role in the progress of European economies?

Likewise, vast quantities of resources in the Americas, including oil-rich marine mammals and old-growth forests, were exploited by the colonies for the benefit of European nations which had run short of these important energy commodities. Did this sudden influx of energy wealth play a role in European supremacy over the past few centuries? Attention to such questions would have made Energy and Civilization a more complete look at our history.

An uncertain future

Smil closes the book with a well-composed rumination on our current predicaments and the energy constraints on our future.

While the timing of transition is uncertain, Smil leaves little doubt that a shift away from fossil fuels is necessary, inevitable, and very difficult. Necessary, because fossil fuel consumption is rapidly destabilizing our climate. Inevitable, because fossil fuel reserves are being depleted and will not regenerate in any relevant timeframe. Difficult, both because our industrial economies are based on a steady growth in consumption, and because much of the global population still doesn’t have access to a sufficient quantity of energy to provide even the basic necessities for a healthy life.

The change, then, should be led by those who are now consuming quantities of energy far beyond the level where this consumption furthers human development.

Average per capita energy consumption and the human development index in 2010. Smil, Energy and Civilization, pg 363

 

Smil notes that energy consumption rises in correlation with the Human Development Index, but only up to a point. Increases in energy use beyond roughly the level of present-day Turkey or Italy provide no significant boost in human development. Some of the ways we consume a lot of energy, he argues, are pointless, wasteful and ineffective.

In affluent countries, he concludes,

Growing energy use cannot be equated with effective adaptations and we should be able to stop and even to reverse that trend …. Indeed, high energy use by itself does not guarantee anything except greater environmental burdens.

Opportunities for a grand transition to less energy-intensive society can be found primarily among the world’s preeminent abusers of energy and materials in Western Europe, North America, and Japan. Many of these savings could be surprisingly easy to realize.” (Energy and Civilization, pg 439)

Smil’s book would indeed be a helpful post-crash guide – but it would be much better if we heed the lessons, and save the valuable aspects of civilization, before apocalypse overtakes us.

 

Top photo: Common factory produced brass olive oil lamp from Italy, c. late 19th century, adapted from photo on Wikimedia Commons.

The Carbon Code – imperfect answers to impossible questions

Also published at Resilience.org.

“How can we reconcile our desire to save the planet from the worst effects of climate change with our dependence on the systems that cause it? How can we demand that industry and governments reduce their pollution, when ultimately we are the ones buying the polluting products and contributing to the emissions that harm our shared biosphere?”

These thorny questions are at the heart of Brett Favaro’s new book The Carbon Code (Johns Hopkins University Press, 2017). While he  readily concedes there can be no perfect answers, his book provides a helpful framework for working towards the immediate, ongoing carbon emission reductions that most of us already know are necessary.

Favaro’s proposals may sound modest, but his carbon code could play an important role if it is widely adopted by individuals, by civil organizations – churches, labour unions, universities – and by governments.

As a marine biologist at Newfoundland’s Memorial University, Favaro is keenly aware of the urgency of the problem. “Conservation is a frankly devastating field to be in,” he writes. “Much of what we do deals in quantifying how many species are declining or going extinct  ….”

He recognizes that it is too late to prevent climate catastrophe, but that doesn’t lessen the impetus to action:

There’s no getting around the prospect of droughts and resource wars, and the creation of climate refugees is certain. But there’s a big difference between a world afflicted by 2-degree warming and one warmed by 3, 4, or even more degrees.”

In other words, we can act now to prevent climate chaos going from worse to worst.

The code of conduct that Favaro presents is designed to help us be conscious of the carbon impacts of our own lives, and work steadily toward the goal of a nearly-complete cessation of carbon emissions.

The carbon code of conduct consists of four “R” principles that must be applied to one’s carbon usage:

1. Reduce your use of carbon as much as possible.

2. Replace carbon-intensive activities with those that use less carbon to achieve the same outcome.

3. Refine the activity to get the most benefit for each unit of carbon emitted.

4. Finally, Rehabilitate the atmosphere by offsetting carbon usage.”

There’s a good bit of wiggle room in each of those four ’R’s, and Favaro presents that flexibility not as a bug but as a feature. “Codes of conduct are not the same thing as laws – laws are dichotomous, and you are either following them or you’re not,” he says. “Codes of conduct are interpretable and general and are designed to shape expectations.”

Street level

The bulk of the book is given to discussion of how we can apply the carbon code to home energy use, day-to-day transportation, a lower-carbon diet, and long distance travel.

There is a heavy emphasis on a transition to electric cars – an emphasis that I’d say is one of the book’s weaker points. For one thing, Favaro overstates the energy efficiency of electric vehicles.

EVs are far more efficient. Whereas only around 20% of the potential energy stored in a liter of gasoline actually goes to making an ICE [Internal Combustion Engine] car move, EVs convert about 60% of their stored energy into motion ….”

In a narrow sense this is true, but it ignores the conversion costs in common methods of producing the electricity that charges the batteries. A typical fossil-fueled generating plant operates in the range of 35% energy efficiency. So the actual efficiency of an electric vehicle is likely to be closer to 35% X 60%, or 21% – in other words, not significantly better than the internal combustion engine.
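To make that back-of-envelope arithmetic explicit, here is a minimal sketch in Python. The 20% and 60% figures are Favaro’s, quoted above; the 35% power-plant efficiency is the assumption stated in the text, and grid transmission and charging losses are ignored.

```python
# Rough well-to-wheel comparison (illustrative figures only)
ICE_TANK_TO_WHEEL = 0.20        # ~20% of gasoline energy moves an ICE car (Favaro's figure)
EV_BATTERY_TO_WHEEL = 0.60      # ~60% of battery energy moves an EV (Favaro's figure)
FOSSIL_PLANT_EFFICIENCY = 0.35  # assumed efficiency of a typical fossil-fueled power plant

ev_fuel_to_wheel = FOSSIL_PLANT_EFFICIENCY * EV_BATTERY_TO_WHEEL

print(f"ICE, fuel to wheel:                 {ICE_TANK_TO_WHEEL:.0%}")
print(f"EV on a fossil grid, fuel to wheel: {ev_fuel_to_wheel:.0%}")  # ~21%
```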

By the same token, if a large proportion of new renewable energy capacity over the next 15 years must be devoted to charging electric cars, it will be extremely challenging to simultaneously switch home heating, lighting and cooling processes away from fossil fuel reliance.

Yet if the principles of Favaro’s carbon code were followed, we would not only stop building internal combustion cars, we would also make the new electric cars smaller and lighter, provide strong incentives to reduce the number of miles they travel (especially miles with only one passenger), and rapidly improve bicycling networks and public transit facilities to get people out of cars for most of their ordinary transportation. To his credit, Favaro recognizes the importance of all these steps.

Flight paths

As a researcher invited to many international conferences, and a person who lives in Newfoundland but whose family is based in far-away British Columbia, Favaro has given a lot of thought to the conundrum of air travel. He notes that most of the readers of his book will be members of a particular global elite: the small percentage of the world’s population who board a plane more than a few times in their lives.

We members of that elite group have a disproportionate carbon footprint, and therefore we bear particular responsibility for carbon emission reductions.

The Air Transport Action Group, a UK-based industry association, estimated that the airline industry accounts for about 2% of global CO2 emissions. That may sound small, but given the tiny percentage of the world population that flies regularly, it represents a massive outlier in terms of carbon-intensive behaviors. In the United States, air travel is responsible for about 8% of the country’s emissions ….”

Favaro is keenly aware that if the Carbon Code were read as “never get on an airplane again for the rest of your life”, hardly anyone would adopt the code (and those few who did would be ostracized from professional activities and in many cases cut off from family). Yet the four principles of the Carbon Code can be very helpful in deciding when, where and how often to use the most carbon-intensive means of transportation.

Remember that ultimately all of humanity needs to mostly stop using fossil fuels to achieve climate stability. Therefore, just like with your personal travel, your default assumption should be that no flights are necessary, and then from there you make the case for each flight you take.”

The Carbon Code is a wise, carefully optimistic book. Let’s hope it is widely read and that individuals and organizations take the Carbon Code to heart.

 

Top photo: temporary parking garage in vacant lot in Manhattan, July 2013.

Alternative Geologies: Trump’s “America First Energy Plan”

Also published at Resilience.org.

Donald Trump’s official Energy Plan envisions cheap fossil fuel, profitable fossil fuel and abundant fossil fuel. The evidence shows that from now on, only two of those three goals can be met – briefly – at any one time.

While many of the Trump administration’s “alternative facts” have been roundly and rightly ridiculed, the myths in the America First Energy Plan are still widely accepted and promoted by mainstream media.

The dream of a great America which is energy independent, an America in which oil companies make money and pay taxes, and an America in which gas is still cheap, is fondly nurtured by the major business media and by many politicians of both parties.

The America First Energy Plan expresses this dream clearly:

The Trump Administration is committed to energy policies that lower costs for hardworking Americans and maximize the use of American resources, freeing us from dependence on foreign oil.

And further:

Sound energy policy begins with the recognition that we have vast untapped domestic energy reserves right here in America. The Trump Administration will embrace the shale oil and gas revolution to bring jobs and prosperity to millions of Americans. … We will use the revenues from energy production to rebuild our roads, schools, bridges and public infrastructure. Less expensive energy will be a big boost to American agriculture, as well.
– www.whitehouse.gov/america-first-energy

This dream harkens back to a time when fossil fuel energy was indeed plentiful and cheap, when profitable oil companies did pay taxes to fund public infrastructure, and the US was energy independent – that is, when Donald Trump was still a boy who had not yet managed a single company into bankruptcy.

To add to the “flashback to the ’50s” mood, Trump’s plan makes no mention of renewable energy, solar power, or wind turbines – it’s all fossil fuel all the way.

Nostalgia for energy independence

Let’s look at the “energy independence” myth in context. It has been more than 50 years since the US produced as much oil as it consumed.

Here’s a graph of US oil consumption and production since 1966. (Figures are from the BP Statistical Review of World Energy, via ycharts.com.)

Gap between US oil consumption and production – from stats on ycharts.com

Even at the height of the fracking boom in 2014, according to BP’s figures Americans were burning 7 million barrels per day more oil than was being produced domestically. (Note: the US Energy Information Administration shows net oil imports at about 5 million barrels/day in 2014 – still a big chunk of consumption.)

OK, so the US hasn’t been “energy independent” in oil for generations, and is not close to that goal now.

But if Americans Drill, Baby, Drill, isn’t it possible that great new fields could be discovered?

Well … oil companies in the US and around the world ramped up their exploration programs dramatically during the past 40 years – and came up with very little new oil, and very expensive new oil.

It’s difficult to find estimates of actual new oil discoveries in the US – though it’s easy to find news of one imaginary discovery.

When I  google “new oil discoveries in US”, most of the top links go to articles with totally bogus headlines, in totally mainstream media, from November 2016.

For example:

CNN: “Mammoth Texas oil discovery biggest ever in USA”

USA Today: “Largest oil deposit ever found in U.S. discovered in Texas”

The Guardian: “Huge deposit of untapped oil could be largest ever discovered in US”

Business Insider: “The largest oil deposit ever found in America was just discovered in Texas”

All these stories are based on a November 15, 2016 announcement by the United States Geological Survey – but the USGS claim was a far cry from the oil gushers conjured up in mass-media headlines.

The USGS wasn’t talking about a new oil field, but about one that has been drilled and tapped for decades. It merely estimated that there might be 20 billion more barrels of tight oil (oil trapped in shale) remaining in the field. The USGS announcement further specified that this estimated oil “consists of undiscovered, technically recoverable resources”. (Emphasis in USGS statement). In other words, if and when it is discovered, it will likely be technically possible to extract it, if the cost of extraction is no object.

The dwindling pace of oil discovery

We’ll come back to the issues of “technically recoverable” and “cost of extraction” later. First let’s take a realistic look at the pace of new oil discoveries.

Bloomberg sums it up in an article and graph from August, 2016:

Graph from Bloomberg.com

This chart is restricted to “conventional oil” – that is, the oil that can be pumped straight out of the ground, or which comes streaming out under its own pressure once the well is drilled. That’s the kind of oil that fueled the 20th century – but the glory days of discovery ended by the early 1970s.

While it is difficult to find good estimates of ongoing oil exploration expenditures, we do have estimates of “upstream capital spending”. This larger category includes not only the cost of exploration, but the capital outlays needed in developing new discoveries through to production.

Exploration and development costs must be funded by oil companies or by lenders, and the more companies rely on expensive wells such as deep off-shore wells or fracked wells, the less money is available for new exploration.

Over the past 20 years companies have been increasingly reliant on (a) fracked oil and gas wells, which suck up huge amounts of capital, and (b) exploration in ever-more-difficult environments such as deep sea, the arctic, and countries with volatile social situations.

As Julie Wilson of Wood Mackenzie forecast in Sept 2016, “Over the next three years or more, exploration will be smaller, leaner, more efficient and generally lower-risk. The biggest issue exploration has faced recently is the difficulty in commercializing discoveries—turning resources into reserves.”

Do oil companies choose to explore in more difficult environments just because they love a costly challenge? Or is it because their highly skilled geologists believe most of the oil deposits in easier environments have already been tapped?

The following chart from Barclays Global Survey shows the steeply rising trend in upstream capital spending over the past 20 years.

Graph from Energy Fuse Chart of the Week, Sept 30, 2016

 

Between the two charts above – “Oil Discoveries Lowest Since 1947”, and “Global Upstream Capital Spending” – there is overlap for the years 1985 to 2014. I took the numbers from these charts, averaged them into five-year running averages to smooth out year-to-year volatility, and plotted them together along with global oil production for the same years.
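For readers curious about the smoothing step, a minimal sketch of a five-year running average is shown below; the numbers are placeholders, not the values actually read off the charts.

```python
# Five-year running average to smooth year-to-year volatility
def running_average(values, window=5):
    """Average each point with up to (window - 1) preceding points."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Placeholder series (billion barrels of new discoveries per year)
discoveries = [30, 12, 25, 8, 15, 10, 6, 9, 5, 4]
print([round(x, 1) for x in running_average(discoveries)])
```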

Based on Wood Mackenzie figures for new oil discoveries, Barclays Global Survey figures for upstream capital expenditures, and world oil production figures from the US Energy Information Administration

This chart highlights the predicament faced by societies reliant on petroleum. It has been decades since we found as much new conventional oil in a year as we burned – so the supplies of cheap oil are being rapidly depleted. The trend has not been changed by the US fracking boom, which has drawn on oil resources known for decades, resources that are costly to extract and that amounted to only about 5% of world production at the boom’s high point.

Yet while our natural capital in the form of conventional oil reserves is dwindling, the financial capital at play has risen steeply. In the 10 year period from 2005, upstream capital spending nearly tripled from $200 billion to almost $600 billion, while oil production climbed only about 15% and new conventional oil discoveries averaged out to no significant growth at all.

Is doubling down on this bet a sound business plan for a country? Will prosperity be assured by investing exponentially greater financial capital into the reliance on ever more expensive oil reserves, because the industry simply can’t find significant quantities of cheaper reserves? That fool’s bargain is a good summary of Trump’s all-fossil-fuel “energy independence” plan.

(The Canadian government’s implicit national energy plan is not significantly different, as the Trudeau government continues the previous Harper government’s promotion of tar sands extraction as an essential engine of “growth” in the Canadian economy.)

To jump back from global trends to a specific example, we can consider the previously mentioned “discovery” of 20 billion barrels of unconventional oil in the Permian basin of west Texas. Mainstream media articles exclaimed that this oil was worth $900 billion. As geologist Art Berman points out, that valuation is simply 20 billion barrels times the market price last November of about $45/barrel. But he adds that based on today’s extraction costs for unconventional oil in that field, it would cost $1.4 trillion to get this oil out of the ground. At today’s prices, in other words, each barrel of that oil would represent a $20 loss by the time it got to the surface.

Two out of three

To close, let’s look again at the three goals of Trump’s America First Energy Plan:
• Abundant fossil fuel
• Profitable fossil fuel
• Cheap fossil fuel

With remaining resources increasingly represented by unconventional oil such as that in the Permian basin of Texas, there is indeed abundant fossil fuel – but it’s very expensive to get. Therefore if oil companies are to remain profitable, oil has to be more expensive – that is, there can be abundant fossil fuel and profitable fossil fuel, but then the fuel cannot be cheap (and the economy will hit the skids). Or there can be abundant fossil fuel at low prices, but oil companies will lose money hand-over-fist (a situation which cannot last long).

It’s a bit harder to imagine, but there can also be fossil fuel which is both profitable to extract and cheap enough for economies to afford – it just won’t be abundant. That would require scaling back production/consumption to the remaining easy-to-extract conventional fossil fuels, and a reduction in overall demand so that those limited supplies aren’t immediately bid out of a comfortable price range. For that reduction in demand to occur, there would have to be some combination of dramatic reduction in energy use per capita and a rapid increase in deployment of renewable energies.

A rapid decrease in demand for oil is anathema to Trumpian fossil-fuel cheerleaders, but it is far more realistic than their own dream of cheap, profitable, abundant fossil fuel forever.
Top photo: composite of Donald Trump in a lake of oil spilled by the Lakeview Gusher, California, 1910. The Lakeview Gusher was the largest on-land oil spill in the US. It occurred in the Midway-Sunset oil field, which was discovered in 1894. In 2006 this field remained California’s largest producing field, though more than 80% of the estimated recoverable reserves had been extracted. (Source: California Department of Conservation, 2009 Annual Report of the State Oil & Gas Supervisor)

Energy From Waste, or Waste From Energy? A look at our local incinerator

Also published at Resilience.org.

Is it an economic proposition to drive up and down streets gathering up bags of plastic fuel for an electricity generator?

Biking along the Lake Ontario shoreline one autumn afternoon, I passed the new and just-barely operational Durham-York Energy Centre and a question popped into mind. If this incinerator produces a lot of electricity, where are all the wires?

The question was prompted in part by the facility’s location right next to the Darlington Nuclear Generating Station. Forests of towers and great streams of high-voltage power lines spread out in three directions from the nuclear station, but there is no obvious visible evidence of major electrical output from the incinerator.

So just how much electricity does the Durham-York Energy Centre produce? Does it produce as much energy as it consumes? In other words, is it accurate to refer to the incinerator as an “energy from waste” facility, or is it a “waste from energy” plant? The first question is easy to answer, the second takes a lot of calculation, and the third is a matter of interpretation.

Before we get into those questions, here’s a bit of background.

The Durham-York Energy Centre is located about an hour’s drive east of Toronto on the shore of Lake Ontario, and was built at a cost of about $300 million. It is designed to take 140,000 tonnes per year of non-recyclable and non-compostable household garbage, burn it, and use the heat to power an electric generator. The garbage comes from the jurisdictions of adjacent regions, Durham and York (which, like so many towns and counties in Ontario, share names with places in England).

The generator powered by the incinerator is rated at 14 megawatts net, while the generators at Darlington Nuclear Station, taken together, are rated at 3500 megawatts net. The incinerator produces 1/250th the electricity that the nuclear plant produces. That explains why there is no dramatically visible connection between the incinerator and the provincial electrical grid.

In other terms, the facility produces surplus power equivalent to the needs of 10,000 homes. Given that Durham and York regions have fast-growing populations – more than 1.6 million at the 2011 census – the power output of this facility is not regionally significant.

A small cluster of transformers is part of the Durham-York Energy Centre.

Energy Return on Energy Invested

But does the facility produce more energy than it uses? That’s not so easy to determine. A full analysis of Energy Return On Energy Invested (EROEI) would require data from many different sources. I decided to approach the question by looking at just one facet of the issue:

Is the energy output of the generator greater than the energy consumed by the trucks which haul the garbage to the site?

Let’s begin with a look at the “fuel” for the incinerator. Initial testing of the facility showed better than expected energy output due to the “high quality of the garbage”, according to Cliff Curtis, commissioner of works for Durham Region (quoted in the Toronto Star). Because most of the paper, cardboard, glass bottles, metal cans, recyclable plastic containers, and organic material is picked up separately and sent to recycling or composting plants, the remaining garbage is primarily plastic film or foam. (Much of this, too, is technically recyclable, but in current market conditions that recycling would be carried out at a financial loss.)

Inflammatory material

If you were lucky enough to grow up in a time and a place where building fires was a common childhood pastime, you know that plastic bags and styrofoam burn readily and create a lot of heat. A moment’s consideration of basic chemistry backs up that observation.

Our common plastics are themselves a highly processed form of petroleum. One of the major characteristics of our industrial civilization is that we have learned how to suck finite oil resources from the deepest recesses of the earth, process them in highly sophisticated ways, mold them into endlessly versatile – but still cheap! – types of packaging, use the packaging once, and then throw the solidified petroleum into the garbage.

If instead of burying the plastic garbage in a landfill, we burn it, we capture some of the energy content of that original petroleum. There’s a key problem, though. As opposed to a petroleum or gas well, which provides huge quantities of energy in one location, our plastic “fuel” is light-weight and dispersed through every city, town, village and rural area.

The question thus becomes: is it an economic proposition to drive up and down every street gathering up bags of plastic fuel for an electricity generator?

The light, dispersed nature of the cargo has a direct impact on garbage truck design, and therefore on the number of loads it takes to haul a given number of tonnes of garbage.

Because these trucks must navigate narrow residential streets they must have short wheelbases. And because they need to compact the garbage as they go, they have to carry additional heavy machinery to do the compaction. The result is a low payload:

Long-haul trucks and their contents can weigh 80,000 pounds. However, the shorter wheelbase of garbage and recycling trucks results in a much lower legal weight  — usually around 51,000 pounds. Since these trucks weigh about 33,000 pounds empty, they have a legal payload of about nine tons. (Source: How Green Was My Garbage Truck)

By my calculations, residential garbage trucks picking up mostly light packaging will be “full” with a load weighing about 6.8 tonnes. (The appendix to this article lists sources and shows the calculations.)

At 6.8 tonnes per load, it will require over 20,000 garbage truck loads to gather the 140,000 tonnes burned each year by the Durham-York Energy Centre.

How many kilometers will those trucks travel? Working from a detailed study of garbage pickup energy consumption in Hamilton, Ontario, I estimated that in a medium-density area, an average garbage truck route will be about 45 km. Truck fuel economy during the route is very poor, since there is constant stopping and starting plus frequent idling while workers grab and empty the garbage cans.

There is additional traveling from the base depot to the start of each route, from the end of the route to the drop-off point, and back to the depot.

I used the following map to make a conservative estimate of total kilometers.

Google map of York and Durham Region boundaries, with location of incinerator.

Because most of the garbage delivered to the incinerator comes from Durham Region, and the population of both Durham Region and York Region are heavily weighted to their southern and western portions, I picked a spot in Whitby as an “average” starting point. From that circled “X” to the other “X” (the incinerator location) is 30 kilometers. Using that central location as the starting and ending point for trips, I estimated 105 km total for each load. (45 km on the pickup route, 30 km to the incinerator, and 30 km back to the starting point).

Due to their weight and to their frequent stops, garbage trucks get poor fuel economy. I calculated an average .96 liters/kilometer.

The result: our fleet of trucks would haul 20,600 loads per year, travel 2,163,000 kilometers, and burn just over 2 million liters of diesel fuel.

Comparing diesel to electricity

How does the energy content of the diesel fuel compare to the energy output of the incinerator’s generator? Here the calculations are simpler though the numbers get large.

There are 3412 BTUs in a kilowatt-hour of electricity, and about 36,670 BTUs in a liter of diesel fuel.

If the generator produces enough electricity for 10,000 homes, and these homes use the Ontario average of 10,000 kilowatt-hours per year, then the generator’s output is 100,000,000 kWh per year.

Converted to BTUs, the 100,000,000 kWh equal about 341 billion BTUs.

The diesel fuel burned by the garbage trucks, on the other hand, has a total energy content of about 76 billion BTUs.

That answers our initial question: does the incinerator produce more energy than the garbage trucks consume in fuel? Yes it does, by a factor of about 4.5.
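Here is a minimal sketch of that comparison, using the conversion factors quoted above and the diesel total calculated in the appendix:

```python
# Generator output vs energy content of the trucks' diesel fuel
BTU_PER_KWH = 3412              # BTU per kilowatt-hour of electricity
BTU_PER_LITRE_DIESEL = 36670    # approximate BTU per liter of diesel

electricity_kwh = 10_000 * 10_000       # 10,000 homes x 10,000 kWh per year
diesel_litres = 2_079_627               # total truck fuel, from the appendix

electricity_btu = electricity_kwh * BTU_PER_KWH     # ~341 billion BTU
diesel_btu = diesel_litres * BTU_PER_LITRE_DIESEL   # ~76 billion BTU

print(f"Electricity out: {electricity_btu / 1e9:.0f} billion BTU")
print(f"Diesel in:       {diesel_btu / 1e9:.0f} billion BTU")
print(f"Ratio:           {electricity_btu / diesel_btu:.1f}")  # ~4.5
```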

If the trucks’ diesel were the only energy input, this operation would have an Energy Return On Energy Invested ratio of about 4.5 – comparable to the bottom end of economically viable fossil fuel extraction operations such as Canadian tar sands mining. But of course we have considered just one energy input, the fuel burned by the trucks.

If we added in the energy required to build and maintain the fleet of garbage trucks, plus an appropriate share of the energy required to maintain our roads (which are greatly impacted by weighty trucks), plus the energy used to build the $300 million incinerator/generator complex, the EROEI would be much lower, perhaps below 1. In other words, there is little or no energy return in the business of driving around picking up household garbage to fuel a generator.

Energy from waste, or waste from energy

Finally, our third question: is this facility best referred to as “Energy From Waste” or “Waste From Energy”?

Looking at the big picture, “Waste From Energy” is the best descriptor. We take highly valuable and finite energy sources in the form of petroleum, consume a lot of that energy to create plastic packaging, ship that packaging to every household via a network of stores, and then use a lot more energy to re-collect the plastic so that we can burn it. The small amount of usable energy we get at the last stage is inconsequential.

From a municipal waste management perspective, however, things might look quite different. In our society people believe they have a god-given right to acquire a steady stream of plastic-packaged goods, and a god-given right to have someone else come and pick up their resulting garbage.

Thus municipal governments are expected to pay for a fleet of garbage trucks, and find some way to dispose of all the garbage. If they can burn that garbage and recapture a modest amount of energy in the form of electricity, isn’t that a better proposition than hauling it to expensive landfill sites which inevitably run short of capacity?

Looked at from within that limited perspective, “Energy From Waste” is a fair description of the process. (Whether incineration is a good idea still depends, of course, on the safety of the emissions from modern garbage incinerators – another controversial issue.)

But if we want to seriously reduce our waste, the place to focus is not the last link in the chain – waste disposal. The big problem is our dependence on a steady stream of products produced from valuable fossil fuels, which cannot practically be re-used or even recycled, but only down-cycled once or twice before they end up as garbage.

Top photo: Durham-York Energy Centre viewed from south east. 

APPENDIX – Sources and Calculations

Capacity and Fuel Economy of Garbage Trucks

There are many factors which determine the capacity and fuel economy of garbage trucks, including: type of truck (front-loading, rear-loading, trucks with hoists for large containers vs. trucks which are loaded by hand by workers picking up individual bags); type of route (high density urban areas with large businesses or apartment complex vs. low-density rural areas); and type of garbage (mixed waste including heavy glass, metal and wet organics vs. light but bulky plastics and foam).

Although I sent an email inquiry to Durham Waste Department asking about capacity and route lengths of garbage trucks, I didn’t receive a response. So I looked for published studies which could provide figures that seemed applicable to Durham Region.

A major source was the paper “Fuel consumption estimation for kerbside municipal solid waste (MSW) collection activities”, in Waste Management & Research, 2010, accessed via www.sagepub.com.

This study found that “Within the ‘At route’ stage, on average, the normal garbage truck had to travel approximately 71.9 km in the low-density areas while the route length in high-density areas is approximately 25 km.” Since Durham Region is a mix of older dense urban areas, newer medium-density urban sprawl, and large rural areas, I estimated an average “medium-density area route” of 45 km.

The same study found an average fuel economy of .335 liters/kilometer for garbage trucks when they were traveling from depot to the beginning of a route. The authors found that fuel economy in the “At Route” portion (with frequent stops, starts, and idling) was 1.6 L/km for high-density areas, and 2.0 L/km in low-density areas; I split the difference and used 1.8 L/km as the “At Route” fuel consumption.

As to the volumes of trucks and the weight of the garbage, I based my estimates on figures in “The Workhorses of Waste”, published by MSW Management Magazine and WIH Resource Group. This article states: “Rear-end loader capacities range from 11 cubic yards to 31 cubic yards, with 25 cubic yards being typical.”

Since rear-end loader trucks are the ones I usually see in residential neighborhoods, I used 25 cubic yards as the average volume capacity.

The same article discusses the varying weight factors:

The municipal solid waste deposited at a landfill has a density of 550 to over 650 pounds per cubic yard (approximately 20 to 25 pounds per cubic foot). This is the result of compaction within the truck during collection operations as the truck’s hydraulic blades compress waste that has a typical density of 10 to 15 pounds per cubic foot at the curbside. The in-vehicle compaction effort should approximately double the density and half the volume of the collected waste. However, these values are rough averages only and can vary considerably given the irregular and heterogeneous nature of municipal solid waste.

In Durham Region the heavier paper, glass, metal and wet organics are picked up separately and hauled to recycling depots, so it seems reasonable to assume that the remaining garbage hauled to the incinerator would not be at the dense end of the “550 to over 650 pounds per cubic yard” range. I used what seems like a conservative estimate of 600 pounds per cubic yard.

(I am aware that in some cases garbage may be off-loaded at transfer stations, further compacted, and then loaded onto much larger trucks for the next stage of transportation. This would impact the fuel economy per tonne in transportation, but would involve additional fuel in loading and unloading. I would not expect that the overall fuel use would be dramatically different. In any case, I decided to keep the calculations (relatively) simple and so I assumed that one type of truck would pick up all the garbage and deliver it to the final drop-off.)

OK, now the calculations:

Number of truckloads

25 cubic yard load X 600 pounds / cubic yard = 15000 pounds per load

15000 pounds ÷ 2204 lbs per tonne = 6.805 tonnes per load

140,000 tonnes burned by incinerator ÷ 6.805 tonnes per load = 20,570 garbage truck loads

Fuel burned:

45 km per “At Route” portion X 20,570 loads = 925,650 km “At Route”

1.8 L/km fuel consumption “At Route” x 925,650 km = 1,666,170 liters

60 km per load traveling to and from incinerator

60 km x 20,570 loads = 1,234,200 km traveling

.335 L/km travelling fuel consumption X 1,234,200 km = 413,457 liters

1,666,170 liters + 413,457 liters = 2,079,627 liters total fuel used by garbage trucks

As a check on the reasonableness of this estimate, I calculated the average fuel economy from the above figures:

20,570 loads x 105 km per load = 2,159,850 km per year

2,079,627 liters fuel ÷ 2,159,850 km = .9629 L/km

This compares closely with a figure published by the Washington Post, which said municipal garbage trucks get just 2-3 mpg. The middle of that range, 2.5 miles per US gallon, works out to about 0.94 L/km.
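The appendix arithmetic can be collected into one short script; it simply repeats the estimates above, so the outputs are estimates too.

```python
# Garbage truck fleet estimate, using the same assumptions as the appendix
LOAD_CU_YD = 25             # rear-loader volume capacity, cubic yards
LBS_PER_CU_YD = 600         # assumed density of compacted residual garbage
LBS_PER_TONNE = 2204
TONNES_PER_YEAR = 140_000   # incinerator capacity

tonnes_per_load = LOAD_CU_YD * LBS_PER_CU_YD / LBS_PER_TONNE   # ~6.8 tonnes
loads = TONNES_PER_YEAR / tonnes_per_load                      # roughly 20,570 loads

ROUTE_KM, TRAVEL_KM = 45, 60              # at-route km and travel km per load
ROUTE_L_PER_KM, TRAVEL_L_PER_KM = 1.8, 0.335

route_fuel = loads * ROUTE_KM * ROUTE_L_PER_KM      # ~1.67 million liters
travel_fuel = loads * TRAVEL_KM * TRAVEL_L_PER_KM   # ~0.41 million liters
total_fuel = route_fuel + travel_fuel               # ~2.08 million liters
total_km = loads * (ROUTE_KM + TRAVEL_KM)

print(f"Loads per year: {loads:,.0f}")
print(f"Total fuel: {total_fuel:,.0f} L ({total_fuel / total_km:.2f} L/km average)")
```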

Electricity output of the generator powered by the incinerator

With a rated output of 14 megawatts, the generator could produce about 122,640 megawatt-hours of electricity per year – if it ran at 100% capacity, every hour of the year. (14,000 kW X 24 hours per day X 365 days = 122,640,000 kWh.) That’s clearly unrealistic.

However, the generator’s operators say it puts out enough electricity for 10,000 homes. The Ontario government says the average residential electricity consumption is 10,000 kWh per year.

10,000 homes X 10,000 kWh per year = 100,000,000 kWh per year.

This figure represents about 80% of the maximum rated capacity of the incinerator’s generator, which sounds like a reasonable output, so that’s the figure I used.
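A minimal check of those generator figures, using the rated output and the Ontario average consumption quoted above:

```python
# Rated capacity vs claimed output of the incinerator's generator
RATED_KW = 14_000
HOURS_PER_YEAR = 24 * 365

max_kwh = RATED_KW * HOURS_PER_YEAR     # 122,640,000 kWh at 100% capacity
output_kwh = 10_000 * 10_000            # 10,000 homes x 10,000 kWh per year

print(f"Theoretical maximum:     {max_kwh:,} kWh per year")
print(f"Claimed output:          {output_kwh:,} kWh per year")
print(f"Implied capacity factor: {output_kwh / max_kwh:.0%}")  # just over 80%
```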

Fake news as official policy

Also published at Resilience.org.

Faced with simultaneous disruptions of climate and energy supply, industrial civilization is also hampered by an inadequate understanding of our predicament. That is the central message of Nafeez Mosaddeq Ahmed’s new book Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence.

In the first part of this review, we looked at the climate and energy disruptions that have already begun in the Middle East, as well as the disruptions which we can expect in the next 20 years under a “business as usual” scenario. In this installment we’ll take a closer look at “the perpetual transmission of false and inaccurate knowledge on the origins and dynamics of global crises”.

While a clear understanding of the real roots of economies is a precondition for a coherent response to global crises, Ahmed says this understanding is woefully lacking in mainstream media and mainstream politics.

The Global Media-Industrial Complex, representing the fragmented self-consciousness of human civilization, has served simply to allow the most powerful vested interests within the prevailing order to perpetuate themselves and their interests ….” (Failing States, Collapsing Systems, page 48)

Other than alluding to powerful self-serving interests in fossil fuels and agribusiness industries, Ahmed doesn’t go into the “how’s” and “why’s” of their influence in media and government.

In the case of misinformation about the connection between fossil fuels and climate change, much of the story is widely known. Many writers have documented the history of financial contributions from fossil fuel interests to groups which contradict the consensus of climate scientists. To take just one example, Inside Climate News revealed that Exxon’s own scientists were keenly aware of the dangers of climate change decades ago, but the corporation’s response was a long campaign of disinformation.

Yet for all its nefarious intent, the fossil fuel industry’s effort has met with mixed success. Nearly every country in the world has, at least officially, agreed that carbon-emissions-caused climate change is an urgent problem. Hundreds of governments, on national, provincial or municipal levels, have made serious efforts to reduce their reliance on fossil fuels. And among climate scientists the consensus has only grown stronger that continued reliance on fossil fuels will result in catastrophic climate effects.

When it comes to continuous economic growth unconstrained by energy limitations, the situation is quite different. Following the consensus opinion in the “science of economics”, nearly all governments are still in thrall to the idea that the economy can and must grow every year, forever, as a precondition to prosperity.

In fact, the belief in the ever-growing economy has short-circuited otherwise well-intentioned efforts to reduce carbon emissions. Western politicians routinely play off “environment” and “economy” as forces that must be balanced, meaning they must take care not to cut carbon emissions too fast, lest economic growth be hindered. To take one example, Canada’s Prime Minister Justin Trudeau claims that expanded production of tar sands bitumen will provide the economic growth necessary to finance the country’s official commitments under the Paris Accord.

As Ahmed notes, “the doctrine of unlimited economic growth is nothing less than a fundamental violation of the laws of physics. In short, it is the stuff of cranks – yet it is nevertheless the ideology that informs policymakers and pundits alike.” (Failing States, Collapsing Systems, page 90)

Why does “the stuff of cranks” still have such hold on the public imagination? Here the work of historian Timothy Mitchell is a valuable complement to Ahmed’s analysis.

Mitchell’s 2011 book Carbon Democracy outlines the way “the economy” became generally understood as something that could be measured mostly, if not solely, by the quantities of money that exchanged hands. A hundred years ago, this was a new and controversial idea:

In the early decades of the twentieth century, a battle developed among economists, especially in the United States …. One side wanted economics to start from natural resources and flows of energy, the other to organise the discipline around the study of prices and flows of money. The battle was won by the second group …..” (Carbon Democracy, page 131)

A very peculiar circumstance prevailed while this debate raged: energy from petroleum was cheap and getting cheaper. Many influential people, including geologist M. King Hubbert, argued that the oil bonanza would be short-lived in a historical sense, but their arguments didn’t sway corporate and political leaders looking at short-term results.

As a result a new economic orthodoxy took hold by the middle of the 20th century. Petroleum seemed so abundant, Mitchell says, that for most economists “oil could be counted on not to count. It could be consumed as if there were no need to take account of the fact that its supply was not replenishable.”

He elaborates:

the availability of abundant, low-cost energy allowed economists to abandon earlier concerns with the exhaustion of natural resources and represent material life instead as a system of monetary circulation – a circulation that could expand indefinitely without any problem of physical limits. Economics became a science of money ….” (Carbon Democracy, page 234)

This idea of the infinitely expanding economy – what Ahmed terms “the stuff of cranks” – has been widely accepted for approximately one human life span. The necessity of constant economic growth has been an orthodoxy throughout the formative educations of today’s top political leaders, corporate leaders and media figures, and it continues to hold sway in the “science of economics”.

The transition away from fossil fuel dependence is inevitable, Ahmed says, but the degree of suffering involved will depend on how quickly and how clearly we get on with the task. One key task is “generating new more accurate networks of communication based on transdisciplinary knowledge which is, most importantly, translated into user-friendly multimedia information widely disseminated and accessible by the general public in every continent.” (Failing States, Collapsing Systems, page 92)

That task has been taken up by a small but steadily growing number of researchers, activists, journalists and hands-on practitioners of energy transition. As to our chances of success, Ahmed allows a hint of optimism, and that’s a good note on which to finish:

The systemic target for such counter-information dissemination, moreover, is eminently achievable. Social science research has demonstrated that the tipping point for minority opinions to become mainstream, majority opinion is 10% of a given population.” (Failing States, Collapsing Systems, page 92)

 

Top image: M. C. Escher’s ‘Waterfall’ (1961) is a fanciful illustration of a finite source providing energy without end. Accessed from Wikipedia.org.

Fake news, failed states

Also published at Resilience.org.

Many of the violent conflicts raging today can only be understood if we look at the interplay between climate change, the shrinking of cheap energy supplies, and a dominant economic model that refuses to acknowledge physical limits.

That is the message of Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, a thought-provoking new book by Nafeez Mosaddeq Ahmed. Violent conflicts are likely to spread to all continents within the next 30 years, Ahmed says, unless a realistic understanding of economics takes hold at a grass-roots level and at a nation-state policy-making level.

The book is only 94 pages (plus an extensive and valuable bibliography), but the author packs in a coherent theoretical framework as well as lucid case studies of ten countries and regions.

As part of the Springer Briefs In Energy/Energy Analysis series edited by Charles Hall, it is no surprise that Failing States, Collapsing Systems builds on a solid grounding in biophysical economics. The first few chapters are fairly dense, as Ahmed explains his view of global political/economic structures as complex adaptive systems inescapably embedded in biophysical processes.

The adaptive functions of these systems, however, are failing due in part to what we might summarize with four-letter words: “fake news”.

inaccurate, misleading or partial knowledge bears a particularly central role in cognitive failures pertaining to the most powerful prevailing human political, economic and cultural structures, which is inhibiting the adaptive structural transformation urgently required to avert collapse.” (Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, by Nafeez Mosaddeq Ahmed, Springer, 2017, page 13)

We’ll return to the failures of our public information systems. But first let’s have a quick look at some of the case studies, in which the explanatory value of Ahmed’s complex systems model really comes through.

In discussing the rise of ISIS in the context of the war in Syria and Iraq, Western media tend to focus almost exclusively on political and religious divisions which are shoehorned into a “war on terror” framework. There is also an occasional mention of the early effects of climate change. While not discounting any of these factors, Ahmed says that it is also crucial to look at shrinking supplies of cheap energy.

Prior to the onset of war, the Syrian state was experiencing declining oil revenues, driven by the peak of its conventional oil production in 1996. Even before the war, the country’s rate of oil production had plummeted by nearly half, from a peak of just under 610,000 barrels per day (bpd) to approximately 385,000 bpd in 2010.” (Failing States, Collapsing Systems, page 48)

Similarly, Yemen’s oil production peaked in 2001, and had dropped more than 75% by 2014.

While these governments tried to cope with climate change effects including water and food shortages, their oil-export-dependent budgets were shrinking. The result was the slashing of basic social service spending when local populations were most in need.

That’s bad enough, but the responses of local and international governments, guided by “inaccurate, misleading or partial knowledge”, make a bad situation worse:

While the ‘war on terror’ geopolitical crisis-structure constitutes a conventional ‘security’ response to the militarized symptoms of HSD [Human Systems Destabilization] (comprising the increase in regional Islamist militancy), it is failing to slow or even meaningfully address deeper ESD [Environmental System Disruption] processes that have rendered traditional industrialized state power in these countries increasingly untenable. Instead, the three cases emphasized – Syria, Iraq, and Yemen – illustrate that the regional geopolitical instability induced via HSD has itself hindered efforts to respond to deeper ESD processes, generating instability and stagnation across water, energy and food production industries.” (Failing States, Collapsing Systems, page 59)

This pattern – militarized responses to crises that beget more crises – is not new:

A 2013 RAND Corp analysis examined the frequency of US military interventions from 1940 to 2010 and came to the startling conclusion: not only that the overall frequency of US interventions has increased, but that intervention itself increased the probability of an ensuing cluster of interventions.” (Failing States, Collapsing Systems, page 43)

Ahmed’s discussions of Syria, Iraq, Yemen, Nigeria and Egypt are bolstered by the benefits of hindsight. His examination of Saudi Arabia looks a little way into the future, and what he foresees is sobering.

He discusses studies that show Saudi Arabia’s oil production is likely to peak within as little as ten years. Yet the date of the peak is only one key factor, because the country’s steadily increasing internal demand for energy means there is steadily less oil left for export.

For Saudi Arabia the economic crunch may be severe and rapid: “with net oil revenues declining to zero – potentially within just 15 years – Saudi Arabia’s capacity to finance continued food imports will be in question.” For a population that relies on subsidized imports for 80% of its food, empty government coffers would mean a life-and-death crisis.

But a Saudi Arabia which uses up all its oil internally would have major implications for other countries as well, in particular China and India.

like India, China faces the problem that as we near 2030, net exports from the Middle East will track toward zero at an accelerating rate. Precisely at the point when India and China’s economic growth is projected to require significantly higher imports of oil from the Middle East, due to their own rising domestic energy consumption requirement, these critical energy sources will become increasingly unavailable on global markets.” (Failing States, Collapsing Systems, page 74)

Petroleum production in Europe has also peaked, while in North America, conventional oil production peaked decades ago, and the recent fossil fuel boomlet has come from expensive, hard-to-extract shale gas, shale oil, and tar sands bitumen. For both Europe and North America, Ahmed forecasts, the time is fast approaching when affordable high-energy fuels are no longer available from Russia or the Middle East. Without successful adaptive responses, the result will be a cascade of collapsing systems:

Well before 2050, this study suggests, systemic state-failure will have given way to the irreversible demise of neoliberal finance capitalism as we know it.” (Failing States, Collapsing Systems, page 88)

Are such outcomes inescapable? By no means, Ahmed says, but adequate adaptive responses to our developing predicaments are unlikely without a recognition that our economies remain inescapably embedded in biophysical processes. Unfortunately, there are powerful forces working to prevent the type of understanding which could guide us to solutions:

vested interests in the global fossil fuel and agribusiness system are actively attempting to control information flows to continue to deny full understanding in order to perpetuate their own power and privilege.” (Failing States, Collapsing Systems, page 92)

In the next installment, Fake News as Official Policy, we’ll look at the deep roots of this misinformation and ask what it will take to stem the tide.

Top photo: Flying over the Trans-Arabian Pipeline, 1950. From Wikimedia.org.

Naomi Klein, photograph by Joe Mabel, distributed via Wikimedia Commons

A renewable energy economy will create more jobs. Is that a good thing?

Also published at Resilience.org.

In a tidal wave of good news stories, infographics and Facebook memes about renewable energy job creation, the implicit, unquestioned assumption is that More Jobs = A Healthier Economy.

A popular Facebook meme, based on the Stanford University Solutions Project, celebrates the claim that in a renewable energy-powered Canada, 40% more people will work in the energy sector.

From the Environment Hamilton Facebook page.

In elaborate infographics, the Solutions Project provides comparable claims for all 50 US states and countries around the world – although “assertion-graphic” might be a better term, since the graphics are presented with no footnotes and no clear links to any data that might allow a skeptical mind to evaluate the conclusions.

From The Solutions Project website.

And Naomi Klein, author of This Changes Everything and one of the proponents of The Leap Manifesto, cites the Energy Transition in Germany and notes that 400,000 new jobs have already been created. In her hour-long talk on the CBC Radio Ideas program and podcast, Klein gets at some of the key issues that will determine whether More Energy Jobs = A Good Thing, and we’ll return to this podcast later.

To start, though, let’s look at the issue through the following proposition:

The 20th century fossil-fueled economic growth spurt happened not because the energy industry created many jobs, but because it created very few jobs.

For most of human history, providing energy in the form of food calories was the major human occupation. Even in societies that consumed relatively high amounts of energy via firewood, harvesting and transporting that wood kept a lot of people busy.

But during the 19th and 20th centuries, as the available per capita energy supply in industrialized countries exploded, the proportion of the population employed supplying that energy dropped dramatically.

The result: instead of farming to provide the carbohydrates that feed humans and oxen, or cutting firewood to heat buildings, nearly the whole population has been free to do other activities. Whether we have made good use of this opportunity is debatable, but we’ve had plenty of energy, and nearly our entire labour force, available to run an elaborate manufacturing, consumption and service economy.

Seen from this perspective, the claim that renewable energy will create more jobs might set off alarms.

What’s in a job?

Part of the difficulty is that when we speak of a job, we refer to two (or more) very different things.

A job might mean simply something that has to be done. In this sense of the word, we don’t usually celebrate jobs. If we need to carry all our water in buckets from a well five kilometers from home, there are a lot of jobs in water-carrying – but we would probably welcome having taps right in our kitchens instead. Agriculture employs a lot of people if the only tools are sticks, but with better tools the same amount of food can be raised with fewer people working the fields.

So when we think of a job as the need to do something, we typically think that the fewer jobs the better.

When we celebrate job-creation, on the other hand, we typically mean something quite different – a “job” is an activity that is accompanied by a pay-cheque. Since in our society most of us need to get pay-cheques for most of our lives, job-creation strikes us as a good thing to the extent that pay-cheques are involved.

Here’s the wrinkle with renewable energy job creation: the renewable energy transition will likely create jobs in the sense of adding to the quantity of work that must be done (which we normally try to minimize) and jobs in the sense of providing pay-cheques (which we typically want to maximize). The two types of job-creation are at cross-purposes, and the outcome is uncertain.

Allocation of energy surplus

Widespread prosperity depends not only on what work is done and what surplus is produced, but on how that surplus is allocated and distributed.

In the middle of the 20th century in North America and Europe, only a few people worked in energy supply but they produced a huge surplus. At the same time, the products of surplus energy were distributed in relatively equal fashion, compared to the rising levels of inequality today. The mass consumption economy – a brief anomaly in human history which is ironically referred to as Business As Usual – depended on both conditions being met. There had to be a large surplus of energy produced (or, more accurately, extracted) by a few people, and this surplus energy had to be widely distributed so that most people could participate in a consumer economy.

Naomi Klein gives prominent emphasis to the second of these two conditions. In her CBC Radio Ideas talk, she says

There’s a group in the US called Movement Generation which has a slogan that I quote a lot, which is that “transition is inevitable, but justice is not.” You can respond to climate change in a way that people putting up solar panels are paid terrible wages. In the US prison inmates are making some of the solar panels that they’re putting up. … There has to be a road map for responding to climate change in an intersectional way, which solves multiple problems at once.”

She cites the German Energy Transition as an encouraging example:

There are 900 new energy co-operatives that have sprung up in Germany. Two hundred towns and cities in Germany have taken their energy grids back from the private companies that took them over in the 1990s, and they call it “energy democracy”. They’re taking back control over their energy, so that the resources stay in the communities and they can use the profits generated from renewable energy to pay for services. They’ve also created 400,000 jobs as part of this transition. So they’re showing how you solve multiple problems at once. Lower emissions create good unionized jobs and generate the revenue we need to fight the logic of austerity at the local level.”

In Klein’s formulation, democratic control of the energy economy is a key to prosperity. Because of this energy democracy, the new jobs are “good unionized jobs” which “fight the logic of austerity”. But is that sustainable in the long run?

As Klein says, in Germany’s “energy democracy” they use “the profits generated from renewable energy to pay for services”. But that presupposes that the renewable energy technologies being used do indeed generate “profits”.

It remains an open question how much profit – how much surplus energy – will be generated from renewable energy development. If renewable energy developments consume nearly as much energy as they produce, then in the long run the energy sector may produce many pay-cheques but they won’t be generous pay-cheques, however egalitarian society might be.

Book cover, Life After Growth by Tim Morgan

Energy sprawl

Tim Morgan uses the apt phrase “energy sprawl” to describe what happens as we switch to energy technologies with a lower Energy Return on Energy Invested (EROEI).

‘energy sprawl’ … has both physical and economic meanings. In physical terms, the infrastructure required to access energy and deliver it to where it is needed is going to expand exponentially. At the same time, the proportion of GDP absorbed by the energy infrastructure is going to increase as well, which means that the rest of the economy will shrink.” (Life After Growth, Harriman House, 2013, locus 2224)

As Morgan makes clear, energy sprawl is not at all unique to renewable energy transition – it applies equally to non-conventional, bottom-of-the-barrel fossil fuels such as fracked oil and gas, and bitumen extracted from Alberta’s tar sands. There will indeed be more jobs in a renewable resource economy, compared to the glory days of the fossil fuel economy, but there will also be more energy jobs if we cling to fossil fuels.

As energy sprawl proceeds, more of us will work in energy production and distribution, and fewer of us will be free to work at other pursuits. As Klein and the other authors of the Leap Manifesto argue, the higher number of energy jobs might be a net plus for society, if we use energy more wisely AND we allocate surplus more equitably.

But unless our energy technologies provide a good Energy Return On Energy Invested, there will be little surplus to distribute. In other words, there will be lots of new jobs, but few good pay-cheques.
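
One way to see that arithmetic: for a source with an EROEI of r, roughly 1/r of the energy delivered has to be ploughed back into producing more energy, leaving a surplus share of 1 − 1/r for everything else. A rough sketch, with illustrative EROEI values rather than measured ones:

```python
# Rough sketch: the surplus available to the rest of the economy shrinks
# as EROEI falls. The EROEI values below are illustrative, not measured.

def surplus_fraction(eroei):
    """Share of gross energy output left after reinvesting in energy supply."""
    return 1.0 - 1.0 / eroei

examples = [
    ("mid-20th-century conventional oil (illustrative)", 50),
    ("unconventional oil and gas (illustrative)", 5),
    ("a hypothetical low-return source", 2),
]

for label, eroei in examples:
    print(f"{label}: EROEI {eroei:>2} -> {surplus_fraction(eroei):.0%} surplus")
```

At an EROEI of 50 the energy sector claims only a sliver of society’s effort; at an EROEI of 2 it claims half, no matter how the pay-cheques are distributed.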

Top photo: Canadian author and activist Naomi Klein, photographed by Joe Mabel in October 2015, accessed via Wikimedia Commons

Insulators on high-voltage electricity transmission line.

Timetables of power

Also published at Resilience.org.

For more than three decades, Vaclav Smil has been developing the concepts presented in his 2015 book Power Density: A Key to Understanding Energy Sources and Uses.

The concept is (perhaps deceptively) simple: power density, in Smil’s formulation, is “the quotient of power and land area”. To facilitate comparisons between widely disparate energy technologies, Smil states power density using common units: watts per square meter.

Smil makes clear his belief that it’s important that citizens be numerate as well as literate, and Power Density is heavily salted with numbers. But what is being counted?

Perhaps the greatest advantage of power density is its universal applicability: the rate can be used to evaluate and compare all energy fluxes in nature and in any society. – Vaclav Smil, Power Density, pg 21

A major theme in Smil’s writing is that current renewable energy resources and technologies cannot quickly replace the energy systems that fuel industrial society. He presents convincing evidence that for current world energy demand to be supplied by renewable energies alone, the land area of the energy system would need to increase drastically.

Study of Smil’s figures will be time well spent for students of many energy sources. Whether it’s concentrated solar reflectors, cellulosic ethanol, wood-fueled generators, fracked light oil, natural gas or wind farms, Smil takes a careful look at power densities, and then estimates how much land would be taken up if each of these respective energy sources were to supply a significant fraction of current energy demand.

This consideration of land use goes some way to addressing a vacuum in mainstream contemporary economics. In the opening pages of Power Density, Smil notes that economists used to talk about land, labour and capital as three key factors in production, but in the last century, land dropped out of the theory.

The measurement of power per unit of land is one way to account for use of land in an economic system. As we will discuss later, those units of land may prove difficult to adequately quantify. But first we’ll look at another simple but troublesome issue.

Does the clock tick in seconds or in centuries?

It may not be immediately obvious to English majors or philosophers (I plead guilty), but Smil’s statement of power density – watts per square meter – includes a unit of time. That’s because a watt is itself a rate, defined as a joule per second. So power density equals joules per second per square meter.

There’s nothing sacrosanct about the second as the unit of choice. Power densities could also be calculated if power were stated in joules per millisecond or per megasecond, and with only slightly more difficult mathematical gymnastics, per century or per millennium. That is of course stretching a point, but Smil’s discussion of power density would take on a different flavor if we thought in longer time frames.
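
To make the unit arithmetic concrete, here is a small sketch, with made-up numbers rather than Smil’s, converting an annual energy yield per hectare into watts per square meter. As long as output is steady, the choice of second, year or century only changes the bookkeeping, not the result.

```python
# Power density = energy / time / area, conventionally stated in W/m^2,
# i.e. joules per second per square meter. Numbers below are made up.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
JOULES_PER_GJ = 1e9

def power_density_w_per_m2(energy_gj_per_year, area_m2):
    """Average power density of a source with a steady annual energy output."""
    watts = energy_gj_per_year * JOULES_PER_GJ / SECONDS_PER_YEAR
    return watts / area_m2

# A hypothetical woodlot yielding 100 GJ of fuel per hectare per year:
hectare_m2 = 10_000
print(f"{power_density_w_per_m2(100, hectare_m2):.2f} W/m^2")  # about 0.32 W/m^2
```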

Consider the example with which Smil opens the book. In the early stages of the industrial age, English iron smelting was accomplished with the heat from charcoal, which in turn was made from coppiced beech and oak trees. As pig iron production grew, large areas of land were required solely for charcoal production. This changed in the blink of an eye, in historical terms, with the development of coal mining and the process of coking, which converted coal to nearly 100% pure carbon with energy equivalent to good charcoal.

As a result, the charcoal from thousands of hectares of hardwood forest could be replaced by coal from a mine site of only a few hectares. Or in Smil’s favored terms,

The overall power density of mid-eighteenth-century English coke production was thus roughly 500 W/m2, approximately 7,000 times higher than the power density of charcoal production. (Power Density, pg 4)

Smil notes rightly that this shift had enormous consequences for the English countryside, English economy and English society. Yet my immediate reaction to this passage was to cry foul – there is a sleight of hand going on.

While the charcoal production figures are based on the amount of wood that a hectare might produce on average each year, in perpetuity, the coal from the mine will dwindle and then run out in a century or two. If we averaged the power densities of the woodlot and the mine over several centuries or millennia, the comparison would look much different.
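
A minimal sketch of that objection, using hypothetical round numbers rather than Smil’s: average the mine’s output over a long enough horizon and its apparent advantage shrinks, because production at 500 W/m² for a century and a half contributes nothing in the centuries after the seams are exhausted, while the woodlot keeps producing.

```python
# Hypothetical comparison of time-averaged power densities.
# The mine produces at a high rate and then stops; the woodlot is perpetual.

MINE_W_PER_M2 = 500.0       # while producing (illustrative)
MINE_LIFETIME_YEARS = 150   # then the coal is gone (illustrative)
WOODLOT_W_PER_M2 = 0.07     # sustained indefinitely (illustrative)

def mine_average(horizon_years):
    """Mine's output averaged over a horizon at least as long as its lifetime."""
    producing_years = min(horizon_years, MINE_LIFETIME_YEARS)
    return MINE_W_PER_M2 * producing_years / horizon_years

for horizon in (150, 1_000, 10_000, 100_000):
    ratio = mine_average(horizon) / WOODLOT_W_PER_M2
    print(f"over {horizon:>7} years: mine averages {mine_average(horizon):8.3f} W/m^2,"
          f" {ratio:7.0f}x the woodlot")
```

The mine still comes out ahead in this toy comparison, but its edge falls from thousands-fold to mere tens-fold as the averaging window stretches.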

And that’s a problem throughout Power Density. Smil often grapples with the best way to average power densities over time, but never establishes a rule that works well for all energy sources.

Generating station near Niagara Falls

The Toronto Power Generating Station was built in 1906, just upstream from Horseshoe Falls in Niagara Falls, Ontario. It was mothballed in 1974. Photographed in February, 2014.

In discussing photovoltaic generation, he notes that solar radiation varies greatly by hour and month. It would make no sense to calculate the power output of a solar panel solely by the results at noon in mid-summer, just as it would make no sense to run the calculation solely at twilight in mid-winter. It is reasonable to average the power density over a whole year’s time, and that’s what Smil does.

When considering the power density of ethanol from sugar cane, it would be crazy to run the calculation based solely on the month of harvest, so again, the figures Smil uses are annual average outputs. Likewise, wood grown for biomass fuel can be harvested approximately every 20 years, so Smil divides the energy output during a harvest year by 20 to arrive at the power density of this energy source.

Using the year as the averaging unit makes obvious sense for many renewable energy sources, but this method breaks down just as obviously when considering non-renewable sources.

How do you calculate the average annual power density for a coal mine which produces high amounts of power for a hundred years or so, and then produces no power for the rest of time? Or the power density of a fracked gas well whose output will continue only a few decades at most?

The obvious rejoinder to this line of questioning is that when the energy output of a coal mine, for example, ceases, the land use also ceases, and at that point the power density of the coal mine is neither high nor low nor zero; it simply cannot be part of a calculation. As we’ll discuss later in this series, however, there are many cases where reclamations are far from certain, and so a “claim” on the land goes on.

Smil is aware of the transitory nature of fossil fuel sources, of course, and he cites helpful and eye-opening figures for the declining power densities of major oil fields, gas fields and coal mines over the past century. Yet in Power Density, most of the figures presented for non-renewable energy facilities apply for that (relatively brief) period when these facilities are in full production, but they are routinely compared with power densities of renewable energy facilities which could continue indefinitely.

So is it really true that power density is a measure “which can be used to evaluate and compare all energy fluxes in nature and in any society”? Only with some critical qualifications.

In summary, we return to Smil’s oft-emphasized theme, that current renewable resource technologies are no match for the energy demands of our present civilization. He argues convincingly that the power density of consumption on a busy expressway will not be matched by the power density of production of ethanol from corn: it would take a ridiculous and unsustainable area of corn fields to fuel all that high-energy transport. Widening the discussion, he establishes no less convincingly, to my mind, that solar power, wind power, and biofuels are not going to fuel our current high-energy way of life.

Yet if we extend our averaging units to just a century or two, we could calculate just as convincingly that the power densities of non-renewable fuel sources will also fail to support our high-energy society. And since we’re already a century into this game, we might be running out of time.

Top photo: insulators on high-voltage transmission line near Darlington Nuclear Generating Station, Bowmanville, Ontario.

Freight expectations

Also published at Resilience.org.

Alice J. Friedemann’s new book When Trucks Stop Running explains concisely how dependent American cities are on truck transport, and makes a convincing case that renewable energies cannot and will not power our transportation system in anything like its current configuration.

But will some trucks stop running, or all of them? Will the change happen suddenly over 10 years, or gradually over 40 years or more? Those are more difficult questions, and they highlight the limitations of guesstimating future supply trends while taking future demand as basically given.

When Trucks Stop Running, Springer, 2016

Alice J. Friedemann worked for more than 20 years in transportation logistics. She brings her skills in systems analysis to her book When Trucks Stop Running: Energy and the Future of Transportation (Springer Briefs in Energy, 2016).

In a quick historical overview, Friedemann explains that in 2012, a severely shrunken rail network still handled 45% of the ton-miles of US freight, while burning only 2% of transportation fuel. But the post-war highway-building boom had made it convenient for towns and suburbs to grow where there are neither rails nor ports, with the result that “four out of five communities depend entirely on trucks for all of their goods.”

After a brief summary of peak oil forecasts, Friedemann looks at the prospects for running trains and trucks on something other than diesel fuel, and the prospects are not encouraging. Electrification, whether using batteries or overhead wires, is ill-suited to the power requirements of trains and trucks with heavy loads over long distances. Friedemann also analyzes liquid fuel options including biofuels and coal-to-liquid conversions, but all of these options have poor Energy Return On Investment ratios.

While we search for ways to retool the economy and transportation systems, we would be wise to prioritize the use of precious fuels. Friedemann notes that while trains are much more energy-efficient than heavy-duty trucks, trucks in turn are far more efficient than cars and planes.

So “instead of electrifying rail, which uses only 2% of all U.S. transportation fuel, we should discourage light-duty cars and light trucks, which guzzle 63% of all transportation fuel and give the fuel saved to diesel-electric locomotives.” Prioritizing fuel use this way could buy us some much-needed time – time to change infrastructure that took decades or generations to build.
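
A back-of-the-envelope version of that trade-off, using only the fuel shares quoted above plus one hypothetical assumption about how much light-duty driving could be trimmed:

```python
# Back-of-the-envelope: fuel freed by a modest cut in light-duty driving,
# compared with what the entire rail network burns. The shares come from the
# review above; the 10% curtailment figure is a hypothetical assumption.

RAIL_FUEL_SHARE = 0.02        # rail burns ~2% of US transportation fuel
LIGHT_DUTY_FUEL_SHARE = 0.63  # cars and light trucks burn ~63%

curtailment = 0.10  # assume light-duty fuel use is cut by 10% (hypothetical)
freed = LIGHT_DUTY_FUEL_SHARE * curtailment

print(f"Fuel freed: {freed:.1%} of all transportation fuel, "
      f"or {freed / RAIL_FUEL_SHARE:.1f} times rail's entire diesel budget")
```

Even a 10% trim in light-duty driving would free roughly three times as much fuel as the whole rail system uses.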

If it strains credulity to imagine US policy-makers facing these kinds of choices of their own free will, it is nevertheless true that the unsustainable will not be sustained. Hard choices will be made, whether we want to make them or not.

A question of timing

Friedemann’s book joins other recent titles which put the damper on rosy predictions of a smooth transition to renewable energy economies. She covers some of the same ground as David MacKay’s Sustainable Energy – Without The Hot Air or Vaclav Smil’s Power Density, but in more concise and readable fashion, focused specifically on the energy needs of transportation.

In all three of these books, there is an understandable tendency to answer the (relatively) simple question: can future supply keep up with demand, assuming that demand is in line with today’s trends?

But of course, supply will influence demand, and vice versa. The interplay will be complex, and may confound apparently straightforward predictions.

It’s important to keep in mind that in economic terms, demand does not equal what we want or even what we need. We can, and probably will, jump up and down and stamp our feet and DEMAND that we have abundant cheap fuel, but that will mean nothing in the marketplace. The economic demand equals the amount of fuel that we are willing and able to buy at a given price. As the price changes, so will demand – which will in turn affect the supply, at least in the short term.

Consider the Gross and Net Hubbert Curves graph which Friedemann reproduces.

Gross and Net Hubbert Curve, from When Trucks Stop Running, page 124

While the basic trend lines make obvious sense, the steepness of the projected decline depends in part on a steady demand: the ultimately recoverable resource is finite, and if we continue to extract the oil as fast as possible (the trend through our lifetimes) then the post-peak decline will indeed be steep, perhaps cliff-like.

But can we and will we sustain demand if prices spike again? That seems unlikely, particularly given our experience over the past 15 years. And if effective demand drops dramatically due to much higher pricing, then the short-term supply-on-the-market should also drop, while long-term available supply-in-the-ground will be prolonged. The right side of that Hubbert curve might eventually end up at the same place, but at a slower pace.
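
A minimal sketch of that intuition: a Hubbert-style (logistic) production curve drawn twice with the same ultimately recoverable resource but different extraction paces. The faster path peaks higher and drops off steeply; the slower path peaks lower and stretches the tail. The parameters below are illustrative, not a forecast.

```python
import math

# Hubbert-style production rate: the derivative of a logistic curve, so the
# area under it (cumulative extraction) equals the same URR in both scenarios.
# All parameters are illustrative, not a forecast.

def hubbert_rate(year, urr, peak_year, steepness):
    """Production rate in a given year for a logistic depletion curve."""
    x = math.exp(-steepness * (year - peak_year))
    return urr * steepness * x / (1.0 + x) ** 2

URR = 2000.0  # billion barrels, say (illustrative)

for year in range(2000, 2101, 20):
    fast = hubbert_rate(year, URR, peak_year=2020, steepness=0.10)
    slow = hubbert_rate(year, URR, peak_year=2035, steepness=0.05)
    print(f"{year}: fast path {fast:5.1f} Gb/yr   slow path {slow:5.1f} Gb/yr")
```

Both paths exhaust the same resource; the slower one simply spends more of it later, which is the breathing room described below.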

The most wasteful uses of fuels might soon be out of our price range, so we simply won’t be able to waste fuel at the same breathtaking rate. The economy might shudder and shrink, but we might find ways to pay for the much smaller quantities of fuel required to transport essential goods.

In other words, there may soon be far fewer trucks on the road, but they might run long enough to give us time to develop infrastructure appropriate to a low-energy economy.

Top photo: fracking supply trucks crossing the Missouri River in the Fort Berthold Indian Reservation in North Dakota, June 2014.