S-curves and other paths

Also published at Resilience.org.

Oxford University economist Kate Raworth is getting a lot of good press for her recently released book Doughnut Economics: 7 Ways to Think Like a 21st Century Economist.

The book’s strengths are many, starting with the accessibility of Raworth’s prose. Whether she is discussing the changing faces of economic orthodoxy, the caricature that is homo economicus, or the importance of according non-monetized activities their proper recognition, Raworth keeps things admirably clear.

Doughnut Economics makes a great crash course in promising new approaches to economics. In Raworth’s own words, her work “draws on diverse schools of thought, such as complexity, ecological, feminist, institutional and behavioural economics.” Yet the integration of ecological economics into her framework is incomplete, leading to a frustratingly unimaginative concluding chapter on economic growth.

Laying the groundwork for that discussion of economic growth has resulted in an article about three times as long as most of my posts, so here is the ‘tl;dr’ version:

Continued exponential economic growth is impossible, but the S-curve of slowing growth followed by a steady state is not the only alternative. If the goal is maintaining GDP at the highest possible level, then the S-curve is the best case scenario, but in today’s world that isn’t necessarily desirable or even possible.

The central metaphor

Full disclosure: for as long as I can remember, the doughnut has been my least favourite among refined-sugar-white-flour-and-grease confections. So try as I might to be unbiased, I was no doubt predisposed to react critically to Raworth’s title metaphor.

What is the Doughnut? As Raworth explains, the Doughnut is the picture that emerged when she sketched a “safe space” between the Social Foundation necessary for prosperity, and the Ecological Ceiling beyond which we should not go.

Source: Doughnut Economics, page 38

There are many good things to be said about this picture. It affords a prominent place to both the social factors and the ecological factors which are essential to prosperity, but which are omitted from many orthodox economic models. The picture also restores ethics, and the choosing of goals, to central roles in economics.

Particularly given Raworth’s extensive background in development economics, it is easy to understand the appeal of this diagram.

But I agree with Ugo Bardi (here and here) that there is no particular reason the diagram should be circular – Shortfall, Social Foundation, Safe and Just Space, Ecological Ceiling and Overshoot would have the same meaning if arranged in horizontal layers rather than in concentric circles.

From the standpoint of economic analysis, I find it unhelpful to include a range of quite dissimilar factors all at the same level in the centre of the diagram. A society could have adequate energy, water and food without having good housing and health care – but you couldn’t have good housing and health care without energy, water and food. So some of these factors are clearly preconditions for others.

Likewise, some of the factors in the centre of the diagram are clearly and directly related to “overshoot” in the outer ring, while others are not. Excessive consumption of energy, water, or food often leads to ecological overshoot, but you can’t say the same about “excessive” gender equality, political voice, or peace and justice.

Beyond these quibbles with the Doughnut diagram, I further agree with Bardi that a failure to incorporate biophysical economics is the major weakness of Doughnut Economics. In spite of her acknowledgment of the pioneering work of Herman Daly, and a brief but lucid discussion of the work of Robert Ayres and Benjamin Warr showing that fossil fuels have been critical for the past century’s GDP growth, Raworth does not include energy supply as a basic determining factor in economic development.

Economists as spin doctors

Raworth makes clear that key doctrines of economic orthodoxy often obscure rather than illuminate economic reality. Thus economists in rich countries extol the virtues of free trade, though their own countries relied on protectionism to nurture their industrial base.

Likewise standard economic modeling starts with a reductionist “homo economicus” whose decisions are always based on rational pursuit of self-interest – even though behavioral science shows that real people are not consistently rational, and are motivated by co-operation as much as by self-interest. Various studies indicate, however, that economics students and professors show a greater-than-average degree of self-interest. And for those who are already wealthy but striving to become wealthier still, it is comforting to believe that everyone is similarly self-interested, and that their self-interest works to the good of all.

When considering a principle of mainstream economics, then, it makes sense to ask: what truths does this principle hide, and for whose benefit?

Unfortunately, when it comes to GDP growth as the accepted measure of a healthy economy, Raworth leaves out an important part of the story.

The concept of Gross Domestic Product has its roots in the 1930s, when statisticians were looking for ways to quantify economic activity, making temporal trends easier to discern. Simon Kuznets developed a way to calculate Gross National Product – the total of all income generated worldwide by US residents.

As Raworth stresses, Kuznets himself was clear that his national income tally was a very limited way of measuring an economy.

Emphasising that national income captured only the market value of goods and services produced in an economy, he pointed out that it therefore excluded the enormous value of goods and services produced by and for households, and by society in the course of daily life. … And since national income is a flow measure (recording only the amount of income generated each year), Kuznets saw that it needed to be complemented by a stock measure, accounting for the wealth from which it was generated ….” (Doughnut Economics, page 34; emphasis mine)

The distinction between flows and stocks is crucial. Imagine a simple agrarian nation which uses destructive farming methods to work its rich land. For a number of years it may earn increasingly high income – the flow – though its wealth-giving topsoil – the stock – is rapidly eroding. Is this country getting richer or poorer? Measured by GDP alone, this economy is healthy as long as current income grows; no matter that the topsoil, and future prospects, are blowing away in the wind.
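To make the flow/stock distinction concrete, here is a minimal toy model (in Python) of that imaginary agrarian nation. All of the numbers – the starting topsoil, the income growth rate, the erosion rate – are invented for illustration; nothing here comes from Raworth’s book.

```python
# Toy model: income is a flow, topsoil is a stock. Destructive farming
# keeps the flow growing for years while the stock quietly erodes away.
# All numbers are invented for illustration.
topsoil = 100.0   # stock, arbitrary units
income = 10.0     # annual flow (the part GDP measures)

for year in range(1, 21):
    if topsoil > 0:
        income *= 1.05     # destructive methods raise income ~5% a year...
        topsoil -= 8.0     # ...while stripping away the topsoil
    else:
        income *= 0.5      # once the stock is exhausted, the flow collapses
    print(f"Year {year:2d}: income {income:6.1f}  topsoil {max(topsoil, 0.0):5.1f}")
```

Measured by the flow alone, this toy economy looks healthier every year right up until the topsoil runs out; measured by the stock, it has been getting poorer from the start.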

In the years immediately preceding and following World War II, GDP became the primary measure of economic health, and it became political and economic orthodoxy that GDP should grow every year. (To date no western leader has ever campaigned by promising “In my first year I will increase GDP by 3%, in my second year by 2%, in my third year it will grow by 1%, and by my fourth year I will have tamed GDP growth to 0!”)

What truth does this reliance on GDP hide, and for whose benefit? The answers are fairly obvious, in my humble opinion: a myopic focus on GDP obscured the inevitability of resource depletion, for the benefit of the fossil fuel and automotive interests who dominated the US economy in the mid-twentieth century.

For context, in 1955 the top ten US corporations by number of employees included: General Motors, Chrysler, Standard Oil of New Jersey, Amoco, Goodyear and Firestone. (Source: 24/7 Wall St)

In 1960, the top ten largest US companies by revenue included General Motors, Exxon, Ford, Mobil, Gulf Oil, Texaco, and Chrysler. (Fortune 500)

These companies, plus the steel companies that made sheet metal for cars and the construction interests building the rapidly-growing network of roads, were clear beneficiaries of a new way of life that consumed ever-greater quantities of fossil fuels.

In the decades after World War II, the US industrial complex threw its efforts into rapid exploitation of energy reserves, along with mass production of machines that would burn that energy as fast as it could be pulled out of the ground. This transformation was not a simple result of “the invisible hand of the free market”; it relied on the enthusiastic collaboration of every level of government, from local zoning boards, to metropolitan transit authorities, to state and federal transportation planners.

But way back then, was it politically necessary to distract people from the inevitability of resource depletion?

The Peak Oil movement in the 1930s

From the very beginnings of the petroleum age, there were prominent voices who saw clearly that exponential growth in use of a finite commodity could not go on indefinitely.

One such voice was William Stanley Jevons, now known particularly for the “Jevons Paradox”. In 1865 he argued that since coal provided vastly more usable energy than industry had previously been able to harness, and since this new-found power was the very foundation of modern industrial civilization, it was particularly important for a nation to manage its supplies prudently:

Describing the novel social experience that coal and steam power had created, the experience that today we would call ‘exponential growth’, in which practically infinite values are reached in finite time, Jevons showed how quickly even very large stores of coal might be depleted.” (Timothy Mitchell, Carbon Democracy, pg 129)

In the 1920s petroleum was the new miracle energy source, but thoughtful geologists and economists alike realized that as a finite commodity, petroleum could not fuel infinite growth.

Marion King Hubbert was a student in 1926, but more than sixty years later he still recalled the eye-opening lesson he received when a professor asked pupils to consider the implications of ongoing rapid increases in the consumption of coal and oil resources.

As Mason Inman relates in his excellent biography of Hubbert,

When a quantity grows by a constant percentage each year, its history forms a straight line on a semilogarithmic graph. Hubbert plotted the points for coal, year after year, and found a fairly straight line that persisted for several decades: a continual growth rate of around 6 percent a year. At that rate, the production doubled about every dozen years. When he looked at this graph, it was obvious to him that such rapid growth could persist for decades – his graph showed that had already happened – but couldn’t continue forever.” (The Oracle of Oil, 2016, pg 19)
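The arithmetic behind that lesson is easy to reproduce. Here is a minimal sketch (in Python, using the round numbers quoted above rather than Hubbert’s actual coal data) showing why constant-percentage growth appears as a straight line on a semilogarithmic plot, and why 6 percent a year doubles production roughly every dozen years:

```python
import math

# Constant-percentage growth: production(t) = production(0) * (1 + r)**t
r = 0.06                                   # ~6% per year, as in Hubbert's coal plot
doubling_time = math.log(2) / math.log(1 + r)
print(f"Doubling time at 6%/yr: {doubling_time:.1f} years")   # ~11.9 years

# log(production) increases by the same amount each year -- a straight line
# on semilog axes -- even as production itself climbs ever more steeply.
production = 1.0
for year in range(0, 61, 12):
    print(f"Year {year:2d}: production {production * (1 + r) ** year:7.2f}")
```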

Hubbert soon learned that there were many others who shared his concerns. This thinking coalesced in the 1930s in a very popular movement known as Technocracy, whose adherents argued that wealth depended primarily not on the circulation of money, but on the flow of energy.

The leaders of Technocracy, including Hubbert, were soon speaking to packed houses and were featured in cover stories in leading magazines. Hubbert was also tasked with producing a study guide that interested people could work through at home.

In the years prior to the Great Depression, people had become accustomed to economic growth of about 5% per year. Hubbert wanted people to realize it made no sense to take that kind of growth for granted.

“It has come to be naively expected by our business men and their apologists, the economists, that such a rate of growth was somehow inherent in the industrial processes,” Hubbert wrote. But since Earth and its physical resources are finite, he said, infinite growth is an impossibility.

In short, Technocracy pointed out that the fossil fuel age was likely to be a flash in the pan, historically speaking – unless the nation’s fuel reserves were managed carefully by engineers who understood energy efficiency and depletion.

Without sensible accounting and allocation of the true sources of a nation’s wealth – its energy reserves – private corporations would rake in massive profits for a few decades and two or three generations of Americans might prosper, but in the longer term the nation would be “burning its capital”.

Full speed ahead

After the convulsions of the Depression and World War II, the US emerged with the same leading corporations in an even more dominant position. Now the US had control, or at least major influence, not only over rich domestic fossil fuel reserves, but also the much greater reserves in the Middle East. And as the world’s greatest military and financial power, it was in a position to set the terms of trade.

For fossil fuel corporations the major problem was that oil was temporarily too cheap. It came flowing out of wells so easily and in such quantity that it was often difficult to keep the price up. It was in their interests that economies consume oil at a faster rate than ever before, and that the rate of consumption would speed up each year.

Fortunately for these interests, a new theory of economics had emerged just in time.

In this new theory, economists should not worry about measuring the exhaustion of resources. In Timothy Mitchell’s words, “Economics became instead a science of money.”

The great thing about money supply was that, unlike water or land or oil, the quantity of money could grow exponentially forever. And as long as one didn’t look too far backwards or forwards, it was easy to imagine that energy resources would prove no barrier. After all, for several decades, the price of oil had been dropping.

So although increasing quantities of energy were consumed, the cost of energy did not appear to represent a limit to economic growth. … Oil could be treated as something inexhaustible. Its cost included no calculation for the exhaustion of reserves. The growth of the economy, measured in terms of GNP, had no need to account for the depletion of energy resources.” (Carbon Democracy, pg 140)

GDP was thus installed as the supreme measure of an economy, with continuous GDP growth the unquestionable political goal.

A few voices dissented, of course. Hubbert warned in the mid-1950s that the US would hit the peak of its conventional oil production by the early 1970s, a prediction that proved correct. But large quantities of cheap oil remained in the Middle East. Additional new finds in Alaska and the North Sea helped to buy another couple of decades for the oil economy (though these fields are also now in decline).

Thanks to the persistent work of a small number of researchers who called themselves “ecological economists”, a movement grew to account for stocks of resources, in addition to tallying income flows in the GDP. By the early 1990s, the US Bureau of Economic Analysis gave its blessing to this effort.

In April 1994 the Bureau published a first set of tables called Integrated Environmental-Economic System of Accounts (IEESA).

The official effort was short-lived indeed. As described in Beyond GDP,

progress toward integrated environmental-economic accounting in the US came to a screeching halt immediately after the first IEESA tables were published. The US Congress responded swiftly and negatively. The House report that accompanied the next appropriations bill explicitly forbade the BEA from spending any additional resources to develop or extend the integrated environmental and economic accounting methodology ….” (Beyond GDP, by Heun, Carbajales-Dale, Haney and Roselius, 2016)

All the way through Fiscal Year 2002, appropriations bills made sure this outbreak of ecological economics was nipped in the bud. The bills stated,

The Committee continues the prohibition on use of funds under this appropriation, or under the Census Bureau appropriation accounts, to carry out the Integrated Environmental-Economic Accounting or ‘Green GDP’ initiative.” (quoted in Beyond GDP)

One can only guess that, when it came to contributing to Congressional campaign funds, the struggling fossil fuel interests had somehow managed to outspend the deep-pocketed biophysical economists lobby.

S-curves and other paths

With that lengthy detour complete, we are ready to rejoin Raworth and Doughnut Economics.

The final chapter is entitled “Be Agnostic About Growth: from growth addicted to growth agnostic”.

This sounds like a significant improvement over current economic orthodoxy – but I found this section weak in several ways.

First, it is unclear just what it is that we are to be agnostic about. While Raworth has made clear earlier in the book why GDP is an incomplete and misleading measure of an economy, in the final chapter GDP growth is nevertheless used as the only significant measure of economic growth. Are we to be agnostic about “GDP growth”, which might well be meaningless anyway? Or should we be agnostic about “economic growth”, which might be something quite different and quite a bit more essential – especially to the hundreds of millions of people still living without basic necessities?

Second, Raworth may be agnostic about growth, but she is not agnostic about degrowth. (She has discussed elsewhere why she can’t bring herself to use the word “degrowth”.) True, she remarks at one point that “I mean agnostic in the sense of designing an economy that promotes human prosperity whether GDP is going up, down, or holding steady.” Yet in the pictures she draws and in the ensuing discussion, there is no clear recognition either that degrowth might be desirable, or that degrowth might be forced on us by biophysical realities.

She includes two graphs of possible paths for economic growth – with growth measured here simply by GDP.

Source: Doughnut Economics, page 210 and page 214

As she notes, the first graph shows GDP increasing at a steady annual percentage. While politicians would like us to believe this is possible and desirable, the graph – showing what quickly becomes a near-vertical climb – is seldom presented in economics textbooks, as it is clearly unrealistic.

The second graph shows GDP growing slowly at first, then picking up speed, and then leveling off into a high but steady state with no further growth. This path for growth is commonly seen and recognized in ecology. The S-curve was also recognized by pre-20th-century economists, including Adam Smith and John Stuart Mill, as the ideal for a healthy economy.

I would concur that an S-curve which lands smoothly on a high plateau is an ideal outcome. But can we take for granted that this outcome is still possible? And do these two paths – continued exponential growth or an S-curve – really exhaust the conceptual possibilities that we should consider?

On the contrary, we can look back 80 years to the Technocracy Study Course for an illustration of varied and contrasting paths of economic growth and degrowth.

Source: The Oracle of Oil, page 58

M. King Hubbert produced this set of graphs to illustrate what can be expected with various key commodities on which a modern industrial economy depends – and by extension, what might happen with the economy as a whole.

While pure exponential growth is impossible, the S-curve may work for a dependably renewable resource, or a renewable-resource-based economy. However, the next possibility – a rise, peak, decline, and then a leveling off – is also a common scenario. For example, a society may harvest increasing amounts of wood until the regenerating power of the forests is exceeded; the harvest must then drop before any production plateau can be established.

The bell curve which starts at zero, climbs to a high peak, and drops back to zero could characterize an economy based purely on a non-renewable resource such as fossil fuels. Hopefully this “decline to zero” will remain a theoretical scenario, since no society to date has run 100% on a non-renewable resource. Nevertheless our fossil-fuel-based industrial society will face a severe decline unless we can build a new energy system on a global scale, in very short order.
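To visualize the contrast between these paths, here is a minimal sketch (in Python) using generic functional forms: an exponential, a logistic S-curve, a bell curve, and a rise-peak-decline-to-plateau path. The shapes and parameters are purely illustrative; they are not Hubbert’s actual figures from the Technocracy Study Course.

```python
import numpy as np

# Illustrative functional forms for the growth paths discussed above --
# generic shapes only, not data from Hubbert or from Doughnut Economics.
t = np.linspace(0, 100, 101)

exponential = np.exp(0.05 * t)                    # unconstrained growth
s_curve = 1 / (1 + np.exp(-0.1 * (t - 50)))       # logistic rise to a steady plateau
bell = np.exp(-((t - 50) ** 2) / (2 * 15 ** 2))   # rise, peak, decline back to zero
overshoot = bell + 0.4 * s_curve                  # rise, peak, decline to a lower plateau

for name, path in [("exponential", exponential), ("S-curve", s_curve),
                   ("bell", bell), ("overshoot-plateau", overshoot)]:
    print(f"{name:18s} start={path[0]:7.2f}  mid={path[50]:7.2f}  end={path[-1]:7.2f}")
```

Plotted, the first path climbs without limit, the second settles onto a high plateau, the third rises and falls back to zero, and the fourth peaks and then levels off well below its peak – the family of outcomes Hubbert asked his readers to weigh.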

This range of economic decline scenarios is not really represented in Doughnut Economics. That may have something to do with the design of the title metaphor.

While ecological overshoot, on the outside of the doughnut, represents things we should not do, the diagram doesn’t have a way of representing the things we can not do.

We should not continue to burn large quantities of fossil fuel because that will destabilize the climate that our children and grandchildren inherit. But once our cheaply accessible fossil fuels are used up, then we can not consume energy at the same frenetic pace that today’s wealthy populations take for granted.

The same principle applies to many essential economic resources. As long as there is significant fertility left in farmland, we can choose to farm the land with methods that produce a high annual return even though they gradually strip away the topsoil. But once the topsoil is badly depleted, then we no longer have a choice to continue production at the same level – we simply need to take the time to let the land recover.

In other words, these biophysical realities are more fundamental than any choices we can make – they set hard limits on which choices remain open to us.

The S-curve economy may be the best-case scenario, an outcome which could in principle provide global prosperity with a minimum of system disruption. But with each passing year during which our economy is based overwhelmingly on rapidly depleting non-renewable resources, the smooth S-curve becomes a less likely outcome.

If some degree of economic decline is unavoidable, then clear-sighted planning for that decline can help us make the transition a just and peaceful one.

If we really want to think like 21st century economists, don’t we need to openly face the possibility of economic decline?

 

Top photo: North Dakota State Highway 22, June 2014.

The stratospheric costs of The American Century

Also published at Resilience.org.

“Political power grows out of the barrel of a gun,” Chairman Mao famously stated in 1927.

Political power grows out of a barrel of oil – that’s an important theme in Daniel Yergin’s classic book The Prize: The Epic Quest for Oil, Money & Power.

Political power, including the use of state violence, goes hand in hand with control of authorized currency – that’s one of the key lessons of David Graeber’s Debt: The First 5,000 Years.

Guns, energy, money – each of these factors of power comes to mind in reading the recently released book by John Dower, The Violent American Century: War And Terror Since World War Two. (Chicago, Haymarket Books, 2017)

This brief book keeps a tight focus: cataloguing the extent of violence associated with the US role as the world’s dominant superpower. Dower avoids many closely related questions, such as Which persons or sectors in the US benefit most from military conflict? or, Was there justification for any of the violent overseas adventures by US forces in the past 75 years? or, Might the world have been more, or less, violent if the US had not been the dominant superpower?

It may be easy to forget, in Canada or western Europe or especially in the United States, that wars big and small have been raging somewhere in the world nearly every year through our lifetimes. Dower’s book is prompted in part by the recently popularized notion that, on a world-historical scale, violence is now at an all-time low. Steven Pinker, in his 2011 book The Better Angels of Our Nature, marshaled both statistics and anecdotes to advance the view that “today we may be living in the most peaceable era in our species’ existence.”

Dower doesn’t try to definitively refute the idea of a “Long Peace”, but he does ask us to question widely held assumptions.

He begins with the important point that if you start with the unprecedented mass slaughter of World War II as a baseline, it’s easy to make a case that succeeding decades have been relatively peaceful.

Yet one of the key military strategies used by the US in World War II was retained in both practice and theory by subsequent US warlords – aerial bombardment of civilian populations.

By the time the United States began carpet-bombing Japan, ‘industrial war’ and psychological warfare were firmly wedded, and the destruction of enemy morale by deliberately targeting densely populated urban centers had become standard operating procedure. US air forces would later carry this most brutal of inheritances from World War Two to the populations of Korea and Indochina.” (The Violent American Century, pg 22)

The result of this policy carry-over was that

During the Korean War … the tonnage of bombs dropped by US forces was more than four times greater than had been dropped on Japan in 1945. … In the Vietnam War … an intensive US bombing campaign that eventually extended to Cambodia and Laos dropped more than forty times the tonnage of bombs used on Japan.” (The Violent American Century, pg 43)

The massive bombardments failed to produce unambiguous victories in Korea or in Indochina, but it is hard to look at these wars and avoid the conclusion that the scope and scale of violence remained terribly high.

Meanwhile US war planners were preparing for destruction on an even greater scale. Both US and Soviet nuclear forces held the capability of destroying all human life – and yet they continued to build more nuclear missiles and continued to discuss whether they would ever launch a first strike.

By the time of his retirement, former Strategic Air Command commander General (George) Lee Butler had become an advocate of nuclear abolition. In his insider’s view, “mankind escaped the Cold War without a nuclear holocaust by some combination of diplomatic skill, blind luck and divine intervention, probably the latter in greatest proportion.”

Yet the danger remains. Even Nobel Peace Prize winner Barack Obama, who stirred hopes for peace in 2009 by calling for abolition of nuclear weapons, left office having approved a $1 trillion, 30-year program of upgrading US nuclear weapons.

Though the Cold War ended without conflagration between the world’s major powers, a CIA tabulation listed 331 “Major Episodes of Political Violence” between 1946 and 2013. The US armed, financed and/or coached at least one side in scores of these conflicts, and participated more directly in dozens. This history leads Dower to conclude

Branding the long postwar era as an epoch of relative peace is disingenuous …. It also obscures the degree to which the United States bears responsibility for contributing to, rather than impeding, militarization and mayhem after 1945.” (The Violent American Century, pg 3)

Dower also notes that violence doesn’t always end in death – sometimes it leads to flight. In this regard the recent, rapid increase in numbers of refugees calls into question the idea of a new era of peace. The United Nations High Commissioner for Refugees recently reported that the number of forcibly displaced individuals “had surpassed sixty million and was the highest level recorded since World War Two and its immediate aftermath.”

The wages of war

Since its victory in World War II, the US has built an ever larger, ever more extensive military presence around the world. By the early 2000s, according to former CIA consultant Chalmers Johnson, the US owned or rented more than 700 military bases in 130 countries.

Dower gives a brief tally of the financial costs to the US of this military occupation of the globe. In addition to the “base” defense department budget of about $600 billion per year, Dower says many extra expenses include “contingency” costs of engagements in the Middle East, care for veterans, the “black budget” for the CIA, and interest on the military component of the national debt, pushing the cost of the US military complex to around $1 trillion per year.

He concludes, “Creating a capacity for violence greater than the world has ever seen is costly – and remunerative.”

In coming installments of this essay we’ll consider especially those last three words: “costly and remunerative”. Who pays for and who benefits from the massive maintenance and exercise of military muscle, and over what time scale? In doing so, we’ll explore the interrelationships of three types of power: power from the barrel of a gun, power that comes from a barrel of oil, and power that comes from control of the monetary system.

Part Two of this series

Top photo: U.S. Air Force Republic F-105D Thunderchief fighters refuel from a Boeing KC-135A Stratotanker en route to North Vietnam in 1966. Photo in Wikimedia Commons is from US National Archives and Records Administration. A 2007 report for the Brookings Institution found that the Air Force alone used 52% of the fuel burned by the US government, and that all branches of the Department of Defense together accounted for 93% of US government fuel consumption. (“Department of Defense Energy Strategy: Teaching an Old Dog New Tricks”)

Energy And Civilization: a review

Also published at Resilience.org and BiophysEco.

If you were to find yourself huddled with a small group of people in a post-crash, post-internet world, hoping to recreate some of the comforts of civilization, you’d do well to have saved a printed copy of Vaclav Smil’s Energy and Civilization: A History.

Smil’s new 550-page magnum opus would help you understand why for most applications a draft horse is a more efficient engine than an ox – but only if you utilize an effective harness, which is well illustrated. He could help you decide whether building a canal or a hard-topped road would be a more productive use of your energies. When you were ready to build capstans or block-and-tackle mechanisms for accomplishing heavy tasks, his discussion and his illustrations would be invaluable.

But hold those thoughts of apocalypse for a moment. Smil’s book is not written as a doomer’s handbook, but as a thorough guide to the role of energy conversions in human history to date. Based on his 1994 book Energy in World History, the new book is about 60% longer and includes 40% more illustrations.

Though the initial chapters on prehistory are understandably brief, Smil lays the groundwork with his discussion of the dependency of all living organisms on their ability to acquire enough energy in usable forms.

The earliest hominins had some distinct advantages and liabilities in this regard. Unlike other primates, humans evolved to walk on two feet all the time, not just occasionally. Ungainly though this “sequence of arrested falls” may be, “human walking costs about 75% less energy than both quadrupedal and bipedal walking in chimpanzees.” (Energy and Civilization, pg 22)

What to do with all that saved energy? Just think:

The human brain claims 20–25% of resting metabolic energy, compared to 8–10% in other primates and just 3–5% in other mammals.” (Energy and Civilization, pg 23)

In his discussion of the earliest agricultures, a recurring theme is brought forward: energy availability is always a limiting factor, but other social factors also come into play throughout history. In one sense, Smil explains, the move from foraging to farming was a step backwards:

Net energy returns of early farming were often inferior to those of earlier or concurrent foraging activities. Compared to foraging, early farming usually required higher human energy inputs – but it could support higher population densities and provide a more reliable food supply.” (Energy and Civilization, pg 42)

The higher population densities allowed a significant number of people to work at tasks not immediately connected to securing daily energy requirements. The result, over many millennia, was the development of new materials, tools and processes.

Smil gives succinct explanations of why the smelting of brass and bronze was less energy-intensive than production of pure copper. Likewise he illustrates why the iron age, with its much higher energy requirements, resulted in widespread deforestation, and iron production was necessarily very limited until humans learned to exploit coal deposits in the most recent centuries.

Cooking snails in a pot over an open fire. In Energy and Civilization, Smil covers topics as diverse as the importance of learning to use fire to supply the energy-rich foods humans need; the gradual deployment of better sails which allowed mariners to sail closer to the wind; and the huge boost in information consumption that occurred a century ago due to a sudden drop in the energy cost of printing. Image from Wellcome Images, via Wikimedia Commons.

Energy explosion

The past two hundred years of fossil-fuel-powered civilization takes up the biggest chunk of the book. But the effective use of fossil fuels had to be preceded by many centuries of development in metallurgy, chemistry, understanding of electromagnetism, and a wide array of associated technologies.

While making clear how drastically human civilizations have changed in the last several generations, Smil also takes care to point out that even the most recent energy transitions didn’t take place all at once.

While the railways were taking over long-distance shipments and travel, the horse-drawn transport of goods and people dominated in all rapidly growing cities of Europe and North America.” (Energy and Civilization, pg 185)

Likewise the switches from wood to coal or from coal to oil happened only with long overlaps:

The two common impressions – that the twentieth century was dominated by oil, much as the nineteenth century was dominated by coal – are both wrong: wood was the most important fuel before 1900 and, taken as a whole, the twentieth century was still dominated by coal. My best calculations show coal about 15% ahead of crude oil …” (Energy and Civilization, pg 275)

Smil draws an important lesson for the future from his careful examination of the past:

Every transition to a new form of energy supply has to be powered by the intensive deployment of existing energies and prime movers: the transition from wood to coal had to be energized by human muscles, coal combustion powered the development of oil, and … today’s solar photovoltaic cells and wind turbines are embodiments of fossil energies required to smelt the requisite metals, synthesize the needed plastics, and process other materials requiring high energy inputs.” (Energy and Civilization, pg 230)

A missing chapter

Energy and Civilization is a very ambitious book, covering a wide spread of history and science with clarity. But a significant omission is any discussion of the role of slavery or colonialism in the rise of western Europe.

Smil does note the extensive exploitation of slave energy in ancient construction works, and slave energy in rowing the war ships of the democratic cities in ancient Greece. He carefully calculates the power output needed for these projects, whether supplied by slaves, peasants, or animals.

In his look at recent European economies, Smil also notes the extensive use of physical and child labour that occurred simultaneously with the growth of fossil-fueled industry. For example, he describes the brutal work conditions endured by women and girls who carried coal up long ladders from Scottish coal mines, in the period before effective machinery was developed for this purpose.

But what of the 20 million or more slaves taken from Africa to work in the European colonies of the “New World”? Did the collected energies of all these unwilling participants play no notable role in the progress of European economies?

Likewise, vast quantities of resources in the Americas, including oil-rich marine mammals and old-growth forests, were exploited by the colonies for the benefit of European nations which had run short of these important energy commodities. Did this sudden influx of energy wealth play a role in European supremacy over the past few centuries? Attention to such questions would have made Energy and Civilization a more complete look at our history.

An uncertain future

Smil closes the book with a well-composed rumination on our current predicaments and the energy constraints on our future.

While the timing of transition is uncertain, Smil leaves little doubt that a shift away from fossil fuels is necessary, inevitable, and very difficult. Necessary, because fossil fuel consumption is rapidly destabilizing our climate. Inevitable, because fossil fuel reserves are being depleted and will not regenerate in any relevant timeframe. Difficult, both because our industrial economies are based on a steady growth in consumption, and because much of the global population still doesn’t have access to a sufficient quantity of energy to provide even the basic necessities for a healthy life.

The change, then, should be led by those who are now consuming quantities of energy far beyond the level where this consumption furthers human development.

Average per capita energy consumption and the human development index in 2010. Smil, Energy and Civilization, pg 363

 

Smil notes that energy consumption rises in correlation with the Human Development Index up to a point. But increases in energy use beyond roughly the level of present-day Turkey or Italy provide no significant boost in Human Development. Some of the ways we consume a lot of energy, he argues, are pointless, wasteful and ineffective.

In affluent countries, he concludes,

Growing energy use cannot be equated with effective adaptations and we should be able to stop and even to reverse that trend …. Indeed, high energy use by itself does not guarantee anything except greater environmental burdens.

Opportunities for a grand transition to less energy-intensive society can be found primarily among the world’s preeminent abusers of energy and materials in Western Europe, North America, and Japan. Many of these savings could be surprisingly easy to realize.” (Energy and Civilization, pg 439)

Smil’s book would indeed be a helpful post-crash guide – but it would be much better if we heed the lessons, and save the valuable aspects of civilization, before apocalypse overtakes us.

 

Top photo: Common factory produced brass olive oil lamp from Italy, c. late 19th century, adapted from photo on Wikimedia Commons.

The Carbon Code – imperfect answers to impossible questions

Also published at Resilience.org.

“How can we reconcile our desire to save the planet from the worst effects of climate change with our dependence on the systems that cause it? How can we demand that industry and governments reduce their pollution, when ultimately we are the ones buying the polluting products and contributing to the emissions that harm our shared biosphere?”

These thorny questions are at the heart of Brett Favaro’s new book The Carbon Code (Johns Hopkins University Press, 2017). While he readily concedes there can be no perfect answers, his book provides a helpful framework for working towards the immediate, ongoing carbon emission reductions that most of us already know are necessary.

Favaro’s proposals may sound modest, but his carbon code could play an important role if it is widely adopted by individuals, by civil organizations – churches, labour unions, universities – and by governments.

As a marine biologist at Newfoundland’s Memorial University, Favaro is keenly aware of the urgency of the problem. “Conservation is a frankly devastating field to be in,” he writes. “Much of what we do deals in quantifying how many species are declining or going extinct  ….”

He recognizes that it is too late to prevent climate catastrophe, but that doesn’t lessen the impetus to action:

There’s no getting around the prospect of droughts and resource wars, and the creation of climate refugees is certain. But there’s a big difference between a world afflicted by 2-degree warming and one warmed by 3, 4, or even more degrees.”

In other words, we can act now to prevent climate chaos going from worse to worst.

The code of conduct that Favaro presents is designed to help us be conscious of the carbon impacts of our own lives, and work steadily toward the goal of a nearly-complete cessation of carbon emissions.

The carbon code of conduct consists of four “R” principles that must be applied to one’s carbon usage:

1. Reduce your use of carbon as much as possible.

2. Replace carbon-intensive activities with those that use less carbon to achieve the same outcome.

3. Refine the activity to get the most benefit for each unit of carbon emitted.

4. Finally, Rehabilitate the atmosphere by offsetting carbon usage.”

There’s a good bit of wiggle room in each of those four ’R’s, and Favaro presents that flexibility not as a bug but as a feature. “Codes of conduct are not the same thing as laws – laws are dichotomous, and you are either following them or you’re not,” he says. “Codes of conduct are interpretable and general and are designed to shape expectations.”

Street level

The bulk of the book is given to discussion of how we can apply the carbon code to home energy use, day-to-day transportation, a lower-carbon diet, and long distance travel.

There is a heavy emphasis on a transition to electric cars – an emphasis that I’d say is one of the book’s weaker points. For one thing, Favaro overstates the energy efficiency of electric vehicles.

EVs are far more efficient. Whereas only around 20% of the potential energy stored in a liter of gasoline actually goes to making an ICE [Internal Combustion Engine] car move, EVs convert about 60% of their stored energy into motion ….”

In a narrow sense this is true, but it ignores the conversion losses in common methods of producing the electricity that charges the batteries. A typical fossil-fueled generating plant operates in the range of 35% energy efficiency. So the actual efficiency of an electric vehicle is likely to be closer to 35% × 60%, or 21% – in other words, not significantly better than the internal combustion engine.
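To make the arithmetic explicit, here is a minimal sketch (in Python). The 35% and 60% figures come from the passage above; the grid-mix share is a hypothetical knob added for illustration, and transmission and charging losses are ignored.

```python
# Rough well-to-wheel sketch using the efficiencies quoted above.
# 'fossil_share' is a hypothetical assumption, not a figure from the book;
# transmission and charging losses are ignored for simplicity.
def ev_effective_efficiency(plant_eff=0.35, ev_eff=0.60, fossil_share=1.0):
    fossil_path = plant_eff * ev_eff      # 0.35 * 0.60 = 0.21 via thermal plants
    clean_path = ev_eff                   # charging from non-thermal sources
    return fossil_share * fossil_path + (1 - fossil_share) * clean_path

print(ev_effective_efficiency())                    # ~0.21 on an all-fossil grid
print(ev_effective_efficiency(fossil_share=0.5))    # ~0.40 on a half-decarbonized grid
```

On an all-fossil grid the EV’s effective efficiency lands right around the 21% figure above; the comparison improves only as the grid itself decarbonizes.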

By the same token, if a large proportion of new renewable energy capacity over the next 15 years must be devoted to charging electric cars, it will be extremely challenging to simultaneously switch home heating, lighting and cooling processes away from fossil fuel reliance.

Yet if the principles of Favaro’s carbon code were followed, we would not only stop building internal combustion cars, we would also make the new electric cars smaller and lighter, provide strong incentives to reduce the number of miles they travel (especially miles with only one passenger), and rapidly improve bicycling networks and public transit facilities to get people out of cars for most of their ordinary transportation. To his credit, Favaro recognizes the importance of all these steps.

Flight paths

As a researcher invited to many international conferences, and a person who lives in Newfoundland but whose family is based in far-away British Columbia, Favaro has given a lot of thought to the conundrum of air travel. He notes that most of the readers of his book will be members of a particular global elite: the small percentage of the world’s population who board a plane more than a few times in their lives.

We members of that elite group have a disproportionate carbon footprint, and therefore we bear particular responsibility for carbon emission reductions.

The Air Transport Action Group, a UK-based industry association, estimated that the airline industry accounts for about 2% of global CO2 emissions. That may sound small, but given the tiny percentage of the world population that flies regularly, it represents a massive outlier in terms of carbon-intensive behaviors. In the United States, air travel is responsible for about 8% of the country’s emissions ….”

Favaro is keenly aware that if the Carbon Code were read as “never get on an airplane again for the rest of your life”, hardly anyone would adopt the code (and those few who did would be ostracized from professional activities and in many cases cut off from family). Yet the four principles of the Carbon Code can be very helpful in deciding when, where and how often to use the most carbon-intensive means of transportation.

Remember that ultimately all of humanity needs to mostly stop using fossil fuels to achieve climate stability. Therefore, just like with your personal travel, your default assumption should be that no flights are necessary, and then from there you make the case for each flight you take.”

The Carbon Code is a wise, carefully optimistic book. Let’s hope it is widely read and that individuals and organizations take the Carbon Code to heart.

 

Top photo: temporary parking garage in vacant lot in Manhattan, July 2013.

Being right, and being persuasive: a primer on ‘talking climate’

Also published at Resilience.org.

Given that most people in industrialized countries accept that climate change is a scientific reality, why do so few rank climate change as one of their high priorities? Why do so few people discuss climate change with their families, friends, and neighbours? Are clear explanations of the ‘big numbers’ of climate change a good foundation for public engagement?

These are among the key questions in a thought-provoking new book by Adam Corner and Jamie Clarke – Talking Climate: From Research to Practice in Public Engagement.

In a brief review of climate change as a public policy issue, Corner and Clarke make the point that climate change action was initially shaped by international responses to ozone layer depletion and the problem of acid rain. In these cases technocrats in research, government and industry were able to frame the problem and implement solutions with little need for deep public engagement.

The same model might once have worked for climate change response. But today, we are faced with a situation where climate change will be an ongoing crisis for at least several generations. Corner and Clarke argue that responding to climate change will require public engagement that is both deep and broad.

That kind of engagement can only be built through wide-ranging public conversations which tap into people’s deepest values – and climate change communicators must learn from social science research on what works, and what doesn’t work, in growing a public consensus.

Talking Climate is at its best in explaining the limitations of dominant climate change communication threads. But the book is disappointingly weak in describing the ‘public conversations’ that the authors say are so important.

 


Narratives and numbers

“Stories – rather than scientific facts – are the vehicles with which to build public engagement”, Corner and Clarke say. But climate policy is most often framed by scientifically valid and scientifically important numbers which remain abstract to most people. In particular, the concept of a 2°C limit to overall global warming has received oceans of ink, and this concept was the key component of the 2015 Paris Agreement.

Unfortunately, the 2° warming threshold does not help move climate change from a ‘scientific reality’ to a ‘social reality’:

In research conducted just before the Paris negotiations with members of the UK public, we found that people were baffled by the 2 degrees concept and puzzled that the challenge of climate change would be expressed in such a way. … People understandably gauge temperature changes according to their everyday experiences, and a daily temperature fluctuation of 2 degrees is inconsequential, pleasant even – so why should they worry?

“Being right is not the same as being persuasive,” Corner and Clarke add, “and the ‘big numbers’ of the climate change and energy debate do not speak to the lived experience of ordinary people going about their daily lives ….”

While they cite interesting research on what doesn’t work in building public engagement, the book is frustratingly skimpy on what does work.

In particular, there are no good examples of the narratives or stories that the authors hold out as the primary way most people make sense of the world.

“Narratives have a setting, a plot (beginning, middle, and end), characters (heroes, villains, and victims), and a moral of the story,” Corner and Clarke write. How literally should we read that statement? What are some examples of stories that have emerged to help people understand climate change and link their responses to their deepest values? Unfortunately we’re left guessing.

Likewise, the authors write that they have been involved with several public consultation projects that helped build public engagement around climate change. How did these projects select or attract participants, given that only a small percentage of the population regards climate change as an issue of deep personal importance?

Talking Climate packs a lot of important research and valuable perspectives into a mere 125 pages, plus notes. Another 25 pages outlining successful communication efforts might have made it an even better book.

Photos: rainbow over South Dakota grasslands, and sagebrush in Badlands National Park, June 2014.

Fake news as official policy

Also published at Resilience.org.

Faced with simultaneous disruptions of climate and energy supply, industrial civilization is also hampered by an inadequate understanding of our predicament. That is the central message of Nafeez Mosaddeq Ahmed’s new book Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence.

In the first part of this review, we looked at the climate and energy disruptions that have already begun in the Middle East, as well as the disruptions which we can expect in the next 20 years under a “business as usual” scenario. In this installment we’ll take a closer look at “the perpetual transmission of false and inaccurate knowledge on the origins and dynamics of global crises”.

While a clear understanding of the real roots of economies is a precondition for a coherent response to global crises, Ahmed says this understanding is woefully lacking in mainstream media and mainstream politics.

The Global Media-Industrial Complex, representing the fragmented self-consciousness of human civilization, has served simply to allow the most powerful vested interests within the prevailing order to perpetuate themselves and their interests ….” (Failing States, Collapsing Systems, page 48)

Other than alluding to powerful self-serving interests in fossil fuels and agribusiness industries, Ahmed doesn’t go into the “how’s” and “why’s” of their influence in media and government.

In the case of misinformation about the connection between fossil fuels and climate change, much of the story is widely known. Many writers have documented the history of financial contributions from fossil fuel interests to groups which contradict the consensus of climate scientists. To take just one example, Inside Climate News revealed that Exxon’s own scientists were keenly aware of the dangers of climate change decades ago, but the corporation’s response was a long campaign of disinformation.

Yet for all its nefarious intent, the fossil fuel industry’s effort has met with mixed success. Nearly every country in the world has, at least officially, agreed that carbon-emissions-caused climate change is an urgent problem. Hundreds of governments, on national, provincial or municipal levels, have made serious efforts to reduce their reliance on fossil fuels. And among climate scientists the consensus has only grown stronger that continued reliance on fossil fuels will result in catastrophic climate effects.

When it comes to continuous economic growth unconstrained by energy limitations, the situation is quite different. Following the consensus opinion in the “science of economics”, nearly all governments are still in thrall to the idea that the economy can and must grow every year, forever, as a precondition to prosperity.

In fact, the belief in the ever-growing economy has short-circuited otherwise well-intentioned efforts to reduce carbon emissions. Western politicians routinely play off “environment” and “economy” as forces that must be balanced, meaning they must take care not to cut carbon emissions too fast, lest economic growth be hindered. To take one example, Canada’s Prime Minister Justin Trudeau claims that expanded production of tar sands bitumen will provide the economic growth necessary to finance the country’s official commitments under the Paris Accord.

As Ahmed notes, “the doctrine of unlimited economic growth is nothing less than a fundamental violation of the laws of physics. In short, it is the stuff of cranks – yet it is nevertheless the ideology that informs policymakers and pundits alike.” (Failing States, Collapsing Systems, page 90)

Why does “the stuff of cranks” still have such hold on the public imagination? Here the work of historian Timothy Mitchell is a valuable complement to Ahmed’s analysis.

Mitchell’s 2011 book Carbon Democracy outlines the way “the economy” became generally understood as something that could be measured mostly, if not solely, by the quantities of money that exchanged hands. A hundred years ago, this was a new and controversial idea:

In the early decades of the twentieth century, a battle developed among economists, especially in the United States …. One side wanted economics to start from natural resources and flows of energy, the other to organise the discipline around the study of prices and flows of money. The battle was won by the second group …..” (Carbon Democracy, page 131)

A very peculiar circumstance prevailed while this debate raged: energy from petroleum was cheap and getting cheaper. Many influential people, including geologist M. King Hubbert, argued that the oil bonanza would be short-lived in a historical sense, but their arguments didn’t sway corporate and political leaders looking at short-term results.

As a result a new economic orthodoxy took hold by the middle of the 20th century. Petroleum seemed so abundant, Mitchell says, that for most economists “oil could be counted on not to count. It could be consumed as if there were no need to take account of the fact that its supply was not replenishable.”

He elaborates:

the availability of abundant, low-cost energy allowed economists to abandon earlier concerns with the exhaustion of natural resources and represent material life instead as a system of monetary circulation – a circulation that could expand indefinitely without any problem of physical limits. Economics became a science of money ….” (Carbon Democracy, page 234)

This idea of the infinitely expanding economy – what Ahmed terms “the stuff of cranks” – has been widely accepted for approximately one human life span. The necessity of constant economic growth has been an orthodoxy throughout the formative educations of today’s top political leaders, corporate leaders and media figures, and it continues to hold sway in the “science of economics”.

The transition away from fossil fuel dependence is inevitable, Ahmed says, but the degree of suffering involved will depend on how quickly and how clearly we get on with the task. One key task is “generating new more accurate networks of communication based on transdisciplinary knowledge which is, most importantly, translated into user-friendly multimedia information widely disseminated and accessible by the general public in every continent.” (Failing States, Collapsing Systems, page 92)

That task has been taken up by a small but steadily growing number of researchers, activists, journalists and hands-on practitioners of energy transition. As to our chances of success, Ahmed allows a hint of optimism, and that’s a good note on which to finish:

The systemic target for such counter-information dissemination, moreover, is eminently achievable. Social science research has demonstrated that the tipping point for minority opinions to become mainstream, majority opinion is 10% of a given population.” (Failing States, Collapsing Systems, page 92)

 

Top image: M. C. Escher’s ‘Waterfall’ (1961) is a fanciful illustration of a finite source providing energy without end. Accessed from Wikipedia.org.

Fake news, failed states

Also published at Resilience.org.

Many of the violent conflicts raging today can only be understood if we look at the interplay between climate change, the shrinking of cheap energy supplies, and a dominant economic model that refuses to acknowledge physical limits.

That is the message of Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, a thought-provoking new book by Nafeez Mosaddeq Ahmed. Violent conflicts are likely to spread to all continents within the next 30 years, Ahmed says, unless a realistic understanding of economics takes hold at a grass-roots level and at a nation-state policy-making level.

The book is only 94 pages (plus an extensive and valuable bibliography), but the author packs in a coherent theoretical framework as well as lucid case studies of ten countries and regions.

As part of the Springer Briefs In Energy/Energy Analysis series edited by Charles Hall, it is no surprise that Failing States, Collapsing Systems builds on a solid grounding in biophysical economics. The first few chapters are fairly dense, as Ahmed explains his view of global political/economic structures as complex adaptive systems inescapably embedded in biophysical processes.

The adaptive functions of these systems, however, are failing due in part to what we might summarize with four-letter words: “fake news”.

inaccurate, misleading or partial knowledge bears a particularly central role in cognitive failures pertaining to the most powerful prevailing human political, economic and cultural structures, which is inhibiting the adaptive structural transformation urgently required to avert collapse.” (Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, by Nafeez Mosaddeq Ahmed, Springer, 2017, page 13)

We’ll return to the failures of our public information systems. But first let’s have a quick look at some of the case studies, in which the explanatory value of Ahmed’s complex systems model really comes through.

In discussing the rise of ISIS in the context of the war in Syria and Iraq, Western media tend to focus almost exclusively on political and religious divisions which are shoehorned into a “war on terror” framework. There is also an occasional mention of the early effects of climate change. While not discounting any of these factors, Ahmed says that it is also crucial to look at shrinking supplies of cheap energy.

Prior to the onset of war, the Syrian state was experiencing declining oil revenues, driven by the peak of its conventional oil production in 1996. Even before the war, the country’s rate of oil production had plummeted by nearly half, from a peak of just under 610,000 barrels per day (bpd) to approximately 385,000 bpd in 2010.” (Failing States, Collapsing Systems, page 48)

Similarly, Yemen’s oil production peaked in 2001, and had dropped more than 75% by 2014.

While these governments tried to cope with climate change effects including water and food shortages, their oil-export-dependent budgets were shrinking. The result was the slashing of basic social service spending when local populations were most in need.

That’s bad enough, but the responses of local and international governments, guided by “inaccurate, misleading or partial knowledge”, make a bad situation worse:

While the ‘war on terror’ geopolitical crisis-structure constitutes a conventional ‘security’ response to the militarized symptoms of HSD [Human Systems Destabilization] (comprising the increase in regional Islamist militancy), it is failing to slow or even meaningfully address deeper ESD [Environmental System Disruption] processes that have rendered traditional industrialized state power in these countries increasingly untenable. Instead, the three cases emphasized – Syria, Iraq, and Yemen – illustrate that the regional geopolitical instability induced via HSD has itself hindered efforts to respond to deeper ESD processes, generating instability and stagnation across water, energy and food production industries.” (Failing States, Collapsing Systems, page 59)

This pattern – militarized responses to crises that beget more crises – is not new:

A 2013 RAND Corp analysis examined the frequency of US military interventions from 1940 to 2010 and came to the startling conclusion: not only that the overall frequency of US interventions has increased, but that intervention itself increased the probability of an ensuing cluster of interventions.” (Failing States, Collapsing Systems, page 43)

Ahmed’s discussions of Syria, Iraq, Yemen, Nigeria and Egypt are bolstered by the benefits of hindsight. His examination of Saudi Arabia looks a little way into the future, and what he foresees is sobering.

He discusses studies that show Saudi Arabia’s oil production is likely to peak in as little as ten years. Yet the date of the peak is only one key factor, because the country’s steadily increasing internal demand for energy means there is steadily less oil left for export.

For Saudi Arabia the economic crunch may be severe and rapid: “with net oil revenues declining to zero – potentially within just 15 years – Saudi Arabia’s capacity to finance continued food imports will be in question.” For a population that relies on subsidized imports for 80% of its food, empty government coffers would mean a life-and-death crisis.
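
The arithmetic behind that warning is worth making explicit: net exports are simply production minus domestic consumption, so when production declines while domestic demand keeps growing, exports reach zero well before the oil does. Here is a minimal sketch of that squeeze, using round numbers of my own choosing rather than Ahmed’s figures:

```python
# Illustrative sketch of the net-export squeeze (not Ahmed's model):
# all starting values and rates below are assumptions chosen only to
# show the arithmetic.

production = 10.0     # million barrels per day at the peak (assumed)
consumption = 4.0     # domestic use, million barrels per day (assumed)
prod_decline = 0.03   # assumed 3% annual decline after the peak
cons_growth = 0.05    # assumed 5% annual growth in domestic demand

year = 0
while production > consumption:
    year += 1
    production *= 1 - prod_decline
    consumption *= 1 + cons_growth

print(f"net exports hit zero in year {year}, "
      f"with production still at {production:.1f} mb/d")
# net exports hit zero in year 12, with production still at 6.9 mb/d
```

With these assumed numbers, exports vanish while the country is still pumping most of its peak output – which is exactly why the peak date alone tells only part of the story.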

But a Saudi Arabia which uses up all its oil internally would have major implications for other countries as well, in particular China and India.

like India, China faces the problem that as we near 2030, net exports from the Middle East will track toward zero at an accelerating rate. Precisely at the point when India and China’s economic growth is projected to require significantly higher imports of oil from the Middle East, due to their own rising domestic energy consumption requirement, these critical energy sources will become increasingly unavailable on global markets.” (Failing States, Collapsing Systems, page 74)

Petroleum production in Europe has also peaked, while in North America, conventional oil production peaked decades ago, and the recent fossil fuel boomlet has come from expensive, hard-to-extract shale gas, shale oil, and tar sands bitumen. For both Europe and North America, Ahmed forecasts, the time is fast approaching when affordable high-energy fuels are no longer available from Russia or the Middle East. Without successful adaptive responses, the result will be a cascade of collapsing systems:

Well before 2050, this study suggests, systemic state-failure will have given way to the irreversible demise of neoliberal finance capitalism as we know it.” (Failing States, Collapsing Systems, page 88)

Are such outcomes inescapable? By no means, Ahmed says, but adequate adaptive responses to our developing predicaments are unlikely without a recognition that our economies remain inescapably embedded in biophysical processes. Unfortunately, there are powerful forces working to prevent the type of understanding which could guide us to solutions:

vested interests in the global fossil fuel and agribusiness system are actively attempting to control information flows to continue to deny full understanding in order to perpetuate their own power and privilege.” (Failing States, Collapsing Systems, page 92)

In the next installment, Fake News as Official Policy, we’ll look at the deep roots of this misinformation and ask what it will take to stem the tide.

Top photo: Flying over the Trans-Arabian Pipeline, 1950. From Wikimedia.org.


Door to Door – A selective look at our “system of systems”

Also published at Resilience.org.

Our transportation system is “magnificent, mysterious and maddening,” says the subtitle of Edward Humes’ new book. Open the cover and you’ll encounter more than a little “mayhem” too.

Is the North American economy a consumer economy or a transportation economy? The answer, of course, is “both”. Exponential growth in consumerism has gone hand in hand with exponential growth in transport, and Edward Humes’ new book provides an enlightening, entertaining, and often sobering look at several key aspects of our transportation systems.

Much of what we consume in North America is produced at least in part on other continents. Even as manufacturing jobs have been outsourced, transportation has been an area of continuing job growth – to the point where truck driving is the single most common job in a majority of US states.

Manufacturing jobs come and go, but the logistics field just keeps growing—32 percent growth even during the Great Recession, while all other fields grew by a collective average of 1 percent. Some say logistics is the new manufacturing. (Door to Door, Harper Collins 2016, Kindle Edition, locus 750)

With a focus on the operations of the Ports of Los Angeles and Long Beach, Humes shows how the standardized shipping container – the “can” in shipping industry parlance – has enabled the transfer of running shoes, iPhones and toasters from low-wage manufacturing complexes in China to consumers around the world. Since 1980, Humes writes, the global container fleet’s capacity has gone from 11 million tons to 169 million tons – a fifteen-fold increase.

While some links in the supply chain have been “rationalized” in ways that lower costs (and eliminate many jobs), other trends work in opposite directions. The growth of online shopping, for example, has resulted in mid-size delivery trucks driving into suburban cul-de-sacs to drop off single parcels.

The rise of online shopping is exacerbating the goods-movement overload, because shipping one product at a time to homes requires many more trips than delivering the same amount of goods en masse to stores. In yet another door-to-door paradox, the phenomenon of next-day and same-day delivery, while personally efficient and seductively convenient for consumers, is grossly inefficient for the transportation system at large. (Door to Door, locus 695)

Humes devotes almost no attention in this book to passenger rail, passenger airlines, or freight rail beyond the short-line rail that connects the port of Los Angeles to major trucking terminals. He does, however, provide a good snapshot of the trucking industry in general and UPS in particular.

Among the most difficult challenges faced by UPS administrators and drivers is the unpredictable snarl of traffic on roads and streets used by trucks and passenger cars alike. This traffic is not only maddening but terribly violent. “Motor killings”, to use the 1920s terminology, or “traffic accidents”, to use the contemporary euphemism, “are the leading cause of death for Americans between the ages of one and thirty-nine. They rank in the top five killers for Americans sixty-five and under ….” (locus 1514)

In the US there are 35,000 traffic fatalities a year, or one death every fifteen minutes. Humes notes that these deaths seldom feature on major newscasts – and in his own journalistic way he sets out to humanize the scale of the tragedy.

Delving into the records for one representative day during the writing of the book, Humes finds there were at least 62 fatal collisions in 27 states on Friday, February 13, 2015. He gives at least a brief description of dozens of these tragedies: who was driving, where, at what time, and who was killed or seriously injured.

Other than in collisions where alcohol is involved, Humes notes, there are seldom serious legal sanctions against drivers, even when they strike down and kill pedestrians who have the right of way. In this sense our legal system simply reflects the physical design of the motor vehicle-dominated transport system.

Drawing on the work of Strong Towns founder Charles Marohn, Humes explains that roads are typically designed for higher speeds than the posted speed limits. While this is theoretically supposed to provide a margin of safety for a driver who drifts out of their lane, in practice it encourages nearly all drivers to routinely exceed speed limits. The quite predictable result is that there are more collisions, and more serious injuries or deaths per collision, than there would be if speeding were not promoted-by-design.

In the design of cars, meanwhile, great attention has been devoted to saving drivers from the consequences of their own errors. Seat belts and air bags have saved the lives of many vehicle occupants. Yet during the same decades that such safety features have become standard, the auto industry has relentlessly promoted vehicles that are more dangerous simply because they are bigger and heavier.

A study by University of California economist Michelle J. White found that

for every crash death avoided inside an SUV or light truck, there were 4.3 additional collisions that took the lives of car occupants, pedestrians, bicyclists, or motorcyclists. The supposedly safer SUVs were, in fact, “extremely deadly,” White concluded. (Door to Door, locus 1878)

Another University of California study found that “for every additional 1,000 pounds in a vehicle’s weight, it raises the probability of a death in any other vehicle in a collision by 47 percent.” (locus 1887)

Is there a solution to the intertwined problems of gridlock, traffic deaths, respiratory-disease-causing emissions and greenhouse gas emissions? Humes takes an enthusiastic leap of faith here to sing the praises of the driverless – or self-driving, if you prefer – car.

“The car that travels on its own can remedy each and every major problem facing the transportation system of systems,” Humes boldly forecasts. Deadly collisions, carbon dioxide and particulate emissions, parking lots that take so much urban real estate, the perceived need to keep adding lanes of roadway at tremendous expense, and soul-killing commutes on congested roads – Humes says these will all be in the rear-view mirror once our auto fleets have been replaced by autonomous electric vehicles.

We’ll need to wait a generation for definitive judgment on his predictions, but Humes’ description of our present transportation system is eminently readable and thought-provoking.

Top photo: container train on Canadian National line east of Toronto.


Energy at any cost?

Also published at Resilience.org.

If all else is uncertain, how can growing demand for energy be guaranteed? A review of Vaclav Smil’s Natural Gas.

Near the end of his 2015 book Natural Gas: Fuel for the 21st Century, Vaclav Smil makes two statements which are curious in juxtaposition.

On page 211, he writes:

I will adhere to my steadfast refusal to engage in any long-term forecasting, but I will restate some basic contours of coming development before I review a long array of uncertainties ….”

And in the next paragraph:

Given the scale of existing energy demand and the inevitability of its further growth, it is quite impossible that during the twenty-first century, natural gas could come to occupy such a dominant position in the global primary energy supply as wood did in the preindustrial era or as coal did until the middle of the twentieth century.”

If you think that second statement sounds like a long-term forecast, that makes two of us. But apparently to Smil it is not a forecast to say that the growth of energy demand is inevitable, and it’s not a forecast to state with certainty that natural gas cannot become the dominant energy source during the twenty-first century – these are simply “basic contours of coming development.” Let’s investigate.

An oddly indiscriminate name

Natural Gas is a general survey of the sources and uses of what Smil calls the fuel with “an oddly indiscriminate name”. It begins much as it ends: with a strongly-stated forecast (or “basic contour”, if you prefer) about the scale of natural gas and other fossil fuel usage relative to other energy sources.

why dwell on the resources of a fossil fuel and why extol its advantages at a time when renewable fuels and decentralized electricity generation converting solar radiation and wind are poised to take over the global energy supply. That may be a fashionable narrative – but it is wrong, and there will be no rapid takeover by the new renewables. We are a fossil-fueled civilization, and we will continue to be one for decades to come as the pace of grand energy transition to new forms of energy is inherently slow.” – Vaclav Smil, preface to Natural Gas

And in the next paragraph:

Share of new renewables in the global commercial primary energy supply will keep on increasing, but a more consequential energy transition of the coming decades will be from coal and crude oil to natural gas.”

In support of his view that a transition away from fossil fuel reliance will take at least several decades, Smil looks at major energy source transitions over the past two hundred years. These transitions have indeed been multi-decadal or multi-generational processes.

Obvious absence of any acceleration in successive transitions is significant: moving from coal to oil has been no faster than moving from traditional biofuels to coal – and substituting coal and oil by natural gas has been measurably slower than the two preceding shifts.” – Natural Gas, page 154

It would seem obvious that global trade and communications were far less developed 150 years ago, and that would be one major reason why the transition from traditional biofuels to coal proceeded slowly on a global scale. Smil cites another reason why successive transitions have been so slow:

Scale of the requisite transitions is the main reason why natural gas shares of the TPES [Total Primary Energy System] have been slower to rise: replicating a relative rise needs much more energy in a growing system. … going from 5 to 25% of natural gas required nearly eight times more energy than accomplishing the identical coal-to-oil shift.” – Natural Gas, page 155
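
Smil’s point is easy to verify with back-of-the-envelope arithmetic. The sketch below uses assumed round figures for the size of the total primary energy supply in each era (they are not Smil’s data) simply to show why the same 5-to-25% jump costs far more absolute energy in a bigger system:

```python
# Assumed round numbers (not Smil's data) for the total primary energy
# supply in two different eras; the identical 5% -> 25% rise in share
# requires far more absolute energy in the larger system.

def extra_energy(total_supply_ej, start=0.05, end=0.25):
    """Energy (EJ) a fuel must add to move from `start` to `end` share,
    treating total supply as fixed for simplicity."""
    return (end - start) * total_supply_ej

tpes_when_oil_displaced_coal = 50    # EJ, assumed
tpes_when_gas_displaced_oil = 400    # EJ, assumed

print(extra_energy(tpes_when_oil_displaced_coal))  # 10.0 EJ
print(extra_energy(tpes_when_gas_displaced_oil))   # 80.0 EJ -- eight times more
```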


Open-pit coal mine in south-east Saskatchewan. June 2014.

Today only – you’ll love our low, low prices!

There is another obvious reason why transitions from coal to oil, and from oil to natural gas, could have been expected to move slowly throughout the last 100 years: there have been abundant supplies of easily accessible, and therefore cheap, coal and oil. When a new energy source was brought online, the result was a further increase in total energy consumption, instead of any rapid shift in the relative share of different sources.

The role of price in influencing demand is easy to ignore when the price is low. But that’s not a condition we can count on for the coming decades.

Return to Smil’s “basic contour” that total energy demand will inevitably rise: it implies that energy prices must remain relatively low, because there is effective demand for a product only to the extent that people can afford to buy it.

Remarkably, however, even as he states confidently that demand must grow, Smil notes the major uncertainty about the investment needed simply to maintain existing levels of supply:

if the first decade of the twenty-first century was a trendsetter, then all fossil energy sources will cost substantially more, both to develop new capacities and to maintain production of established projects at least at today’s levels. … The IEA estimates that between 2014 and 2035, the total investment in energy supply will have to reach just over $40 trillion if the world is to meet the expected demand, with some 60% destined to maintain existing output and 40% to supply the rising requirements. The likelihood of meeting this need will be determined by many other interrelated factors.” – Natural Gas, page 212

What is happening here? Both Smil and the IEA are cognizant of the uncertain effects of rising prices on supply, while graphing demand steadily upward as if price has no effect. This is not how economies function in the real world, of course.

Likewise, we cannot assume that because total energy demand kept rising throughout the twentieth century, it must continue to rise through the twenty-first century. On the contrary, if energy supplies are difficult to access and therefore much more costly, then we should also expect demand to grow much more slowly, to stop growing, or to fall.

Falling demand, in turn, would have a major impact on the possibility of a rapid change in the relative share of demand met by different sources. In very simple terms, if we increased total supply of renewable energy rapidly (as we are doing now), but the total energy demand were dropping rapidly, then the relative share of renewables in the energy market could increase even more rapidly.
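
A quick numerical sketch (with made-up round numbers, purely for illustration) shows how strongly the denominator matters:

```python
# Made-up round numbers, purely for illustration: the same absolute
# growth in renewables yields a very different share depending on
# whether total demand grows, stays flat, or shrinks.

renewables_today, total_today = 20, 500   # EJ (assumed)
renewables_later = 40                     # EJ, doubled supply (assumed)

print(f"today: {renewables_today / total_today:.0%}")
for total_later in (600, 500, 400):       # growing, flat, falling demand
    print(f"total {total_later} EJ -> share {renewables_later / total_later:.0%}")
# today: 4%; then 7% (growing), 8% (flat), 10% (falling demand)
```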

Smil’s failure to consider such a scenario (indeed, his peremptory dismissal of the possibility of such a scenario) is one of the major weaknesses of his approach. Acceptance of business-as-usual as a reliable baseline may strike some people as conservative. But there is nothing cautious about ignoring one of the fundamental factors of economics, and nothing safe in assuming that the historically rare condition of abundant cheap energy must somehow continue indefinitely.

In closing, just a few words about the implications of Smil’s work as it relates to the threat of climate change. In Natural Gas, he provides much valuable background on the relative amounts of carbon emissions produced by all of our major energy sources. He explains why natural gas is the best of the fossil fuels in terms of energy output relative to carbon emissions (while noting that leaks of natural gas – methane – could in fact outweigh the savings in carbon emissions). He explains that the carbon intensity of our economies has dropped as we have gradually moved from coal to oil to natural gas.

But he also makes it clear that this relative decarbonisation has been far too slow to stave off the threat of climate change.

If he turns out to be right that total energy demand will keep rising, that there will only be a slow transition from other fossil fuels to natural gas, and that the transition away from all fossil fuels will be slower still, then the chances of avoiding catastrophic climate change will be slim indeed.

Top photo: Oil well in southeast Saskatchewan, with flared gas. June 2014.


How big is that hectare? It depends.

Also published at Resilience.org.

The Pickering Nuclear Generating Station, on the east edge of Canada’s largest city, Toronto, is a good take-off point for a discussion of the strengths and limitations of Vaclav Smil’s power density framework.

The Pickering complex is one of the older nuclear power plants operating in North America. Brought on line in 1971, the plant includes eight CANDU reactors (two of which are now permanently shut down). The complex also includes a single wind turbine, brought online in 2001.

The CANDU reactors are rated, at full power, at about 3100 Megawatts (MW). The wind turbine, which at 117 meters high was one of North America’s largest when it was installed, is rated at 1.8 MW at full power. (Because the nuclear reactor runs at full power for many more hours in a year, the disparity in actual output is even greater than the above figures suggest.)
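
A rough calculation makes that disparity concrete. The capacity factors below are my own assumptions (roughly typical for reactors and onshore turbines), not figures from Smil or the plant’s operators:

```python
# Rough annual-output comparison; the capacity factors are my own
# assumptions (not figures from Smil or the plant's operators).

HOURS_PER_YEAR = 8760

nuclear_mw, nuclear_cf = 3100, 0.85   # Pickering's rated output; assumed CF
turbine_mw, turbine_cf = 1.8, 0.25    # the single wind turbine; assumed CF

nuclear_mwh = nuclear_mw * nuclear_cf * HOURS_PER_YEAR
turbine_mwh = turbine_mw * turbine_cf * HOURS_PER_YEAR

print(f"reactors: {nuclear_mwh / 1e6:.1f} TWh/yr, turbine: {turbine_mwh:,.0f} MWh/yr")
print(f"ratio: roughly {nuclear_mwh / turbine_mwh:,.0f} to 1")
# reactors: 23.1 TWh/yr, turbine: 3,942 MWh/yr -> ratio roughly 5,856 to 1
```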

How do these figures translate to power density, or power per unit of land?

The Pickering nuclear station stands cheek-by-jowl with other industrial sites and with well-used Lake Ontario waterfront parks. With a small land footprint, its power density is likely towards the high end – 7,600 W/m2 – of the range of nuclear generating stations Smil considers in Power Density. Had it been built with a substantial buffer zone, as is the case with many newer nuclear power plants, the power density might only be half as high.

A nuclear power plant, of course, requires a complex fuel supply chain that starts at a uranium mine. To arrive at more realistic power density estimates, Smil considers a range of mining and processing scenarios. When a nuclear station’s output is prorated over all the land used – land for the plant site itself, plus land for mining, processing and spent fuel storage – Smil estimates a power density of about 500 W/m2 in what he considers the most representative, mid-range of several examples.


The Cameco facility in Port Hope, Ontario processes uranium for nuclear reactors. With no significant buffer around the plant, its land area is small and its power density high. Smil calculates its conversion power density at approximately 100,000 W/m2, with the plant running at 50% capacity.

And wind turbines? Smil looks at average outputs from a variety of wind farm sites, and arrives at an estimated power density of about 1 W/m2.

So nuclear power has about 500 times the power density of wind turbines? If only it were that simple.

Inside and outside the boundary

In Power Density, Smil takes care to explain the “boundary problem”: defining what is being included or excluded in an analysis. With wind farms, for example, which land area is used in the calculation? Is it just the area of the turbine’s concrete base, or should it be all the land around and between turbines (in the common scenario of a large cluster of turbines spaced throughout a wind farm)?  There is no obviously correct answer to this question.

On the one hand, land between turbines can be and often is used as pasture or as crop land. On the other hand, access roads may break up the landscape and make some human uses impractical, as well as reducing the viability of the land for species that require larger uninterrupted spaces. Finally, there is considerable controversy about how close to wind turbines people can safely live, leading to buffer zones of varying sizes around turbine sites. Thus in this case the power output side of the quotient is relatively easy to determine, but the land area is not.


Wind turbines line the horizon in Murray County, Minnesota, 2012.

Smil emphasizes the importance of clearly stating the boundary assumptions used in a particular analysis. For the average wind turbine power density of 1 W/m2, he is including the whole land area of a wind farm.

That approach is useful in giving us a sense of how much area would need to be occupied by wind farms to produce the equivalent power of a single nuclear power plant. The mid-range power station cited above (with overall power density of 500 W/m2) takes up about 1360 hectares in the uranium mining-processing-generating station chain. A wind farm of equivalent total power output would sprawl across 680,000 hectares of land, or 6,800 square kilometers, or a square with 82 km per side.
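
Here is the back-of-envelope arithmetic behind those figures, as I read them from Smil’s numbers:

```python
# Back-of-envelope check of the land-area comparison above, using the
# figures quoted from Smil.
import math

nuclear_density_w_m2 = 500      # whole-chain power density, mid-range estimate
nuclear_area_ha = 1360          # hectares for the mining-processing-plant chain
wind_density_w_m2 = 1           # Smil's average for wind farms

power_w = nuclear_density_w_m2 * nuclear_area_ha * 10_000   # 1 ha = 10,000 m2
wind_area_m2 = power_w / wind_density_w_m2

print(f"{wind_area_m2 / 10_000:,.0f} ha = {wind_area_m2 / 1e6:,.0f} km2 "
      f"= a square about {math.sqrt(wind_area_m2) / 1000:.0f} km per side")
# 680,000 ha = 6,800 km2 = a square about 82 km per side
```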

A wind power evangelist, on the other hand, could argue that the wind farms remain mostly devoted to agriculture, and with the concrete bases of the towers only taking 1% of the wind farm area, the power density should be calculated at 100 W/m2 instead of 1 W/m2.

Similar questions apply in many power density calculations. A hydro transmission corridor takes a broad stripe of countryside, but the area fenced off for the pylons is small. Most land in the corridor may continue to be used for grazing, though many other land uses will be off-limits. So you could use the area of the whole corridor in calculating power density – plus, perhaps, another buffer on each side if you believe that electromagnetic fields near power lines make those areas unsafe for living creatures. Or you could use just the area fenced off directly around the pylons. The respective power densities will vary by orders of magnitude.

If the land area is not simple to quantify when things go right, it is even more difficult when things go wrong. A drilling pad for a fracked shale gas well may take only a hectare or two, so during the brief decade or two of the well’s productive life, the power density is quite high. But if fracking water leaks into an aquifer, the gas well may have drastic impacts on a far greater area of land – and that impact may continue even when the fracking boom is history.

The boundary problem is most tangled when resource extraction and consumption effects have uncertain extents in both space and time. As mentioned in the previous installment in this series, sometimes non-renewable energy facilities can be reclaimed for a full range of other uses. But the best-case scenario doesn’t always apply.

In mountain-top removal coal mining, there is a wide area of ecological devastation during the mining. But once the energy extraction drops to 0 and the mining corporation files bankruptcy, how much time will pass before the flattened mountains and filled-in valleys become healthy ecosystems again?

Or take the Pickering Nuclear Generating Station. The plant is scheduled to shut down about 2020, but its operators, Ontario Power Generation, say they will need to allow the interior radioactivity to cool for 15 years before they can begin to dismantle the reactor. By their own estimates the power plant buildings won’t be available for other uses until around 2060. Those placing bets on whether this will all go according to schedule can check back in 45 years.

In the meantime the plant will occupy land but produce no power; should the years of non-production be included in calculating an average power density? If decommissioning fails to make the site safe for a century or more, the overall power density will be paltry indeed.
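
A simple time-average shows how the idle decades erode the figure. The operating span and idle span below come from the dates above, but the averaging itself is my own illustration, not OPG’s or Smil’s calculation:

```python
# Simple time-averaging over the plant's life. The spans reflect roughly
# 1971-2020 in operation, then idle until about 2060; the averaging is
# my own illustration, not OPG's or Smil's calculation.

site_density_w_m2 = 7600    # the site-only figure cited above
operating_years = 50
idle_years = 40             # cooling plus dismantling, producing nothing

lifetime_avg = site_density_w_m2 * operating_years / (operating_years + idle_years)
print(f"time-averaged power density: {lifetime_avg:,.0f} W/m2")
# ~4,222 W/m2 -- and every extra idle decade drags the average lower
```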

In summary, Smil’s power density framework helps explain why it has taken high-power-density technologies to fuel our high-energy-consumption society, even for a single century. It helps explain why low-power-density technologies, such as solar and wind power, will not replace our current energy infrastructure, or meet current levels of demand, for decades, if ever.

But the boundary problem is a window on the inherent limitations of the approach. For the past century our energy has appeared cheap and power densities have appeared high. Perhaps the low cost and the high power density are both due, in significant part, to important externalities that were not included in calculations.

Top photo: Pickering Nuclear Generating Station site, including wind turbine, on the shoreline of Lake Ontario near Toronto.