Being right, and being persuasive: a primer on ‘talking climate’

Also published at

Given that most people in industrialized countries accept that climate change is a scientific reality, why do so few rank climate change as one of their high priorities? Why do so few people discuss climate change with their families, friends, and neighbours? Are clear explanations of the ‘big numbers’ of climate change a good foundation for public engagement?

These are among the key questions in a thought-provoking new book by Adam Corner and Jamie Clarke – Talking Climate: From Research to Practice in Public Engagement.

In a brief review of climate change as a public policy issue, Corner and Clarke make the point that climate change action was initially shaped by international responses to ozone layer depletion and acid rain. In those cases, technocrats in research, government and industry were able to frame the problem and implement solutions with little need for deep public engagement.

The same model might once have worked for climate change response. But today, we are faced with a situation where climate change will be an ongoing crisis for at least several generations. Corner and Clarke argue that responding to climate change will require public engagement that is both deep and broad.

That kind of engagement can only be built through wide-ranging public conversations which tap into people’s deepest values – and climate change communicators must learn from social science research on what works, and what doesn’t work, in growing a public consensus.

Talking Climate is at its best in explaining the limitations of dominant climate change communication threads. But the book is disappointingly weak in describing the ‘public conversations’ that the authors say are so important.


Narratives and numbers

“Stories – rather than scientific facts – are the vehicles with which to build public engagement”, Corner and Clarke say. But climate policy is most often framed by scientifically valid and scientifically important numbers which remain abstract to most people. In particular, the concept of a 2°C limit to overall global warming has received oceans of ink, and this concept was the key component of the 2015 Paris Agreement.

Unfortunately, the 2° warming threshold does not help move climate change from a ‘scientific reality’ to a ‘social reality’:

In research conducted just before the Paris negotiations with members of the UK public, we found that people were baffled by the 2 degrees concept and puzzled that the challenge of climate change would be expressed in such a way. … People understandably gauge temperature changes according to their everyday experiences, and a daily temperature fluctuation of 2 degrees is inconsequential, pleasant even – so why should they worry?

“Being right is not the same as being persuasive,” Corner and Clarke add, “and the ‘big numbers’ of the climate change and energy debate do not speak to the lived experience of ordinary people going about their daily lives ….”

While they cite interesting research on what doesn’t work in building public engagement, the book is frustratingly skimpy on what does work.

In particular, there are no good examples of the narratives or stories that the authors hold out as the primary way most people make sense of the world.

“Narratives have a setting, a plot (beginning, middle, and end), characters (heroes, villains, and victims), and a moral of the story,” Corner and Clarke write. How literally should we read that statement? What are some examples of stories that have emerged to help people understand climate change and link their responses to their deepest values? Unfortunately we’re left guessing.

Likewise, the authors write that they have been involved with several public consultation projects that helped build public engagement around climate change. How did these projects select or attract participants, given that only a small percentage of the population regards climate change as an issue of deep personal importance?

Talking Climate packs a lot of important research and valuable perspectives into a mere 125 pages, plus notes. Another 25 pages outlining successful communication efforts might have made it an even better book.

Photos: rainbow over South Dakota grasslands, and sagebrush in Badlands National Park, June 2014.

Fake news as official policy

Also published at

Faced with simultaneous disruptions of climate and energy supply, industrial civilization is also hampered by an inadequate understanding of our predicament. That is the central message of Nafeez Mosaddeq Ahmed’s new book Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence.

In the first part of this review, we looked at the climate and energy disruptions that have already begun in the Middle East, as well as the disruptions which we can expect in the next 20 years under a “business as usual” scenario. In this installment we’ll take a closer look at “the perpetual transmission of false and inaccurate knowledge on the origins and dynamics of global crises”.

While a clear understanding of the real roots of economies is a precondition for a coherent response to global crises, Ahmed says this understanding is woefully lacking in mainstream media and mainstream politics.

“The Global Media-Industrial Complex, representing the fragmented self-consciousness of human civilization, has served simply to allow the most powerful vested interests within the prevailing order to perpetuate themselves and their interests ….” (Failing States, Collapsing Systems, page 48)

Other than alluding to powerful self-serving interests in the fossil fuel and agribusiness industries, Ahmed doesn’t go into the “hows” and “whys” of their influence in media and government.

In the case of misinformation about the connection between fossil fuels and climate change, much of the story is widely known. Many writers have documented the history of financial contributions from fossil fuel interests to groups which contradict the consensus of climate scientists. To take just one example, Inside Climate News revealed that Exxon’s own scientists were keenly aware of the dangers of climate change decades ago, but the corporation’s response was a long campaign of disinformation.

Yet for all its nefarious intent, the fossil fuel industry’s effort has met with mixed success. Nearly every country in the world has, at least officially, agreed that climate change caused by carbon emissions is an urgent problem. Hundreds of governments, at the national, provincial or municipal level, have made serious efforts to reduce their reliance on fossil fuels. And among climate scientists the consensus has only grown stronger that continued reliance on fossil fuels will result in catastrophic climate effects.

When it comes to continuous economic growth unconstrained by energy limitations, the situation is quite different. Following the consensus opinion in the “science of economics”, nearly all governments are still in thrall to the idea that the economy can and must grow every year, forever, as a precondition to prosperity.

In fact, the belief in the ever-growing economy has short-circuited otherwise well-intentioned efforts to reduce carbon emissions. Western politicians routinely play off “environment” and “economy” as forces that must be balanced, meaning they must take care not to cut carbon emissions too fast, lest economic growth be hindered. To take one example, Canada’s Prime Minister Justin Trudeau claims that expanded production of tar sands bitumen will provide the economic growth necessary to finance the country’s official commitments under the Paris Accord.

As Ahmed notes, “the doctrine of unlimited economic growth is nothing less than a fundamental violation of the laws of physics. In short, it is the stuff of cranks – yet it is nevertheless the ideology that informs policymakers and pundits alike.” (Failing States, Collapsing Systems, page 90)

Why does “the stuff of cranks” still have such hold on the public imagination? Here the work of historian Timothy Mitchell is a valuable complement to Ahmed’s analysis.

Mitchell’s 2011 book Carbon Democracy outlines the way “the economy” became generally understood as something that could be measured mostly, if not solely, by the quantities of money that exchanged hands. A hundred years ago, this was a new and controversial idea:

“In the early decades of the twentieth century, a battle developed among economists, especially in the United States …. One side wanted economics to start from natural resources and flows of energy, the other to organise the discipline around the study of prices and flows of money. The battle was won by the second group ….” (Carbon Democracy, page 131)

A very peculiar circumstance prevailed while this debate raged: energy from petroleum was cheap and getting cheaper. Many influential people, including geologist M. King Hubbert, argued that the oil bonanza would be short-lived in a historical sense, but their arguments didn’t sway corporate and political leaders looking at short-term results.

As a result a new economic orthodoxy took hold by the middle of the 20th century. Petroleum seemed so abundant, Mitchell says, that for most economists “oil could be counted on not to count. It could be consumed as if there were no need to take account of the fact that its supply was not replenishable.”

He elaborates:

“the availability of abundant, low-cost energy allowed economists to abandon earlier concerns with the exhaustion of natural resources and represent material life instead as a system of monetary circulation – a circulation that could expand indefinitely without any problem of physical limits. Economics became a science of money ….” (Carbon Democracy, page 234)

This idea of the infinitely expanding economy – what Ahmed terms “the stuff of cranks” – has been widely accepted for approximately one human life span. The necessity of constant economic growth has been an orthodoxy throughout the formative educations of today’s top political leaders, corporate leaders and media figures, and it continues to hold sway in the “science of economics”.

The transition away from fossil fuel dependence is inevitable, Ahmed says, but the degree of suffering involved will depend on how quickly and how clearly we get on with the task. One key task is “generating new more accurate networks of communication based on transdisciplinary knowledge which is, most importantly, translated into user-friendly multimedia information widely disseminated and accessible by the general public in every continent.” (Failing States, Collapsing Systems, page 92)

That task has been taken up by a small but steadily growing number of researchers, activists, journalists and hands-on practitioners of energy transition. As to our chances of success, Ahmed allows a hint of optimism, and that’s a good note on which to finish:

“The systemic target for such counter-information dissemination, moreover, is eminently achievable. Social science research has demonstrated that the tipping point for minority opinions to become mainstream, majority opinion is 10% of a given population.” (Failing States, Collapsing Systems, page 92)


Top image: M. C. Escher’s ‘Waterfall’ (1961) is a fanciful illustration of a finite source providing energy without end. Accessed from

Fake news, failed states

Also published at

Many of the violent conflicts raging today can only be understood if we look at the interplay between climate change, the shrinking of cheap energy supplies, and a dominant economic model that refuses to acknowledge physical limits.

That is the message of Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, a thought-provoking new book by Nafeez Mosaddeq Ahmed. Violent conflicts are likely to spread to all continents within the next 30 years, Ahmed says, unless a realistic understanding of economics takes hold at a grass-roots level and at a nation-state policy-making level.

The book is only 94 pages (plus an extensive and valuable bibliography), but the author packs in a coherent theoretical framework as well as lucid case studies of ten countries and regions.

As part of the Springer Briefs In Energy/Energy Analysis series edited by Charles Hall, it is no surprise that Failing States, Collapsing Systems builds on a solid grounding in biophysical economics. The first few chapters are fairly dense, as Ahmed explains his view of global political/economic structures as complex adaptive systems inescapably embedded in biophysical processes.

The adaptive functions of these systems, however, are failing due in part to what we might summarize with four-letter words: “fake news”.

“inaccurate, misleading or partial knowledge bears a particularly central role in cognitive failures pertaining to the most powerful prevailing human political, economic and cultural structures, which is inhibiting the adaptive structural transformation urgently required to avert collapse.” (Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, by Nafeez Mosaddeq Ahmed, Springer, 2017, page 13)

We’ll return to the failures of our public information systems. But first let’s have a quick look at some of the case studies, in which the explanatory value of Ahmed’s complex systems model really comes through.

In discussing the rise of ISIS in the context of the war in Syria and Iraq, Western media tend to focus almost exclusively on political and religious divisions, which are shoehorned into a “war on terror” framework. There is also occasional mention of the early effects of climate change. While not discounting any of these factors, Ahmed says it is also crucial to look at shrinking supplies of cheap energy.

“Prior to the onset of war, the Syrian state was experiencing declining oil revenues, driven by the peak of its conventional oil production in 1996. Even before the war, the country’s rate of oil production had plummeted by nearly half, from a peak of just under 610,000 barrels per day (bpd) to approximately 385,000 bpd in 2010.” (Failing States, Collapsing Systems, page 48)

Similarly, Yemen’s oil production peaked in 2001, and had dropped more than 75% by 2014.

While these governments tried to cope with climate change effects including water and food shortages, their oil-export-dependent budgets were shrinking. The result was the slashing of basic social service spending when local populations were most in need.

That’s bad enough, but the responses of local and international governments, guided by “inaccurate, misleading or partial knowledge”, make a bad situation worse:

“While the ‘war on terror’ geopolitical crisis-structure constitutes a conventional ‘security’ response to the militarized symptoms of HSD [Human Systems Destabilization] (comprising the increase in regional Islamist militancy), it is failing to slow or even meaningfully address deeper ESD [Environmental System Disruption] processes that have rendered traditional industrialized state power in these countries increasingly untenable. Instead, the three cases emphasized – Syria, Iraq, and Yemen – illustrate that the regional geopolitical instability induced via HSD has itself hindered efforts to respond to deeper ESD processes, generating instability and stagnation across water, energy and food production industries.” (Failing States, Collapsing Systems, page 59)

This pattern – militarized responses to crises that beget more crises – is not new:

“A 2013 RAND Corp analysis examined the frequency of US military interventions from 1940 to 2010 and came to the startling conclusion: not only that the overall frequency of US interventions has increased, but that intervention itself increased the probability of an ensuing cluster of interventions.” (Failing States, Collapsing Systems, page 43)

Ahmed’s discussions of Syria, Iraq, Yemen, Nigeria and Egypt are bolstered by the benefits of hindsight. His examination of Saudi Arabia looks a little way into the future, and what he foresees is sobering.

He discusses studies showing that Saudi Arabia’s oil production is likely to peak within as little as ten years. Yet the date of the peak is only one key factor, because the country’s steadily increasing internal demand for energy means there is steadily less oil left for export.

For Saudi Arabia the economic crunch may be severe and rapid: “with net oil revenues declining to zero – potentially within just 15 years – Saudi Arabia’s capacity to finance continued food imports will be in question.” For a population that relies on subsidized imports for 80% of its food, empty government coffers would mean a life-and-death crisis.

But a Saudi Arabia which uses up all its oil internally would have major implications for other countries as well, in particular China and India.

“like India, China faces the problem that as we near 2030, net exports from the Middle East will track toward zero at an accelerating rate. Precisely at the point when India and China’s economic growth is projected to require significantly higher imports of oil from the Middle East, due to their own rising domestic energy consumption requirement, these critical energy sources will become increasingly unavailable on global markets.” (Failing States, Collapsing Systems, page 74)

Petroleum production in Europe has also peaked, while in North America, conventional oil production peaked decades ago, and the recent fossil fuel boomlet has come from expensive, hard-to-extract shale gas, shale oil, and tar sands bitumen. For both Europe and North America, Ahmed forecasts, the time is fast approaching when affordable high-energy fuels are no longer available from Russia or the Middle East. Without successful adaptive responses, the result will be a cascade of collapsing systems:

“Well before 2050, this study suggests, systemic state-failure will have given way to the irreversible demise of neoliberal finance capitalism as we know it.” (Failing States, Collapsing Systems, page 88)

Are such outcomes inescapable? By no means, Ahmed says, but adequate adaptive responses to our developing predicaments are unlikely without a recognition that our economies remain inescapably embedded in biophysical processes. Unfortunately, there are powerful forces working to prevent the type of understanding which could guide us to solutions:

“vested interests in the global fossil fuel and agribusiness system are actively attempting to control information flows to continue to deny full understanding in order to perpetuate their own power and privilege.” (Failing States, Collapsing Systems, page 92)

In the next installment, Fake News as Official Policy, we’ll look at the deep roots of this misinformation and ask what it will take to stem the tide.

Top photo: Flying over the Trans-Arabian Pipeline, 1950. From

A container train on the Canadian National rail line.

Door to Door – A selective look at our “system of systems”

Also published at

Our transportation system is “magnificent, mysterious and maddening,” says the subtitle of Edward Humes’ new book. Open the cover and you’ll encounter more than a little “mayhem” too.

Is the North American economy a consumer economy or a transportation economy? The answer, of course, is “both”. Exponential growth in consumerism has gone hand in hand with exponential growth in transport, and Edward Humes’ new book provides an enlightening, entertaining, and often sobering look at several key aspects of our transportation systems.

Much of what we consume in North America is produced at least in part on other continents. Even as manufacturing jobs have been outsourced, transportation has been an area of continuing job growth – to the point where truck driving is the single most common job in a majority of US states.

Manufacturing jobs come and go, but the logistics field just keeps growing—32 percent growth even during the Great Recession, while all other fields grew by a collective average of 1 percent. Some say logistics is the new manufacturing. (Door to Door, Harper Collins 2016, Kindle Edition, locus 750)

With a focus on the operations of the Ports of Los Angeles and Long Beach, Humes shows how the standardized shipping container – the “can” in shipping industry parlance – has enabled the transfer of running shoes, iPhones and toasters from low-wage manufacturing complexes in China to consumers around the world. Since 1980, Humes writes, the global container fleet’s capacity has gone from 11 million tons to 169 million tons – a fifteen-fold increase.
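The fifteen-fold figure checks out against the two capacity numbers quoted above; a quick back-of-the-envelope calculation (figures from the review, not an independent source):

```python
# Back-of-the-envelope check of the container-fleet growth Humes cites:
# roughly 11 million tons of capacity in 1980 versus 169 million tons today.
capacity_1980 = 11e6   # tons
capacity_now = 169e6   # tons

growth = capacity_now / capacity_1980
print(f"Growth since 1980: {growth:.1f}-fold")  # about fifteen-fold
```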

While some links in the supply chain have been “rationalized” in ways that lower costs (and eliminate many jobs), other trends work in opposite directions. The growth of online shopping, for example, has resulted in mid-size delivery trucks driving into suburban cul-de-sacs to drop off single parcels.

The rise of online shopping is exacerbating the goods-movement overload, because shipping one product at a time to homes requires many more trips than delivering the same amount of goods en masse to stores. In yet another door-to-door paradox, the phenomenon of next-day and same-day delivery, while personally efficient and seductively convenient for consumers, is grossly inefficient for the transportation system at large. (Door to Door, locus 695)

Humes devotes almost no attention in this book to passenger rail, passenger airlines, or freight rail beyond the short-line rail that connects the port of Los Angeles to major trucking terminals. He does, however, provide a good snapshot of the trucking industry in general and UPS in particular.

Among the most difficult challenges faced by UPS administrators and drivers is the unpredictable snarl of traffic on roads and streets used by trucks and passenger cars alike. This traffic is not only maddening but terribly violent. “Motor killings”, to use the 1920s terminology, or “traffic accidents”, to use the contemporary euphemism, “are the leading cause of death for Americans between the ages of one and thirty-nine. They rank in the top five killers for Americans sixty-five and under ….” (locus 1514)

In the US there are 35,000 traffic fatalities a year, or one death every fifteen minutes. Humes notes that these deaths seldom feature on major newscasts – and in his own journalistic way he sets out to humanize the scale of the tragedy.
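The “one death every fifteen minutes” framing follows directly from the annual total; a quick check of the arithmetic:

```python
# The "one death every fifteen minutes" figure follows from the
# 35,000 annual US traffic fatalities cited above.
deaths_per_year = 35_000
minutes_per_year = 365 * 24 * 60   # 525,600 minutes in a year

minutes_per_death = minutes_per_year / deaths_per_year
print(f"One death every {minutes_per_death:.0f} minutes")  # about 15
```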

Delving into the records for one representative day during the writing of the book, Humes finds there were at least 62 fatal collisions in 27 states on Friday, February 13, 2015. He gives at least a brief description of dozens of these tragedies: who was driving, where, at what time, and who was killed or seriously injured.

Other than in collisions where alcohol is involved, Humes notes, there are seldom serious legal sanctions against drivers, even when they strike down and kill pedestrians who have the right of way. In this sense our legal system simply reflects the physical design of the motor vehicle-dominated transport system.

Drawing on the work of Strong Towns founder Charles Marohn, Humes explains that roads are typically designed for higher speeds than the posted speed limits. While theoretically this is supposed to provide a margin of safety for a driver who drifts out of line, in practice it encourages nearly all drivers to routinely exceed speed limits. The quite predictable result is that there are more collisions, and more serious injuries or death per collision, than there would be if speeding were not promoted-by-design.

In the design of cars, meanwhile, great attention has been devoted to saving drivers from the consequences of their own errors. Seat belts and air bags have saved the lives of many vehicle occupants. Yet during the same decades that such safety features have become standard, the auto industry has relentlessly promoted vehicles that are more dangerous simply because they are bigger and heavier.

A study by University of California economist Michelle J. White found that

for every crash death avoided inside an SUV or light truck, there were 4.3 additional collisions that took the lives of car occupants, pedestrians, bicyclists, or motorcyclists. The supposedly safer SUVs were, in fact, “extremely deadly,” White concluded. (Door to Door, locus 1878)

Another University of California study found that “for every additional 1,000 pounds in a vehicle’s weight, it raises the probability of a death in any other vehicle in a collision by 47 percent.” (locus 1887)

Is there a solution to the intertwined problems of gridlock, traffic deaths, respiratory-disease-causing emissions and greenhouse gas emissions? Humes takes an enthusiastic leap of faith here to sing the praises of the driverless – or self-driving, if you prefer – car.

“The car that travels on its own can remedy each and every major problem facing the transportation system of systems,” Humes boldly forecasts. Deadly collisions, carbon dioxide and particulate emissions, parking lots that take so much urban real estate, the perceived need to keep adding lanes of roadway at tremendous expense, and soul-killing commutes on congested roads – Humes says these will all be in the rear-view mirror once our auto fleets have been replaced by autonomous electric vehicles.

We’ll need to wait a generation for definitive judgment on his predictions, but Humes’ description of our present transportation system is eminently readable and thought-provoking.

Top photo: container train on Canadian National line east of Toronto.

Oil well in southeast Saskatchewan, with flared gas.

Energy at any cost?

Also published at

If all else is uncertain, how can growing demand for energy be guaranteed? A review of Vaclav Smil’s Natural Gas.

Near the end of his 2015 book Natural Gas: Fuel for the 21st Century, Vaclav Smil makes two statements which are curious in juxtaposition.

On page 211, he writes:

“I will adhere to my steadfast refusal to engage in any long-term forecasting, but I will restate some basic contours of coming development before I review a long array of uncertainties ….”

And in the next paragraph:

“Given the scale of existing energy demand and the inevitability of its further growth, it is quite impossible that during the twenty-first century, natural gas could come to occupy such a dominant position in the global primary energy supply as wood did in the preindustrial era or as coal did until the middle of the twentieth century.”

If you think that second statement sounds like a long-term forecast, that makes two of us. But apparently to Smil it is not a forecast to say that the growth of energy demand is inevitable, and it’s not a forecast to state with certainty that natural gas cannot become the dominant energy source during the twenty-first century – these are simply “basic contours of coming development.” Let’s investigate.

An oddly indiscriminate name

Natural Gas is a general survey of the sources and uses of what Smil calls the fuel with “an oddly indiscriminate name”. It begins much as it ends: with a strongly-stated forecast (or “basic contour”, if you prefer) about the scale of natural gas and other fossil fuel usage relative to other energy sources.

“why dwell on the resources of a fossil fuel and why extol its advantages at a time when renewable fuels and decentralized electricity generation converting solar radiation and wind are poised to take over the global energy supply. That may be a fashionable narrative – but it is wrong, and there will be no rapid takeover by the new renewables. We are a fossil-fueled civilization, and we will continue to be one for decades to come as the pace of grand energy transition to new forms of energy is inherently slow.” – Vaclav Smil, preface to Natural Gas

And in the next paragraph:

“Share of new renewables in the global commercial primary energy supply will keep on increasing, but a more consequential energy transition of the coming decades will be from coal and crude oil to natural gas.”

In support of his view that a transition away from fossil fuel reliance will take at least several decades, Smil looks at major energy source transitions over the past two hundred years. These transitions have indeed been multi-decadal or multi-generational processes.

“Obvious absence of any acceleration in successive transitions is significant: moving from coal to oil has been no faster than moving from traditional biofuels to coal – and substituting coal and oil by natural gas has been measurably slower than the two preceding shifts.” – Natural Gas, page 154

It would seem obvious that global trade and communications were far less developed 150 years ago, and that would be one major reason why the transition from traditional biofuels to coal proceeded slowly on a global scale. Smil cites another reason why successive transitions have been so slow:

“Scale of the requisite transitions is the main reason why natural gas shares of the TPES [Total Primary Energy System] have been slower to rise: replicating a relative rise needs much more energy in a growing system. … going from 5 to 25% of natural gas required nearly eight times more energy than accomplishing the identical coal-to-oil shift.” – Natural Gas, page 155

Open-pit coal mine in southeast Saskatchewan, June 2014.

Today only – you’ll love our low, low prices!

There is another obvious reason why transitions from coal to oil, and from oil to natural gas, could have been expected to move slowly throughout the last 100 years: there have been abundant supplies of easily accessible, and therefore cheap, coal and oil. When a new energy source was brought online, the result was a further increase in total energy consumption, instead of any rapid shift in the relative share of different sources.

The role of price in influencing demand is easy to ignore when the price is low. But that’s not a condition we can count on for the coming decades.

Returning to Smil’s “basic contour” that total energy demand will inevitably rise: this implies that energy prices will remain relatively low, because there is effective demand for a product only to the extent that people can afford to buy it.

Remarkably, however, even as he states confidently that demand must grow, Smil notes the major uncertainty about the investment needed simply to maintain existing levels of supply:

“if the first decade of the twenty-first century was a trendsetter, then all fossil energy sources will cost substantially more, both to develop new capacities and to maintain production of established projects at least at today’s levels. … The IEA estimates that between 2014 and 2035, the total investment in energy supply will have to reach just over $40 trillion if the world is to meet the expected demand, with some 60% destined to maintain existing output and 40% to supply the rising requirements. The likelihood of meeting this need will be determined by many other interrelated factors.” – Natural Gas, page 212

What is happening here? Both Smil and the IEA are cognizant of the uncertain effects of rising prices on supply, while graphing demand steadily upward as if price has no effect. This is not how economies function in the real world, of course.

Likewise, we cannot assume that because total energy demand kept rising throughout the twentieth century, it must continue to rise through the twenty-first century. On the contrary, if energy supplies are difficult to access and therefore much more costly, then we should also expect demand to grow much more slowly, to stop growing, or to fall.

Falling demand, in turn, would have a major impact on the possibility of a rapid change in the relative share of demand met by different sources. In very simple terms, if we increased total supply of renewable energy rapidly (as we are doing now), but the total energy demand were dropping rapidly, then the relative share of renewables in the energy market could increase even more rapidly.
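The point can be made concrete with hypothetical round numbers (these are illustrative only, drawn neither from Smil nor the IEA): suppose renewables supply 50 of 500 total energy units today, and renewable supply doubles over some period while total demand grows, stays flat, or falls.

```python
# Hypothetical round numbers (not from Smil or the IEA) illustrating how
# falling total demand amplifies growth in renewables' *relative* share.
renewables_now = 50   # energy units supplied by renewables today
total_now = 500       # total primary energy demand today
share_now = renewables_now / total_now        # 10%

renewables_later = renewables_now * 2         # renewable supply doubles

# Compare the resulting share under growing, flat, and falling total demand:
for total_later in (600, 500, 400):
    share_later = renewables_later / total_later
    print(f"total demand {total_later}: share {share_now:.0%} -> {share_later:.0%}")
```

With demand falling from 500 to 400 units, the same doubling of renewable supply lifts its share from 10% to 25%, versus under 17% if demand keeps growing.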

Smil’s failure to consider such a scenario (indeed, his peremptory dismissal of the possibility of such a scenario) is one of the major weaknesses of his approach. Acceptance of business-as-usual as a reliable baseline may strike some people as conservative. But there is nothing cautious about ignoring one of the fundamental factors of economics, and nothing safe in assuming that the historically rare condition of abundant cheap energy must somehow continue indefinitely.

In closing, just a few words about the implications of Smil’s work as it relates to the threat of climate change. In Natural Gas, he provides much valuable background on the relative amounts of carbon emissions produced by all of our major energy sources. He explains why natural gas is the best of the fossil fuels in terms of energy output relative to carbon emissions (while noting that leaks of natural gas – methane – could in fact outweigh the savings in carbon emissions). He explains that the carbon intensity of our economies has dropped as we have gradually moved from coal to oil to natural gas.

But he also makes it clear that this relative decarbonisation has been far too slow to stave off the threat of climate change.

If he turns out to be right that total energy demand will keep rising, that there will only be a slow transition from other fossil fuels to natural gas, and that the transition away from all fossil fuels will be slower still, then the chances of avoiding catastrophic climate change will be slim indeed.

Top photo: Oil well in southeast Saskatchewan, with flared gas. June 2014.

Wind turbine on site of Pickering Nuclear Generating Station.

How big is that hectare? It depends.

Also published at

The Pickering Nuclear Generating Station, on the east edge of Canada’s largest city, Toronto, is a good take-off point for a discussion of the strengths and limitations of Vaclav Smil’s power density framework.

The Pickering complex is one of the older nuclear power plants operating in North America. Brought online in 1971, the plant includes eight CANDU reactors (two of which are now permanently shut down). The complex also includes a single wind turbine, brought online in 2001.

The CANDU reactors are rated, at full power, at about 3100 Megawatts (MW). The wind turbine, which at 117 meters high was one of North America’s largest when it was installed, is rated at 1.8 MW at full power. (Because the nuclear reactors run at full power for many more hours in a year, the disparity in actual output is even greater than the above figures suggest.)
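To see why the output disparity exceeds the rated-power disparity, one can fold in capacity factors. The factors below (~90% for nuclear, ~25% for onshore wind) are assumed illustrative values, not plant data:

```python
# Annual-output sketch for Pickering's reactors vs its single wind
# turbine. Capacity factors (~90% nuclear, ~25% wind) are assumed
# illustrative values, not measured plant data.

HOURS_PER_YEAR = 8760

def annual_output_mwh(rated_mw, capacity_factor):
    """Energy produced over a year at a given average capacity factor."""
    return rated_mw * capacity_factor * HOURS_PER_YEAR

nuclear_mwh = annual_output_mwh(3100, 0.90)
wind_mwh = annual_output_mwh(1.8, 0.25)

print(round(3100 / 1.8))              # rated-power ratio: ~1722
print(round(nuclear_mwh / wind_mwh))  # actual-output ratio: 6200
```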

How do these figures translate to power density, or power per unit of land?

The Pickering nuclear station stands cheek-by-jowl with other industrial sites and with well-used Lake Ontario waterfront parks. With a small land footprint, its power density is likely towards the high end – 7,600 W/m2 – of the range of nuclear generating stations Smil considers in Power Density. Had it been built with a substantial buffer zone, as is the case with many newer nuclear power plants, the power density might only be half as high.

A nuclear power plant, of course, requires a complex fuel supply chain that starts at a uranium mine. To arrive at more realistic power density estimates, Smil considers a range of mining and processing scenarios. When a nuclear station’s output is prorated over all the land used – land for the plant site itself, plus land for mining, processing and spent fuel storage – Smil estimates a power density of about 500 W/m2 in what he considers the most representative, mid-range of several examples.

Cameco uranium processing plant in Port Hope, Ontario

The Cameco facility in Port Hope, Ontario processes uranium for nuclear reactors. With no significant buffer around the plant, its land area is small and its power density high. Smil calculates its conversion power density at approximately 100,000 W/m2, with the plant running at 50% capacity.

And wind turbines? Smil looks at average outputs from a variety of wind farm sites, and arrives at an estimated power density of about 1 W/m2.

So nuclear power has about 500 times the power density of wind turbines? If only it were that simple.

Inside and outside the boundary

In Power Density, Smil takes care to explain the “boundary problem”: defining what is being included or excluded in an analysis. With wind farms, for example, which land area is used in the calculation? Is it just the area of the turbine’s concrete base, or should it be all the land around and between turbines (in the common scenario of a large cluster of turbines spaced throughout a wind farm)?  There is no obviously correct answer to this question.

On the one hand, land between turbines can be and often is used as pasture or as crop land. On the other hand, access roads may break up the landscape and make some human uses impractical, as well as reducing the viability of the land for species that require larger uninterrupted spaces. Finally, there is considerable controversy about how close to wind turbines people can safely live, leading to buffer zones of varying sizes around turbine sites. Thus in this case the power output side of the quotient is relatively easy to determine, but the land area is not.

Wind turbines in southwestern Minnesota

Wind turbines line the horizon in Murray County, Minnesota, 2012.

Smil emphasizes the importance of clearly stating the boundary assumptions used in a particular analysis. For the average wind turbine power density of 1 W/m2, he is including the whole land area of a wind farm.

That approach is useful in giving us a sense of how much area would need to be occupied by wind farms to produce the equivalent power of a single nuclear power plant. The mid-range power station cited above (with overall power density of 500 W/m2) takes up about 1360 hectares in the uranium mining-processing-generating station chain. A wind farm of equivalent total power output would sprawl across 680,000 hectares of land, or 6,800 square kilometers, or a square with 82 km per side.
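The comparison in the paragraph above can be checked with a few lines of arithmetic, using the text’s figures of 500 W/m2 over 1,360 hectares for the nuclear fuel chain and 1 W/m2 for wind farms:

```python
# Reproducing the text's land-area comparison: a full fuel-chain power
# density of 500 W/m2 on 1,360 ha, versus wind farms at 1 W/m2.

HA_TO_M2 = 10_000  # square meters per hectare

def total_power_w(density_w_per_m2, hectares):
    """Total power output of a site at a given power density."""
    return density_w_per_m2 * hectares * HA_TO_M2

nuclear_watts = total_power_w(500, 1360)        # 6.8e9 W = 6.8 GW
wind_hectares = nuclear_watts / (1 * HA_TO_M2)  # area needed at 1 W/m2
wind_area_m2 = nuclear_watts / 1.0              # same area in m2

print(wind_hectares)        # 680000.0 hectares
print(wind_hectares / 100)  # 6800.0 km2 (100 ha per km2)
print(round(wind_area_m2 ** 0.5 / 1000))  # ~82 km per side of a square
```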

A wind power evangelist, on the other hand, could argue that the wind farms remain mostly devoted to agriculture, and with the concrete bases of the towers taking only 1% of the wind farm area, the power density should be calculated at 100 W/m2 instead of 1 W/m2.

Similar questions apply in many power density calculations. A hydro transmission corridor takes a broad stripe of countryside, but the area fenced off for the pylons is small. Most land in the corridor may continue to be used for grazing, though many other land uses will be off-limits. So you could use the area of the whole corridor in calculating power density – plus, perhaps, another buffer on each side if you believe that electromagnetic fields near power lines make those areas unsafe for living creatures. Or you could use just the area fenced off directly around the pylons. The respective power densities will vary by orders of magnitude.
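A toy calculation shows how strongly the chosen boundary drives the result. All figures here are invented for illustration (a hypothetical 2 GW line, corridor width, and pylon footprints):

```python
# Boundary-sensitivity toy example: one transmission line, two defensible
# land boundaries, wildly different "power densities". All figures are
# invented for illustration.

line_power_w = 2e9                 # hypothetical 2 GW line
corridor_area_m2 = 100_000 * 60    # 100 km corridor x 60 m right-of-way
pylon_area_m2 = 250 * 100          # 250 pylons x 100 m2 fenced apiece

density_corridor = line_power_w / corridor_area_m2
density_pylons = line_power_w / pylon_area_m2

print(round(density_corridor))  # 333 W/m2, counting the whole corridor
print(round(density_pylons))    # 80000 W/m2, counting only fenced land
```

The two defensible answers differ by more than two orders of magnitude, which is exactly the point.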

If the land area is not simple to quantify when things go right, it is even more difficult when things go wrong. A drilling pad for a fracked shale gas well may cover only a hectare or two, so during the brief decade or two of the well’s productive life, the power density is quite high. But if fracking water leaks into an aquifer, the gas well may have drastic impacts on a far greater area of land – and that impact may continue even when the fracking boom is history.

The boundary problem is most tangled when resource extraction and consumption effects have uncertain extents in both space and time. As mentioned in the previous installment in this series, sometimes non-renewable energy facilities can be reclaimed for a full range of other uses. But the best-case scenario doesn’t always apply.

In mountain-top removal coal mining, there is a wide area of ecological devastation during the mining. But once the energy extraction drops to zero and the mining corporation files for bankruptcy, how much time will pass before the flattened mountains and filled-in valleys become healthy ecosystems again?

Or take the Pickering Nuclear Generating Station. The plant is scheduled to shut down around 2020, but its operators, Ontario Power Generation, say they will need to allow the interior radioactivity to cool for 15 years before they can begin to dismantle the reactor. By their own estimates the power plant buildings won’t be available for other uses until around 2060. Those placing bets on whether this will all go according to schedule can check back in 45 years.

In the meantime the plant will occupy land but produce no power; should the years of non-production be included in calculating an average power density? If decommissioning fails to make the site safe for a century or more, the overall power density will be paltry indeed.
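One way to make the question concrete is to average the operating power density over every year the site is occupied, producing or not. The figures below are illustrative, using the 7,600 W/m2 site density cited earlier, roughly 50 producing years, and roughly 40 idle years:

```python
# Lifetime-averaged power density for a site that produces for ~50 years,
# then sits idle ~40 more years through cooldown and decommissioning.
# Densities and durations are illustrative, not plant data.

def lifetime_avg_density(operating_density, producing_years, idle_years):
    """Average W/m2 over every year the land is occupied by the plant."""
    return operating_density * producing_years / (producing_years + idle_years)

avg = lifetime_avg_density(7600, 50, 40)
print(round(avg))  # 4222 -- well below the operating-era 7600 W/m2
```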

In summary, Smil’s power density framework helps explain why it has taken high-power-density technologies to fuel our high-energy-consumption society, even for a single century. It helps explain why low-power-density technologies, such as solar and wind power, will not replace our current energy infrastructure or meet current demand for decades, if ever.

But the boundary problem is a window on the inherent limitations of the approach. For the past century our energy has appeared cheap and power densities have appeared high. Perhaps the low cost and the high power density are both due, in significant part, to important externalities that were not included in calculations.

Top photo: Pickering Nuclear Generating Station site, including wind turbine, on the shoreline of Lake Ontario near Toronto.

Insulators on high-voltage electricity transmission line.

Timetables of power

Also published at

For more than three decades, Vaclav Smil has been developing the concepts presented in his 2015 book Power Density: A Key to Understanding Energy Sources and Uses.

The concept is (perhaps deceptively) simple: power density, in Smil’s formulation, is “the quotient of power and land area”. To facilitate comparisons between widely disparate energy technologies, Smil states power density using common units: watts per square meter.

Smil makes clear his belief that it’s important that citizens be numerate as well as literate, and Power Density is heavily salted with numbers. But what is being counted?

Perhaps the greatest advantage of power density is its universal applicability: the rate can be used to evaluate and compare all energy fluxes in nature and in any society. – Vaclav Smil, Power Density, pg 21

A major theme in Smil’s writing is that current renewable energy resources and technologies cannot quickly replace the energy systems that fuel industrial society. He presents convincing evidence that for current world energy demand to be supplied by renewable energies alone, the land area of the energy system would need to increase drastically.

Study of Smil’s figures will be time well spent for students of many energy sources. Whether it’s concentrated solar reflectors, cellulosic ethanol, wood-fueled generators, fracked light oil, natural gas or wind farms, Smil takes a careful look at power densities, and then estimates how much land would be taken up if each of these respective energy sources were to supply a significant fraction of current energy demand.

This consideration of land use goes some way to addressing a vacuum in mainstream contemporary economics. In the opening pages of Power Density, Smil notes that economists used to talk about land, labour and capital as three key factors in production, but in the last century, land dropped out of the theory.

The measurement of power per unit of land is one way to account for use of land in an economic system. As we will discuss later, those units of land may prove difficult to adequately quantify. But first we’ll look at another simple but troublesome issue.

Does the clock tick in seconds or in centuries?

It may not be immediately obvious to English majors or philosophers (I plead guilty), but Smil’s statement of power density – watts per square meter – includes a unit of time. That’s because a watt is itself a rate, defined as a joule per second. So power density equals joules per second per square meter.

There’s nothing sacrosanct about the second as the unit of choice. Power densities could also be calculated if power were stated in joules per millisecond or per megasecond, and with only slightly more difficult mathematical gymnastics, per century or per millennium. That is of course stretching a point, but Smil’s discussion of power density would take on a different flavor if we thought in longer time frames.
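The point that a watt hides a time unit can be made explicit by restating the same flux per century instead of per second. This is pure unit conversion, not new data:

```python
# A watt is a rate: 1 W = 1 J/s. The same power density can be restated
# per century instead of per second; only the numbers change, not the flux.

SECONDS_PER_CENTURY = 3600 * 24 * 365.25 * 100

def joules_per_century_per_m2(watts_per_m2):
    """Restate a power density (W/m2) as joules per m2 per century."""
    return watts_per_m2 * SECONDS_PER_CENTURY

# Smil's 500 W/m2 coke figure, expressed per century:
print(f"{joules_per_century_per_m2(500):.3g}")  # 1.58e+12 J/m2/century
```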

Consider the example with which Smil opens the book. In the early stages of the industrial age, English iron smelting was accomplished with the heat from charcoal, which in turn was made from coppiced beech and oak trees. As pig iron production grew, large areas of land were required solely for charcoal production. This changed in the blink of an eye, in historical terms, with the development of coal mining and the process of coking, which converted coal to nearly 100% pure carbon with an energy content equivalent to that of good charcoal.

As a result, the charcoal from thousands of hectares of hardwood forest could be replaced by coal from a mine site of only a few hectares. Or in Smil’s favored terms,

The overall power density of mid-eighteenth-century English coke production was thus roughly 500 W/m2, approximately 7,000 times higher than the power density of charcoal production. (Power Density, pg 4)

Smil notes rightly that this shift had enormous consequences for the English countryside, English economy and English society. Yet my immediate reaction to this passage was to cry foul – there is a sleight of hand going on.

While the charcoal production figures are based on the amount of wood that a hectare might produce on average each year, in perpetuity, the coal from the mine will dwindle and then run out in a century or two. If we averaged the power densities of the woodlot and the mine over several centuries or millennia, the comparison would look much different.
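A rough sketch of that longer averaging, using the text’s 7,000:1 ratio (so charcoal at about 500/7000 ≈ 0.07 W/m2) and an assumed 150-year mine life over a 1,000-year horizon:

```python
# Long-horizon averaging sketch: charcoal at ~500/7000 W/m2 indefinitely,
# versus a coal mine at 500 W/m2 for an assumed 150-year life, both
# averaged over a 1,000-year horizon. Mine life and horizon are assumptions.

def horizon_avg(density_w_m2, productive_years, horizon_years=1000):
    """Power density averaged over a fixed long time horizon."""
    return density_w_m2 * min(productive_years, horizon_years) / horizon_years

charcoal_avg = horizon_avg(500 / 7000, 10_000)  # renewable: full horizon
coal_avg = horizon_avg(500, 150)                # exhausted after 150 years

print(round(charcoal_avg, 3))  # 0.071
print(round(coal_avg, 1))      # 75.0
# The mine still wins, but by roughly 1,000:1 rather than 7,000:1.
```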

And that’s a problem throughout Power Density. Smil often grapples with the best way to average power densities over time, but never establishes a rule that works well for all energy sources.

Generating station near Niagara Falls

The Toronto Power Generating Station was built in 1906, just upstream from Horseshoe Falls in Niagara Falls, Ontario. It was mothballed in 1974. Photographed in February, 2014.

In discussing photovoltaic generation, he notes that solar radiation varies greatly by hour and month. It would make no sense to calculate the power output of a solar panel solely by the results at noon in mid-summer, just as it would make no sense to run the calculation solely at twilight in mid-winter. It is reasonable to average the power density over a whole year’s time, and that’s what Smil does.

When considering the power density of ethanol from sugar cane, it would be crazy to run the calculation based solely on the month of harvest, so again, the figures Smil uses are annual average outputs. Likewise, wood grown for biomass fuel can be harvested approximately every 20 years, so Smil divides the energy output during a harvest year by 20 to arrive at the power density of this energy source.

Using the year as the averaging unit makes obvious sense for many renewable energy sources, but this method breaks down just as obviously when considering non-renewable sources.

How do you calculate the average annual power density for a coal mine which produces high amounts of power for a hundred years or so, and then produces no power for the rest of time? Or the power density of a fracked gas well whose output will continue only a few decades at most?

The obvious rejoinder to this line of questioning is that when the energy output of a coal mine, for example, ceases, the land use also ceases, and at that point the power density of the coal mine is neither high nor low nor zero; it simply cannot be part of a calculation. As we’ll discuss later in this series, however, there are many cases where reclamations are far from certain, and so a “claim” on the land goes on.

Smil is aware of the transitory nature of fossil fuel sources, of course, and he cites helpful and eye-opening figures for the declining power densities of major oil fields, gas fields and coal mines over the past century. Yet in Power Density, most of the figures presented for non-renewable energy facilities apply for that (relatively brief) period when these facilities are in full production, but they are routinely compared with power densities of renewable energy facilities which could continue indefinitely.

So is it really true that power density is a measure “which can be used to evaluate and compare all energy fluxes in nature and in any society”? Only with some critical qualifications.

In summary, we return to Smil’s oft-emphasized theme: current renewable resource technologies are no match for the energy demands of our present civilization. He argues convincingly that the power density of consumption on a busy expressway cannot be matched by the power density of ethanol production from corn: it would take a ridiculous and unsustainable area of corn fields to fuel all that high-energy transport. Widening the discussion, he establishes no less convincingly, to my mind, that solar power, wind power, and biofuels are not going to fuel our current high-energy way of life.

Yet if we extend our averaging units to just a century or two, we could calculate just as convincingly that the power densities of non-renewable fuel sources will also fail to support our high-energy society. And since we’re already a century into this game, we might be running out of time.

Top photo: insulators on high-voltage transmission line near Darlington Nuclear Generating Station, Bowmanville, Ontario.

Tractor-trailers hauling oil and water on North Dakota highway.

‘Are we there yet?’ The uncertain road to the twenty-first century.

Also published at

What made the twentieth century such a distinctive period in human history? Are we moving into the future at an ever-increasing speed? What measures provide the most meaningful comparisons of different energy technologies? Is it “conservative” to base forecasts on business-as-usual scenarios?

These questions provide handy lenses for looking at the work of prolific energy science writer Vaclav Smil.

Smil, a professor emeritus at the University of Manitoba, is not likely to publish any best-sellers, but his books are widely read by people looking for data-backed discussion of energy sources and their role in our civilization. While Smil’s seemingly effortless fluency in wide-ranging topics of energy science can be intimidating to non-scientists, many of his books require no more than a good high-school-level knowledge of physics, chemistry and mathematics.

This post is the first in a series on issues raised by Smil. How many posts? Let’s just say, to use a formulation familiar to anyone who reads Smil, that the number of posts in this series will be “in the range of an order of magnitude less” than the number of Smil’s books. (He’s at 37 books and counting.)

The myth of accelerating change

In early 2004, I wrote a newspaper column with the title “Got Any Change?” Some excerpts:

Think back 50 years. If you grew up in North America, people were already travelling in cars, which moved along at about 60 miles per hour. You lived in a house with heat and running water, and you could just flick a switch to turn on the lights. You turned on the TV or radio to get instant news. You could pick up the phone and actually talk to relatives on the other side of the country.

For ease of daily living and communication, things haven’t changed much in the last 50 years for most North Americans.

My grandparents, by contrast, who grew up “when motorcars were still exotic playthings”, really lived through rapid and fundamental changes:

The magic of telephone reached into rural areas, and soon my grandparents adjusted to the even more astonishing development of moving pictures, transmitted to television sets in the living room. The airplane was invented about the time my grandparents were born, but they lived long enough to fly on passenger jets, and they watched the live newscasts as astronauts landed on the moon. (“Got Any Change?”, in the Brighton Independent, January 7, 2004)

As it turns out Smil was working on a similar premise, and developing it with his customary authority and historical rigor. The result was his 2005 book Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact. This was the first Smil book I picked up, and naturally I read it while basking in the warm glow of confirmation bias.

In the course of 300 pages, Smil argues that many world-changing technologies swept the world in the twentieth century, but nearly all of them are directly traceable to scientific advances – both theoretical and applied – during the period 1867 to 1914. There is no other period in world history so far, he says, in which so many scientific discoveries made their way so rapidly into the fabric of everyday life.

Most of [these technical advances] are still with us not just as inconsequential survivors or marginal accoutrements from a bygone age but as the very foundations of modern civilization. Such a profound and abrupt discontinuity with such lasting consequences has no equivalent in history.

For anyone alive in North America today, it’s easy to take these advances for granted, because we have never known a world without them. That’s what makes Smil’s book so valuable. In detail and with clarity, he outlines the development of electrical generators, transformers, transmission systems, and motors; internal combustion engines; new industrial processes that turned steel, aluminum, concrete, and plastics from scarce or unknown products into mass-produced commodities; and the ability to harness the electromagnetic spectrum in ways that made telephone, radio and television commercially feasible within the first few decades of the twentieth century.

Ship docked at St. Mary's Cement plant at sunset.

The Peter R Cresswell docked at the St. Mary’s Cement plant on Lake Ontario near Bowmanville, Ontario. The plant converts quarried limestone to cement, in kilns fueled by coal and pet coke. Photo from July, 2015.

Energy matters

There is a good deal in Creating the Twentieth Century on increasingly efficient methods of energy conversion. For example, Smil writes that “Typical efficiency of new large stationary steam engines rose from 6–10% during the 1860s to 12–15% after 1900, a 50% efficiency gain, and when small machines were replaced by electric motors, the overall efficiency gain was typically more than fourfold.”

But I found it odd that Creating the Twentieth Century gives little ink to the sources of energy. Smil does note that

for the first time in human history the age was marked by the emergence of high-energy societies whose functioning, be it on mundane or sophisticated levels, became increasingly dependent on incessant supplies of fossil fuels and on rising need for electricity.

Yet there is no substantial examination in this book of the fossil fuel extraction and processing industries, which rapidly became (and remained) among the dominant industries of the twentieth century.

Clearly the new understandings of thermodynamics and electromagnetism, along with new processes for steel and concrete production, were key to the twentieth century as we knew it. But suppose those developments had occurred while only a few sizable reservoirs of oil had been discovered, so that petroleum had remained useful but expensive. Would the twentieth century still have happened?

Perhaps we shouldn’t blame Smil for avoiding a counterfactual question about epochal changes a century and more ago. After all, he has devoted a great deal of attention to a more pressing quandary: how might we create a future, with the scientific knowledge that’s accumulated in the past century and a half, while also faced with the need to move beyond fossil fuel dependence? Can we make such a transition, and how long might it take? We’ll move to those issues in the coming installments.

Top photo: Trucks hauling crude oil and frac water near Watford City, North Dakota, June 2014.

Freight expectations

Also published at

Alice J. Friedemann’s new book When Trucks Stop Running explains concisely how dependent American cities are on truck transport, and makes a convincing case that renewable energies cannot and will not power our transportation system in anything like its current configuration.

But will some trucks stop running, or all of them? Will the change happen suddenly over 10 years, or gradually over 40 years or more? Those are more difficult questions, and they highlight the limitations of guesstimating future supply trends while taking future demand as basically given.

When Trucks Stop Running, Springer, 2016


Alice J. Friedemann worked for more than 20 years in transportation logistics. She brings her skills in systems analysis to her book When Trucks Stop Running: Energy and the Future of Transportation (Springer Briefs in Energy, 2016).

In a quick historical overview, Friedemann explains that in 2012, a severely shrunken rail network still handled 45% of the ton-miles of US freight, while burning only 2% of transportation fuel. But the post-war highway-building boom had made it convenient for towns and suburbs to grow where there are neither rails nor ports, with the result that “four out of five communities depend entirely on trucks for all of their goods.”

After a brief summary of peak oil forecasts, Friedemann looks at the prospects for running trains and trucks on something other than diesel fuel, and the prospects are not encouraging. Electrification, whether using batteries or overhead wires, is ill-suited to the power requirements of trains and trucks with heavy loads over long distances. Friedemann also analyzes liquid fuel options including biofuels and coal-to-liquid conversions, but all of these options have poor Energy Return On Investment ratios.

While we search for ways to retool the economy and transportation systems, we would be wise to prioritize the use of precious fuels. Friedemann notes that while trains are much more energy-efficient than heavy-duty trucks, trucks in turn are far more efficient than cars and planes.

So “instead of electrifying rail, which uses only 2% of all U.S. transportation fuel, we should discourage light-duty cars and light trucks, which guzzle 63% of all transportation fuel and give the fuel saved to diesel-electric locomotives.” Prioritizing fuel use this way could buy us some much-needed time – time to change infrastructure that took decades or generations to build.

If it strains credulity to imagine US policy-makers facing these kinds of choices of their own free will, it is nevertheless true that the unsustainable will not be sustained. Hard choices will be made, whether we want to make them or not.

A question of timing

Friedemann’s book joins other recent titles which put the damper on rosy predictions of a smooth transition to renewable energy economies. She covers some of the same ground as David MacKay’s Sustainable Energy – Without The Hot Air or Vaclav Smil’s Power Density, but in more concise and readable fashion, focused specifically on the energy needs of transportation.

In all three of these books, there is an understandable tendency to answer the (relatively) simple question: can future supply keep up with demand, assuming that demand is in line with today’s trends?

But of course, supply will influence demand, and vice versa. The interplay will be complex, and may confound apparently straight-forward predictions.

It’s important to keep in mind that in economic terms, demand does not equal what we want or even what we need. We can, and probably will, jump up and down and stamp our feet and DEMAND that we have abundant cheap fuel, but that will mean nothing in the marketplace. The economic demand equals the amount of fuel that we are willing and able to buy at a given price. As the price changes, so will demand – which will in turn affect the supply, at least in the short term.

Consider the Gross and Net Hubbert Curves graph which Friedemann reproduces.

Gross and Net Hubbert Curve, from When Trucks Stop Running, page 124


While the basic trend lines make obvious sense, the steepness of the projected decline depends in part on a steady demand: the ultimately recoverable resource is finite, and if we continue to extract the oil as fast as possible (the trend through our lifetimes) then the post-peak decline will indeed be steep, perhaps cliff-like.

But can we and will we sustain demand if prices spike again? That seems unlikely, particularly given our experience over the past 15 years. And if effective demand drops dramatically due to much higher pricing, then the short-term supply-on-the-market should also drop, while long-term available supply-in-the-ground will be prolonged. The right side of that Hubbert curve might eventually end up at the same place, but at a slower pace.
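This point can be illustrated with a toy logistic (Hubbert-style) curve: holding the ultimately recoverable resource fixed while halving the extraction-rate parameter lowers the peak and fattens the tail. All parameters below are arbitrary:

```python
import math

# Toy Hubbert (logistic-derivative) curves: same ultimately recoverable
# resource (URR), two extraction-rate parameters. All values arbitrary.

def hubbert_production(t, urr, steepness, peak_year):
    """Annual production at year t on a logistic production curve."""
    e = math.exp(-steepness * (t - peak_year))
    return urr * steepness * e / (1 + e) ** 2

URR = 2000.0  # arbitrary units

fast = [hubbert_production(t, URR, 0.08, 50) for t in range(200)]
slow = [hubbert_production(t, URR, 0.04, 50) for t in range(200)]

print(round(max(fast), 1), round(max(slow), 1))  # 40.0 20.0: lower peak
print(fast[120] < slow[120])  # True: the slower path has a fatter tail
```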

The most wasteful uses of fuels might soon be out of our price range, so we simply won’t be able to waste fuel at the same breathtaking rate. The economy might shudder and shrink, but we might find ways to pay for the much smaller quantities of fuel required to transport essential goods.

In other words, there may soon be far fewer trucks on the road, but they might run long enough to give us time to develop infrastructure appropriate to a low-energy economy.

Top photo: fracking supply trucks crossing the Missouri River in the Fort Berthold Indian Reservation in North Dakota, June 2014.

Does your city have a future?

In the past, as in the future, local ecosystem resources were the key to the economies of cities. A review of America’s Most Sustainable Cities & Regions.

Also published at

America’s Most Sustainable Cities and Regions, by John W. Day and Charles Hall, published by Springer, 2016


Readers hoping to find their home town rated in America’s Most Sustainable Cities and Regions may be both disappointed and enlightened.

Disappointed, because the book doesn’t provide a systematic listing that covers all American cities – either the most sustainable or the least sustainable. Enlightened, because the authors do provide a systematic way of looking at sustainability, which can be applied to cities across the USA and around the world.

The authors are counted among the pioneers of ecological economics, and their new book is a lucid introduction to the fundamental concepts of this viewpoint.

While a textbook of ecological economics might lose some readers in abstraction, this book moves fluidly between abstract concepts and the easy-to-follow application of those principles to the past development, and possible futures, of twelve cities and ten regions.

In the process, Day and Hall show that cities which grew up before the heyday of the fossil fuel age were sited to benefit from strong ecosystem services:

until the beginning of the twentieth century, cities like New York, Albany, Chicago, and New Orleans grew up in resource-rich areas and along waterways that provided food and fiber and convenient trade routes. Second, the climate of all of these early cities was moist …. (America’s Most Sustainable Cities and Regions, page 16)

The combination of adequate rainfall, benign climate and fertile soils leads to high potential for agriculture, as shown in a map of Net Primary Productivity:

The growth rate of plants (expressed as NPP, or net primary productivity, in grams per square meter per year) across the central part of North America. The tan areas have very low productivity, and dark green areas are highly productive. From America’s Most Sustainable Cities and Regions, Springer, 2016, page 132

By contrast, some major cities in the arid West began with minimal ecosystem resources, and they had scarcely outgrown village status before requiring vast resource inputs that could realistically be supplied only by fossil fuels.

Las Vegas, for example, was located at a small oasis fed by artesian wells amidst vast deserts. And the Los Angeles River initially supplied enough water for a small town, but as groundwater was withdrawn the River ceased to flow year round, and dried up in the 1920s. Thus both Las Vegas and Los Angeles had to wait for the high-energy economy of the fossil fuel age, which built dams and pumped water from hundreds of miles away, before they could grow into large cities.

Not surprisingly, Las Vegas and Los Angeles, along with other sunbelt cities in arid regions, get dismal ratings for sustainability in a future when cheap fossil fuels run short, and climate change exacerbates droughts.

At the other end of the spectrum, Cedar Rapids, Iowa, is located in a still-fertile plain with adequate rainfall for farming. Freight transportation is close at hand via the Mississippi River system, and relatively clear skies and dependable breezes can provide solar and wind generation of electricity (though the authors make clear that these energy sources are unlikely to provide anything like the quantities of energy we now routinely use). Perhaps most critically, Cedar Rapids is a small city, whose population can conceivably be supported by nearby resources in a low-energy future.

New Orleans was founded in an area with some of the continent’s richest ecosystem resources. But residents may not have the option to rely on these resources in future:

The ecosystem that has supported the unique, vibrant culture of the city is rapidly eroding into the sea as the impacts of sea level rise, levees, and oil industry canals exacerbate the natural land loss rates of the subsiding deltaic wetland environment that surrounds the city. (America’s Most Sustainable Cities and Regions, page 64)

Unless there is a major and effective restoration program, New Orleans’ future prospects are not good. Undoing the damage wrought by fossil-fueled projects will be difficult in a lower-energy economy – and doubly difficult as climate change brings stronger hurricanes, higher sea levels and storm surges, and more extreme fluctuations in the flow of the Mississippi.

Other cities are currently in very hard times, but their regional ecosystem services remain relatively strong, leading to more hopeful future prospects:

There are active plans in both Flint and Detroit to develop urban agriculture on vacant land. “Urban farms” from a few acres to several hundred acres have sprung up in both cities with vegetables, fruit trees, chickens and eggs. … Thus, in the face of pervasive urban decay and collapse, these cities may be able to produce a significant amount of food. (America’s Most Sustainable Cities and Regions, page 49)

While the book is a strong addition to the literature on sustainability, I do have a few quibbles. First, a reader expecting discussion of the sustainability of average citizens’ lifestyles in various cities will be disappointed. It gradually becomes clear that current per capita ecological footprints are not the subject of this book, nor are Hall and Day ranking the degree to which the economies of various American cities are sustainable in their current configurations. Rather, they elucidate the degree to which these cities will be sustainable as they cope with 21st century megatrends. A clear statement early in the book, explaining what the authors mean and what they don’t mean by “America’s most sustainable cities”, would have been helpful.

Finally, the book’s predictive usefulness is weakened by the lack of any discussion of the effects of large-scale migrations or political factors on future sustainability.

The authors note that the resources in the area around Cedar Rapids could likely support the current population (though not their current lifestyles). On the other hand, the population of the megalopolis from Washington DC to Boston, including New York City, is far too great to be supported by local resources. In theory, then, the current Cedar Rapids could become sustainable, while the current New York City cannot.

Eventually that which cannot be sustained, will not be sustained. However, suppose a severe resource crunch hits rapidly. Assuming the millions of people in New York City don’t just ascend in The Rapture, many will move to someplace that can provide the necessities of life. A large outflow of people from cities like New York, and an inflow into the smaller, theoretically sustainable cities like Cedar Rapids, would quickly alter the sustainability calculus.

Likewise, if sustainability is threatened for large numbers of people on a short time-line, political leaders could force through desperately short-sighted measures to feed populations. Thus regions which currently have relatively strong ecosystems may not be able to maintain those environments, as more populous and more powerful regions exert their demands.

In summary, John W. Day and Charles Hall have provided a great overview of the factors that can make a city and a region sustainable, even in the face of restricted energy supplies and the challenges of climate change. If we move quickly enough in adopting an “economics as if reality matters”, then this book may also serve as a road map to a reasonably prosperous future.