Energy And Civilization: a review

Also published at Resilience.org.

If you were to find yourself huddled with a small group of people in a post-crash, post-internet world, hoping to recreate some of the comforts of civilization, you’d do well to have saved a printed copy of Vaclav Smil’s Energy and Civilization: A History.

Smil’s new 550-page magnum opus would help you understand why for most applications a draft horse is a more efficient engine than an ox – but only if you use an effective harness, a device the book illustrates well. He could help you decide whether building a canal or a hard-topped road would be a more productive use of your energies. When you were ready to build capstans or block-and-tackle mechanisms for accomplishing heavy tasks, his discussion and his illustrations would be invaluable.

But hold those thoughts of apocalypse for a moment. Smil’s book is not written as a doomer’s handbook, but as a thorough guide to the role of energy conversions in human history to date. Based on his 1994 book Energy in World History, the new book is about 60% longer and includes 40% more illustrations.

Though the initial chapters on prehistory are understandably brief, Smil lays the groundwork with his discussion of the dependence of all living organisms on acquiring enough energy in usable forms.

The earliest hominins had some distinct advantages and liabilities in this regard. Unlike other primates, humans evolved to walk on two feet all the time, not just occasionally. Ungainly though this “sequence of arrested falls” may be, “human walking costs about 75% less energy than both quadrupedal and bipedal walking in chimpanzees.” (Energy and Civilization, pg 22)

What to do with all that saved energy? Just think:

“The human brain claims 20–25% of resting metabolic energy, compared to 8–10% in other primates and just 3–5% in other mammals.” (Energy and Civilization, pg 23)

In his discussion of the earliest agricultures, a recurring theme is brought forward: energy availability is always a limiting factor, but other social factors also come into play throughout history. In one sense, Smil explains, the move from foraging to farming was a step backwards:

“Net energy returns of early farming were often inferior to those of earlier or concurrent foraging activities. Compared to foraging, early farming usually required higher human energy inputs – but it could support higher population densities and provide a more reliable food supply.” (Energy and Civilization, pg 42)

The higher population densities allowed a significant number of people to work at tasks not immediately connected to securing daily energy requirements. The result, over many millennia, was the development of new materials, tools and processes.

Smil gives succinct explanations of why smelting brass and bronze was less energy-intensive than producing pure copper. Likewise he shows why the iron age, with its much higher energy requirements, resulted in widespread deforestation, and why iron production remained severely limited until humans learned to exploit coal deposits in the most recent centuries.

Cooking snails in a pot over an open fire. In Energy and Civilization, Smil covers topics as diverse as the importance of learning to use fire to supply the energy-rich foods humans need; the gradual deployment of better sails which allowed mariners to sail closer to the wind; and the huge boost in information consumption that occurred a century ago due to a sudden drop in the energy cost of printing. (Image: Wellcome Images, via Wikimedia Commons.)

Energy explosion

The past two hundred years of fossil-fuel-powered civilization takes up the biggest chunk of the book. But the effective use of fossil fuels had to be preceded by many centuries of development in metallurgy, chemistry, understanding of electromagnetism, and a wide array of associated technologies.

While making clear how drastically human civilizations have changed in the last several generations, Smil also takes care to point out that even the most recent energy transitions didn’t take place all at once.

“While the railways were taking over long-distance shipments and travel, the horse-drawn transport of goods and people dominated in all rapidly growing cities of Europe and North America.” (Energy and Civilization, pg 185)

Likewise the switches from wood to coal or from coal to oil happened only with long overlaps:

“The two common impressions – that the twentieth century was dominated by oil, much as the nineteenth century was dominated by coal – are both wrong: wood was the most important fuel before 1900 and, taken as a whole, the twentieth century was still dominated by coal. My best calculations show coal about 15% ahead of crude oil …” (Energy and Civilization, pg 275)

Smil draws an important lesson for the future from his careful examination of the past:

“Every transition to a new form of energy supply has to be powered by the intensive deployment of existing energies and prime movers: the transition from wood to coal had to be energized by human muscles, coal combustion powered the development of oil, and … today’s solar photovoltaic cells and wind turbines are embodiments of fossil energies required to smelt the requisite metals, synthesize the needed plastics, and process other materials requiring high energy inputs.” (Energy and Civilization, pg 230)

A missing chapter

Energy and Civilization is a very ambitious book, covering a wide spread of history and science with clarity. But a significant omission is any discussion of the role of slavery or colonialism in the rise of western Europe.

Smil does note the extensive exploitation of slave energy in ancient construction works, and in rowing the warships of the democratic cities of ancient Greece. He carefully calculates the power output needed for these projects, whether supplied by slaves, peasants, or animals.

In his look at recent European economies, Smil also notes the extensive use of physical and child labour that occurred simultaneously with the growth of fossil-fueled industry. For example, he describes the brutal work conditions endured by women and girls who carried coal up long ladders from Scottish coal mines, in the period before effective machinery was developed for this purpose.

But what of the 20 million or more slaves taken from Africa to work in the European colonies of the “New World”? Did the collected energies of all these unwilling participants play no notable role in the progress of European economies?

Likewise, vast quantities of resources in the Americas, including oil-rich marine mammals and old-growth forests, were exploited by the colonies for the benefit of European nations which had run short of these important energy commodities. Did this sudden influx of energy wealth play a role in European supremacy over the past few centuries? Attention to such questions would have made Energy and Civilization a more complete look at our history.

An uncertain future

Smil closes the book with a well-composed rumination on our current predicaments and the energy constraints on our future.

While the timing of transition is uncertain, Smil leaves little doubt that a shift away from fossil fuels is necessary, inevitable, and very difficult. Necessary, because fossil fuel consumption is rapidly destabilizing our climate. Inevitable, because fossil fuel reserves are being depleted and will not regenerate in any relevant timeframe. Difficult, both because our industrial economies are based on a steady growth in consumption, and because much of the global population still doesn’t have access to a sufficient quantity of energy to provide even the basic necessities for a healthy life.

The change, then, should be led by those who are now consuming quantities of energy far beyond the level where this consumption furthers human development.

Average per capita energy consumption and the human development index in 2010. Smil, Energy and Civilization, pg 363


Smil notes that energy consumption rises in correlation with the Human Development Index – but only up to a point. Increases in energy use beyond roughly the level of present-day Turkey or Italy provide no significant boost in human development. Some of the ways we consume a lot of energy, he argues, are pointless, wasteful and ineffective.

In affluent countries, he concludes,

“Growing energy use cannot be equated with effective adaptations and we should be able to stop and even to reverse that trend …. Indeed, high energy use by itself does not guarantee anything except greater environmental burdens.

Opportunities for a grand transition to less energy-intensive society can be found primarily among the world’s preeminent abusers of energy and materials in Western Europe, North America, and Japan. Many of these savings could be surprisingly easy to realize.” (Energy and Civilization, pg 439)

Smil’s book would indeed be a helpful post-crash guide – but it would be much better if we heed the lessons, and save the valuable aspects of civilization, before apocalypse overtakes us.


Top photo: Common factory produced brass olive oil lamp from Italy, c. late 19th century, adapted from photo on Wikimedia Commons.

The Carbon Code – imperfect answers to impossible questions

Also published at Resilience.org.

“How can we reconcile our desire to save the planet from the worst effects of climate change with our dependence on the systems that cause it? How can we demand that industry and governments reduce their pollution, when ultimately we are the ones buying the polluting products and contributing to the emissions that harm our shared biosphere?”

These thorny questions are at the heart of Brett Favaro’s new book The Carbon Code (Johns Hopkins University Press, 2017). While he readily concedes there can be no perfect answers, his book provides a helpful framework for working towards the immediate, ongoing carbon emission reductions that most of us already know are necessary.

Favaro’s proposals may sound modest, but his carbon code could play an important role if it is widely adopted by individuals, by civil organizations – churches, labour unions, universities – and by governments.

As a marine biologist at Newfoundland’s Memorial University, Favaro is keenly aware of the urgency of the problem. “Conservation is a frankly devastating field to be in,” he writes. “Much of what we do deals in quantifying how many species are declining or going extinct ….”

He recognizes that it is too late to prevent climate catastrophe, but that doesn’t lessen the impetus to action:

“There’s no getting around the prospect of droughts and resource wars, and the creation of climate refugees is certain. But there’s a big difference between a world afflicted by 2-degree warming and one warmed by 3, 4, or even more degrees.”

In other words, we can act now to prevent climate chaos going from worse to worst.

The code of conduct that Favaro presents is designed to help us be conscious of the carbon impacts of our own lives, and work steadily toward the goal of a nearly-complete cessation of carbon emissions.

The carbon code of conduct consists of four “R” principles that must be applied to one’s carbon usage:

“1. Reduce your use of carbon as much as possible.

2. Replace carbon-intensive activities with those that use less carbon to achieve the same outcome.

3. Refine the activity to get the most benefit for each unit of carbon emitted.

4. Finally, Rehabilitate the atmosphere by offsetting carbon usage.”

There’s a good bit of wiggle room in each of those four ’R’s, and Favaro presents that flexibility not as a bug but as a feature. “Codes of conduct are not the same thing as laws – laws are dichotomous, and you are either following them or you’re not,” he says. “Codes of conduct are interpretable and general and are designed to shape expectations.”

Street level

The bulk of the book is given to discussion of how we can apply the carbon code to home energy use, day-to-day transportation, a lower-carbon diet, and long distance travel.

There is a heavy emphasis on a transition to electric cars – an emphasis that I’d say is one of the book’s weaker points. For one thing, Favaro overstates the energy efficiency of electric vehicles.

“EVs are far more efficient. Whereas only around 20% of the potential energy stored in a liter of gasoline actually goes to making an ICE [Internal Combustion Engine] car move, EVs convert about 60% of their stored energy into motion ….”

In a narrow sense this is true, but it ignores the conversion losses in common methods of producing the electricity that charges the batteries. A typical fossil-fueled generating plant operates at around 35% energy efficiency. So the actual efficiency of an electric vehicle is likely to be closer to 35% × 60%, or 21% – in other words, not significantly better than the internal combustion engine.
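That arithmetic is simple enough to sketch out. The figures below are the approximate percentages quoted above; real-world results vary with the grid mix, transmission and charging losses, and the particular drivetrain:

```python
# Rough well-to-wheel comparison of ICE vs. EV efficiency,
# using the approximate figures discussed above.

ICE_TANK_TO_WHEEL = 0.20        # gasoline energy converted to motion
EV_BATTERY_TO_WHEEL = 0.60      # stored battery energy converted to motion
FOSSIL_PLANT_EFFICIENCY = 0.35  # typical fossil-fueled generating plant

def ev_overall_efficiency(plant_efficiency, battery_to_wheel=EV_BATTERY_TO_WHEEL):
    """Overall efficiency when the charging electricity comes
    from a thermal plant of the given efficiency."""
    return plant_efficiency * battery_to_wheel

ev = ev_overall_efficiency(FOSSIL_PLANT_EFFICIENCY)
print(f"ICE: {ICE_TANK_TO_WHEEL:.0%}  EV on fossil grid: {ev:.0%}")  # 20% vs 21%
```

On a grid dominated by hydro, nuclear or renewables the thermal-plant factor largely drops out, which is why the comparison depends so heavily on where the charging electricity comes from.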

By the same token, if a large proportion of new renewable energy capacity over the next 15 years must be devoted to charging electric cars, it will be extremely challenging to simultaneously switch home heating, lighting and cooling processes away from fossil fuel reliance.

Yet if the principles of Favaro’s carbon code were followed, we would not only stop building internal combustion cars, we would also make the new electric cars smaller and lighter, provide strong incentives to reduce the number of miles they travel (especially miles with only one passenger), and rapidly improve bicycling networks and public transit facilities to get people out of cars for most of their ordinary transportation. To his credit, Favaro recognizes the importance of all these steps.

Flight paths

As a researcher invited to many international conferences, and a person who lives in Newfoundland but whose family is based in far-away British Columbia, Favaro has given a lot of thought to the conundrum of air travel. He notes that most of the readers of his book will be members of a particular global elite: the small percentage of the world’s population who board a plane more than a few times in their lives.

We members of that elite group have a disproportionate carbon footprint, and therefore we bear particular responsibility for carbon emission reductions.

“The Air Transport Action Group, a UK-based industry association, estimated that the airline industry accounts for about 2% of global CO2 emissions. That may sound small, but given the tiny percentage of the world population that flies regularly, it represents a massive outlier in terms of carbon-intensive behaviors. In the United States, air travel is responsible for about 8% of the country’s emissions ….”

Favaro is keenly aware that if the Carbon Code were read as “never get on an airplane again for the rest of your life”, hardly anyone would adopt the code (and those few who did would be ostracized from professional activities and in many cases cut off from family). Yet the four principles of the Carbon Code can be very helpful in deciding when, where and how often to use the most carbon-intensive means of transportation.

“Remember that ultimately all of humanity needs to mostly stop using fossil fuels to achieve climate stability. Therefore, just like with your personal travel, your default assumption should be that no flights are necessary, and then from there you make the case for each flight you take.”

The Carbon Code is a wise, carefully optimistic book. Let’s hope it is widely read and that individuals and organizations take the Carbon Code to heart.


Top photo: temporary parking garage in vacant lot in Manhattan, July 2013.

Being right, and being persuasive: a primer on ‘talking climate’

Also published at Resilience.org.

Given that most people in industrialized countries accept that climate change is a scientific reality, why do so few rank climate change as one of their high priorities? Why do so few people discuss climate change with their families, friends, and neighbours? Are clear explanations of the ‘big numbers’ of climate change a good foundation for public engagement?

These are among the key questions in a thought-provoking new book by Adam Corner and Jamie Clarke – Talking Climate: From Research to Practice in Public Engagement.

In a brief review of climate change as a public policy issue, Corner and Clarke make the point that climate change action was initially shaped by international responses to ozone layer depletion and acid rain. In those cases technocrats in research, government and industry were able to frame the problem and implement solutions with little need for deep public engagement.

The same model might once have worked for climate change response. But today, we are faced with a situation where climate change will be an ongoing crisis for at least several generations. Corner and Clarke argue that responding to climate change will require public engagement that is both deep and broad.

That kind of engagement can only be built through wide-ranging public conversations which tap into people’s deepest values – and climate change communicators must learn from social science research on what works, and what doesn’t work, in growing a public consensus.

Talking Climate is at its best in explaining the limitations of dominant climate change communication threads. But the book is disappointingly weak in describing the ‘public conversations’ that the authors say are so important.

Narratives and numbers

“Stories – rather than scientific facts – are the vehicles with which to build public engagement”, Corner and Clarke say. But climate policy is most often framed by scientifically valid and scientifically important numbers which remain abstract to most people. In particular, the concept of a 2°C limit to overall global warming has received oceans of ink, and this concept was the key component of the 2015 Paris Agreement.

Unfortunately, the 2° warming threshold does not help move climate change from a ‘scientific reality’ to a ‘social reality’:

“In research conducted just before the Paris negotiations with members of the UK public, we found that people were baffled by the 2 degrees concept and puzzled that the challenge of climate change would be expressed in such a way. … People understandably gauge temperature changes according to their everyday experiences, and a daily temperature fluctuation of 2 degrees is inconsequential, pleasant even – so why should they worry?”

“Being right is not the same as being persuasive,” Corner and Clarke add, “and the ‘big numbers’ of the climate change and energy debate do not speak to the lived experience of ordinary people going about their daily lives ….”

While they cite interesting research on what doesn’t work in building public engagement, the book is frustratingly skimpy on what does work.

In particular, there are no good examples of the narratives or stories that the authors hold out as the primary way most people make sense of the world.

“Narratives have a setting, a plot (beginning, middle, and end), characters (heroes, villains, and victims), and a moral of the story,” Corner and Clarke write. How literally should we read that statement? What are some examples of stories that have emerged to help people understand climate change and link their responses to their deepest values? Unfortunately we’re left guessing.

Likewise, the authors write that they have been involved with several public consultation projects that helped build public engagement around climate change. How did these projects select or attract participants, given that only a small percentage of the population regards climate change as an issue of deep personal importance?

Talking Climate packs a lot of important research and valuable perspectives into a mere 125 pages, plus notes. Another 25 pages outlining successful communication efforts might have made it an even better book.

Photos: rainbow over South Dakota grasslands, and sagebrush in Badlands National Park, June 2014.

Fake news as official policy

Also published at Resilience.org.

Faced with simultaneous disruptions of climate and energy supply, industrial civilization is also hampered by an inadequate understanding of our predicament. That is the central message of Nafeez Mosaddeq Ahmed’s new book Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence.

In the first part of this review, we looked at the climate and energy disruptions that have already begun in the Middle East, as well as the disruptions which we can expect in the next 20 years under a “business as usual” scenario. In this installment we’ll take a closer look at “the perpetual transmission of false and inaccurate knowledge on the origins and dynamics of global crises”.

While a clear understanding of the real roots of economies is a precondition for a coherent response to global crises, Ahmed says this understanding is woefully lacking in mainstream media and mainstream politics.

“The Global Media-Industrial Complex, representing the fragmented self-consciousness of human civilization, has served simply to allow the most powerful vested interests within the prevailing order to perpetuate themselves and their interests ….” (Failing States, Collapsing Systems, page 48)

Other than alluding to powerful self-serving interests in fossil fuels and agribusiness industries, Ahmed doesn’t go into the “how’s” and “why’s” of their influence in media and government.

In the case of misinformation about the connection between fossil fuels and climate change, much of the story is widely known. Many writers have documented the history of financial contributions from fossil fuel interests to groups which contradict the consensus of climate scientists. To take just one example, Inside Climate News revealed that Exxon’s own scientists were keenly aware of the dangers of climate change decades ago, but the corporation’s response was a long campaign of disinformation.

Yet for all its nefarious intent, the fossil fuel industry’s effort has met with mixed success. Nearly every country in the world has, at least officially, agreed that carbon-emissions-caused climate change is an urgent problem. Hundreds of governments, on national, provincial or municipal levels, have made serious efforts to reduce their reliance on fossil fuels. And among climate scientists the consensus has only grown stronger that continued reliance on fossil fuels will result in catastrophic climate effects.

When it comes to continuous economic growth unconstrained by energy limitations, the situation is quite different. Following the consensus opinion in the “science of economics”, nearly all governments are still in thrall to the idea that the economy can and must grow every year, forever, as a precondition to prosperity.

In fact, the belief in the ever-growing economy has short-circuited otherwise well-intentioned efforts to reduce carbon emissions. Western politicians routinely play off “environment” and “economy” as forces that must be balanced, meaning they must take care not to cut carbon emissions too fast, lest economic growth be hindered. To take one example, Canada’s Prime Minister Justin Trudeau claims that expanded production of tar sands bitumen will provide the economic growth necessary to finance the country’s official commitments under the Paris Agreement.

As Ahmed notes, “the doctrine of unlimited economic growth is nothing less than a fundamental violation of the laws of physics. In short, it is the stuff of cranks – yet it is nevertheless the ideology that informs policymakers and pundits alike.” (Failing States, Collapsing Systems, page 90)

Why does “the stuff of cranks” still have such hold on the public imagination? Here the work of historian Timothy Mitchell is a valuable complement to Ahmed’s analysis.

Mitchell’s 2011 book Carbon Democracy outlines the way “the economy” became generally understood as something that could be measured mostly, if not solely, by the quantities of money that exchanged hands. A hundred years ago, this was a new and controversial idea:

“In the early decades of the twentieth century, a battle developed among economists, especially in the United States …. One side wanted economics to start from natural resources and flows of energy, the other to organise the discipline around the study of prices and flows of money. The battle was won by the second group ….” (Carbon Democracy, page 131)

A very peculiar circumstance prevailed while this debate raged: energy from petroleum was cheap and getting cheaper. Many influential people, including geologist M. King Hubbert, argued that the oil bonanza would be short-lived in a historical sense, but their arguments didn’t sway corporate and political leaders looking at short-term results.

As a result a new economic orthodoxy took hold by the middle of the 20th century. Petroleum seemed so abundant, Mitchell says, that for most economists “oil could be counted on not to count. It could be consumed as if there were no need to take account of the fact that its supply was not replenishable.”

He elaborates:

“the availability of abundant, low-cost energy allowed economists to abandon earlier concerns with the exhaustion of natural resources and represent material life instead as a system of monetary circulation – a circulation that could expand indefinitely without any problem of physical limits. Economics became a science of money ….” (Carbon Democracy, page 234)

This idea of the infinitely expanding economy – what Ahmed terms “the stuff of cranks” – has been widely accepted for approximately one human life span. The necessity of constant economic growth has been an orthodoxy throughout the formative educations of today’s top political leaders, corporate leaders and media figures, and it continues to hold sway in the “science of economics”.

The transition away from fossil fuel dependence is inevitable, Ahmed says, but the degree of suffering involved will depend on how quickly and how clearly we get on with the task. One key task is “generating new more accurate networks of communication based on transdisciplinary knowledge which is, most importantly, translated into user-friendly multimedia information widely disseminated and accessible by the general public in every continent.” (Failing States, Collapsing Systems, page 92)

That task has been taken up by a small but steadily growing number of researchers, activists, journalists and hands-on practitioners of energy transition. As to our chances of success, Ahmed allows a hint of optimism, and that’s a good note on which to finish:

“The systemic target for such counter-information dissemination, moreover, is eminently achievable. Social science research has demonstrated that the tipping point for minority opinions to become mainstream, majority opinion is 10% of a given population.” (Failing States, Collapsing Systems, page 92)


Top image: M. C. Escher’s ‘Waterfall’ (1961) is a fanciful illustration of a finite source providing energy without end. Accessed from Wikipedia.org.

Fake news, failed states

Also published at Resilience.org.

Many of the violent conflicts raging today can only be understood if we look at the interplay between climate change, the shrinking of cheap energy supplies, and a dominant economic model that refuses to acknowledge physical limits.

That is the message of Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, a thought-provoking new book by Nafeez Mosaddeq Ahmed. Violent conflicts are likely to spread to all continents within the next 30 years, Ahmed says, unless a realistic understanding of economics takes hold at a grass-roots level and at a nation-state policy-making level.

The book is only 94 pages (plus an extensive and valuable bibliography), but the author packs in a coherent theoretical framework as well as lucid case studies of ten countries and regions.

As part of the SpringerBriefs in Energy / Energy Analysis series edited by Charles Hall, it is no surprise that Failing States, Collapsing Systems builds on a solid grounding in biophysical economics. The first few chapters are fairly dense, as Ahmed explains his view of global political/economic structures as complex adaptive systems inescapably embedded in biophysical processes.

The adaptive functions of these systems, however, are failing due in part to what we might summarize with four-letter words: “fake news”.

“inaccurate, misleading or partial knowledge bears a particularly central role in cognitive failures pertaining to the most powerful prevailing human political, economic and cultural structures, which is inhibiting the adaptive structural transformation urgently required to avert collapse.” (Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, by Nafeez Mosaddeq Ahmed, Springer, 2017, page 13)

We’ll return to the failures of our public information systems. But first let’s have a quick look at some of the case studies, in which the explanatory value of Ahmed’s complex systems model really comes through.

In discussing the rise of ISIS in the context of the war in Syria and Iraq, Western media tend to focus almost exclusively on political and religious divisions, which are shoehorned into a “war on terror” framework. There is also occasional mention of the early effects of climate change. While not discounting any of these factors, Ahmed says it is also crucial to look at shrinking supplies of cheap energy.

“Prior to the onset of war, the Syrian state was experiencing declining oil revenues, driven by the peak of its conventional oil production in 1996. Even before the war, the country’s rate of oil production had plummeted by nearly half, from a peak of just under 610,000 barrels per day (bpd) to approximately 385,000 bpd in 2010.” (Failing States, Collapsing Systems, page 48)

Similarly, Yemen’s oil production peaked in 2001, and had dropped more than 75% by 2014.

While these governments tried to cope with climate change effects including water and food shortages, their oil-export-dependent budgets were shrinking. The result was the slashing of basic social service spending when local populations were most in need.

That’s bad enough, but the responses of local and international governments, guided by “inaccurate, misleading or partial knowledge”, make a bad situation worse:

“While the ‘war on terror’ geopolitical crisis-structure constitutes a conventional ‘security’ response to the militarized symptoms of HSD [Human Systems Destabilization] (comprising the increase in regional Islamist militancy), it is failing to slow or even meaningfully address deeper ESD [Environmental System Disruption] processes that have rendered traditional industrialized state power in these countries increasingly untenable. Instead, the three cases emphasized – Syria, Iraq, and Yemen – illustrate that the regional geopolitical instability induced via HSD has itself hindered efforts to respond to deeper ESD processes, generating instability and stagnation across water, energy and food production industries.” (Failing States, Collapsing Systems, page 59)

This pattern – militarized responses to crises that beget more crises – is not new:

“A 2013 RAND Corp analysis examined the frequency of US military interventions from 1940 to 2010 and came to the startling conclusion: not only that the overall frequency of US interventions has increased, but that intervention itself increased the probability of an ensuing cluster of interventions.” (Failing States, Collapsing Systems, page 43)

Ahmed’s discussions of Syria, Iraq, Yemen, Nigeria and Egypt are bolstered by the benefits of hindsight. His examination of Saudi Arabia looks a little way into the future, and what he foresees is sobering.

He discusses studies showing that Saudi Arabia’s oil production is likely to peak within as little as ten years. Yet the date of the peak is only one key factor, because the country’s steadily increasing internal demand for energy means there is steadily less oil left for export.
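The squeeze described here follows the arithmetic of net exports: what a country can sell abroad is production minus domestic consumption, so exports fall toward zero far faster than production does once domestic demand keeps growing. A rough sketch of the dynamic (every figure below is illustrative only, not taken from Ahmed):

```python
# Net exports = production - domestic consumption. When production
# declines while domestic demand grows, exports reach zero long
# before production does. All numbers here are illustrative only,
# not Ahmed's figures.
production = 10.0      # million barrels/day at the peak
consumption = 3.0      # domestic use at the peak
prod_decline = 0.04    # assumed 4% annual production decline
cons_growth = 0.05     # assumed 5% annual demand growth

years_to_zero = 0
exports = production - consumption
while exports > 0:
    production *= 1 - prod_decline
    consumption *= 1 + cons_growth
    exports = production - consumption
    years_to_zero += 1
# with these assumptions, exports hit zero in about 14 years,
# while production has fallen by less than half
```

With these made-up but plausible rates, the exporter leaves the world market while still pumping more than half its peak output – which is why the peak date alone understates the problem.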

For Saudi Arabia the economic crunch may be severe and rapid: “with net oil revenues declining to zero – potentially within just 15 years – Saudi Arabia’s capacity to finance continued food imports will be in question.” For a population that relies on subsidized imports for 80% of its food, empty government coffers would mean a life-and-death crisis.

But a Saudi Arabia that uses up all its oil internally would have major implications for other countries as well, in particular China and India.

“like India, China faces the problem that as we near 2030, net exports from the Middle East will track toward zero at an accelerating rate. Precisely at the point when India and China’s economic growth is projected to require significantly higher imports of oil from the Middle East, due to their own rising domestic energy consumption requirement, these critical energy sources will become increasingly unavailable on global markets.” (Failing States, Collapsing Systems, page 74)

Petroleum production in Europe has also peaked, while in North America, conventional oil production peaked decades ago, and the recent fossil fuel boomlet has come from expensive, hard-to-extract shale gas, shale oil, and tar sands bitumen. For both Europe and North America, Ahmed forecasts, the time is fast approaching when affordable high-energy fuels are no longer available from Russia or the Middle East. Without successful adaptive responses, the result will be a cascade of collapsing systems:

“Well before 2050, this study suggests, systemic state-failure will have given way to the irreversible demise of neoliberal finance capitalism as we know it.” (Failing States, Collapsing Systems, page 88)

Are such outcomes inescapable? By no means, Ahmed says, but adequate adaptive responses to our developing predicaments are unlikely without a recognition that our economies remain inescapably embedded in biophysical processes. Unfortunately, there are powerful forces working to prevent the type of understanding which could guide us to solutions:

“vested interests in the global fossil fuel and agribusiness system are actively attempting to control information flows to continue to deny full understanding in order to perpetuate their own power and privilege.” (Failing States, Collapsing Systems, page 92)

In the next installment, Fake News as Official Policy, we’ll look at the deep roots of this misinformation and ask what it will take to stem the tide.

Top photo: Flying over the Trans-Arabian Pipeline, 1950. From Wikimedia.org.

A container train on the Canadian National rail line.

Door to Door – A selective look at our “system of systems”

Also published at Resilience.org.

Our transportation system is “magnificent, mysterious and maddening,” says the subtitle of Edward Humes’ new book. Open the cover and you’ll encounter more than a little “mayhem” too.

Is the North American economy a consumer economy or a transportation economy? The answer, of course, is “both”. Exponential growth in consumerism has gone hand in hand with exponential growth in transport, and Edward Humes’ new book provides an enlightening, entertaining, and often sobering look at several key aspects of our transportation systems.

Much of what we consume in North America is produced at least in part on other continents. Even as manufacturing jobs have been outsourced, transportation has been an area of continuing job growth – to the point where truck driving is the single most common job in a majority of US states.

Manufacturing jobs come and go, but the logistics field just keeps growing—32 percent growth even during the Great Recession, while all other fields grew by a collective average of 1 percent. Some say logistics is the new manufacturing. (Door to Door, Harper Collins 2016, Kindle Edition, locus 750)

With a focus on the operations of the Ports of Los Angeles and Long Beach, Humes shows how the standardized shipping container – the “can” in shipping industry parlance – has enabled the transfer of running shoes, iPhones and toasters from low-wage manufacturing complexes in China to consumers around the world. Since 1980, Humes writes, the global container fleet’s capacity has gone from 11 million tons to 169 million tons – a fifteen-fold increase.

While some links in the supply chain have been “rationalized” in ways that lower costs (and eliminate many jobs), other trends work in opposite directions. The growth of online shopping, for example, has resulted in mid-size delivery trucks driving into suburban cul-de-sacs to drop off single parcels.

The rise of online shopping is exacerbating the goods-movement overload, because shipping one product at a time to homes requires many more trips than delivering the same amount of goods en masse to stores. In yet another door-to-door paradox, the phenomenon of next-day and same-day delivery, while personally efficient and seductively convenient for consumers, is grossly inefficient for the transportation system at large. (Door to Door, locus 695)

Humes devotes almost no attention in this book to passenger rail, passenger airlines, or freight rail beyond the short-line rail that connects the port of Los Angeles to major trucking terminals. He does, however, provide a good snapshot of the trucking industry in general and UPS in particular.

Among the most difficult challenges faced by UPS administrators and drivers is the unpredictable snarl of traffic on roads and streets used by trucks and passenger cars alike. This traffic is not only maddening but terribly violent. “Motor killings”, to use the 1920s terminology, or “traffic accidents”, to use the contemporary euphemism, “are the leading cause of death for Americans between the ages of one and thirty-nine. They rank in the top five killers for Americans sixty-five and under ….” (locus 1514)

In the US there are 35,000 traffic fatalities a year, or one death every fifteen minutes. Humes notes that these deaths seldom feature on major newscasts – and in his own journalistic way he sets out to humanize the scale of the tragedy.
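The “one death every fifteen minutes” figure is straightforward arithmetic over the minutes in a year:

```python
# Check the "one death every fifteen minutes" figure.
fatalities_per_year = 35_000
minutes_per_year = 365 * 24 * 60     # 525,600 minutes in a year
minutes_per_death = minutes_per_year / fatalities_per_year
# just over 15 minutes between deaths, on average
```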

Delving into the records for one representative day during the writing of the book, Humes finds there were at least 62 fatal collisions in 27 states on Friday, February 13, 2015. He gives at least a brief description of dozens of these tragedies: who was driving, where, at what time, and who was killed or seriously injured.

Other than in collisions where alcohol is involved, Humes notes, there are seldom serious legal sanctions against drivers, even when they strike down and kill pedestrians who have the right of way. In this sense our legal system simply reflects the physical design of the motor vehicle-dominated transport system.

Drawing on the work of Strong Towns founder Charles Marohn, Humes explains that roads are typically designed for higher speeds than the posted speed limits. While theoretically this is supposed to provide a margin of safety for a driver who drifts out of their lane, in practice it encourages nearly all drivers to routinely exceed speed limits. The quite predictable result is that there are more collisions, and more serious injuries or deaths per collision, than there would be if speeding were not promoted-by-design.

In the design of cars, meanwhile, great attention has been devoted to saving drivers from the consequences of their own errors. Seat belts and air bags have saved the lives of many vehicle occupants. Yet during the same decades that such safety features have become standard, the auto industry has relentlessly promoted vehicles that are more dangerous simply because they are bigger and heavier.

A study by University of California economist Michelle J. White found that

for every crash death avoided inside an SUV or light truck, there were 4.3 additional collisions that took the lives of car occupants, pedestrians, bicyclists, or motorcyclists. The supposedly safer SUVs were, in fact, “extremely deadly,” White concluded. (Door to Door, locus 1878)

Another University of California study found that “for every additional 1,000 pounds in a vehicle’s weight, it raises the probability of a death in any other vehicle in a collision by 47 percent.” (locus 1887)

Is there a solution to the intertwined problems of gridlock, traffic deaths, respiratory-disease-causing emissions and greenhouse gas emissions? Humes takes an enthusiastic leap of faith here to sing the praises of the driverless – or self-driving, if you prefer – car.

“The car that travels on its own can remedy each and every major problem facing the transportation system of systems,” Humes boldly forecasts. Deadly collisions, carbon dioxide and particulate emissions, parking lots that take so much urban real estate, the perceived need to keep adding lanes of roadway at tremendous expense, and soul-killing commutes on congested roads – Humes says these will all be in the rear-view mirror once our auto fleets have been replaced by autonomous electric vehicles.

We’ll need to wait a generation for definitive judgment on his predictions, but Humes’ description of our present transportation system is eminently readable and thought-provoking.

Top photo: container train on Canadian National line east of Toronto.

Oil well in southeast Saskatchewan, with flared gas.

Energy at any cost?

Also published at Resilience.org.

If all else is uncertain, how can growing demand for energy be guaranteed? A review of Vaclav Smil’s Natural Gas.

Near the end of his 2015 book Natural Gas: Fuel for the 21st Century, Vaclav Smil makes two statements which are curious in juxtaposition.

On page 211, he writes:

“I will adhere to my steadfast refusal to engage in any long-term forecasting, but I will restate some basic contours of coming development before I review a long array of uncertainties ….”

And in the next paragraph:

“Given the scale of existing energy demand and the inevitability of its further growth, it is quite impossible that during the twenty-first century, natural gas could come to occupy such a dominant position in the global primary energy supply as wood did in the preindustrial era or as coal did until the middle of the twentieth century.”

If you think that second statement sounds like a long-term forecast, that makes two of us. But apparently to Smil it is not a forecast to say that the growth of energy demand is inevitable, and it’s not a forecast to state with certainty that natural gas cannot become the dominant energy source during the twenty-first century – these are simply “basic contours of coming development.” Let’s investigate.

An oddly indiscriminate name

Natural Gas is a general survey of the sources and uses of what Smil calls the fuel with “an oddly indiscriminate name”. It begins much as it ends: with a strongly-stated forecast (or “basic contour”, if you prefer) about the scale of natural gas and other fossil fuel usage relative to other energy sources.

“why dwell on the resources of a fossil fuel and why extol its advantages at a time when renewable fuels and decentralized electricity generation converting solar radiation and wind are poised to take over the global energy supply? That may be a fashionable narrative – but it is wrong, and there will be no rapid takeover by the new renewables. We are a fossil-fueled civilization, and we will continue to be one for decades to come as the pace of grand energy transition to new forms of energy is inherently slow.” – Vaclav Smil, preface to Natural Gas

And in the next paragraph:

“Share of new renewables in the global commercial primary energy supply will keep on increasing, but a more consequential energy transition of the coming decades will be from coal and crude oil to natural gas.”

In support of his view that a transition away from fossil fuel reliance will take at least several decades, Smil looks at major energy source transitions over the past two hundred years. These transitions have indeed been multi-decadal or multi-generational processes.

“Obvious absence of any acceleration in successive transitions is significant: moving from coal to oil has been no faster than moving from traditional biofuels to coal – and substituting coal and oil by natural gas has been measurably slower than the two preceding shifts.” – Natural Gas, page 154

It would seem obvious that global trade and communications were far less developed 150 years ago, and that would be one major reason why the transition from traditional biofuels to coal proceeded slowly on a global scale. Smil cites another reason why successive transitions have been so slow:

“Scale of the requisite transitions is the main reason why natural gas shares of the TPES [total primary energy supply] have been slower to rise: replicating a relative rise needs much more energy in a growing system. … going from 5 to 25% of natural gas required nearly eight times more energy than accomplishing the identical coal-to-oil shift.” – Natural Gas, page 155

Open-pit coal mine in south-east Saskatchewan. June 2014.

Today only – you’ll love our low, low prices!

There is another obvious reason why transitions from coal to oil, and from oil to natural gas, could have been expected to move slowly throughout the last 100 years: there have been abundant supplies of easily accessible, and therefore cheap, coal and oil. When a new energy source was brought online, the result was a further increase in total energy consumption, instead of any rapid shift in the relative share of different sources.

The role of price in influencing demand is easy to ignore when the price is low. But that’s not a condition we can count on for the coming decades.

Return to Smil’s “basic contour” that total energy demand will inevitably rise: this would imply that energy prices must remain relatively low, because there is effective demand for a product only to the extent that people can afford to buy it.

Remarkably, however, even as he states confidently that demand must grow, Smil notes the major uncertainty about the investment needed simply to maintain existing levels of supply:

“if the first decade of the twenty-first century was a trendsetter, then all fossil energy sources will cost substantially more, both to develop new capacities and to maintain production of established projects at least at today’s levels. … The IEA estimates that between 2014 and 2035, the total investment in energy supply will have to reach just over $40 trillion if the world is to meet the expected demand, with some 60% destined to maintain existing output and 40% to supply the rising requirements. The likelihood of meeting this need will be determined by many other interrelated factors.” – Natural Gas, page 212

What is happening here? Both Smil and the IEA are cognizant of the uncertain effects of rising prices on supply, while graphing demand steadily upward as if price has no effect. This is not how economies function in the real world, of course.

Likewise, we cannot assume that because total energy demand kept rising throughout the twentieth century, it must continue to rise through the twenty-first century. On the contrary, if energy supplies are difficult to access and therefore much more costly, then we should also expect demand to grow much more slowly, to stop growing, or to fall.

Falling demand, in turn, would have a major impact on the possibility of a rapid change in the relative share of demand met by different sources. In very simple terms, if we increased total supply of renewable energy rapidly (as we are doing now), but the total energy demand were dropping rapidly, then the relative share of renewables in the energy market could increase even more rapidly.

Smil’s failure to consider such a scenario (indeed, his peremptory dismissal of the possibility of such a scenario) is one of the major weaknesses of his approach. Acceptance of business-as-usual as a reliable baseline may strike some people as conservative. But there is nothing cautious about ignoring one of the fundamental factors of economics, and nothing safe in assuming that the historically rare condition of abundant cheap energy must somehow continue indefinitely.

In closing, just a few words about the implications of Smil’s work as it relates to the threat of climate change. In Natural Gas, he provides much valuable background on the relative amounts of carbon emissions produced by all of our major energy sources. He explains why natural gas is the best of the fossil fuels in terms of energy output relative to carbon emissions (while noting that leaks of natural gas – methane – could in fact outweigh the savings in carbon emissions). He explains that the carbon intensity of our economies has dropped as we have gradually moved from coal to oil to natural gas.

But he also makes it clear that this relative decarbonisation has been far too slow to stave off the threat of climate change.

If he turns out to be right that total energy demand will keep rising, that there will only be a slow transition from other fossil fuels to natural gas, and that the transition away from all fossil fuels will be slower still, then the chances of avoiding catastrophic climate change will be slim indeed.

Top photo: Oil well in southeast Saskatchewan, with flared gas. June 2014.

Wind turbine on site of Pickering Nuclear Generating Station.

How big is that hectare? It depends.

Also published at Resilience.org.

The Pickering Nuclear Generating Station, on the east edge of Canada’s largest city, Toronto, is a good take-off point for a discussion of the strengths and limitations of Vaclav Smil’s power density framework.

The Pickering complex is one of the older nuclear power plants operating in North America. Brought on line in 1971, the plant includes eight CANDU reactors (two of which are now permanently shut down). The complex also includes a single wind turbine, brought online in 2001.

The CANDU reactors are rated, at full power, at about 3100 Megawatts (MW). The wind turbine, which at 117 meters high was one of North America’s largest when it was installed, is rated at 1.8 MW at full power. (Because the nuclear reactor runs at full power for many more hours in a year, the disparity in actual output is even greater than the above figures suggest.)
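The parenthetical point about hours at full power can be made concrete with capacity factors. The rated powers are from the article; the 0.90 and 0.25 capacity factors below are my own ballpark assumptions for nuclear and wind, not figures from Smil:

```python
# Rated powers from the article; the capacity factors are
# illustrative ballpark assumptions (not from Smil): nuclear
# plants typically run near full power most of the year,
# wind turbines far less.
nuclear_rated_mw = 3100.0    # eight CANDU reactors combined
turbine_rated_mw = 1.8       # single on-site wind turbine

rated_ratio = nuclear_rated_mw / turbine_rated_mw    # about 1,700x

nuclear_cf = 0.90    # assumed capacity factor for the reactors
wind_cf = 0.25       # assumed capacity factor for the turbine

output_ratio = (nuclear_rated_mw * nuclear_cf) / (turbine_rated_mw * wind_cf)
# the gap in delivered energy is several times wider than
# the gap in rated power
```

Under these assumptions the gap in actual energy delivered is roughly 6,000-fold rather than 1,700-fold, which is the point of the parenthetical note above.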

How do these figures translate to power density, or power per unit of land?

The Pickering nuclear station stands cheek-by-jowl with other industrial sites and with well-used Lake Ontario waterfront parks. With a small land footprint, its power density is likely towards the high end – 7,600 W/m2 – of the range of nuclear generating stations Smil considers in Power Density. Had it been built with a substantial buffer zone, as is the case with many newer nuclear power plants, the power density might only be half as high.

A nuclear power plant, of course, requires a complex fuel supply chain that starts at a uranium mine. To arrive at more realistic power density estimates, Smil considers a range of mining and processing scenarios. When a nuclear station’s output is prorated over all the land used – land for the plant site itself, plus land for mining, processing and spent fuel storage – Smil estimates a power density of about 500 W/m2 in what he considers the most representative, mid-range of several examples.

The Cameco facility in Port Hope, Ontario processes uranium for nuclear reactors. With no significant buffer around the plant, its land area is small and its power density high. Smil calculates its conversion power density at approximately 100,000 W/m2, with the plant running at 50% capacity.

And wind turbines? Smil looks at average outputs from a variety of wind farm sites, and arrives at an estimated power density of about 1 W/m2.

So nuclear power has about 500 times the power density of wind turbines? If only it were that simple.

Inside and outside the boundary

In Power Density, Smil takes care to explain the “boundary problem”: defining what is being included or excluded in an analysis. With wind farms, for example, which land area is used in the calculation? Is it just the area of the turbine’s concrete base, or should it be all the land around and between turbines (in the common scenario of a large cluster of turbines spaced throughout a wind farm)?  There is no obviously correct answer to this question.

On the one hand, land between turbines can be and often is used as pasture or as crop land. On the other hand, access roads may break up the landscape and make some human uses impractical, as well as reducing the viability of the land for species that require larger uninterrupted spaces. Finally, there is considerable controversy about how close to wind turbines people can safely live, leading to buffer zones of varying sizes around turbine sites. Thus in this case the power output side of the quotient is relatively easy to determine, but the land area is not.

Wind turbines line the horizon in Murray County, Minnesota, 2012.

Smil emphasizes the importance of clearly stating the boundary assumptions used in a particular analysis. For the average wind turbine power density of 1 W/m2, he is including the whole land area of a wind farm.

That approach is useful in giving us a sense of how much area would need to be occupied by wind farms to produce the equivalent power of a single nuclear power plant. The mid-range power station cited above (with overall power density of 500 W/m2) takes up about 1360 hectares in the uranium mining-processing-generating station chain. A wind farm of equivalent total power output would sprawl across 680,000 hectares of land, or 6,800 square kilometers, or a square with 82 km per side.
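The equivalence is easy to reproduce from the two power densities and the 1360-hectare figure:

```python
import math

# Reproduce the land-area comparison from the two power densities.
nuclear_density = 500.0     # W/m2, mid-range nuclear fuel chain
wind_density = 1.0          # W/m2, averaged over whole wind-farm area

nuclear_area_m2 = 1360 * 10_000        # 1360 hectares in square meters
total_power_w = nuclear_density * nuclear_area_m2    # 6.8 GW

wind_area_m2 = total_power_w / wind_density
wind_area_ha = wind_area_m2 / 10_000    # 680,000 hectares
wind_area_km2 = wind_area_m2 / 1e6      # 6,800 square kilometers
side_km = math.sqrt(wind_area_km2)      # roughly 82 km per side
```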

A wind power evangelist, on the other hand, could argue that the wind farms remain mostly devoted to agriculture, and with the concrete bases of the towers only taking 1% of the wind farm area, the power density should be calculated at 100 W/m2 instead of 1 W/m2.

Similar questions apply in many power density calculations. A hydro transmission corridor takes a broad stripe of countryside, but the area fenced off for the pylons is small. Most land in the corridor may continue to be used for grazing, though many other land uses will be off-limits. So you could use the area of the whole corridor in calculating power density – plus, perhaps, another buffer on each side if you believe that electromagnetic fields near power lines make those areas unsafe for living creatures. Or you could use just the area fenced off directly around the pylons. The respective power densities will vary by orders of magnitude.

If the land area is not simple to quantify when things go right, it is even more difficult when things go wrong. A drilling pad for a fracked shale gas well may be only a hectare or two, so during the brief decade or two of the well’s productive life, the power density is quite high. But if fracking water leaks into an aquifer, the gas well may have drastic impacts on a far greater area of land – and that impact may continue even when the fracking boom is history.

The boundary problem is most tangled when resource extraction and consumption effects have uncertain extents in both space and time. As mentioned in the previous installment in this series, sometimes non-renewable energy facilities can be reclaimed for a full range of other uses. But the best-case scenario doesn’t always apply.

In mountain-top removal coal mining, there is a wide area of ecological devastation during the mining. But once the energy extraction drops to zero and the mining corporation files for bankruptcy, how much time will pass before the flattened mountains and filled-in valleys become healthy ecosystems again?

Or take the Pickering Nuclear Generating Station. The plant is scheduled to shut down about 2020, but its operators, Ontario Power Generation, say they will need to allow the interior radioactivity to cool for 15 years before they can begin to dismantle the reactor. By their own estimates the power plant buildings won’t be available for other uses until around 2060. Those placing bets on whether this will all go according to schedule can check back in 45 years.

In the meantime the plant will occupy land but produce no power; should the years of non-production be included in calculating an average power density? If decommissioning fails to make the site safe for a century or more, the overall power density will be paltry indeed.

In summary, Smil’s power density framework helps explain why it has taken high-power-density technologies to fuel our high-energy-consumption society, even for a single century. It helps explain why low power density technologies, such as solar and wind power, will not replace our current energy infrastructure or current demand for decades, if ever.

But the boundary problem is a window on the inherent limitations of the approach. For the past century our energy has appeared cheap and power densities have appeared high. Perhaps the low cost and the high power density are both due, in significant part, to important externalities that were not included in calculations.

Top photo: Pickering Nuclear Generating Station site, including wind turbine, on the shoreline of Lake Ontario near Toronto.

Insulators on high-voltage electricity transmission line.

Timetables of power

Also published at Resilience.org.

For more than three decades, Vaclav Smil has been developing the concepts presented in his 2015 book Power Density: A Key to Understanding Energy Sources and Uses.

The concept is (perhaps deceptively) simple: power density, in Smil’s formulation, is “the quotient of power and land area”. To facilitate comparisons between widely disparate energy technologies, Smil states power density using common units: watts per square meter.

Smil makes clear his belief that it’s important that citizens be numerate as well as literate, and Power Density is heavily salted with numbers. But what is being counted?

Perhaps the greatest advantage of power density is its universal applicability: the rate can be used to evaluate and compare all energy fluxes in nature and in any society. – Vaclav Smil, Power Density, pg 21

A major theme in Smil’s writing is that current renewable energy resources and technologies cannot quickly replace the energy systems that fuel industrial society. He presents convincing evidence that for current world energy demand to be supplied by renewable energies alone, the land area of the energy system would need to increase drastically.

Study of Smil’s figures will be time well spent for students of many energy sources. Whether it’s concentrated solar reflectors, cellulosic ethanol, wood-fueled generators, fracked light oil, natural gas or wind farms, Smil takes a careful look at power densities, and then estimates how much land would be taken up if each of these respective energy sources were to supply a significant fraction of current energy demand.

This consideration of land use goes some way to addressing a vacuum in mainstream contemporary economics. In the opening pages of Power Density, Smil notes that economists used to talk about land, labour and capital as three key factors in production, but in the last century, land dropped out of the theory.

The measurement of power per unit of land is one way to account for use of land in an economic system. As we will discuss later, those units of land may prove difficult to adequately quantify. But first we’ll look at another simple but troublesome issue.

Does the clock tick in seconds or in centuries?

It may not be immediately obvious to English majors or philosophers (I plead guilty), but Smil’s statement of power density – watts per square meter – includes a unit of time. That’s because a watt is itself a rate, defined as a joule per second. So power density equals joules per second per square meter.

There’s nothing sacrosanct about the second as the unit of choice. Power densities could also be calculated if power were stated in joules per millisecond or per megasecond, and with only slightly more difficult mathematical gymnastics, per century or per millennium. That is of course stretching a point, but Smil’s discussion of power density would take on a different flavor if we thought in longer time frames.
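Unpacking the unit makes the point concrete: since a watt is a joule per second, the same flux can be restated over any time span. Taking Smil’s mid-range 500 W/m2 figure as an example:

```python
# A watt is one joule per second, so "watts per square meter"
# is really joules per second per square meter. The same flux
# can be restated over any time unit.
density_w_per_m2 = 500.0      # e.g. Smil's mid-1700s coke figure

seconds_per_year = 365 * 24 * 3600     # 31,536,000 seconds
joules_per_m2_per_year = density_w_per_m2 * seconds_per_year
# about 15.8 gigajoules per square meter per year

joules_per_m2_per_century = joules_per_m2_per_year * 100
```

Nothing physical changes when the time unit stretches, but the longer the window, the more a finite resource like a coal seam drags its average down.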

Consider the example with which Smil opens the book. In the early stages of the industrial age, English iron smelting was accomplished with the heat from charcoal, which in turn was made from coppiced beech and oak trees. As pig iron production grew, large areas of land were required solely for charcoal production. This changed in the blink of an eye, in historical terms, with the development of coal mining and the process of coking, which converted coal to nearly 100% pure carbon with energy equivalent to good charcoal.

As a result, the charcoal from thousands of hectares of hardwood forest could be replaced by coal from a mine site of only a few hectares. Or in Smil’s favored terms,

The overall power density of mid-eighteenth-century English coke production was thus roughly 500 W/m2, approximately 7,000 times higher than the power density of charcoal production. (Power Density, pg 4)
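The implied charcoal figure, and the land swap behind Smil’s observation, follow directly from those two numbers:

```python
# Back out the charcoal figure implied by Smil's comparison:
# coke production at ~500 W/m2, roughly 7,000 times the
# power density of charcoal production.
coke_density = 500.0    # W/m2 (Power Density, pg 4)
ratio = 7000.0          # coke vs. charcoal, per Smil

charcoal_density = coke_density / ratio    # about 0.07 W/m2

# Land requirement scales inversely with power density: one
# hectare of coke-supplying coal mine displaces roughly 7,000
# hectares of coppiced woodland.
woodland_ha_per_mine_ha = coke_density / charcoal_density
```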

Smil notes rightly that this shift had enormous consequences for the English countryside, English economy and English society. Yet my immediate reaction to this passage was to cry foul – there is a sleight of hand going on.

While the charcoal production figures are based on the amount of wood that a hectare might produce on average each year, in perpetuity, the coal from the mine will dwindle and then run out in a century or two. If we averaged the power densities of the woodlot and mine over several centuries or millennia, the comparison would look much different.

And that’s a problem throughout Power Density. Smil often grapples with the best way to average power densities over time, but never establishes a rule that works well for all energy sources.

The Toronto Power Generating Station was built in 1906, just upstream from Horseshoe Falls in Niagara Falls, Ontario. It was mothballed in 1974. Photographed in February, 2014.

In discussing photovoltaic generation, he notes that solar radiation varies greatly by hour and month. It would make no sense to calculate the power output of a solar panel solely by the results at noon in mid-summer, just as it would make no sense to run the calculation solely at twilight in mid-winter. It is reasonable to average the power density over a whole year’s time, and that’s what Smil does.

When considering the power density of ethanol from sugar cane, it would be crazy to run the calculation based solely on the month of harvest, so again, the figures Smil uses are annual average outputs. Likewise, wood grown for biomass fuel can be harvested approximately every 20 years, so Smil divides the energy output during a harvest year by 20 to arrive at the power density of this energy source.

Using the year as the averaging unit makes obvious sense for many renewable energy sources, but this method breaks down just as obviously when considering non-renewable sources.

How do you calculate the average annual power density for a coal mine which produces high amounts of power for a hundred years or so, and then produces no power for the rest of time? Or the power density of a fracked gas well whose output will continue only a few decades at most?

The obvious rejoinder to this line of questioning is that when the energy output of a coal mine, for example, ceases, the land use also ceases, and at that point the power density of the coal mine is neither high nor low nor zero; it simply cannot be part of a calculation. As we’ll discuss later in this series, however, there are many cases where reclamations are far from certain, and so a “claim” on the land goes on.

Smil is aware of the transitory nature of fossil fuel sources, of course, and he cites helpful and eye-opening figures for the declining power densities of major oil fields, gas fields and coal mines over the past century. Yet in Power Density, most of the figures presented for non-renewable energy facilities apply for that (relatively brief) period when these facilities are in full production, but they are routinely compared with power densities of renewable energy facilities which could continue indefinitely.

So is it really true that power density is a measure “which can be used to evaluate and compare all energy fluxes in nature and in any society”? Only with some critical qualifications.

In summary, we return to Smil’s oft-emphasized theme, that current renewable resource technologies are no match for the energy demands of our present civilization. He argues convincingly that the power density of consumption on a busy expressway cannot be matched by the power density of production of ethanol from corn: it would take a ridiculous and unsustainable area of corn fields to fuel all that high-energy transport. Widening the discussion, he establishes no less convincingly, to my mind, that solar power, wind power, and biofuels are not going to fuel our current high-energy way of life.
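A rough version of the expressway-versus-ethanol comparison shows why the mismatch is so stark. The power densities below are illustrative assumptions for the sake of the arithmetic, not Smil's published figures.

```python
# Hypothetical comparison of consumption vs. production power densities.
# Both values are assumed for illustration only.

consumption_density = 300   # W/m^2: assumed fuel burn on a busy expressway
production_density = 0.25   # W/m^2: assumed net corn-ethanol output

# Area of corn fields needed per square metre of expressway:
multiplier = consumption_density / production_density
print(f"{multiplier:.0f} m^2 of corn per m^2 of expressway")
```

When consumption runs three orders of magnitude denser than production, no plausible refinement of the assumed numbers closes the gap.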

Yet if we extend our averaging units to just a century or two, we could calculate just as convincingly that the power densities of non-renewable fuel sources will also fail to support our high-energy society. And since we’re already a century into this game, we might be running out of time.

Top photo: insulators on high-voltage transmission line near Darlington Nuclear Generating Station, Bowmanville, Ontario.

Tractor-trailers hauling oil and water on North Dakota highway.

‘Are we there yet?’ The uncertain road to the twenty-first century.

Also published at Resilience.org.

What made the twentieth century such a distinctive period in human history? Are we moving into the future at an ever-increasing speed? What measures provide the most meaningful comparisons of different energy technologies? Is it “conservative” to base forecasts on business-as-usual scenarios?

These questions provide handy lenses for looking at the work of prolific energy science writer Vaclav Smil.

Smil, a professor emeritus at the University of Manitoba, is not likely to publish any best-sellers, but his books are widely read by people looking for data-backed discussion of energy sources and their role in our civilization. While Smil’s seemingly effortless fluency in wide-ranging topics of energy science can be intimidating to non-scientists, many of his books require no more than a good high-school-level knowledge of physics, chemistry and mathematics.

This post is the first in a series on issues raised by Smil. How many posts? Let’s just say, to use a formulation familiar to anyone who reads Smil, that the number of posts in this series will be “in the range of an order of magnitude less” than the number of Smil’s books. (He’s at 37 books and counting.)

The myth of accelerating change

In early 2004, I wrote a newspaper column with the title “Got Any Change?” Some excerpts:

Think back 50 years. If you grew up in North America, people were already travelling in cars, which moved along at about 60 miles per hour. You lived in a house with heat and running water, and you could just flick a switch to turn on the lights. You turned on the TV or radio to get instant news. You could pick up the phone and actually talk to relatives on the other side of the country.

For ease of daily living and communication, things haven’t changed much in the last 50 years for most North Americans.

My grandparents, by contrast, who grew up “when motorcars were still exotic playthings”, really lived through rapid and fundamental changes:

The magic of telephone reached into rural areas, and soon my grandparents adjusted to the even more astonishing development of moving pictures, transmitted to television sets in the living room. The airplane was invented about the time my grandparents were born, but they lived long enough to fly on passenger jets, and they watched the live newscasts as astronauts landed on the moon. (“Got Any Change?”, in the Brighton Independent, January 7, 2004)

As it turns out, Smil had been working from a similar premise, and developing it with his customary authority and historical rigor. The result was his 2005 book Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact. This was the first Smil book I picked up, and naturally I read it while basking in the warm glow of confirmation bias.

In the course of 300 pages, Smil argues that nearly all of the world-changing technologies that swept through the twentieth century are directly traceable to scientific advances – both theoretical and applied – made during the period 1867 to 1914. There is no other period in world history so far, he says, in which so many scientific discoveries made their way so rapidly into the fabric of everyday life.

Most of [these technical advances] are still with us not just as inconsequential survivors or marginal accoutrements from a bygone age but as the very foundations of modern civilization. Such a profound and abrupt discontinuity with such lasting consequences has no equivalent in history.

For anyone alive in North America today, it’s easy to take these advances for granted, because we have never known a world without them. That’s what makes Smil’s book so valuable. In detail and with clarity, he outlines the development of electrical generators, transformers, transmission systems, and motors; internal combustion engines; new industrial processes that turned steel, aluminum, concrete, and plastics from scarce or unknown products into mass-produced commodities; and the ability to harness the electromagnetic spectrum in ways that made telephone, radio and television commercially feasible within the first few decades of the twentieth century.

Ship docked at St. Mary's Cement plant at sunset.

The Peter R Cresswell docked at the St. Mary’s Cement plant on Lake Ontario near Bowmanville, Ontario. The plant converts quarried limestone to cement, in kilns fueled by coal and pet coke. Photo from July, 2015.

Energy matters

There is a good deal in Creating the Twentieth Century on increasingly efficient methods of energy conversion. For example, Smil writes that “Typical efficiency of new large stationary steam engines rose from 6–10% during the 1860s to 12–15% after 1900, a 50% efficiency gain, and when small machines were replaced by electric motors, the overall efficiency gain was typically more than fourfold.”

But I found it odd that Creating the Twentieth Century gives little ink to the sources of energy. Smil does note that

for the first time in human history the age was marked by the emergence of high-energy societies whose functioning, be it on mundane or sophisticated levels, became increasingly dependent on incessant supplies of fossil fuels and on rising need for electricity.

Yet there is no substantial examination in this book of the fossil fuel extraction and processing industries, which rapidly became (and remained) among the dominant industries of the twentieth century.

Clearly the new understandings of thermodynamics and electromagnetism, along with new processes for steel and concrete production, were key to the twentieth century as we knew it. But suppose those developments had occurred while only a few sizable reservoirs of oil had been discovered, so that petroleum remained useful but expensive. Would the twentieth century still have happened?

Perhaps we shouldn’t blame Smil for avoiding a counterfactual question about epochal changes a century and more ago. After all, he has devoted a great deal of attention to a more pressing quandary: how might we create a future with the scientific knowledge accumulated over the past century and a half, while also facing the need to move beyond fossil fuel dependence? Can we make such a transition, and how long might it take? We’ll move to those issues in the coming installments.

Top photo: Trucks hauling crude oil and frac water near Watford City, North Dakota, June 2014.