When boom is bust: the shale oil bonanza as a symptom of economic crisis

Also published at Resilience.org.

The gradual climb in oil prices in recent weeks has revived hopes that US shale oil producers will return to profitability, while also renewing fevered dreams of the US becoming a fossil fuel superpower once again.

Thus a few days ago my daily newspaper ran a Bloomberg article by Grant Smith which led with this sweeping claim:

“The U.S. shale revolution is on course to be the greatest oil and gas boom in history, turning a nation once at the mercy of foreign imports into a global player. That seismic shift shattered the dominance of Saudi Arabia and the OPEC cartel, forcing them into an alliance with long-time rival Russia to keep a grip on world markets.”

I might have simply chuckled and turned the page, had I not just finished reading Oil and the Western Economic Crisis, by Cambridge University economist Helen Thompson. (Palgrave Macmillan, 2017)

Thompson looks at the same shale oil revolution and draws strikingly different conclusions, both about the future of the oil economy and about the effects on US relations with OPEC, Saudi Arabia, and Russia.

Before diving into Thompson’s analysis, let’s first look at the idea that the shale revolution may be “the greatest oil and gas boom in history”. As backing for this claim, Grant Smith cites a report earlier in November by the International Energy Agency, predicting that US shale oil output will soar to about 8 million barrels/day by 2025.

Accordingly, “ ‘The United States will be the undisputed leader of global oil and gas markets for decades to come,’ IEA Executive Director Fatih Birol said … in an interview with Bloomberg television.”

Let’s leave this prediction unchallenged for the moment. (Though skeptics could start with David Hughes’ detailed look at the IEA’s 2016 forecasts here, or with a recent MIT report that confirms a key aspect of Hughes’ analysis.) Suppose the IEA turns out to be right. How will the shale bonanza rank among the great oil booms in history?

Grant Smith uses the following chart to bolster his claim that the fracking boom will equal Saudi Arabia’s expansion in the 1960s and 1970s.

 

Chart by Bloomberg

 

OK, so if US shale oil rises to 8 million barrels/day by 2025, that production will be about the same as Saudi oil production in 1981. Would that make these two booms roughly equivalent?

First, world oil consumption in the early 1980s was only about two-thirds of what it is now. So 8 million barrels/day represented a bigger proportion of the world’s oil needs in 1980 than it does today (a rough back-of-the-envelope comparison is sketched below).

Second, Saudi Arabia used very little of its oil domestically in 1980, leaving most of it for sale abroad, and that gave it a huge impact on the world market. The US, by contrast, still burns more oil domestically than it produces – and in the best case scenario, its potential oil exports in 2025 would be a small percentage of global supply.

Third, Saudi Arabia has been able to keep roughly 8 million barrels/day flowing for the past 40 years, while even the IEA’s optimistic forecast shows US shale oil output starting to drop within ten years of a 2025 peak.

And last but not least, Saudi Arabia’s 8 million barrels/day have come with some of the world’s lowest production costs, while US shale oil comes from some of the world’s costliest wells.
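To make the proportions concrete, here is a rough back-of-the-envelope calculation. The round figures are my own approximations for illustration – they are not taken from Bloomberg, the IEA report, or Thompson’s book:

```python
# Rough, illustrative figures only (my own approximations, not IEA data):
# world oil consumption was roughly 60 million barrels/day (Mb/d) around 1981,
# and is projected to be roughly 100 Mb/d by the mid-2020s.
saudi_1981_mbd = 8.0       # Saudi production, circa 1981
world_1981_mbd = 60.0      # approximate world consumption, circa 1981
us_shale_2025_mbd = 8.0    # IEA projection for US shale output in 2025
world_2025_mbd = 100.0     # approximate projected world consumption, 2025

saudi_share = saudi_1981_mbd / world_1981_mbd
shale_share = us_shale_2025_mbd / world_2025_mbd

print(f"Saudi share of world supply, 1981: {saudi_share:.0%}")     # roughly 13%
print(f"US shale share of world supply, 2025: {shale_share:.0%}")  # roughly 8%
```

The same 8 million barrels/day, in other words, buys considerably less leverage in a much bigger market.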

All these factors come into play in Helen Thompson’s thorough analysis.

No more Mr. NICE Guy

In an October 2005 speech, Bank of England governor Mervyn King “argued that the rising price of oil was ending what he termed ‘NICE’ – a period of ‘non-inflationary consistently expansionary economic growth’ that began in 1992.” (Thompson, Oil and the Western Economic Crisis, pages 28-29)

In spite of their best efforts in the first decade of this millennium, Western governments were not able to maintain steady economic growth, nor keep the price of oil in check, nor significantly increase the supply of oil, nor prevent the onset of a serious recession. Thompson traces the interplay of several major economic factors, both before and after this recession.

By the beginning of the George W. Bush administration, there was widespread concern that world oil production would not keep up with growing demand. The booming economies of China and India added to this fear.

“Of the increase of 17.9 million bpd in oil consumption that materialised between 1994 and 2008,” Thompson writes, “only 960,000 of the total came from the G7 economies.” Nearly all of the growth in demand came from China and India – and that growth in demand was forecast to continue.

The GW Bush administration appointed oilman and defense hawk Dick Cheney to lead a task force on the impending supply crunch. But “ultimately, for all the aspiration of the Cheney report, the Bush Jr administration’s energy strategy did little to increase the supply of oil over the first eight years of the twenty-first century.” (Thompson, page 20)

In fact, the only significant supply growth in the decade up to 2008 came from Russia. This boosted Putin’s power while fracturing Western economic interests, as “the western states divided between those who were significant importers of Russian oil and gas and those that were not.” (Thompson, page 23)

Meanwhile oil prices shot up dramatically until Western economies dropped into recession in 2007 as a precursor to the 2008 financial crash. Shouldn’t those high oil prices have spurred high investment in new wells, with consequent rises in production? It didn’t work out that way.

“Between 2003 and the first half of 2008 the costs of the construction of production facilities, oil equipment and services, and energy soared in good part in response to the overall commodity boom produced by China’s economic rise. Consequently, whilst future oil supply was becoming ever more dependent on large-scale capital investment both to extract more from declining fields and to open up high-cost non-conventional production, the capital available was also required by 2008 simply to cover rising existing costs.” (Thompson, page 23)

Thus oil prices rose to the point where western economies could no longer maintain consumption levels, but these high prices still couldn’t finance the kind of new drilling needed to boost production.

Oddly enough, the right conditions for a boom in US oil production wouldn’t occur until well after the crash of 2008, when monetary policy-makers were struggling with little success to revive economic growth.

Zero Interest Rate Policy

In western Europe and the US, recovery from the financial crisis of 2008 has been sluggish and incomplete. But the growth in demand for oil by India and China continued, with the result that after a brief price drop in 2009 oil quickly rebounded to $100/barrel and stayed there for the next few years.

As in the years leading up to the crash, $100 oil proved too expensive for western economies, accustomed as they had been to running on cheap energy for decades. Consumer confidence, and consumer spending, remained low.

Simply pumping cash into the markets – Quantitative Easing – had little effect on the real economy (though it afforded bank execs huge bonuses and boosted the prices of stocks and other financial assets). But as interest rates dropped to historic lows, the flood of nearly-free money finally revived the US energy-production sector.

“QE and ZIRP hugely increased the availability of credit to the energy sector. ZIRP allowed oil companies to borrow from banks at extremely low interest rates, with the worth of syndicated loans to the oil and gas sectors rising from $600 billion in 2006 to $1.6 trillion in 2014. Meanwhile, in raising the price and depressing the yield of the relatively safe assets central banks purchased, QE created incentives for investors to buy assets with a higher yield, including significantly riskier corporate bonds and equities. …” (Thompson, page 50)

Without this extraordinary monetary expansion “the rise of non-conventional oil production would not have been possible”, Thompson concludes.

And while a huge boost in shale oil production might be counted as a “win” for the economic growth team, the downsides have been equally serious. The Zero Interest Rate Policy has almost eliminated interest earnings for cautious middle-income savers, which depresses consumer spending in the short term and threatens the security of millions in the long term. The inflation in asset prices has boosted the profits of large corporations, while weak consumer confidence has removed corporate incentive to invest in greater production of most consumer goods.

The situation would be more stable if non-conventional oil producers had the ability to weather prolonged periods of low oil prices. But as the price drop of 2015 showed, that would be wishful thinking. “By the second quarter of 2015 more than half of all distressed bonds across investment and high-yield bond markets were issued by energy companies. Under these financial strains a wave of shale bankruptcies began in the first quarter of 2015” – a bankruptcy wave that grew three times as high in 2016.

Finally, financial markets with their high exposure to risky non-conventional oil production have been easily spooked by mere rumours of the end of quantitative easing or any significant rise in interest rates. So central bankers have good reason to fear they may go into the next recession with no tools left in their monetary policy toolbox.

Far from representing a way out of economic crisis, then, the shale oil and related tar sands booms are a symptom of an ongoing economic crisis, the end of which is nowhere in sight.

Energy and power

Thompson also discusses the geo-political effects of the changing global oil market. She notes that the shale oil boom created serious tensions in the US-Saudi relationship. The Saudis wanted oil prices to be moderately high, perhaps in the $50-60/barrel range, because that would afford them substantial profits without driving down demand for oil. The Americans, with their billions sunk into high-cost shale oil wells, now needed oil prices of $70/barrel or more simply to make the fracked oil minimally profitable.

There was no way for both the Saudis and the Americans to win in this struggle, though they could both lose.

At the peak (to date) of the shale oil boom, there was only one significant geo-political development in which the Americans were able to flex some muscle specifically because of the big increase in US oil production, Thompson says. She attributes the nuclear treaty with Iran in part to the surge of new oil production in Texas and North Dakota. In her reading, world oil markets at the time did not need to fear the sudden loss of Iran’s oil output, and that made European governments comfortable enough to agree to impose sanctions on Iran. These sanctions, in turn, helped convince Iran to make a deal (a diplomatic success which the Trump administration is determined to undo).

But in 2014 OPEC still produced about three times as much oil as the US produced – with important implications:

“even at the height of the shale boom the obvious limits to any claim of geo-political transformation were also evident. The US remained a significant net importer of oil and, consequently, lacked the capacity to act as a swing producer capable of immediately and directly influencing the price.” (Thompson, page 56)

“Most consequentially, when the Obama administration turned towards sanctions against Russia after the onset of the Ukrainian crisis in the spring of 2014, it was not willing to contemplate significant action against Russian oil production.” (Thompson, page 57)

Thompson wraps up with a look at the oil shock of the 1970s, concluding that “There are striking similarities between aspects of the West’s current predicaments around oil and the problems western governments faced in the 1970s. … However, in a number of ways the present version of these problems is worse than those that were manifest in the 1970s.” (Thompson, page 57)

Much higher world oil demand today, the very high cost of new oil reserves in western countries, and the explosion of oil-related financial derivatives all combine to make the international monetary order highly unstable.

Has the US returned to the ranks of “fossil fuel superpowers”? Not as Thompson sees it:

“Now the US has nothing like the power it had in the post-war period in providing other states access to oil. Shale oil … cannot change the fact that the largest reserves of cheaply accessible oil lie in the Middle East and Russia, or that China and others’ rise has fundamentally changed the volume of demand for oil in the world.” (Thompson, page 111)

S-curves and other paths

Also published at Resilience.org.

Oxford University economist Kate Raworth is getting a lot of good press for her recently released book Doughnut Economics: 7 Ways to Think Like a 21st Century Economist.

The book’s strengths are many, starting with the accessibility of Raworth’s prose. Whether she is discussing the changing faces of economic orthodoxy, the caricature that is homo economicus, or the importance of according non-monetized activities their proper recognition, Raworth keeps things admirably clear.

Doughnut Economics makes a great crash course in promising new approaches to economics. In Raworth’s own words, her work “draws on diverse schools of thought, such as complexity, ecological, feminist, institutional and behavioural economics.” Yet the integration of ecological economics into her framework is incomplete, leading to a frustratingly unimaginative concluding chapter on economic growth.

Laying the groundwork for that discussion of economic growth has resulted in an article about three times as long as most of my posts, so here is the ‘tl;dr’ version:

Continued exponential economic growth is impossible, but the S-curve of slowing growth followed by a steady state is not the only alternative. If the goal is maintaining GDP at the highest possible level, then the S-curve is the best-case scenario, but in today’s world that isn’t necessarily desirable or even possible.

The central metaphor

Full disclosure: for as long as I can remember, the doughnut has been my least favourite among refined-sugar-white-flour-and-grease confections. So try as I might to be unbiased, I was no doubt predisposed to react critically to Raworth’s title metaphor.

What is the Doughnut? As Raworth explains, the Doughnut is the picture that emerged when she sketched a “safe space” between the Social Foundation necessary for prosperity, and the Ecological Ceiling beyond which we should not go.

Source: Doughnut Economics, page 38

There are many good things to be said about this picture. It affords a prominent place to both the social factors and the ecological factors which are essential to prosperity, but which are omitted from many orthodox economic models. The picture also restores ethics, and the choosing of goals, to central roles in economics.

Particularly given Raworth’s extensive background in development economics, it is easy to understand the appeal of this diagram.

But I agree with Ugo Bardi (here and here) that there is no particular reason the diagram should be circular – Shortfall, Social Foundation, Safe and Just Space, Ecological Ceiling and Overshoot would have the same meaning if arranged in horizontal layers rather than in concentric circles.

From the standpoint of economic analysis, I find it unhelpful to include a range of quite dissimilar factors all at the same level in the centre of the diagram. A society could have adequate energy, water and food without having good housing and health care – but you couldn’t have good housing and health care without energy, water and food. So some of these factors are clearly preconditions for others.

Likewise, some of the factors in the centre of the diagram are clearly and directly related to “overshoot” in the outer ring, while others are not. Excessive consumption of energy, water, or food often leads to ecological overshoot, but you can’t say the same about “excessive” gender equality, political voice, or peace and justice.

Beyond these quibbles with the Doughnut diagram, I further agree with Bardi that a failure to incorporate biophysical economics is the major weakness of Doughnut Economics. In spite of her acknowledgment of the pioneering work of Herman Daly, and a brief but lucid discussion of the work of Robert Ayres and Benjamin Warr showing that fossil fuels have been critical for the past century’s GDP growth, Raworth does not include energy supply as a basic determining factor in economic development.

Economists as spin doctors

Raworth makes clear that key doctrines of economic orthodoxy often obscure rather than illuminate economic reality. Thus economists in rich countries extoll the virtues of free trade, though their own countries relied on protectionism to nurture their industrial base.

Likewise standard economic modeling starts with a reductionist “homo economicus” whose decisions are always based on rational pursuit of self-interest – even though behavioral science shows that real people are not consistently rational, and are motivated by co-operation as much as by self-interest. Various studies indicate, however, that economics students and professors show a greater-than-average degree of self-interest. And for those who are already wealthy but striving to become wealthier still, it is comforting to believe that everyone is similarly self-interested, and that their self-interest works to the good of all.

When considering a principle of mainstream economics, then, it makes sense to ask: what truths does this principle hide, and for whose benefit?

Unfortunately, when it comes to GDP growth as the accepted measure of a healthy economy, Raworth leaves out an important part of the story.

The concept of Gross Domestic Product has its roots in the 1930s, when statisticians were looking for ways to quantify economic activity, making temporal trends easier to discern. Simon Kuznets developed a way to calculate Gross National Product – the total of all income generated worldwide by US residents.

As Raworth stresses, Kuznets himself was clear that his national income tally was a very limited way of measuring an economy.

“Emphasising that national income captured only the market value of goods and services produced in an economy, he pointed out that it therefore excluded the enormous value of goods and services produced by and for households, and by society in the course of daily life. … And since national income is a flow measure (recording only the amount of income generated each year), Kuznets saw that it needed to be complemented by a stock measure, accounting for the wealth from which it was generated ….” (Doughnut Economics, page 34; emphasis mine)

The distinction between flows and stocks is crucial. Imagine a simple agrarian nation which uses destructive farming methods to work its rich land. For a number of years it may earn increasingly high income – the flow – though its wealth-giving topsoil – the stock – is rapidly eroding. Is this country getting richer or poorer? Measured by GDP alone, this economy is healthy as long as current income grows; no matter that the topsoil, and future prospects, are blowing away in the wind.
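The point can be made concrete with a toy model – the numbers below are invented purely for illustration, not drawn from Raworth or any real economy:

```python
# Toy model of flows vs. stocks; all numbers are invented for illustration.
# The income flow grows every year, while the topsoil stock that generates it
# erodes away. GDP-style accounting records only the first column.
topsoil = 100.0   # stock of topsoil, arbitrary units
income = 10.0     # annual farm income (the flow), arbitrary units

print(f"{'year':>4} {'income (flow)':>14} {'topsoil (stock)':>16}")
for year in range(1, 11):
    income *= 1.05          # income grows 5% a year...
    topsoil -= 8.0          # ...while a fixed slice of topsoil erodes each year
    print(f"{year:>4} {income:>14.1f} {max(topsoil, 0.0):>16.1f}")

# After ten years income has risen to about 16.3, while four-fifths of the
# topsoil is gone. Judged by the flow alone, this farm economy looks healthier
# every single year.
```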

In the years immediately preceding and following World War II, GDP became the primary measure of economic health, and it became political and economic orthodoxy that GDP should grow every year. (To date no western leader has ever campaigned by promising “In my first year I will increase GDP by 3%, in my second year by 2%, in my third year it will grow by 1%, and by my fourth year I will have tamed GDP growth to 0!”)

What truth does this reliance on GDP hide, and for whose benefit? The answers are fairly obvious, in my humble opinion: a myopic focus on GDP obscured the inevitability of resource depletion, for the benefit of the fossil fuel and automotive interests who dominated the US economy in the mid-twentieth century.

For context, in 1955 the top ten US corporations by number of employees included: General Motors, Chrysler, Standard Oil of New Jersey, Amoco, Goodyear and Firestone. (Source: 24/7 Wall St)

In 1960, the top ten largest US companies by revenue included General Motors, Exxon, Ford, Mobil, Gulf Oil, Texaco, and Chrysler. (Fortune 500)

These companies, plus the steel companies that made sheet metal for cars and the construction interests building the rapidly-growing network of roads, were clear beneficiaries of a new way of life that consumed ever-greater quantities of fossil fuels.

In the decades after World War II, the US industrial complex threw its efforts into rapid exploitation of energy reserves, along with mass production of machines that would burn that energy as fast as it could be pulled out of the ground. This transformation was not a simple result of “the invisible hand of the free market”; it relied on the enthusiastic collaboration of every level of government, from local zoning boards, to metropolitan transit authorities, to state and federal transportation planners.

But way back then, was it politically necessary to distract people from the inevitability of resource depletion?

The Peak Oil movement in the 1930s

From the very beginnings of the petroleum age, there were prominent voices who saw clearly that exponential growth in use of a finite commodity could not go on indefinitely.

One such voice was William Jevons, now known particularly for the “Jevons Paradox”. In 1865 he argued that since coal provided vastly more usable energy than industry had previously been able to harness, and since this new-found power was the very foundation of modern industrial civilization, it was particularly important to a nation to prudently manage supplies:

“Describing the novel social experience that coal and steam power had created, the experience that today we would call ‘exponential growth’, in which practically infinite values are reached in finite time, Jevons showed how quickly even very large stores of coal might be depleted.” (Timothy Mitchell, Carbon Democracy, pg 129)

In the 1920s petroleum was the new miracle energy source, but thoughtful geologists and economists alike realized that as a finite commodity, petroleum could not fuel infinite growth.

Marion King Hubbert was a student in 1926, but more than sixty years later he still recalled the eye-opening lesson he received when a professor asked pupils to consider the implications of ongoing rapid increases in the consumption of coal and oil resources.

As Mason Inman relates in his excellent biography of Hubbert,

“When a quantity grows by a constant percentage each year, its history forms a straight line on a semilogarithmic graph. Hubbert plotted the points for coal, year after year, and found a fairly straight line that persisted for several decades: a continual growth rate of around 6 percent a year. At that rate, the production doubled about every dozen years. When he looked at this graph, it was obvious to him that such rapid growth could persist for decades – his graph showed that had already happened – but couldn’t continue forever.” (The Oracle of Oil, 2016, pg 19)
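The “doubled about every dozen years” figure follows directly from the growth rate: a quantity growing at a steady 6 percent per year doubles in ln(2)/ln(1.06) years. A quick check of the arithmetic (mine, not Inman’s):

```python
import math

# Doubling time for a quantity growing at a constant 6% per year.
growth_rate = 0.06
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time at 6% per year: {doubling_time:.1f} years")  # about 11.9 years

# The familiar "rule of 70" shortcut gives nearly the same answer:
print(f"Rule-of-70 estimate: {70 / 6:.1f} years")                  # about 11.7 years
```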

Hubbert soon learned that there were many others who shared his concerns. This thinking coalesced in the 1930s in a very popular movement known as Technocracy, whose proponents argued that wealth depended primarily not on the circulation of money, but on the flow of energy.

The leaders of Technocracy, including Hubbert, were soon speaking to packed houses and were featured in cover stories in leading magazines. Hubbert was also tasked with producing a study guide that interested people could work through at home.

In the years prior to the Great Depression, people had become accustomed to economic growth of about 5% per year. Hubbert wanted people to realize it made no sense to take that kind of growth for granted.

“It has come to be naively expected by our business men and their apologists, the economists, that such a rate of growth was somehow inherent in the industrial processes,” Hubbert wrote. But since Earth and its physical resources are finite, he said, infinite growth is an impossibility.

In short, Technocracy pointed out that the fossil fuel age was likely to be a flash in the pan, historically speaking – unless the nation’s fuel reserves were managed carefully by engineers who understood energy efficiency and depletion.

Without sensible accounting and allocation of the true sources of a nation’s wealth – its energy reserves – private corporations would rake in massive profits for a few decades and two or three generations of Americans might prosper, but in the longer term the nation would be “burning its capital”.

Full speed ahead

After the convulsions of the Depression and World War II, the US emerged with the same leading corporations in an even more dominant position. Now the US had control, or at least major influence, not only over rich domestic fossil fuel reserves, but also over the much greater reserves in the Middle East. And as the world’s greatest military and financial power, it was in a position to set the terms of trade.

For fossil fuel corporations the major problem was that oil was temporarily too cheap. It came flowing out of wells so easily and in such quantity that it was often difficult to keep the price up. It was in their interests that economies consume oil at a faster rate than ever before, and that the rate of consumption would speed up each year.

Fortunately for these interests, a new theory of economics had emerged just in time.

In this new theory, economists should not worry about measuring the exhaustion of resources. In Timothy Mitchell’s words, “Economics became instead a science of money.”

The great thing about money supply was that, unlike water or land or oil, the quantity of money could grow exponentially forever. And as long as one didn’t look too far backwards or forwards, it was easy to imagine that energy resources would prove no barrier. After all, for several decades, the price of oil had been dropping.

“So although increasing quantities of energy were consumed, the cost of energy did not appear to represent a limit to economic growth. … Oil could be treated as something inexhaustible. Its cost included no calculation for the exhaustion of reserves. The growth of the economy, measured in terms of GNP, had no need to account for the depletion of energy resources.” (Carbon Democracy, pg 140)

GDP was thus installed as the supreme measure of an economy, with continuous GDP growth the unquestionable political goal.

A few voices dissented, of course. Hubbert warned in the mid-1950s that the US would hit the peak of its conventional oil production by the early 1970s, a prediction that proved correct. But large quantities of cheap oil remained in the Middle East. Additional new finds in Alaska and the North Sea helped to buy another couple of decades for the oil economy (though these fields are also now in decline).

Thanks to the persistent work of a small number of researchers who called themselves “ecological economists”, a movement grew to account for stocks of resources, in addition to tallying income flows in the GDP. By the early 1990s, the US Bureau of Economic Analysis gave its blessing to this effort.

In April 1994 the Bureau published a first set of tables called Integrated Environmental-Economic System of Accounts (IEESA).

The official effort was short-lived indeed. As described in Beyond GDP,

“progress toward integrated environmental-economic accounting in the US came to a screeching halt immediately after the first IEESA tables were published. The US Congress responded swiftly and negatively. The House report that accompanied the next appropriations bill explicitly forbade the BEA from spending any additional resources to develop or extend the integrated environmental and economic accounting methodology ….” (Beyond GDP, by Heun, Carbajales-Dale, Haney and Roselius, 2016)

All the way through Fiscal Year 2002, appropriations bills made sure this outbreak of ecological economics was nipped in the bud. The bills stated,

“The Committee continues the prohibition on use of funds under this appropriation, or under the Census Bureau appropriation accounts, to carry out the Integrated Environmental-Economic Accounting or ‘Green GDP’ initiative.” (quoted in Beyond GDP)

One can only guess that, when it came to contributing to Congressional campaign funds, the struggling fossil fuel interests had somehow managed to outspend the deep-pocketed biophysical economists lobby.

S-curves and other paths

With that lengthy detour complete, we are ready to rejoin Raworth and Doughnut Economics.

The final chapter is entitled “Be Agnostic About Growth: from growth addicted to growth agnostic”.

This sounds like a significant improvement over current economic orthodoxy – but I found this section weak in several ways.

First, it is unclear just what it is that we are to be agnostic about. While Raworth has made clear earlier in the book why GDP is an incomplete and misleading measure of an economy, in the final chapter GDP growth is nevertheless used as the only significant measure of economic growth. Are we to be agnostic about “GDP growth”, which might well be meaningless anyway? Or should we be agnostic about “economic growth”, which might be something quite different and quite a bit more essential – especially to the hundreds of millions of people still living without basic necessities?

Second, Raworth may be agnostic about growth, but she is not agnostic about degrowth. (She has discussed elsewhere why she can’t bring herself to use the word “degrowth”.) True, she remarks at one point that “I mean agnostic in the sense of designing an economy that promotes human prosperity whether GDP is going up, down, or holding steady.” Yet in the pictures she draws and in the ensuing discussion, there is no clear recognition either that degrowth might be desirable, or that degrowth might be forced on us by biophysical realities.

She includes two graphs for possible paths of economic growth – with growth measured here simply by GDP.

Source: Doughnut Economics, page 210 and page 214

As she notes, the first graph shows GDP increasing at a steady annual percentage. While politicians would like us to believe this is possible and desirable, the graph showing what quickly becomes a near-vertical climb is seldom presented in economics textbooks, as it is clearly unrealistic.

The second graph shows GDP growing slowly at first, then picking up speed, and then leveling off into a high but steady state with no further growth. This path for growth is commonly seen and recognized in ecology. The S-curve was also recognized by pre-20th-century economists, including Adam Smith and John Stuart Mill, as the ideal for a healthy economy.
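The two paths are easy to write down as formulas. Below is a minimal sketch comparing them – the starting value, growth rate and plateau level are invented for illustration only: exponential growth compounds without limit, while the logistic S-curve levels off at a ceiling.

```python
import math

# Two stylized GDP paths; all parameter values are illustrative only.
G0, r = 1.0, 0.03            # exponential: start at 1.0, grow 3% per year
K, a, b = 10.0, 9.0, 0.05    # logistic: same start, levelling off at 10x

def exponential(t):
    return G0 * (1 + r) ** t

def s_curve(t):
    return K / (1 + a * math.exp(-b * t))

print(f"{'year':>5} {'exponential':>12} {'S-curve':>9}")
for t in (0, 25, 50, 100, 150, 200):
    print(f"{t:>5} {exponential(t):>12.1f} {s_curve(t):>9.1f}")

# The exponential path reaches roughly 369 times its starting value after
# 200 years and keeps climbing; the S-curve flattens out near its ceiling of 10.
```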

I would concur that an S-curve which lands smoothly on a high plateau is an ideal outcome. But can we take for granted that this outcome is still possible? And do these two paths – continued exponential growth or an S-curve – really exhaust the conceptual possibilities that we should consider?

On the contrary, we can look back 80 years to the Technocracy Study Course for an illustration of varied and contrasting paths of economic growth and degrowth.

Source: The Oracle of Oil, page 58

M. King Hubbert produced this set of graphs to illustrate what can be expected with various key commodities on which a modern industrial economy depends – and by extension, what might happen with the economy as a whole.

While pure exponential growth is impossible, the S-curve may work for a dependably renewable resource, or a renewable-resource based economy. However, the next possibility – with a rise, peak, decline, and then a leveling off – is also a common scenario. For example, a society may harvest increasing amounts of wood until the regenerating power of the forests is exceeded; the harvest must then drop before any production plateau can be established.

The bell curve which starts at zero, climbs to a high peak, and drops back to zero, could characterize an economy which is purely based on a non-renewable resource such as fossil fuels. Hopefully this “decline to zero” will remain a theoretical conception, since no society to date has run 100% on a non-renewable resource. Nevertheless our fossil-fuel-based industrial society will face a severe decline unless we can build a new energy system on a global scale, in very short order.
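Hubbert’s bell-shaped curve can be sketched in the same spirit: treat cumulative extraction of a finite resource as a logistic curve, and annual production becomes its derivative – rising, peaking when roughly half the recoverable resource has been used, and falling back toward zero. The parameters below are invented for illustration; they are not Hubbert’s own figures.

```python
import math

# Stylized Hubbert curve for a finite resource; numbers are illustrative only.
URR = 1000.0   # ultimately recoverable resource, arbitrary units
b = 0.08       # steepness of the curve
t_peak = 60.0  # year of peak production

def production(t):
    """Annual production: derivative of a logistic cumulative-extraction curve."""
    e = math.exp(-b * (t - t_peak))
    return URR * b * e / (1 + e) ** 2

print(f"{'year':>5} {'production':>11}")
for t in (0, 30, 60, 90, 120):
    print(f"{t:>5} {production(t):>11.1f}")

# Production climbs toward a peak of about 20 units/year around year 60,
# then declines toward zero as cumulative extraction approaches the URR.
```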

This range of economic decline scenarios is not really represented in Doughnut Economics. That may have something to do with the design of the title metaphor.

While ecological overshoot, on the outside of the doughnut, represents things we should not do, the diagram doesn’t have a way of representing the things we can not do.

We should not continue to burn large quantities of fossil fuel because that will destabilize the climate that our children and grandchildren inherit. But once our cheaply accessible fossil fuels are used up, then we can not consume energy at the same frenetic pace that today’s wealthy populations take for granted.

The same principle applies to many essential economic resources. As long as there is significant fertility left in farmland, we can choose to farm the land with methods that produce a high annual return even though they gradually strip away the topsoil. But once the topsoil is badly depleted, then we no longer have a choice to continue production at the same level – we simply need to take the time to let the land recover.

In other words, these biophysical realities are more fundamental than any choices we can make – they set hard limits on which choices remain open to us.

The S-curve economy may be the best-case scenario, an outcome which could in principle provide global prosperity with a minimum of system disruption. But with each passing year during which our economy is based overwhelmingly on rapidly depleting non-renewable resources, the smooth S-curve becomes a less likely outcome.

If some degree of economic decline is unavoidable, then clear-sighted planning for that decline can help us make the transition a just and peaceful one.

If we really want to think like 21st century economists, don’t we need to openly face the possibility of economic decline?

 

Top photo: North Dakota State Highway 22, June 2014.

Guns, energy, and the coin of the realm

Also published at Resilience.org.

While US debt climbs to incomprehensible heights, US banking authorities continue to pump new money into the economy. How can they do it? David Graeber sees a simple explanation:

“There’s a reason why the wizard has such a strange capacity to create money out of nothing. Behind him, there’s a man with a gun.” (Debt: The First 5,000 Years, Melville House, 2013, pg 364)

In part one of this series, we looked at the extent of violence in the “American Century” – the period since World War II in which the US has been the number one superpower, and in which US garrisons have ringed the world. In part two we looked at the role of energy supplies in propelling the US to power, the rapid drawdown of energy supplies in the US post-WWII, and the more recent explosion of US debt.

In this concluding installment we’ll look at the links between military power and financial power.

A new set of financial institutions arose at the end of World War II, and for obvious reasons the US was ‘first among equals’ in setting the rules. Not only was the US in military occupation of Germany and Japan, but the US also had the financial capital to help shattered countries – whether on the war’s winning or losing sides – reconstruct their infrastructures and restart their economies.

The US was also able to offer military protection to many countries including previous mortal enemies. This meant that these countries could avoid large military outlays – but also that their elites were in no position to challenge US supremacy.

That being said, there were challenges both large and small in dozens of nations, particularly from the grass roots. The US exercised political power, both soft and hard, in attempts to influence the directions of scores of countries around the world. Planting of media reports, surreptitious aid to favoured electoral candidates, dirty tricks to discredit candidates seen as threatening, military aid and training to dictatorships and police forces who could put down movements for social justice, planning and helping to implement coups, and full-fledged military invasion – this range of intervention techniques resulted in hundreds of thousands, if not millions, of deaths. Cataloguing the bloody side of US “leadership of the free world” is the task taken on so ably by John Dower in The Violent American Century.

Dollars for oil

One of the rules of the game grew in importance with each passing decade. In Timothy Mitchell’s words,

“Under the arrangements that governed the international oil trade, the commodity was sold in the currency not of the country where it was produced, nor of the place where it was consumed, but of the international companies that controlled production. ‘Sterling oil’, as it was known (principally oil from Iran), was traded in British pounds, but the bulk of global sales were in ‘dollar oil’.” (Carbon Democracy, Verso, 2013, pg 111)

As David Graeber’s Debt explains in detail, the ability to force people to acquire and use the ruler’s currency has, throughout history, been a key mechanism for extracting tribute from subject populations.

In today’s global economy, that is why the pricing of oil in dollars has been so important for the US. Again in Timothy Mitchell’s words:

“Europe and other regions had to accumulate dollars, hold them and then return them to the United States in payment for oil. Inflation in the United States slowly eroded the value of the dollar, so that when these countries purchased oil, the dollars they used were worth less than their value when they acquired them. These seigniorage privileges, as they are called, enabled Washington to extract a tax from every other country in the world ….” (Carbon Democracy, pg 120)
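A simple worked example shows how this seigniorage “tax” operates; the dollar amount and inflation rate below are invented for illustration, not figures from Mitchell:

```python
# Illustrative seigniorage arithmetic; the numbers are invented for the example.
dollars_held = 1_000_000_000   # dollars an oil importer accumulates and holds
us_inflation = 0.05            # assumed US inflation over the holding period

real_value_when_spent = dollars_held / (1 + us_inflation)
implicit_tax = dollars_held - real_value_when_spent

print(f"Purchasing power lost: ${implicit_tax:,.0f}")            # about $47.6 million
print(f"Implicit tax rate: {implicit_tax / dollars_held:.1%}")   # about 4.8%
```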

As Greg Grandin explains, the oil-US dollar relationship grew in importance even as OPEC countries were able to force big price increases:

“With every rise in the price of oil, oil-importing countries had to borrow more to meet their energy needs. With every petrodollar placed in New York banks, the value of the US currency increased, and with it the value of the dollar-denominated debt that poor countries owed to those banks.” (“Down From The Mountain”, London Review of Books, June 19, 2017)

But the process did take on another important twist after US domestic oil production peaked and imports from Saudi Arabia soared in the 1970s. Although the oil trade continued to support the value of the US dollar, the US was now sending a lot more of those dollars to oil exporting countries. The Saudis, in particular, accumulated US dollars so fast there wasn’t a productive way for them to circulate these dollars back into the US by purchasing US-made goods. The burgeoning US exports of munitions provided a solution. Mitchell explains:

“As the producer states gradually forced the major oil companies to share with them more of the profits from oil, increasing quantities of sterling and dollars flowed to the Middle East. To maintain the balance of payments and the viability of the international financial system, Britain and the United States needed a mechanism for these currency flows to be returned. … Arms were particularly suited to this task of financial recycling, for their acquisition was not limited by their usefulness. The dovetailing of the production of petroleum and the manufacture of arms made oil and militarism increasingly interdependent.” (Carbon Democracy, pg 155-156)

He adds, “The real value of US arms exports more than doubled between 1967 and 1975, with most of the new market in the Middle East.”

An F-15 Eagle aircraft of the Royal Saudi Air Force takes off during Operation Desert Shield, 1991. (Source: Wikimedia Commons)

Fast forward to today. Imported oil is a critical factor in the US economy, in spite of a supply blip from fracking. US industry leads the world in the export of weapons; the top three buyers, and five of the top ten buyers, are in the Middle East. (Source: CNN, May 25, 2016) Yet US arms sales are dwarfed by US military expenditures, which are roughly double in real terms what they were in the 1960s. (Source: Time, July 16, 2013)

Finally, US national debt, in 1983 dollars, is about 10 times as high as it was from 1950 to 1980. In other words the US government, along with its banking and military complexes, has been living far beyond its means (making bankruptcy king Donald Trump a fitting figurehead). (Source: Stephen Bloch, Adelphi University)

Yet the game goes on. As David Graeber sees it,

“American imperial power is based on a debt that will never – can never – be repaid. Its national debt has become a promise, not just to its own people, but to the nations of the entire world, that everyone knows will not be kept.

At the same time, U.S. policy was to insist that those countries relying on U.S. treasury bonds as their reserve currency behaved in exactly the opposite way: observing tight money policies and scrupulously repaying their debts ….” (Debt, pg 367)

We’ll close with two speculations on how the “American century” may come to an end.

US supremacy rests on interrelated dominance in military power, financial power, and influence over fossil fuel energy markets. At present the US financial system can create ever larger sums of money, and the rest of the world may have no better immediate option than to continue buying US debt. But just as you can’t eat money, you can’t burn it in an electricity generator, a diesel truck, or a bomber flying sorties to a distant land. So no amount of financial wizardry will sustain the current outsized industrial economy or its military sector, once prime fossil fuel sources have been tapped out.

On the other hand, suppose low-carbon renewable energy technologies improve so rapidly that they can replace fossil fuels within a few decades. This would be a momentous energy transition, and might also lead to a momentous transition in geopolitics.

In recent years, and especially under the Trump administration, the US has been ceding renewable energy technology leadership to other countries, especially China. If many countries free themselves from fossil-fuel dependence and no longer need US dollars to pay for their energy, a pillar of US supremacy will fall.

Top photo: Commemorative silver dollar sold by the US Mint, 2012.

Fossil fuel empire: a world of vulnerability

Also published at Resilience.org.

“It’s all about the oil,” many commentators said about the US assault on Iraq in 2003.

Attributing a war to a single cause is almost always an oversimplification, but protecting access to the 20th century’s most important energy source has been a priority of US foreign policy since World War II.

In part one of this series we considered the effects of the US military complex which has ringed the world for the past 75 years. This complex has depended on vast amounts of fossil fuel energy to move troops and munitions, and the US became a world power in significant part because of its endowment of oil.

As Daniel Yergin recounts in The Prize: The Epic Quest for Oil, Money & Power:

“Petroleum was central to the course and outcome of World War II in both the Far East and Europe. The Japanese attacked Pearl Harbor to protect their flank as they grabbed for the petroleum resources of the East Indies. Among Hitler’s most important strategic objectives in the invasion of the Soviet Union was the capture of the oil fields in the Caucasus. But America’s predominance in oil proved decisive, and by the end of the war German and Japanese fuel tanks were empty.” (The Prize, Simon & Schuster, 1990, pg 13)

At the end of World War II the US was not only the world’s preeminent military force, but its industrial capacity was undamaged by war and it was running on seemingly abundant supplies of cheap domestic oil.

Spurred on by oil and car companies who had the most to gain from a high-energy way of life, the US embarked on a building spree of far-flung suburbs, interstate highways, and airports that allowed long-distance flight to become a routine activity.

This hyper-consumption was bolstered by a new economic orthodoxy which saw no need to factor in energy depletion when accounting for national wealth, and which portrayed exponential economic growth as a phenomenon that could and should continue decade after decade.

It took barely a generation, of course, for the US economy to suck up the bulk of its cheap domestic oil – conventional oil production peaked in the US in 1970. Did Americans then conclude they should change the basis of their economy, and make peace with reduced energy consumption? Far from it. Dependence on imported oil has now been a central feature of the US economy for nearly fifty years.

Gap between US oil consumption and production. Chart by An Outside Chance for the post Alternative Geologies: Trump’s “America First Energy Plan”, from stats on ycharts.com

 

A world of vulnerability

The huge military complex which protects essential oil supply routes is sometimes seen as a sign of US strength, but it can just as accurately be seen as a sign of US weakness.

In a 2009 report entitled “Powering America’s Defense: Energy and the Risks to National Security”, a panel of twelve retired generals and admirals notes that “The U.S. consumes 25 percent of the world’s oil production, yet controls less than 3 percent of an increasingly tight supply.” This voracious appetite for oil, they say, is a dangerous vulnerability:

“As we consider America’s current energy posture, we do so from a singular perspective: We gauge our energy choices solely by their impact on America’s national security. Our dependence on foreign oil reduces our international leverage, places our troops in dangerous global regions, funds nations and individuals who wish us harm, and weakens our economy; our dependency and inefficient use of oil also puts our troops at risk.” (Introduction to Powering America’s Defense)

One source of imported oil has outranked all others for the US and its western European allies. The US was already consolidating its “special relationship” with Saudi Arabia in the 1930s, in the first years of that country’s existence. As Timothy Mitchell describes this relationship,

“Aramco [Arabian-American Oil Company] paid the oil royalty not to a national government but to a single household, that of Ibn Saud, who now called himself king and renamed the country … the ‘Kingdom of Saudi Arabia’. … This ‘privatisation’ of oil money was locally unpopular, and required outside help to keep it in place. In 1945 the US government established its military base at Dhahran, and later began to train and arm Ibn Saud’s security forces …. The religious establishment, on the other hand, created the moral and legal order of the new state, imposing the strict social regime that maintained discipline in the subject population and suppressed political dissent.” (Timothy Mitchell, Carbon Democracy, Verso, 2013, pg 210-211)

The alliance between a self-styled liberal democracy and a theocratic autocracy has not been a marriage made in heaven. But in spite of many points of tension the relationship has benefited powerful forces in both countries and has endured for most of the age of oil.

The need to protect US access to the world’s largest sources of conventional oil was formally recognized in the Carter Doctrine:

“An attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States of America, and such an assault will be repelled by any means necessary, including military force.” (US President Jimmy Carter in his State of the Union Address, January 1980)

Ironically, this doctrine led the US to begin supporting the mujahideen, Islamic fundamentalists who were fighting the Soviet Union in Afghanistan. And ironically, after the Soviet-Afghan war ended, one of the major irritants for the formerly lauded “freedom fighters” was the heavy military presence of the infidel United States in Saudi Arabia. The result has been almost 20 years of continuous warfare between the US and various offshoots of the mujahideen, with no prospect of victory for the US.

The costs of these wars, merely in dollar terms, have been staggering. While US military expenditures have remained high ever since World War II, these costs have recently gone through the roof. An analysis of military spending by Time in 2013 found that inflation-adjusted military spending in the 2000s was approximately twice as high as military spending in the 1960s, during the nuclear face-off with Russia and the massive deployment in Indochina.

In sum, the US has been importing increasing quantities of increasingly expensive oil for decades. During the same years US military spending has soared. Does this sound like a recipe for solvency? You might well wonder if it’s just coincidence that US national debt has soared during these years.

US national debt converted to 1983 dollars and plotted on logarithmic scale – each step up the ladder is 10 times as high as the previous step – by Stephen Bloch of Adelphi University.

 

Recall the curious formulation by John Dower cited in the first installment of this series:

“Creating a capacity for violence greater than the world has ever seen is costly – and remunerative.” (The Violent American Century, pg 12, emphasis mine)

How is this world-wide military occupation remunerative? In our next installment we’ll look at the tie-in between the power that grows out of the barrel of a gun, and the power that comes with control of currency.

Part Three of this series

 

Top photo: well head at the Big Hill, Texas site of the US Strategic Petroleum Reserve. The Big Hill facility stores up to 160 million barrels of oil. The four sites of the Strategic Petroleum Reserve were developed in the 1970s, amid fears that a disruption in global supply lines could leave the US dangerously vulnerable. Photo from US Office of Fossil Energy.

The stratospheric costs of The American Century

Also published at Resilience.org.

“Political power grows out of the barrel of a gun,” Chairman Mao famously stated in 1927.

Political power grows out of a barrel of oil – that’s an important theme in Daniel Yergin’s classic book The Prize: The Epic Quest for Oil, Money & Power.

Political power, including the use of state violence, goes hand in hand with control of authorized currency – that’s one of the key lessons of David Graeber’s Debt: The First 5,000 Years.

Guns, energy, money – each of these factors of power comes to mind in reading the recently released book by John Dower, The Violent American Century: War And Terror Since World War Two. (Chicago, Haymarket Books, 2017)

This brief book keeps a tight focus: cataloguing the extent of violence associated with the US role as the world’s dominant superpower. Dower avoids many closely related questions, such as Which persons or sectors in the US benefit most from military conflict? or, Was there justification for any of the violent overseas adventures by US forces in the past 75 years? or, Might the world have been more, or less, violent if the US had not been the dominant superpower?

It may be easy to forget, in Canada or western Europe or especially in the United States, that wars big and small have been raging somewhere in the world nearly every year through our lifetimes. Dower’s book is prompted in part by the recently popularized notion that on a world historical scale, violence is now at an all-time low. Steven Pinker, in his 2011 book The Better Angels of Our Nature, marshaled both statistics and anecdotes to advance the view that “today we may be living in the most peaceable era in our species’ existence.”

Dower doesn’t try to definitively refute the idea of a “Long Peace”, but he does ask us to question widely held assumptions.

He begins with the important point that if you start with the unprecedented mass slaughter of World War II as a baseline, it’s easy to make a case that succeeding decades have been relatively peaceful.

Yet one of the key military strategies used by the US in World War II was retained in both practice and theory by subsequent US warlords – aerial bombardment of civilian populations.

“By the time the United States began carpet-bombing Japan, ‘industrial war’ and psychological warfare were firmly wedded, and the destruction of enemy morale by deliberately targeting densely populated urban centers had become standard operating procedure. US air forces would later carry this most brutal of inheritances from World War Two to the populations of Korea and Indochina.” (The Violent American Century, pg 22)

The result of this policy carry-over was that

“During the Korean War … the tonnage of bombs dropped by US forces was more than four times greater than had been dropped on Japan in 1945. … In the Vietnam War … an intensive US bombing campaign that eventually extended to Cambodia and Laos dropped more than forty times the tonnage of bombs used on Japan.” (The Violent American Century, pg 43)

The massive bombardments failed to produce unambiguous victories in Korea or in Indochina, but it’s hard to look at these wars and avoid the conclusion that the scope and scale of violence remained terribly high.

Meanwhile US war planners were preparing for destruction on an even greater scale. Both US and Soviet nuclear forces held the capability of destroying all human life – and yet they continued to build more nuclear missiles and continued to discuss whether they would ever launch a first strike.

By the time of his retirement, former Strategic Air Command director General (George) Lee Butler had become an advocate of nuclear abolition. In his insider’s view, “mankind escaped the Cold War without a nuclear holocaust by some combination of diplomatic skill, blind luck and divine intervention, probably the latter in greatest proportion.”

Yet the danger remains. Even Nobel Peace Prize winner Barack Obama, who stirred hopes for peace in 2009 by calling for abolition of nuclear weapons, left office having approved a $1 trillion, 30-year program of upgrading US nuclear weapons.

Though the Cold War ended without conflagration between the world’s major powers, a CIA tabulation listed 331 “Major Episodes of Political Violence” between 1946 and 2013. The US armed, financed and/or coached at least one side in scores of these conflicts, and participated more directly in dozens. This history leads Dower to conclude

“Branding the long postwar era as an epoch of relative peace is disingenuous …. It also obscures the degree to which the United States bears responsibility for contributing to, rather than impeding, militarization and mayhem after 1945.” (The Violent American Century, pg 3)

Dower also notes that violence doesn’t always end in death – sometimes it leads to flight. In this regard the recent, rapid increase in numbers of refugees calls into question the idea of a new era of peace. The United Nations High Commissioner for Refugees recently reported that the number of forcibly displaced individuals “had surpassed sixty million and was the highest level recorded since World War Two and its immediate aftermath.”

The wages of war

Since its victory in World War II, the US has built an ever larger, ever more extensive military presence around the world. By the early 2000s, according to former CIA consultant Chalmers Johnson, the US owned or rented more than 700 military bases in 130 countries.

Dower gives a brief tally of the financial costs to the US of this military occupation of the globe. In addition to the “base” defense department budget of about $600 billion per year, Dower says many extra expenses include “contingency” costs of engagements in the Middle East, care for veterans, the “black budget” for the CIA, and interest on the military component of the national debt, pushing the cost of the US military complex to around $1 trillion per year.

He concludes, “Creating a capacity for violence greater than the world has ever seen is costly – and remunerative.”

In coming installments of this essay we’ll consider especially those last three words: “costly and remunerative”. Who pays for and who benefits from the massive maintenance and exercise of military muscle, and over what time scale? In doing so, we’ll explore the interrelationships of three types of power: power from the barrel of a gun, power that comes from a barrel of oil, and power that comes from control of the monetary system.

Part Two of this series

Top photo: U.S. Air Force Republic F-105D Thunderchief fighters refuel from a Boeing KC-135A Stratotanker en route to North Vietnam in 1966. Photo in Wikimedia Commons is from US National Archives and Records Administration. A 2007 report for the Brookings Institution found that the Air Force alone used 52% of the fuel burned by the US government, and that all branches of the Department of Defense together accounted for 93% of US government fuel consumption. (“Department of Defense Energy Strategy: Teaching an Old Dog New Tricks”)

Energy And Civilization: a review

Also published at Resilience.org and BiophysEco.

If you were to find yourself huddled with a small group of people in a post-crash, post-internet world, hoping to recreate some of the comforts of civilization, you’d do well to have saved a printed copy of Vaclav Smil’s Energy and Civilization: A History.

Smil’s new 550-page magnum opus would help you understand why, for most applications, a draft horse is a more efficient engine than an ox – but only if you use an effective harness, which the book illustrates well. He could help you decide whether building a canal or a hard-topped road would be a more productive use of your energies. When you were ready to build capstans or block-and-tackle mechanisms for accomplishing heavy tasks, his discussion and his illustrations would be invaluable.

But hold those thoughts of apocalypse for a moment. Smil’s book is not written as a doomer’s handbook, but as a thorough guide to the role of energy conversions in human history to date. Based on his 1994 book Energy in World History, the new book is about 60% longer and includes 40% more illustrations.

Though the initial chapters on prehistory are understandably brief, Smil lays the groundwork with his discussion of the dependency of all living organisms on their ability to acquire enough energy in usable forms.

The earliest humans had some distinct advantages and liabilities in this regard. Unlike other primates, humans evolved to walk on two feet all the time, not just occasionally. Ungainly though this “sequence of arrested falls” may be, “human walking costs about 75% less energy than both quadrupedal and bipedal walking in chimpanzees.” (Energy and Civilization, pg 22)

What to do with all that saved energy? Just think:

The human brain claims 20–25% of resting metabolic energy, compared to 8–10% in other primates and just 3–5% in other mammals.” (Energy and Civilization, pg 23)

In his discussion of the earliest agricultures, Smil brings forward a recurring theme: energy availability is always a limiting factor, but other social factors also come into play throughout history. In one sense, Smil explains, the move from foraging to farming was a step backwards:

Net energy returns of early farming were often inferior to those of earlier or concurrent foraging activities. Compared to foraging, early farming usually required higher human energy inputs – but it could support higher population densities and provide a more reliable food supply.” (Energy and Civilization, pg 42)

The higher population densities allowed a significant number of people to work at tasks not immediately connected to securing daily energy requirements. The result, over many millennia, was the development of new materials, tools and processes.

Smil gives succinct explanations of why the smelting of brass and bronze was less energy-intensive than the production of pure copper. Likewise he shows why the iron age, with its much higher energy requirements, resulted in widespread deforestation, and why iron production remained necessarily limited until humans learned to exploit coal deposits in recent centuries.

Cooking snails in a pot over an open fire. In Energy and Civilization, Smil covers topics as diverse as the importance of learning to use fire to supply the energy-rich foods humans need; the gradual deployment of better sails which allowed mariners to sail closer to the wind; and the huge boost in information consumption that occurred a century ago due to a sudden drop in the energy cost of printing. Image from Wellcome Images, via Wikimedia Commons.

Energy explosion

The past two hundred years of fossil-fuel-powered civilization takes up the biggest chunk of the book. But the effective use of fossil fuels had to be preceded by many centuries of development in metallurgy, chemistry, understanding of electromagnetism, and a wide array of associated technologies.

While making clear how drastically human civilizations have changed in the last several generations, Smil also takes care to point out that even the most recent energy transitions didn’t take place all at once.

While the railways were taking over long-distance shipments and travel, the horse-drawn transport of goods and people dominated in all rapidly growing cities of Europe and North America.” (Energy and Civilization, pg 185)

Likewise the switches from wood to coal or from coal to oil happened only with long overlaps:

The two common impressions – that the twentieth century was dominated by oil, much as the nineteenth century was dominated by coal – are both wrong: wood was the most important fuel before 1900 and, taken as a whole, the twentieth century was still dominated by coal. My best calculations show coal about 15% ahead of crude oil …” (Energy and Civilization, pg 275)

Smil draws an important lesson for the future from his careful examination of the past:

Every transition to a new form of energy supply has to be powered by the intensive deployment of existing energies and prime movers: the transition from wood to coal had to be energized by human muscles, coal combustion powered the development of oil, and … today’s solar photovoltaic cells and wind turbines are embodiments of fossil energies required to smelt the requisite metals, synthesize the needed plastics, and process other materials requiring high energy inputs.” (Energy and Civilization, pg 230)

A missing chapter

Energy and Civilization is a very ambitious book, covering a wide sweep of history and science with clarity. But a significant omission is any discussion of the role of slavery or colonialism in the rise of western Europe.

Smil does note the extensive exploitation of slave energy in ancient construction works and in rowing the warships of ancient Greece’s democratic cities. He carefully calculates the power output needed for these projects, whether supplied by slaves, peasants, or animals.

In his look at recent European economies, Smil also notes the extensive use of hard physical labour, including child labour, that accompanied the growth of fossil-fueled industry. For example, he describes the brutal work conditions endured by women and girls who carried coal up long ladders from Scottish coal mines, in the period before effective machinery was developed for this purpose.

But what of the 20 million or more slaves taken from Africa to work in the European colonies of the “New World”? Did the collected energies of all these unwilling participants play no notable role in the progress of European economies?

Likewise, vast quantities of resources in the Americas, including oil-rich marine mammals and old-growth forests, were exploited by the colonies for the benefit of European nations which had run short of these important energy commodities. Did this sudden influx of energy wealth play a role in European supremacy over the past few centuries? Attention to such questions would have made Energy and Civilization a more complete look at our history.

An uncertain future

Smil closes the book with a well-composed rumination on our current predicaments and the energy constraints on our future.

While the timing of transition is uncertain, Smil leaves little doubt that a shift away from fossil fuels is necessary, inevitable, and very difficult. Necessary, because fossil fuel consumption is rapidly destabilizing our climate. Inevitable, because fossil fuel reserves are being depleted and will not regenerate in any relevant timeframe. Difficult, both because our industrial economies are based on a steady growth in consumption, and because much of the global population still doesn’t have access to a sufficient quantity of energy to provide even the basic necessities for a healthy life.

The change, then, should be led by those who are now consuming quantities of energy far beyond the level where this consumption furthers human development.

Average per capita energy consumption and the human development index in 2010. Smil, Energy and Civilization, pg 363

 

Smil notes that energy consumption rises in correlation with the Human Development Index up to a point. But increases in energy use beyond roughly the level of present-day Turkey or Italy provide no significant boost in human development. Some of the ways we consume a lot of energy, he argues, are pointless, wasteful and ineffective.

In affluent countries, he concludes,

Growing energy use cannot be equated with effective adaptations and we should be able to stop and even to reverse that trend …. Indeed, high energy use by itself does not guarantee anything except greater environmental burdens.

Opportunities for a grand transition to less energy-intensive society can be found primarily among the world’s preeminent abusers of energy and materials in Western Europe, North America, and Japan. Many of these savings could be surprisingly easy to realize.” (Energy and Civilization, pg 439)

Smil’s book would indeed be a helpful post-crash guide – but it would be much better if we heed the lessons, and save the valuable aspects of civilization, before apocalypse overtakes us.

 

Top photo: Common factory produced brass olive oil lamp from Italy, c. late 19th century, adapted from photo on Wikimedia Commons.

The Carbon Code – imperfect answers to impossible questions

Also published at Resilience.org.

“How can we reconcile our desire to save the planet from the worst effects of climate change with our dependence on the systems that cause it? How can we demand that industry and governments reduce their pollution, when ultimately we are the ones buying the polluting products and contributing to the emissions that harm our shared biosphere?”

These thorny questions are at the heart of Brett Favaro’s new book The Carbon Code (Johns Hopkins University Press, 2017). While he  readily concedes there can be no perfect answers, his book provides a helpful framework for working towards the immediate, ongoing carbon emission reductions that most of us already know are necessary.

Favaro’s proposals may sound modest, but his carbon code could play an important role if it is widely adopted by individuals, by civil organizations – churches, labour unions, universities – and by governments.

As a marine biologist at Newfoundland’s Memorial University, Favaro is keenly aware of the urgency of the problem. “Conservation is a frankly devastating field to be in,” he writes. “Much of what we do deals in quantifying how many species are declining or going extinct  ….”

He recognizes that it is too late to prevent climate catastrophe, but that doesn’t lessen the impetus to action:

There’s no getting around the prospect of droughts and resource wars, and the creation of climate refugees is certain. But there’s a big difference between a world afflicted by 2-degree warming and one warmed by 3, 4, or even more degrees.”

In other words, we can act now to prevent climate chaos going from worse to worst.

The code of conduct that Favaro presents is designed to help us be conscious of the carbon impacts of our own lives, and work steadily toward the goal of a nearly-complete cessation of carbon emissions.

The carbon code of conduct consists of four “R” principles that must be applied to one’s carbon usage:

1. Reduce your use of carbon as much as possible.

2. Replace carbon-intensive activities with those that use less carbon to achieve the same outcome.

3. Refine the activity to get the most benefit for each unit of carbon emitted.

4. Finally, Rehabilitate the atmosphere by offsetting carbon usage.”

There’s a good bit of wiggle room in each of those four ’R’s, and Favaro presents that flexibility not as a bug but as a feature. “Codes of conduct are not the same thing as laws – laws are dichotomous, and you are either following them or you’re not,” he says. “Codes of conduct are interpretable and general and are designed to shape expectations.”

Street level

The bulk of the book is given to discussion of how we can apply the carbon code to home energy use, day-to-day transportation, a lower-carbon diet, and long distance travel.

There is a heavy emphasis on a transition to electric cars – an emphasis that I’d say is one of the book’s weaker points. For one thing, Favaro overstates the energy efficiency of electric vehicles.

EVs are far more efficient. Whereas only around 20% of the potential energy stored in a liter of gasoline actually goes to making an ICE [Internal Combustion Engine] car move, EVs convert about 60% of their stored energy into motion ….”

In a narrow sense this is true, but it ignores the conversion losses in the most common methods of producing the electricity that charges the batteries. A typical fossil-fueled generating plant operates at around 35% energy efficiency. So when an EV is charged from such a plant, its overall efficiency is closer to 35% X 60%, or about 21% – in other words, not significantly better than the internal combustion engine.
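
For readers who want to play with the assumptions, here is a minimal back-of-envelope sketch in Python. The 20%, 60% and 35% figures are the ones quoted above; the script ignores refining, transmission and charging losses on both sides, so it is an illustration of the reasoning, not a full well-to-wheel analysis.

```python
# Back-of-envelope comparison of overall efficiency for an internal-combustion
# car versus an EV charged from a fossil-fueled grid. The 20%, 60% and 35%
# figures are those quoted above; real-world values vary with the vehicle,
# the power plant, and transmission and charging losses (ignored here).

ICE_TANK_TO_WHEEL = 0.20        # share of gasoline energy that moves the car
EV_BATTERY_TO_WHEEL = 0.60      # share of battery energy that moves the car
FOSSIL_PLANT_EFFICIENCY = 0.35  # fuel-to-electricity efficiency of a typical thermal plant

ice_overall = ICE_TANK_TO_WHEEL
ev_overall = FOSSIL_PLANT_EFFICIENCY * EV_BATTERY_TO_WHEEL

print(f"ICE overall efficiency: {ice_overall:.0%}")               # 20%
print(f"EV overall efficiency (fossil grid): {ev_overall:.0%}")   # 21%
```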

By the same token, if a large proportion of new renewable energy capacity over the next 15 years must be devoted to charging electric cars, it will be extremely challenging to simultaneously switch home heating, lighting and cooling processes away from fossil fuel reliance.

Yet if the principles of Favaro’s carbon code were followed, we would not only stop building internal combustion cars, we would also make the new electric cars smaller and lighter, provide strong incentives to reduce the number of miles they travel (especially miles with only one passenger), and rapidly improve bicycling networks and public transit facilities to get people out of cars for most of their ordinary transportation. To his credit, Favaro recognizes the importance of all these steps.

Flight paths

As a researcher invited to many international conferences, and a person who lives in Newfoundland but whose family is based in far-away British Columbia, Favaro has given a lot of thought to the conundrum of air travel. He notes that most of the readers of his book will be members of a particular global elite: the small percentage of the world’s population who board a plane more than a few times in their lives.

We members of that elite group have a disproportionate carbon footprint, and therefore we bear particular responsibility for carbon emission reductions.

The Air Transport Action Group, a UK-based industry association, estimated that the airline industry accounts for about 2% of global CO2 emissions. That may sound small, but given the tiny percentage of the world population that flies regularly, it represents a massive outlier in terms of carbon-intensive behaviors. In the United States, air travel is responsible for about 8% of the country’s emissions ….”

Favaro is keenly aware that if the Carbon Code were read as “never get on an airplane again for the rest of your life”, hardly anyone would adopt the code (and those few who did would be ostracized from professional activities and in many cases cut off from family). Yet the four principles of the Carbon Code can be very helpful in deciding when, where and how often to use the most carbon-intensive means of transportation.

Remember that ultimately all of humanity needs to mostly stop using fossil fuels to achieve climate stability. Therefore, just like with your personal travel, your default assumption should be that no flights are necessary, and then from there you make the case for each flight you take.”

The Carbon Code is a wise, carefully optimistic book. Let’s hope it is widely read and that individuals and organizations take the Carbon Code to heart.

 

Top photo: temporary parking garage in vacant lot in Manhattan, July 2013.

Alternative Geologies: Trump’s “America First Energy Plan”

Also published at Resilience.org.

Donald Trump’s official Energy Plan envisions cheap fossil fuel, profitable fossil fuel and abundant fossil fuel. The evidence shows that from now on, only two of those three goals can be met – briefly – at any one time.

While many of the Trump administration’s “alternative facts” have been roundly and rightly ridiculed, the myths in the America First Energy Plan are still widely accepted and promoted by mainstream media.

The dream of a great America which is energy independent, an America in which oil companies make money and pay taxes, and an America in which gas is still cheap, is fondly nurtured by the major business media and by many politicians of both parties.

The America First Energy Plan expresses this dream clearly:

The Trump Administration is committed to energy policies that lower costs for hardworking Americans and maximize the use of American resources, freeing us from dependence on foreign oil.

And further:

Sound energy policy begins with the recognition that we have vast untapped domestic energy reserves right here in America. The Trump Administration will embrace the shale oil and gas revolution to bring jobs and prosperity to millions of Americans. … We will use the revenues from energy production to rebuild our roads, schools, bridges and public infrastructure. Less expensive energy will be a big boost to American agriculture, as well.
– www.whitehouse.gov/america-first-energy

This dream harkens back to a time when fossil fuel energy was indeed plentiful and cheap, when profitable oil companies did pay taxes to fund public infrastructure, and the US was energy independent – that is, when Donald Trump was still a boy who had not yet managed a single company into bankruptcy.

To add to the “flashback to the ’50s” mood, Trump’s plan doesn’t mention renewable energy, solar power, or wind turbines – it’s fossil fuel all the way.

Nostalgia for energy independence

Let’s look at the “energy independence” myth in context. It has been more than 50 years since the US produced as much oil as it consumed.

Here’s a graph of US oil consumption and production since 1966. (Figures are from the BP Statistical Review of World Energy, via ycharts.com.)

Gap between US oil consumption and production – from stats on ycharts.com (click here for larger version)

Even at the height of the fracking boom in 2014, according to BP’s figures, Americans were burning 7 million barrels per day more oil than was being produced domestically. (Note: the US Energy Information Administration shows net oil imports at about 5 million barrels/day in 2014 – still a big chunk of consumption.)

OK, so the US hasn’t been “energy independent” in oil for generations, and is not close to that goal now.

But if Americans Drill, Baby, Drill, isn’t it possible that great new fields could be discovered?

Well … oil companies in the US and around the world ramped up their exploration programs dramatically during the past 40 years – and came up with very little new oil, and what little they did find is very expensive.

It’s difficult to find estimates of actual new oil discoveries in the US – though it’s easy to find news of one imaginary discovery.

When I  google “new oil discoveries in US”, most of the top links go to articles with totally bogus headlines, in totally mainstream media, from November 2016.

For example:

CNN: “Mammoth Texas oil discovery biggest ever in USA”

USA Today: “Largest oil deposit ever found in U.S. discovered in Texas”

The Guardian: “Huge deposit of untapped oil could be largest ever discovered in US”

Business Insider: “The largest oil deposit ever found in America was just discovered in Texas”

All these stories are based on a November 15, 2016 announcement by the United States Geological Survey – but the USGS claim was a far cry from the oil gushers conjured up in mass-media headlines.

The USGS wasn’t talking about a new oil field, but about one that has been drilled and tapped for decades. It merely estimated that there might be 20 billion more barrels of tight oil (oil trapped in shale) remaining in the field. The USGS announcement further specified that this estimated oil “consists of undiscovered, technically recoverable resources”. (Emphasis in USGS statement). In other words, if and when it is discovered, it will likely be technically possible to extract it, if the cost of extraction is no object.

The dwindling pace of oil discovery

We’ll come back to the issues of “technically recoverable” and “cost of extraction” later. First let’s take a realistic look at the pace of new oil discoveries.

Bloomberg sums it up in an article and graph from August, 2016:

Graph from Bloomberg.com

This chart is restricted to “conventional oil” – that is, the oil that can be pumped straight out of the ground, or which comes streaming out under its own pressure once the well is drilled. That’s the kind of oil that fueled the 20th century – but the glory days of discovery ended by the early 1970s.

While it is difficult to find good estimates of ongoing oil exploration expenditures, we do have estimates of “upstream capital spending”. This larger category includes not only the cost of exploration, but the capital outlays needed in developing new discoveries through to production.

Exploration and development costs must be funded by oil companies or by lenders, and the more companies rely on expensive wells such as deep off-shore wells or fracked wells, the less money is available for new exploration.

Over the past 20 years companies have become increasingly reliant on (a) fracked oil and gas wells, which suck up huge amounts of capital, and (b) exploration in ever-more-difficult environments such as the deep sea, the Arctic, and countries with volatile social situations.

As Julie Wilson of Wood Mackenzie forecast in Sept 2016, “Over the next three years or more, exploration will be smaller, leaner, more efficient and generally lower-risk. The biggest issue exploration has faced recently is the difficulty in commercializing discoveries—turning resources into reserves.”

Do oil companies choose to explore in more difficult environments just because they love a costly challenge? Or is it because their highly skilled geologists believe most of the oil deposits in easier environments have already been tapped?

The following chart from Barclays Global Survey shows the steeply rising trend in upstream capital spending over the past 20 years.

Graph from Energy Fuse Chart of the Week, Sept 30, 2016

 

Between the two charts above – “Oil Discoveries Lowest Since 1947”, and “Global Upstream Capital Spending” – there is overlap for the years 1985 to 2014. I took the numbers from these charts, averaged them into five-year running averages to smooth out year-to-year volatility, and plotted them together along with global oil production for the same years.
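
For anyone who wants to reproduce the smoothing step, here is a minimal sketch. The annual values below are placeholders, not the actual figures read off the Wood Mackenzie and Barclays charts, and the function computes a centred five-year moving average (the smoothing used for the chart may have been trailing rather than centred).

```python
# Minimal sketch of the smoothing used for the combined chart: a five-year
# running (moving) average over an annual series. The numbers below are
# placeholders only; the real inputs were read off the discovery and
# upstream-spending charts for 1985-2014.

def running_average(values, window=5):
    """Return centred moving averages; edges average over whatever is available."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        chunk = values[lo:hi]
        out.append(sum(chunk) / len(chunk))
    return out

annual_discoveries = [30, 25, 28, 22, 18, 20, 15, 17, 12, 10]  # placeholder values
print(running_average(annual_discoveries))
```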

Based on Wood Mackenzie figures for new oil discoveries, Barclays Global Survey figures for upstream capital expenditures, and world oil production figures from the US Energy Information Administration (click here for larger version)

This chart highlights the predicament faced by societies reliant on petroleum. It has been decades since we found as much new conventional oil in a year as we burned – so the supplies of cheap oil are being rapidly depleted. The fracking boom in the US has not changed that trend: it draws on resources that have been known for decades, those resources are costly to extract, and even at its height the boom supplied only about 5% of world production.

Yet while our natural capital in the form of conventional oil reserves is dwindling, the financial capital at play has risen steeply. In the 10-year period from 2005, upstream capital spending nearly tripled, from $200 billion to almost $600 billion, while oil production climbed only about 15% and new conventional oil discoveries showed no significant growth at all.

Is doubling down on this bet a sound business plan for a country? Will prosperity be assured by investing exponentially greater financial capital into the reliance on ever more expensive oil reserves, because the industry simply can’t find significant quantities of cheaper reserves? That fool’s bargain is a good summary of Trump’s all-fossil-fuel “energy independence” plan.

(The Canadian government’s implicit national energy plan is not significantly different, as the Trudeau government continues the previous Harper government’s promotion of tar sands extraction as an essential engine of “growth” in the Canadian economy.)

To jump back from global trends to a specific example, we can consider the previously mentioned “discovery” of 20 billion barrels of unconventional oil in the Permian basin of west Texas. Mainstream media articles exclaimed that this oil was worth $900 billion. As geologist Art Berman points out, that valuation is simply 20 billion barrels times the market price last November of about $45/barrel. But he adds that based on today’s extraction costs for unconventional oil in that field, it would cost about $1.4 trillion to get this oil out of the ground – roughly $70 per barrel. At today’s prices, in other words, each barrel of that oil would come to the surface at a loss of roughly $25.
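
To make the arithmetic explicit, here is a short sketch using only the figures quoted above – the $45 price, the 20-billion-barrel estimate, and Berman’s $1.4 trillion extraction cost. It is an illustration of the headline math, not a model of actual field economics.

```python
# Back-of-envelope check on the Permian "discovery" figures quoted above.
# Inputs are the numbers from the text, not independent estimates.

barrels = 20e9                  # USGS estimate of technically recoverable tight oil
price_per_barrel = 45           # approximate market price, November 2016 (USD)
extraction_cost_total = 1.4e12  # Berman's estimated cost to extract it all (USD)

headline_value = barrels * price_per_barrel           # the $900 billion headline number
cost_per_barrel = extraction_cost_total / barrels     # ~$70 per barrel
loss_per_barrel = cost_per_barrel - price_per_barrel  # ~$25 per barrel at that price

print(f"Headline valuation: ${headline_value / 1e9:.0f} billion")
print(f"Extraction cost:    ${cost_per_barrel:.0f} per barrel")
print(f"Loss at $45/barrel: ${loss_per_barrel:.0f} per barrel")
```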

Two out of three

To close, let’s look again at the three goals of Trump’s America First Energy Plan:
• Abundant fossil fuel
• Profitable fossil fuel
• Cheap fossil fuel

With remaining resources increasingly represented by unconventional oil such as that in the Permian basin of Texas, there is indeed abundant fossil fuel – but it’s very expensive to get. Therefore if oil companies are to remain profitable, oil has to be more expensive – that is, there can be abundant fossil fuel and profitable fossil fuel, but then the fuel cannot be cheap (and the economy will hit the skids). Or there can be abundant fossil fuel at low prices, but oil companies will lose money hand-over-fist (a situation which cannot last long).

It’s a bit harder to imagine, but there can also be fossil fuel which is both profitable to extract and cheap enough for economies to afford – it just won’t be abundant. That would require scaling back production/consumption to the remaining easy-to-extract conventional fossil fuels, and a reduction in overall demand so that those limited supplies aren’t immediately bid out of a comfortable price range. For that reduction in demand to occur, there would have to be some combination of dramatic reduction in energy use per capita and a rapid increase in deployment of renewable energies.

A rapid decrease in demand for oil is anathema to Trumpian fossil-fuel cheerleaders, but it is far more realistic than their own dream of cheap, profitable, abundant fossil fuel forever.
Top photo: composite of Donald Trump in a lake of oil spilled by the Lakeview Gusher, California, 1910 (click here for larger version). The Lakeview Gusher was the largest on-land oil spill in the US. It occurred in the Midway-Sunset oil field, which was discovered in 1894. In 2006 this field remained California’s largest producing field, though more than 80% of the estimated recoverable reserves had been extracted. (Source: California Department of Conservation, 2009 Annual Report of the State Oil & Gas Supervisor)

Energy From Waste, or Waste From Energy? A look at our local incinerator

Also published at Resilience.org.

Is it an economic proposition to drive up and down streets gathering up bags of plastic fuel for an electricity generator?

Biking along the Lake Ontario shoreline one autumn afternoon, I passed the new and just-barely operational Durham-York Energy Centre and a question popped into mind. If this incinerator produces a lot of electricity, where are all the wires?

The question was prompted in part by the facility’s location right next to the Darlington Nuclear Generating Station. Forests of towers and great streams of high-voltage power lines spread out in three directions from the nuclear station, but there is no obvious visible evidence of major electrical output from the incinerator.

So just how much electricity does the Durham-York Energy Centre produce? Does it produce as much energy as it consumes? In other words, is it accurate to refer to the incinerator as an “energy from waste” facility, or is it a “waste from energy” plant? The first question is easy to answer, the second takes a lot of calculation, and the third is a matter of interpretation.

Before we get into those questions, here’s a bit of background.

The Durham-York Energy Centre is located about an hour’s drive east of Toronto on the shore of Lake Ontario, and was built at a cost of about $300 million. It is designed to take 140,000 tonnes per year of non-recyclable and non-compostable household garbage, burn it, and use the heat to power an electric generator. The garbage comes from the jurisdictions of adjacent regions, Durham and York (which, like so many towns and counties in Ontario, share names with places in England).

The generator powered by the incinerator is rated at 14 megawatts net, while the generators at Darlington Nuclear Station, taken together, are rated at 3500 megawatts net. The incinerator produces 1/250th the electricity that the nuclear plant produces. That explains why there is no dramatically visible connection between the incinerator and the provincial electrical grid.

In other terms, the facility produces surplus power equivalent to the needs of 10,000 homes. Given that Durham and York regions have fast-growing populations – more than 1.6 million at the 2011 census – the power output of this facility is not regionally significant.

A small cluster of transformers is part of the Durham-York Energy Centre.

Energy Return on Energy Invested

But does the facility produce more energy than it uses? That’s not so easy to determine. A full analysis of Energy Return On Energy Invested (EROEI) would require data from many different sources. I decided to approach the question by looking at just one facet of the issue:

Is the energy output of the generator greater than the energy consumed by the trucks which haul the garbage to the site?

Let’s begin with a look at the “fuel” for the incinerator. Initial testing of the facility showed better than expected energy output due to the “high quality of the garbage”, according to Cliff Curtis, commissioner of works for Durham Region (quoted in the Toronto Star). Because most of the paper, cardboard, glass bottles, metal cans, recyclable plastic containers, and organic material is picked up separately and sent to recycling or composting plants, the remaining garbage is primarily plastic film or foam. (Much of this, too, is technically recyclable, but in current market conditions that recycling would be carried out at a financial loss.)

Inflammatory material

If you were lucky enough to grow up in a time and a place where building fires was a common childhood pastime, you know that plastic bags and styrofoam burn readily and create a lot of heat. A moment’s consideration of basic chemistry backs up that observation.

Our common plastics are themselves a highly processed form of petroleum. One of the major characteristics of our industrial civilization is that we have learned how to suck finite reserves of oil from the deepest recesses of the earth, process that oil in highly sophisticated ways, mold it into endlessly versatile – but still cheap! – types of packaging, use the packaging once, and then throw the solidified petroleum into the garbage.

If instead of burying the plastic garbage in a landfill, we burn it, we capture some of the energy content of that original petroleum. There’s a key problem, though. As opposed to a petroleum or gas well, which provides huge quantities of energy in one location, our plastic “fuel” is light-weight and dispersed through every city, town, village and rural area.

The question thus becomes: is it an economic proposition to drive up and down every street gathering up bags of plastic fuel for an electricity generator?

The light, dispersed nature of the cargo has a direct impact on garbage truck design, and therefore on the number of loads it takes to haul a given number of tonnes of garbage.

Because these trucks must navigate narrow residential streets they must have short wheelbases. And because they need to compact the garbage as they go, they have to carry additional heavy machinery to do the compaction. The result is a low payload:

Long-haul trucks and their contents can weigh 80,000 pounds. However, the shorter wheelbase of garbage and recycling trucks results in a much lower legal weight  — usually around 51,000 pounds. Since these trucks weigh about 33,000 pounds empty, they have a legal payload of about nine tons. (Source: How Green Was My Garbage Truck)

By my calculations, residential garbage trucks picking up mostly light packaging will be “full” with a load weighing about 6.8 tonnes. (The appendix to this article lists sources and shows the calculations.)

At 6.8 tonnes per load, it will require over 20,000 garbage truck loads to gather the 140,000 tonnes burned each year by the Durham-York Energy Centre.

How many kilometers will those trucks travel? Working from a detailed study of garbage pickup energy consumption in Hamilton, Ontario, I estimated that in a medium-density area, an average garbage truck route will be about 45 km. Truck fuel economy during the route is very poor, since there is constant stopping and starting plus frequent idling while workers grab and empty the garbage cans.

There is additional traveling from the base depot to the start of each route, from the end of the route to the drop-off point, and back to the depot.

I used the following map to make a conservative estimate of total kilometers.

Google map of York and Durham Region boundaries, with location of incinerator.

Because most of the garbage delivered to the incinerator comes from Durham Region, and the populations of both Durham Region and York Region are heavily weighted to their southern and western portions, I picked a spot in Whitby as an “average” starting point. From that circled “X” to the other “X” (the incinerator location) is 30 kilometers. Using that central location as the starting and ending point for trips, I estimated 105 km total for each load (45 km on the pickup route, 30 km to the incinerator, and 30 km back to the starting point).

Due to their weight and to their frequent stops, garbage trucks get poor fuel economy. I calculated an average of .96 liters/kilometer.

The result: our fleet of trucks would haul 20,600 loads per year, travel 2,163,000 kilometers, and burn just over 2 million liters of diesel fuel.

Comparing diesel to electricity

How does the energy content of the diesel fuel compare to the energy output of the incinerator’s generator? Here the calculations are simpler though the numbers get large.

There are 3412 BTUs in a kilowatt-hour of electricity, and about 36,670 BTUs in a liter of diesel fuel.

If the generator produces enough electricity for 10,000 homes, and these homes use the Ontario average of 10,000 kilowatt-hours per year, then the generator’s output is 100,000,000 kWh per year.

Converted to BTUs, the 100,000,000 kWh equal about 341 billion BTUs.

The diesel fuel burned by the garbage trucks, on the other hand, has a total energy content of about 76 billion BTUs.

That answers our initial question: does the incinerator produce more energy than the garbage trucks consume in fuel? Yes it does, by a factor of about 4.5.
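
The comparison can be condensed into a few lines of Python. The conversion factors and totals are the ones given above and in the appendix; this is a sketch for checking the arithmetic, nothing more.

```python
# Compare the incinerator's electrical output with the energy content of the
# diesel burned by the garbage trucks, using the figures given in the text.

BTU_PER_KWH = 3412             # BTU in one kilowatt-hour
BTU_PER_LITRE_DIESEL = 36_670  # approximate energy content of a litre of diesel

electricity_kwh = 10_000 * 10_000  # 10,000 homes x 10,000 kWh/year
diesel_litres = 2_079_627          # total truck fuel, from the appendix

electricity_btu = electricity_kwh * BTU_PER_KWH    # ~341 billion BTU
diesel_btu = diesel_litres * BTU_PER_LITRE_DIESEL  # ~76 billion BTU

print(f"Electricity out: {electricity_btu / 1e9:.0f} billion BTU")
print(f"Diesel in:       {diesel_btu / 1e9:.0f} billion BTU")
print(f"Ratio:           {electricity_btu / diesel_btu:.1f}")   # ~4.5
```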

If the trucks’ fuel were the only energy input, we could say the operation had an Energy Return On Energy Invested ratio of about 4.5 – comparable to the bottom end of economically viable fossil fuel extraction operations such as Canadian tar sands mining. But of course we have considered just that one energy input, the fuel burned by the trucks.

If we added in the energy required to build and maintain the fleet of garbage trucks, plus an appropriate share of the energy required to maintain our roads (which are greatly impacted by weighty trucks), plus the energy used to build the $300 million incinerator/generator complex, the EROEI would be much lower, perhaps below 1. In other words, there is little or no energy return in the business of driving around picking up household garbage to fuel a generator.

Energy from waste, or waste from energy

Finally, our third question: is this facility best referred to as “Energy From Waste” or “Waste From Energy”?

Looking at the big picture, “Waste From Energy” is the best descriptor. We take highly valuable and finite energy sources in the form of petroleum, consume a lot of that energy to create plastic packaging, ship that packaging to every household via a network of stores, and then use a lot more energy to re-collect the plastic so that we can burn it. The small amount of usable energy we get at the last stage is inconsequential.

From a municipal waste management perspective, however, things might look quite different. In our society people believe they have a god-given right to acquire a steady stream of plastic-packaged goods, and a god-given right to have someone else come and pick up their resulting garbage.

Thus municipal governments are expected to pay for a fleet of garbage trucks, and find some way to dispose of all the garbage. If they can burn that garbage and recapture a modest amount of energy in the form of electricity, isn’t that a better proposition than hauling it to expensive landfill sites which inevitably run short of capacity?

Looked at from within that limited perspective, “Energy From Waste” is a fair description of the process. (Whether incineration is a good idea still depends, of course, on the safety of the emissions from modern garbage incinerators – another controversial issue.)

But if we want to seriously reduce our waste, the place to focus is not the last link in the chain – waste disposal. The big problem is our dependence on a steady stream of products produced from valuable fossil fuels, which cannot practically be re-used or even recycled, but only down-cycled once or twice before they end up as garbage.

Top photo: Durham-York Energy Centre viewed from south east. 

APPENDIX – Sources and Calculations

Capacity and Fuel Economy of Garbage Trucks

There are many factors which determine the capacity and fuel economy of garbage trucks, including: type of truck (front-loading, rear-loading, trucks with hoists for large containers vs. trucks which are loaded by hand by workers picking up individual bags); type of route (high density urban areas with large businesses or apartment complex vs. low-density rural areas); and type of garbage (mixed waste including heavy glass, metal and wet organics vs. light but bulky plastics and foam).

Although I sent an email inquiry to Durham Waste Department asking about capacity and route lengths of garbage trucks, I didn’t receive a response. So I looked for published studies which could provide figures that seemed applicable to Durham Region.

A major source was the paper “Fuel consumption estimation for kerbside municipal solid waste (MSW) collection activities”, in Waste Management & Research, 2010, accessed via www.sagepub.com.

This study found that “Within the ‘At route’ stage, on average, the normal garbage truck had to travel approximately 71.9 km in the low-density areas while the route length in high-density areas is approximately 25 km.” Since Durham Region is a mix of older dense urban areas, newer medium-density urban sprawl, and large rural areas, I estimated an average “medium-density area route” of 45 km.

The same study found an average fuel economy of .335 liters/kilometer for garbage trucks when they were traveling from depot to the beginning of a route. The authors found that fuel economy in the “At Route” portion (with frequent stops, starts, and idling) was 1.6 L/km for high-density areas, and 2.0 L/km in low-density areas; I split the difference and used 1.8 L/km as the “At Route” fuel consumption.

As to the volumes of trucks and the weight of the garbage, I based my estimates on figures in “The Workhorses of Waste”, published by MSW Management Magazine and WIH Resource Group. This article states: “Rear-end loader capacities range from 11 cubic yards to 31 cubic yards, with 25 cubic yards being typical.”

Since rear-end loader trucks are the ones I usually see in residential neighborhoods, I used 25 cubic yards as the average volume capacity.

The same article discusses the varying weight factors:

The municipal solid waste deposited at a landfill has a density of 550 to over 650 pounds per cubic yard (approximately 20 to 25 pounds per cubic foot). This is the result of compaction within the truck during collection operations as the truck’s hydraulic blades compress waste that has a typical density of 10 to 15 pounds per cubic foot at the curbside. The in-vehicle compaction effort should approximately double the density and half the volume of the collected waste. However, these values are rough averages only and can vary considerably given the irregular and heterogeneous nature of municipal solid waste.

In Durham Region the heavier paper, glass, metal and wet organics are picked up separately and hauled to recycling depots, so it seems reasonable to assume that the remaining garbage hauled to the incinerator would not be at the dense end of the “550 to over 650 pounds per cubic yard” range. I used what seems like a conservative estimate of 600 pounds per cubic yard.

(I am aware that in some cases garbage may be off-loaded at transfer stations, further compacted, and then loaded onto much larger trucks for the next stage of transportation. This would impact the fuel economy per tonne in transportation, but would involve additional fuel in loading and unloading. I would not expect that the overall fuel use would be dramatically different. In any case, I decided to keep the calculations (relatively) simple and so I assumed that one type of truck would pick up all the garbage and deliver it to the final drop-off.)

OK, now the calculations:

Number of truckloads

25 cubic yard load X 600 pounds / cubic yard = 15000 pounds per load

15000 pounds ÷ 2204 lbs per tonne = 6.805 tonnes per load

140,000 tonnes burned by incinerator ÷ 6.805 tonnes per load = 20,570 garbage truck loads

Fuel burned:

45 km per “At Route” portion X 20,570 loads = 925,650 km “At Route”

1.8 L/km fuel consumption “At Route” x 925,650 km = 1,666,170 liters

60 km per load traveling to and from incinerator

60 km x 20,570 loads = 1,234,200 km traveling

.335 L/km travelling fuel consumption X 1,234,200 km = 413,457 liters

1,666,170 liters + 413,457 liters = 2,079,627 liters total fuel used by garbage trucks

As a check on the reasonableness of this estimate, I calculated the average fuel economy from the above figures:

20,570 loads x 105 km per load = 2,159,850 km per year

2,079,627 liters fuel ÷ 2,159,850 km = .9629 L/km

This compares closely with a figure published by the Washington Post, which said municipal garbage trucks get just 2-3 mpg. The middle of that range, 2.5 miles per US gallon, works out to about 1.06 km per liter, or roughly .94 L/km.
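
For convenience, the whole chain of arithmetic above can be reproduced in a short script. Every constant below is one of the estimates stated in this appendix; the script is a sketch for checking the numbers, not a model of actual fleet operations.

```python
# Reproduces the appendix arithmetic for the garbage-truck fleet, using the
# assumptions stated above. All figures are estimates, not measured data.

LOAD_VOLUME_YD3 = 25           # typical rear-loader capacity, cubic yards
DENSITY_LB_PER_YD3 = 600       # assumed density of compacted light packaging
LB_PER_TONNE = 2204
TONNES_PER_YEAR = 140_000      # incinerator's design throughput

ROUTE_KM = 45                  # assumed "at route" distance per load
TRAVEL_KM = 60                 # 30 km to the incinerator and 30 km back
FUEL_AT_ROUTE_L_PER_KM = 1.8   # stop-and-go collection
FUEL_TRAVEL_L_PER_KM = 0.335   # depot-to-route and route-to-incinerator travel

tonnes_per_load = LOAD_VOLUME_YD3 * DENSITY_LB_PER_YD3 / LB_PER_TONNE  # ~6.8 t
loads = TONNES_PER_YEAR / tonnes_per_load                               # ~20,570
route_fuel = loads * ROUTE_KM * FUEL_AT_ROUTE_L_PER_KM
travel_fuel = loads * TRAVEL_KM * FUEL_TRAVEL_L_PER_KM
total_fuel = route_fuel + travel_fuel                                   # ~2.08 million L
total_km = loads * (ROUTE_KM + TRAVEL_KM)

print(f"Tonnes per load: {tonnes_per_load:.2f}")
print(f"Loads per year:  {loads:,.0f}")
print(f"Total fuel (L):  {total_fuel:,.0f}")
print(f"Average L/km:    {total_fuel / total_km:.2f}")
```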

Electricity output of the generator power by the incinerator

With a rated output of 14 megawatts, the generator could produce about 122.6 million kilowatt-hours of electricity per year – if it ran at 100% capacity, every hour of the year. (14,000 kW X 24 hours per day X 365 days = 122,640,000 kWh.) That’s clearly unrealistic.

However, the generator’s operators say it puts out enough electricity for 10,000 homes. The Ontario government says average residential electricity consumption is about 10,000 kWh per year.

10,000 homes X 10,000 kWh per year = 100,000,000 kWh per year.

This figure represents about 80% of the generator’s maximum possible annual output, which sounds like a plausible capacity factor, so that’s the figure I used.
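
As a final sanity check, the implied capacity factor can be computed directly from the figures above.

```python
# Sanity check: implied capacity factor of the incinerator's generator,
# using the rated output and the "10,000 homes" estimate quoted above.

RATED_KW = 14_000                 # net rating, 14 MW
HOURS_PER_YEAR = 24 * 365
max_output_kwh = RATED_KW * HOURS_PER_YEAR   # 122,640,000 kWh

homes = 10_000
kwh_per_home = 10_000                        # Ontario average, per the text
estimated_output_kwh = homes * kwh_per_home  # 100,000,000 kWh

capacity_factor = estimated_output_kwh / max_output_kwh
print(f"Maximum possible output: {max_output_kwh:,} kWh")
print(f"Estimated actual output: {estimated_output_kwh:,} kWh")
print(f"Implied capacity factor: {capacity_factor:.0%}")  # ~82%, rounded to 80% above
```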

Fake news as official policy

Also published at Resilience.org.

Faced with simultaneous disruptions of climate and energy supply, industrial civilization is also hampered by an inadequate understanding of our predicament. That is the central message of Nafeez Mosaddeq Ahmed’s new book Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence.

In the first part of this review, we looked at the climate and energy disruptions that have already begun in the Middle East, as well as the disruptions which we can expect in the next 20 years under a “business as usual” scenario. In this installment we’ll take a closer look at “the perpetual transmission of false and inaccurate knowledge on the origins and dynamics of global crises”.

While a clear understanding of the real roots of economies is a precondition for a coherent response to global crises, Ahmed says this understanding is woefully lacking in mainstream media and mainstream politics.

The Global Media-Industrial Complex, representing the fragmented self-consciousness of human civilization, has served simply to allow the most powerful vested interests within the prevailing order to perpetuate themselves and their interests ….” (Failing States, Collapsing Systems, page 48)

Other than alluding to powerful self-serving interests in the fossil fuel and agribusiness industries, Ahmed doesn’t go into the hows and whys of their influence in media and government.

In the case of misinformation about the connection between fossil fuels and climate change, much of the story is widely known. Many writers have documented the history of financial contributions from fossil fuel interests to groups which contradict the consensus of climate scientists. To take just one example, Inside Climate News revealed that Exxon’s own scientists were keenly aware of the dangers of climate change decades ago, but the corporation’s response was a long campaign of disinformation.

Yet for all its nefarious intent, the fossil fuel industry’s effort has met with mixed success. Nearly every country in the world has, at least officially, agreed that carbon-emissions-caused climate change is an urgent problem. Hundreds of governments, on national, provincial or municipal levels, have made serious efforts to reduce their reliance on fossil fuels. And among climate scientists the consensus has only grown stronger that continued reliance on fossil fuels will result in catastrophic climate effects.

When it comes to continuous economic growth unconstrained by energy limitations, the situation is quite different. Following the consensus opinion in the “science of economics”, nearly all governments are still in thrall to the idea that the economy can and must grow every year, forever, as a precondition to prosperity.

In fact, the belief in the ever-growing economy has short-circuited otherwise well-intentioned efforts to reduce carbon emissions. Western politicians routinely play off “environment” and “economy” as forces that must be balanced, meaning they must take care not to cut carbon emissions too fast, lest economic growth be hindered. To take one example, Canada’s Prime Minister Justin Trudeau claims that expanded production of tar sands bitumen will provide the economic growth necessary to finance the country’s official commitments under the Paris Accord.

As Ahmed notes, “the doctrine of unlimited economic growth is nothing less than a fundamental violation of the laws of physics. In short, it is the stuff of cranks – yet it is nevertheless the ideology that informs policymakers and pundits alike.” (Failing States, Collapsing Systems, page 90)

Why does “the stuff of cranks” still have such hold on the public imagination? Here the work of historian Timothy Mitchell is a valuable complement to Ahmed’s analysis.

Mitchell’s 2011 book Carbon Democracy outlines the way “the economy” became generally understood as something that could be measured mostly, if not solely, by the quantities of money that exchanged hands. A hundred years ago, this was a new and controversial idea:

In the early decades of the twentieth century, a battle developed among economists, especially in the United States …. One side wanted economics to start from natural resources and flows of energy, the other to organise the discipline around the study of prices and flows of money. The battle was won by the second group …..” (Carbon Democracy, page 131)

A very peculiar circumstance prevailed while this debate raged: energy from petroleum was cheap and getting cheaper. Many influential people, including geologist M. King Hubbert, argued that the oil bonanza would be short-lived in a historical sense, but their arguments didn’t sway corporate and political leaders looking at short-term results.

As a result a new economic orthodoxy took hold by the middle of the 20th century. Petroleum seemed so abundant, Mitchell says, that for most economists “oil could be counted on not to count. It could be consumed as if there were no need to take account of the fact that its supply was not replenishable.”

He elaborates:

the availability of abundant, low-cost energy allowed economists to abandon earlier concerns with the exhaustion of natural resources and represent material life instead as a system of monetary circulation – a circulation that could expand indefinitely without any problem of physical limits. Economics became a science of money ….” (Carbon Democracy, page 234)

This idea of the infinitely expanding economy – what Ahmed terms “the stuff of cranks” – has been widely accepted for approximately one human life span. The necessity of constant economic growth has been an orthodoxy throughout the formative educations of today’s top political leaders, corporate leaders and media figures, and it continues to hold sway in the “science of economics”.

The transition away from fossil fuel dependence is inevitable, Ahmed says, but the degree of suffering involved will depend on how quickly and how clearly we get on with the task. One key task is “generating new more accurate networks of communication based on transdisciplinary knowledge which is, most importantly, translated into user-friendly multimedia information widely disseminated and accessible by the general public in every continent.” (Failing States, Collapsing Systems, page 92)

That task has been taken up by a small but steadily growing number of researchers, activists, journalists and hands-on practitioners of energy transition. As to our chances of success, Ahmed allows a hint of optimism, and that’s a good note on which to finish:

The systemic target for such counter-information dissemination, moreover, is eminently achievable. Social science research has demonstrated that the tipping point for minority opinions to become mainstream, majority opinion is 10% of a given population.” (Failing States, Collapsing Systems, page 92)

 

Top image: M. C. Escher’s ‘Waterfall’ (1961) is a fanciful illustration of a finite source providing energy without end. Accessed from Wikipedia.org.