Platforms for a Green New Deal

Two new books in review

Also published on Resilience.org

Does the Green New Deal assume a faith in “green growth”? Does the Green New Deal make promises that go far beyond what our societies can afford? Will the Green New Deal saddle ordinary taxpayers with huge tax bills? Can the Green New Deal provide quick solutions to both environmental overshoot and economic inequality?

These questions have been posed by people from across the spectrum – but of course proponents of a Green New Deal may not agree on all of the goals, let alone an implementation plan. So it’s good to see two concise manifestos – one British, one American – released by Verso in November.

The Case for the Green New Deal (by Ann Pettifor), and A Planet to Win: Why We Need a Green New Deal (by Kate Aronoff, Alyssa Battistoni, Daniel Aldana Cohen and Thea Riofrancos) each clock in at a little under 200 pages, and both books are written in accessible prose for a general audience.

There is remarkably little overlap in coverage, and it's well worth reading both volumes.

The Case for the Green New Deal takes a much deeper dive into monetary policy. A Planet To Win devotes many pages to explaining how a socially just and environmentally wise society can provide a healthy, prosperous, even luxurious lifestyle for all citizens, once we understand that luxury does not consist of ever-more-conspicuous consumption.

The two books wind to their destinations along different paths but they share some very important principles.

Covers of The Case For The Green New Deal and A Planet To Win

First, both books make clear that a Green New Deal must not shirk a head-on confrontation with the power of corporate finance. Both books hark back to Franklin Delano Roosevelt’s famous opposition to big banking interests, and both books fault Barack Obama for letting financial kingpins escape the 2008 crash with enhanced power and wealth while ordinary citizens suffered the consequences.

Instead of seeing the crash as an opportunity to set a dramatically different course for public finance, Obama presented himself as the protector of Wall Street:

“As [Obama] told financial CEOs in early 2009, ‘My administration is the only thing between you and the pitchforks.’ Frankly, he should have put unemployed people to work in a solar-powered pitchfork factory.” (A Planet To Win, page 13)

A second point common to both books is the view that the biggest and most immediate emissions cuts must come from elite classes who account for a disproportionate share of emissions. Unfortunately, neither book makes it clear whether they are talking about the carbon-emitting elite in wealthy countries, or the carbon-emitting elite on a global scale. (If it’s the latter, that likely includes the authors, most of their readership, this writer and most readers of this review.)

Finally, both books take a clear position against the concept of continuous, exponential economic growth. Though they argue that the global economy must cease to grow, and sooner rather than later, their prescriptions also appear to imply that there will be one more dramatic burst of economic growth during the transition to an equitable, sustainable steady-state economy.

Left unasked and unanswered in these books is whether the climate system can stand even one more short burst of global economic growth.

Public or private finance

The British entry into this conversation takes a deeper dive into the economic policies of US President Franklin Roosevelt. British economist Ann Pettifor was at the centre of one of the first policy statements that used the “Green New Deal” moniker, just before the financial crash of 2007–08. She argues that we should have learned the same lessons from that crash that Roosevelt had to learn from the Depression of the 1930s.

Alluding to Roosevelt’s inaugural address, she summarizes her thesis this way:

“We can afford what we can do. This is the theme of the book in your hands. There are limits to what we can do – notably ecological limits, but thanks to the public good that is the monetary system, we can, within human and ecological limits, afford what we can do.” (The Case for the Green New Deal, page xi)

That comes across as a radical idea in this day of austerity budgeting. But Pettifor says the limits that count are the limits of what we can organize, what we can invent, and, critically, what the ecological system can sustain – not what private banking interests say we can afford.

In Pettifor’s view it is not optional, it is essential for nations around the world to re-win public control of their financial systems from the private institutions that now enrich themselves at public expense. And she takes us through the back-and-forth struggle for public control of banking, examining the ground-breaking theory of John Maynard Keynes after World War I, the dramatically changed monetary policy of the Roosevelt administration that was a precondition for the full employment policy of the original New Deal, and the gradual recapture of global banking systems by private interests since the early 1960s.

On the one hand, a rapid reassertion of public banking authority (which must include, Pettifor says, tackling the hegemony of the United States dollar as the world’s reserve currency) may seem a tall order given the urgent environmental challenges. On the other hand, the global financial order is highly unstable anyway, and Pettifor says we need to be ready next time around:

“sooner rather than later the world is going to be faced by a shuddering shock to the system. … It could be the flooding or partial destruction of a great city …. It could be widespread warfare…. Or it could be (in my view, most likely) another collapse of the internationally integrated financial system. … [N]one of these scenarios fit the ‘black swan’ theory of difficult-to-predict events. All three fall within the realm of normal expectations in history, science and economics.” (The Case for the Green New Deal, pg 64)

A final major influence acknowledged by Pettifor is American economist Herman Daly, pioneer of steady-state economics. She places this idea at the center of the Green New Deal:

“our economic goal is for a ‘steady state’ economy … that helps to maintain and repair the delicate balance of nature, and respects the laws of ecology and physics (in particular thermodynamics). An economy that delivers social justice for all classes, and ensures a liveable planet for future generations.” (The Case for the Green New Deal, pg 66)

Beyond a clear endorsement of this principle, though, Pettifor’s book doesn’t offer much detail on how our transportation system, food provisioning systems, etc, should be transformed. That’s no criticism of the book. Providing a clear explanation of the need for transformation in monetary policy; why the current system of “free mobility” of capital allows private finance to work beyond the reach of democratic control, with disastrous consequences for income equality and for the environment; and how finance was brought under public control before and can be again – this is a big enough task for one short book, and Pettifor carries it out with aplomb.

Some paths are ruinous. Others are not.

Writing in The Nation in November of 2018, Daniel Aldana Cohen set out an essential corrective to the tone of most public discourse:

“Are we doomed? It’s the most common thing people ask me when they learn that I study climate politics. Fair enough. The science is grim, as the UN Intergovernmental Panel on Climate Change (IPCC) has just reminded us with a report on how hard it will be to keep average global warming to 1.5 degrees Celsius. But it’s the wrong question. Yes, the path we’re on is ruinous. It’s just as true that other, plausible pathways are not. … The IPCC report makes it clear that if we make the political choice of bankrupting the fossil-fuel industry and sharing the burden of transition fairly, most humans can live in a world better than the one we have now.” (The Nation, “Apocalyptic Climate Reporting Completely Misses the Point,” November 2, 2018; emphasis mine)

There’s a clear echo of Cohen’s statement in the introduction to A Planet To Win:

“we rarely see climate narratives that combine scientific realism with positive political and technological change. Instead, most stories focus on just one trend: the grim projections of climate science, bright reports of promising technologies, or celebrations of gritty activism. But the real world will be a mess of all three.” (A Planet To Win, pg 3)

The quartet of authors are particularly concerned to highlight a new path in which basic human needs are satisfied for all people, in which communal enjoyment of public luxuries replaces private conspicuous consumption, and in which all facets of the economy respect non-negotiable ecological limits.

The authors argue that a world of full employment; comfortable and dignified housing for all; convenient, cheap or even free public transport; healthy food and proper public health care; plus a growth in leisure time – this vision can win widespread public backing and can take us to a sustainable civilization.

A Planet To Win dives into history, too, with a picture of the socialist housing that has been home to generations of people in Vienna. This is an important chapter, as it demonstrates that there is nothing inherently shabby in the concept of public housing:

“Vienna’s radiant social housing incarnates its working class’s socialist ideals; the United States’ decaying public housing incarnates its ruling class’s stingy racism.” (A Planet To Win, pg 127)

Likewise, the book looks at the job creation programs of the 1930s New Deal, noting that they not only built a vast array of public recreational facilities, but also carried out the largest program of environmental restoration ever conducted in the US.

The public co-operatives that brought electricity to rural people across the US could be revitalized and expanded for the era of all-renewable energy. Fossil fuel companies, too, should be brought under public ownership – for the purpose of winding them down as quickly as possible while safeguarding workers’ pensions.

In their efforts to present a Green New Deal in glowingly positive terms, I think the authors underestimate the difficulties in the energy transition. For example, they extol a new era in which Americans will have plenty of time to take inexpensive vacations on high-speed trains throughout the country. But it’s not at all clear, given current technology, how feasible it will be to run completely electrified trains through vast and sparsely populated regions of the US.

In discussing electrification of all transport and heating, the authors conclude that the US must roughly double the amount of electricity generated – as if it’s a given that Americans can or should use nearly as much total energy in the renewable era as they have in the fossil era.1

And once electric utilities are brought under democratic control, the authors write, “they can fulfill what should be their only mission: guaranteeing clean, cheap, or even free power to the people they serve.” (A Planet To Win, pg 53; emphasis mine)

A realistic understanding of thermodynamics and energy provision should, I think, prompt us to ask whether energy is ever cheap or free – except, perhaps, in the dispersed, intermittent forms of energy that the natural world has always provided.

As it is, the authors acknowledge a “potent contradiction” in most current recipes for energy transition:

“the extractive processes necessary to realize a world powered by wind and sun entail their own devastating social and environmental consequences. The latter might not be as threatening to the global climate as carbon pollution. But should the same communities exploited by 500 years of capitalist and colonial violence be asked to bear the brunt of the clean energy transition …?” (A Planet To Win, pg 147-148)

With the chapter on the relationship between a Green New Deal in the industrialized world, and the even more urgent challenges facing people in the Global South, A Planet To Win gives us an honest grappling with another set of critical issues. And in recognizing that “We hope for greener mining techniques, but we shouldn’t count on them,” the authors make it clear that the Green New Deal is not yet a fully satisfactory program.

Again, however, they accomplish a lot in just under 200 pages, in support of their view that “An effective Green New Deal is also a radical Green New Deal” (A Planet To Win, pg 8; their emphasis). The time has long passed for timid nudges such as modest carbon taxes or gradual improvements to auto emission standards.

We are now in “a trench war,” they write, “to hold off every extra tenth of a degree of warming.” In this war,

“Another four years of the Trump administration is an obvious nightmare. … But there are many paths to a hellish earth, and another one leads right down the center of the political aisle.” (A Planet To Win, pg 180)


1 This page on the US government Energy Information Administration website gives total US primary energy consumption as 101 quadrillion Btu, and US electricity use as 38 quadrillion Btu. If all fossil fuel use were stopped but electricity use were doubled, the US would then use 76 quadrillion Btu, or 75% of current total energy consumption.
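The footnote’s arithmetic is easy to verify. A quick sketch, using the EIA figures as cited above (the variable names are mine, for illustration only):

```python
# Figures cited from the EIA, in quadrillion Btu
total_primary = 101   # total US primary energy consumption
electricity = 38      # current US electricity use

# Scenario from the footnote: all fossil fuel use stops,
# but electricity use doubles to cover electrified transport and heating
future_use = 2 * electricity            # 76 quadrillion Btu
share_of_current = future_use / total_primary

print(future_use)                       # 76
print(round(share_of_current * 100))    # 75 (percent of current consumption)
```

So even the authors’ own scenario implies total energy use at roughly three-quarters of today’s level, not parity with it.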

A measured response to surveillance capitalism

Also published at Resilience.org.

A flood of recent analysis discusses the abuse of personal information by internet giants such as Facebook and Google. Some of these articles zero in on the basic business models of Facebook, and occasionally Google, as inherently deceptive and unethical.

But I have yet to see a proposal for any type of regulation that seems proportional to the social problem created by these new enterprises.

So here’s my modest proposal for a legislative response to surveillance capitalism1:

No company which operates an internet social network, or an internet search engine, shall be allowed to sell advertising, nor allowed to sell data collected about the service’s users.

We should also consider an additional regulation:

No company which operates an internet social network, or an internet search engine, shall be allowed to provide this service free of charge to its users.

It may not be easy to craft an appropriate legal definition of “social network” or “search engine”, and I’m not suggesting that this proposal would address all of the surveillance issues inherent in our digitally networked societies. But regulation of companies like Facebook and Google will remain ineffectual unless their current business models are prohibited.

Core competency

The myth of “free services” is widespread in our society, of course, and most people have been willing to play along with the fantasy. Yet we can now see that when it comes to search engines and social networks, this game of pretend has dangerous consequences.

In a piece from September 2017 entitled “Why there’s nothing to like about Facebook’s ethically-challenged, troublesome business model,” Financial Post columnist Diane Francis clearly described the trick at the root of Facebook’s success:

“Facebook’s underlying business model itself is troublesome: offer free services, collect user’s private information, then monetize that information by selling it to advertisers or other entities.”

Writing in The Guardian a few days ago, John Naughton concisely summarized the corporate histories of both Facebook and Google:

“In the beginning, Facebook didn’t really have a business model. But because providing free services costs money, it urgently needed one. This necessity became the mother of invention: although in the beginning Zuckerberg (like the two Google co-founders, incidentally) despised advertising, in the end – like them – he faced up to sordid reality and Facebook became an advertising company.”

So while Facebook has grandly phrased its mission as “to make the world more open and connected”, and Google long proclaimed its mission “to organize the world’s information”, those goals had to take a back seat to the real business: helping other companies sell us more stuff.

In Facebook’s case, it has been obvious for years that providing a valuable social networking service was a secondary focus. Over and over, Facebook introduced major changes in how the service worked, to widespread complaints from users. But as long as these changes didn’t drive too many users away, and as long as the changes made Facebook a more effective partner to advertisers, the company earned more profit and its stock price soared.

Likewise, Google found a “sweet spot” with the number of ads that could appear above and beside search results without overly annoying users – while also packaging the search data for use by advertisers across the web.

A bad combination

The sale of advertising, of course, has subsidized news and entertainment media for more than a century. In recent decades, even before online publishing became dominant, some media switched to wholly advertising-supported “free” distribution. That fiction had many negative consequences, I believe, but the danger to society was taken to another level with search engines and social networks.

A “free” print magazine or newspaper, after all, collects no data while being read.2 No computer records if and when you turn the page, how long you linger over an article, or even whether you clip an ad and stick it to your refrigerator.

Today’s “free” online services are different. Search engines collate every search by every user, so they know what people are curious about – the closest version of mass mind-reading we have yet seen. Social media not only register every click and every “Like”, but all our digital interactions with all of our “friends”.

This surveillance-for-profit is wonderfully useful for the purpose of selling us more stuff – or, more recently, for manipulating our opinions and our votes. But we should not be surprised when these companies abuse our confidence, since their business model drives them to betray our trust as efficiently as possible.

Effective regulation

In the flood of commentary about Facebook following the Cambridge Analytica revelations, two themes predominate. First, there is a frequently-stated wish that Facebook “respect our privacy”. Second, there are somewhat more specific calls for regulation of Facebook’s privacy settings, terms of sale of data, or policing of “bot” accounts.

Both themes strike me as naïve. Facebook may allow users a measure of privacy in that they can be permitted to hide some posts from some other users. But it is the very essence of Facebook’s business model that no user can have any privacy from Facebook itself, and Facebook can and will use everything it learns about us to help manipulate our desires in the interests of paying customers. Likewise, it is naïve to imagine that what we post on Facebook remains “our data”, since we have given it to Facebook in exchange for a service for which we pay no monetary fee.

But regulating the terms under which Facebook acquires our consent to monetize our information? This strikes me as an endlessly complicated game of whack-a-mole. The features of computerized social networks have changed and will continue to change as fast as a coder can come up with a clever new bit of software. Regulating these internal methods and operations would be a bureaucratic boondoggle.

Much simpler and more effective, I think, would be to abolish the fiction of “free” services that forms the façade of Facebook and Google. When these companies as well as new competitors3 charge an honest fee to users of social networks and search engines, because they can no longer earn money by selling ads or our data, much of the impetus to surveillance capitalism will be gone.

It costs real money to provide a platform for billions of people to share our cat videos, pictures of grandchildren, and photos of avocado toast. It also costs real money to build a data-mining machine – to sift and sort that data to reveal useful patterns for advertisers who want to manipulate our desires and opinions.

If social networks and search engines make their money honestly through user fees, they will obviously collect data that helps them improve their service and retain or gain users. But they will have no incentive to throw financial resources at data mining for other purposes.

Under such a regulation, would we still have good social network and search engine services? I have little doubt that we would.

People willingly pay for services they truly value – look back at how quickly people adopted the costly use of cell phones. But when someone pretends to offer us a valued service “free”, we endure a host of consequences as we eagerly participate in the con.
Photos at top: Sergey Brin, co-founder of Google (left) and Mark Zuckerberg, Facebook CEO. Left photo, “A surprise guest at TED 2010, Sergey spoke openly about Google’s new posture with China,” by Steve Jurvetson, via Wikimedia Commons. Right photo, “Mark Zuckerberg, Founder and Chief Executive Officer, Facebook, USA, captured during the session ‘The Next Digital Experience’ at the Annual Meeting 2009 of the World Economic Forum in Davos, Switzerland, January 30, 2009”, by World Economic Forum, via Wikimedia Commons.

 


NOTES

1 The term “surveillance capitalism” was introduced by John Bellamy Foster and Robert W. McChesney in a perceptive article in Monthly Review, July 2014.

2 Thanks to Toronto photographer and writer Diane Boyer for this insight.

3 There would be a downside to stipulating that social networks or search engines do not provide their services to users free of charge, in that it would be difficult for a new service to break into the market. One option might be a size-based exemption, allowing, for example, a company to offer such services free until it reaches 10 million users.

The unbearable cheapness of capitalism

Also published at Resilience.org.

René Descartes, Christopher Columbus and Jeff Bezos walk into a bar and the bartender asks, “What can I get for you thirsty gentlemen?”

“We’ll take everything you’ve got,” they answer, “just make it cheap!”

That’s a somewhat shorter version of the story served up by Raj Patel and Jason W. Moore. Their new book, A History of the World in Seven Cheap Things, illuminates many aspects of our present moment. While Jeff Bezos doesn’t make it into the index, René Descartes and Christopher Columbus both play prominent roles.

In just over 200 pages plus notes, the book promises “A Guide to Capitalism, Nature and the Future of the Planet.”

Patel and Moore present a provocative and highly readable guide to the early centuries of capitalism, showing how its then radically new way of relating to Nature remains at the root of world political economy today. As for a guide to the future, however, the authors do little beyond posing a few big questions.

The long shadow of the Enlightenment

Philosopher René Descartes, known in Western intellectual history as one of the fathers of the Enlightenment, helped codify a key idea for capitalism: separation between Society and Nature. In 1641,

“Descartes distinguished between mind and body, using the Latin res cogitans and res extensa to refer to them. Reality, in this view, is composed of discrete “thinking things” and “extended things.” Humans (but not all humans) were thinking things, Nature was full of extended things. The era’s ruling classes saw most human beings – women, peoples of color, Indigenous Peoples – as extended, not thinking, beings. This means that Descartes’s philosophical abstractions were practical instruments of domination ….”

From the time that Portuguese proto-capitalists were converting the inhabitants of Madeira into slaves on sugar plantations, and Spanish colonialists first turned New World natives into cogs in their brutal silver mines, there had been pushback against the idea of some humans owning and using others. But one current in Western thought was particularly attractive to the profit-takers.

In this view, Nature was there for the use and profit of thinking beings, which meant white male property owners. Patel and Moore quote English philosopher and statesman Francis Bacon, who expressed the new ethos with ugly simplicity: “science should as it were torture nature’s secrets out of her,” and the “empire of man” should penetrate and dominate the “womb of nature.”

The patriarchal character of capitalism, then, is centuries old:

“The invention of Nature and Society was gendered at every turn. The binaries of Man and Woman, Nature and Society, drank from the same cup. … Through this radically new mode of organizing life and thought, Nature became not a thing but a strategy that allowed for the ethical and economic cheapening of life.”

Armored with this convenient set of blinders, a colonialist could gaze at a new (to him) landscape filled with wondrous plants, animals, and complex societies, and without being hindered by awe, respect or humility he could see mere Resources. Commodities. Labour Power. A Work Force. In short, he could see Cheap Things which could be taken, used, and sold for a profit.

Patel and Moore’s framework is most convincing in their chapters on Cheap Nature, Cheap Work, and Cheap Care. Their narrative begins with the enclosure movement, in which land previously respected as Commons for the use of – and care by – all, was turned into private property which could be exploited for short-term gain.

Enclosure in turn led to proletarianization, resulting in landless populations whose only method of fending off starvation was to sell their labour for a pittance. The gendered nature of capitalism, meanwhile, meant that the essential role of bringing new generations of workers into life, and caring for them until they could be marched into the fields or factories, was typically not entered into the economic ledger at all. The worldwide legacy remains to this day, with care work most often done by women either egregiously under-paid or not paid at all.

Yet as the book goes on, the notion of “cheap” grows ever fuzzier. First of all, what’s cheap to one party in a transaction might be very dear to the other. While a capitalist gains cheap labour, others lose their cultures, their dignity, often their very lives.

Other essential components in the system often don’t come cheap even for capitalists. In their chapter on Cheap Money, Patel and Moore note that the European powers sank tremendous resources into the military budgets needed to extend colonial domination around the world. The chapter “Cheap Lives” notes that “Keeping things cheap is expensive. The forces of law and order, domestic and international, are a costly part of the management of capitalism’s ecology.” The vaunted Free Market, in other words, has never come free.

A strategic definition

How can the single word “cheap” be made a meaningful characterization of Nature, Money, Work, Care, Food, Energy and Lives? The authors promise at the outset to tell us “precisely” what they mean by “cheap.” When the definition arrives, it is this:

“We come, then, to what we mean by cheapness: it’s a set of strategies to manage relations between capitalism and the web of life by  temporarily fixing capitalism’s crises. Cheap is not the same as low cost – though that’s part of it. Cheap is a strategy, a practice, a violence that mobilizes all kinds of work – human and animal, botanical and geological – with as little compensation as possible. … Cheapening marks the transition from uncounted relations of life making to the lowest possible dollar value. It’s always a short-term strategy.”

Circular reasoning, perhaps. Capitalism means the Strategies of getting things Cheap. And Cheap means those Strategies used by Capitalism. Yet Moore and Patel use this rhetorical flexibility, for the most part, to great effect.

Their historical narrative sticks mostly to the early centuries of capitalism, but their portrayals of sugar plantations, peasant evictions and the pre-petroleum frenzies of charcoal-making in England and peat extraction in the Netherlands are vivid and closely linked.

Particularly helpful is their concept of frontiers, which extends beyond the merely geographic to include any new sphere of exploitation – and capitalism is an incessant search for such new frontiers. As a result, it’s easy to see the strategies of “cheapening” in the latest business stories.

Jeff Bezos, for example, has become the world’s richest man through a new model of industrial organization – thousands of minimum-wage workers frantically running through massive windowless warehouses to package orders, with the latest electronic monitoring equipment used to speed up the treadmill at regular intervals. Life-destroying stress for employees, but Cheap Work for Bezos. Or take the frontier of the “sharing economy”, in which clever capitalists find a way to profit from legions of drivers and hotel-keepers, without the expense of investment in taxis or real estate.

Patel and Moore note that periods of financialization have occurred before, when there was a temporary surplus of capital looking for returns and a temporary shortage of frontiers. But

“there’s something very different about the era of financialization that began in the 1980s. Previous financial expansions could all count on imperialism to extend profit-making opportunities into significant new frontiers of cheap nature. … Today, those frontiers are smaller than ever before, and the volume of capital looking for new investment is greater than ever before.”

Thus the latest episode of financialization is just one of many indicators of a turbulent future. And that leads us to perhaps the most glaring weakness of Seven Cheap Things.

The subtitle promises a guide to “the future of the planet”. (In fairness, it’s possible that the subtitle was chosen not by the authors but by the publishers.) The Conclusion offers suggestions of “a way to think beyond a world of cheap things ….” But in spite of the potentially intriguing headings Recognition, Reparation, Redistribution, Reimagination, and Recreation, their suggestions are so sketchy that they end a solid story on a very thin note.


Top photo: “The boiling house”, from Ten Views in the Island of Antigua, 1823, by William Clark, illustrates a step in the production of sugar. Image from the British Library via Wikimedia Commons.

Super-size that commodity

Also published at Resilience.org.

A review of ‘A Foodie’s Guide to Capitalism’

Don’t expect a whole lot of taste when you sit down to a plateful of commodities.

That might be a fitting but unintended lesson for foodies who work through the new book by Eric Holt-Giménez. A Foodie’s Guide to Capitalism will reward a careful reader with lots of insights – but it won’t do much for the taste buds.

While A Foodie’s Guide is lacking in recipes or menu ideas, it shines in helping us to understand the struggles of the men and women who work in the farms and packing plants. Likewise, it explains why major capitalists have typically shown little interest in direct involvement in agriculture – preferring to make their money selling farm inputs, trading farm commodities, or turning farm products into the thousands of refined products that fill supermarket shelves.

Fictitious commodities

Karl Polanyi famously described land, labour and money as “fictitious commodities”. Land and labour in particular come in for lengthy discussion in A Foodie’s Guide to Capitalism. In the process, Holt-Giménez also effectively unmasks the myth of the free market.

“Markets have been around a long time,” he writes, “but before the nineteenth century did not organize society as they do today.” He shows how capitalism in England arose concurrently with vigorous state intervention which drove people off their small farms and into the industrial labour pool. Meanwhile overseas both the slave trade and settler colonialism were opening critical parts of global markets, which were anything but “free”.

Nevertheless the takeover of food production by capitalism has been far from complete.

“Today, despite centuries of capitalism, large-scale capitalist agriculture produces less than a third of the world’s food supply, made possible in large part by multibillion-dollar subsidies and insurance programs. Peasants and smallholders still feed most people in the world, though they cultivate less than a quarter of the arable land.” (Holt-Giménez, A Foodie’s Guide To Capitalism, Monthly Review Press and FoodFirst Books, citing a report in GRAIN, May 2014)
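Taking the GRAIN figures at face value, the implied per-hectare difference can be checked with a little arithmetic. The round fractions below are my own assumptions for illustration (smallholders supplying roughly two-thirds of food on a quarter of the arable land), not numbers from the book:

```python
# Rough check of the quoted GRAIN figures (round fractions assumed here):
# smallholders feed most people (~2/3 of food) on <1/4 of arable land,
# while capitalist farms produce <1/3 of food on the remaining ~3/4.
smallholder_share_of_food = 2 / 3
smallholder_share_of_land = 1 / 4
capitalist_share_of_food = 1 / 3
capitalist_share_of_land = 3 / 4

# Output per unit of land, as an index
smallholder_yield_index = smallholder_share_of_food / smallholder_share_of_land
capitalist_yield_index = capitalist_share_of_food / capitalist_share_of_land

ratio = smallholder_yield_index / capitalist_yield_index
print(f"Implied per-hectare output ratio: {ratio:.1f}x")  # → 6.0x
```

Even allowing generous error bars on the assumed fractions, the smallholder sector's per-hectare output comes out several times higher – which is the point Holt-Giménez returns to later in the book.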

There are a lot of reasons for this incomplete transition, but many are related to two of the “fictitious commodities”. Let’s start with land.

While land is the most important “means of production” in agriculture, land is of course much more than that. For people throughout history, land has been home, land has been the base of culture, land has been sacred. Even today, people go to great lengths to avoid having their lands swallowed up by capitalist agriculture – especially since this transition typically results in widespread consolidation of farms, leaving most former farmers to try to earn a living as landless labourers.

Autumn colours in the Northumberland Hills north of Lake Ontario, Canada

Likewise labour is much more than a commodity. An hour of labour is a handy abstraction that can be fed into an economist’s formula, but the labourer is a flesh-and-blood human being with complex motivations and aspirations. Holt-Giménez offers a good primer in Marxist theory here, showing why it has always been difficult for capitalists to extract surplus value directly from the labour of farmers. He also builds on the concept of the “cost of reproduction” in explaining why, in those sectors of farming that do depend on wage labour, most of the wage labourers are immigrants.

Before people can be hired at wages, they need to be born, cared for as infants, fed through childhood, provided with some level of education. These “costs of reproduction” are substantial and unavoidable. A capitalist cannot draw surplus value from labour unless some segment of society pays those “costs of reproduction”, but it is in the narrow economic self-interest of capitalists to ensure that someone else pays. Consider, for example, the many Walmart employees who rely on food stamps to feed their families. Since Walmart doesn’t want to pay a high enough wage to cover the “cost of reproduction” for the next generation of workers, a big chunk of that bill goes to taxpayers.

In industrialized countries, the farm workers who pick fruit and vegetables or work in packing plants tend to be immigrants on temporary work permits. This allows the capitalist food system to pass off the costs of reproduction, not to domestic taxpayers, but to the immigrants’ countries of origin:

“the cost of what it takes to feed, raise, care for and educate a worker from birth to working age (the costs of reproduction) are assumed by the immigrants’ countries of origin and is free to their employers in the rich nations, such as the United States and the nations of Western Europe. The low cost of immigrant labor works like a tremendous subsidy, imparting value to crops and agricultural land. This value is captured by capitalists across the food chain, but not by the worker.” (Holt-Giménez, A Foodie’s Guide to Capitalism)

Farmstead in the Black Hills, South Dakota, USA

The persistence of the family farm

In the US a large majority of farms, including massive farms which raise monoculture crops using huge machinery, are run by individual families rather than corporations. Although they own much of their land, these farmers typically work long hours at what amounts to less than minimum wage, and many depend on at least some non-farm salary or wage income to pay the bills. Again, there are clear limitations in a capitalist food system’s ability to extract surplus value directly from these hours of labour.

But in addition to selling “upstream” inputs like hybrid and GMO seeds, fertilizers, pesticides and machinery, the capitalist food system dominates the “downstream” process of trading commodities, processing foods, and distributing them via supermarket shelves. An important recent development in this regard is contract farming, which Holt-Giménez refers to as “a modern version of sharecropping and tenant farming”.

A large corporation contracts to buy, for example, a chicken farmer’s entire output of chickens, at a fixed price:

“Through a market-specification contract, the firm guarantees the producer a buyer, based on agreements regarding price and quality, and with a resource-providing contract the firm also provides production inputs (like fertilizer, hatchlings, or technical assistance). If the firm provides all the inputs and buys all of the product, it essentially controls the production process while the farmer basically provides land and labor ….”

The corporation buying the chickens gets the chance to dominate the chicken market, without the heavy investment of buying land and buildings and hiring the workforce. Meanwhile farmers with purchase contracts in hand can go to the bank for operating loans, but they lose control over most decisions about production on their own land. And they bear the risk of losing their entire investment – which often means losing their home as well – if the corporation decides the next year to cancel the contract, drop the price paid for chicken, or raise the price of chicken feed.

Contract farming dominates the poultry industry in the US and the pork market is now rapidly undergoing “chickenization”. Holt-Giménez adds that “The World Bank considers contract farming to be the primary means for linking peasant farmers to the global market and promotes it widely in Asia, Latin America, and Africa.”

Farm field in springtime, western North Dakota, USA

Feeding a hungry world

In North America the conventional wisdom holds that only industrial capitalist agriculture has the ability to provide food for the billions of people in today’s world. Yet on a per hectare basis, monoculture agribusiness has been far less productive than many traditional intensive agricultures.

“Because peasant-style farming usually takes place on smaller farms, the total output is less than capitalist or entrepreneurial farms. However, their total output per unit of land (tons/hectare; bushels/acre) tends to be higher. This is why, as capitalist agriculture converts peasant-style farms to entrepreneurial and capitalist farms, there is often a drop in productivity ….”

Marxist political-economic theory provides a useful basis for Holt-Giménez’s explorations of many aspects of global food systems. Among the topics he covers are the great benefits of the Green Revolution to companies marketing seeds and fertilizers, along with the great costs to peasants who were driven off their lands, and potentially catastrophic damages to the ecological web.

But an over-reliance on this theory, in my opinion, leads to an oversimplification of some of our current challenges. This is most significant in Holt-Giménez’s discussions of the overlapping issues of food waste and the failure to distribute farm outputs fairly.

In recent decades there has been a constant surplus of food available on world markets, while hundreds of millions of people have suffered serious malnutrition. At the same time we are often told that approximately 40% of the world’s food goes to waste. Surely there should be an easy way to distribute food more justly, avoid waste, and solve chronic hunger, no?

Yet it is not clear what proportion of food waste is unavoidable, given the vagaries of weather that may cause a bumper crop one year in one area, or rapid increases in harvest-destroying pests in response to ecological changes. It is easy to think that 40% waste is far too high – but could we reasonably expect to cut food waste to 5%, 10% or 20%? That’s a question that Holt-Giménez doesn’t delve into.

On the other hand he does pin food waste very directly on capitalist modes of production. “The defining characteristic of capitalism is its tendency to overproduce. The food system is no exception.” He adds, “The key to ending food waste is to end overproduction.”

Yet if food waste is cut back through a lowering of production, that in itself is of no help to those who are going hungry.

Holt-Giménez writes “Farmers are nutrient-deficient because they don’t have enough land to grow a balanced diet. These are political, not technical problems.” Yes, access to land is a critical political issue – but can we be sure that the answers are only political, and not in part technical as well? After all, famines predated capitalism, and have occurred in widely varying economic contexts even in the past century.

Particularly for the coming generations, climatic shifts may create enormous food insecurities even for those with access to (formerly sufficient) land. As George Monbiot notes in The Guardian this week, rapid loss of topsoil on a world scale, combined with water scarcity and rising temperatures, is likely to have serious impacts on agricultural production. Facing these challenges, farming knowledge and techniques that used to work very well may require serious adaptation. So the answers are not likely to be political or technical, but political and technical.

These critiques aside, Holt-Giménez has produced an excellent guidebook for the loose collection of interests often called “the food movement”. With a good grasp of the way capitalism distorts food production, plus an understanding of the class struggles that permeate the global food business, foodies stand a chance of turning the food movement into an effective force for change.

When boom is bust: the shale oil bonanza as a symptom of economic crisis

Also published at Resilience.org.

The gradual climb in oil prices in recent weeks has revived hopes that US shale oil producers will return to profitability, while also renewing fevered dreams of the US becoming a fossil fuel superpower once again.

Thus a few days ago my daily newspaper ran a Bloomberg article by Grant Smith which led with this sweeping claim:

“The U.S. shale revolution is on course to be the greatest oil and gas boom in history, turning a nation once at the mercy of foreign imports into a global player. That seismic shift shattered the dominance of Saudi Arabia and the OPEC cartel, forcing them into an alliance with long-time rival Russia to keep a grip on world markets.”

I might have simply chuckled and turned the page, had I not just finished reading Oil and the Western Economic Crisis, by Cambridge University economist Helen Thompson. (Palgrave Macmillan, 2017)

Thompson looks at the same shale oil revolution and draws strikingly different conclusions, both about the future of the oil economy and about the effects on US relations with OPEC, Saudi Arabia, and Russia.

Before diving into Thompson’s analysis, let’s first look at the idea that the shale revolution may be “the greatest oil and gas boom in history”. As backing for this claim, Grant Smith cites a report earlier in November by the International Energy Agency, predicting that US shale oil output will soar to about 8 million barrels/day by 2025.

Accordingly, “ ‘The United States will be the undisputed leader of global oil and gas markets for decades to come,’ IEA Executive Director Fatih Birol said … in an interview with Bloomberg television.”

Let’s leave this prediction unchallenged for the moment. (Though skeptics could start with David Hughes’ detailed look at the IEA’s 2016 forecasts here, or with a recent MIT report that confirms a key aspect of Hughes’ analysis.) Suppose the IEA turns out to be right. How will the shale bonanza rank among the great oil booms in history?

Grant Smith uses the following chart to bolster his claim that the fracking boom will equal Saudi Arabia’s expansion in the 1960s and 1970s.

Chart by Bloomberg

OK, so if US shale oil output rises to 8 million barrels/day by 2025, that production will be about the same as Saudi oil production in 1981. Would that make these two booms roughly equivalent?

First, world oil consumption in the early 1980s was only about two-thirds what it is now. So 8 million barrels/day represented a bigger proportion of the world’s oil needs in the early 1980s than it does today.

Second, Saudi Arabia used very little of its oil domestically in 1980, leaving most of it for sale abroad, and that gave it a huge impact on the world market. The US, by contrast, still burns more oil domestically than it produces – and in the best case scenario, its potential oil exports in 2025 would be a small percentage of global supply.

Third, Saudi Arabia has been able to keep roughly 8 million barrels/day flowing for the past 40 years, while even the IEA’s optimistic forecast shows US shale oil output starting to drop within ten years of a 2025 peak.

And last but not least, Saudi Arabia’s 8 million barrels/day have come with some of the world’s lowest production costs, while US shale oil comes from some of the world’s costliest wells.
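The share arithmetic behind the first two points can be sketched in a few lines. The consumption figures below are my own round-number assumptions, chosen to be consistent with the “two-thirds” claim above, not figures from Thompson or Bloomberg:

```python
# Back-of-envelope share calculation for an 8 million barrel/day peak.
# Assumed round figures: world oil consumption ~60 million bpd in 1981,
# ~97 million bpd today (roughly the two-thirds ratio cited above).
shale_peak_bpd = 8e6
world_1981_bpd = 60e6
world_today_bpd = 97e6

share_1981 = shale_peak_bpd / world_1981_bpd
share_today = shale_peak_bpd / world_today_bpd

print(f"8 mb/d as a share of 1981 world demand:    {share_1981:.0%}")  # → 13%
print(f"8 mb/d as a share of today's world demand: {share_today:.0%}")  # → 8%
```

In other words, the same 8 million barrels/day carries roughly a third less weight in today’s market than it did when Saudi production hit that level.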

All these factors come into play in Helen Thompson’s thorough analysis.

No more Mr. NICE Guy

In an October 2005 speech, Bank of England governor Mervyn King “argued that the rising price of oil was ending what he termed ‘NICE’ – a period of ‘non-inflationary consistently expansionary economic growth’ that began in 1992.” (Thompson, Oil and the Western Economic Crisis, pages 28-29)

In spite of their best efforts in the first decade of this millennium, Western governments were not able to maintain steady economic growth, nor keep the price of oil in check, nor significantly increase the supply of oil, nor prevent the onslaught of a serious recession. Thompson traces the interplay of several major economic factors, both before and after this recession.

By the beginning of the George W. Bush administration, there was widespread concern that world oil production would not keep up with growing demand. The booming economies of China and India added to this fear.

“Of the increase of 17.9 million bpd in oil consumption that materialised between 1994 and 2008,” Thompson writes, “only 960,000 of the total came from the G7 economies.” Nearly all of the growth in demand came from China and India – and that growth in demand was forecast to continue.

The GW Bush administration appointed oilman and defense hawk Dick Cheney to lead a task force on the impending supply crunch. But “ultimately, for all the aspiration of the Cheney report, the Bush Jr administration’s energy strategy did little to increase the supply of oil over the first eight years of the twenty-first century.” (Thompson, page 20)

In fact, the only significant supply growth in the decade up to 2008 came from Russia. This boosted Putin’s power while fracturing Western economic interests, as “the western states divided between those who were significant importers of Russian oil and gas and those that were not.” (Thompson, page 23)

Meanwhile oil prices shot up dramatically until Western economies dropped into recession in 2007 as a precursor to the 2008 financial crash. Shouldn’t those high oil prices have spurred high investment in new wells, with consequent rises in production? It didn’t work out that way.

“Between 2003 and the first half of 2008 the costs of the construction of production facilities, oil equipment and services, and energy soared in good part in response to the overall commodity boom produced by China’s economic rise. Consequently, whilst future oil supply was becoming ever more dependent on large-scale capital investment both to extract more from declining fields and to open up high-cost non-conventional production, the capital available was also required by 2008 simply to cover rising existing costs.” (Thompson, page 23)

Thus oil prices rose to the point where western economies could no longer maintain consumption levels, but these high prices still couldn’t finance the kind of new drilling needed to boost production.

Oddly enough, the right conditions for a boom in US oil production wouldn’t occur until well after the crash of 2008, when monetary policy-makers were struggling with little success to revive economic growth.

Zero Interest Rate Policy

In western Europe and the US, recovery from the financial crisis of 2008 has been sluggish and incomplete. But the growth in demand for oil by India and China continued, with the result that after a brief price drop in 2009 oil quickly rebounded to $100/barrel and stayed there for the next few years.

As in the years leading up to the crash, $100 oil proved too expensive for western economies, accustomed as they had been to running on cheap energy for decades. Consumer confidence, and consumer spending, remained low.

Simply pumping the markets with cash – Quantitative Easing – had little effect on the real economy (though it afforded bank execs huge bonuses and boosted the prices of stocks and other financial assets). But as interest rates dropped to historic lows, the flood of nearly-free money finally revived the US energy-production sector.

“QE and ZIRP hugely increased the availability of credit to the energy sector. ZIRP allowed oil companies to borrow from banks at extremely low interest rates, with the worth of syndicated loans to the oil and gas sectors rising from $600 billion in 2006 to $1.6 trillion in 2014. Meanwhile, in raising the price and depressing the yield of the relatively safe assets central banks purchased, QE created incentives for investors to buy assets with a higher yield, including significantly riskier corporate bonds and equities. …” (Thompson, page 50)

Without this extraordinary monetary expansion “the rise of non-conventional oil production would not have been possible”, Thompson concludes.

And while a huge boost in shale oil production might be counted as a “win” for the economic growth team, the downsides have been equally serious. The Zero Interest Rate Policy has almost eliminated interest earnings for cautious middle-income savers, which depresses consumer spending in the short term and threatens the security of millions in the long term. The inflation in asset prices has boosted the profits of large corporations, while weak consumer confidence has removed corporate incentive to invest in greater production of most consumer goods.

The situation would be more stable if non-conventional oil producers had the ability to weather prolonged periods of low oil prices. But as the price drop of 2015 showed, that would be wishful thinking. “By the second quarter of 2015 more than half of all distressed bonds across investment and high-yield bond markets were issued by energy companies. Under these financial strains a wave of shale bankruptcies began in the first quarter of 2015” – a bankruptcy wave that grew three times as high in 2016.

Finally, financial markets with their high exposure to risky non-conventional oil production have been easily spooked by mere rumours of the end of quantitative easing or any significant rise in interest rates. So central bankers have good reason to fear they may go into the next recession with no tools left in their monetary policy toolbox.

Far from representing a way out of economic crisis, then, the shale oil and related tar sands booms are a symptom of an ongoing economic crisis, the end of which is nowhere in sight.

Energy and power

Thompson also discusses the geo-political effects of the changing global oil market. She notes that the shale oil boom created serious tensions in the US-Saudi relationship. The Saudis wanted oil prices to be moderately high, perhaps in the $50-60/barrel range, because that would afford the Saudis substantial profits without driving down demand for oil. The Americans, with their billions sunk into high-cost shale oil wells, now had a need for oil prices in the $70/barrel and up range, simply to make the fracked oil minimally profitable.

There was no way for both the Saudis and the Americans to win in this struggle, though they could both lose.

At the peak (to date) of the shale oil boom, there was only one significant geo-political development in which the Americans were able to flex some muscle specifically because of the big increase in US oil production, Thompson says. She attributes the nuclear treaty with Iran in part to the surge of new oil production in Texas and North Dakota. In her reading, world oil markets at the time needn’t fear the sudden loss of Iran’s oil output, and that gave European governments a comfort level in agreeing to impose sanctions on Iran. These sanctions, in turn, helped convince Iran to make a deal (a diplomatic success which the Trump administration is determined to undo).

But in 2014 OPEC still produced about three times as much oil as the US produced – with important implications:

“even at the height of the shale boom the obvious limits to any claim of geo-political transformation were also evident. The US remained a significant net importer of oil and, consequently, lacked the capacity to act as a swing producer capable of immediately and directly influencing the price.” (Thompson, page 56)

“Most consequentially, when the Obama administration turned towards sanctions against Russia after the onset of the Ukrainian crisis in the spring of 2014, it was not willing to contemplate significant action against Russian oil production.” (Thompson, page 57)

Thompson wraps up with a look at the oil shock of the 1970s, concluding that “There are striking similarities between aspects of the West’s current predicaments around oil and the problems western governments faced in the 1970s. … However, in a number of ways the present version of these problems is worse than those that were manifest in the 1970s.” (Thompson, page 57)

A much higher world oil demand today, the fact that new oil reserves in western countries are very high-cost, plus the explosion of oil-related financial derivatives, make the international monetary order highly unstable.

Has the US returned to the ranks of “fossil fuel superpowers”? Not as Thompson sees it:

“Now the US has nothing like the power it had in the post-war period in providing other states access to oil. Shale oil … cannot change the fact that the largest reserves of cheaply accessible oil lie in the Middle East and Russia, or that China and others’ rise has fundamentally changed the volume of demand for oil in the world.” (Thompson, page 111)

S-curves and other paths

Also published at Resilience.org.

Oxford University economist Kate Raworth is getting a lot of good press for her recently released book Doughnut Economics: 7 Ways to Think Like a 21st Century Economist.

The book’s strengths are many, starting with the accessibility of Raworth’s prose. Whether she is discussing the changing faces of economic orthodoxy, the caricature that is homo economicus, or the importance of according non-monetized activities their proper recognition, Raworth keeps things admirably clear.

Doughnut Economics makes a great crash course in promising new approaches to economics. In Raworth’s own words, her work “draws on diverse schools of thought, such as complexity, ecological, feminist, institutional and behavioural economics.” Yet the integration of ecological economics into her framework is incomplete, leading to a frustratingly unimaginative concluding chapter on economic growth.

Laying the groundwork for that discussion of economic growth has resulted in an article about three times as long as most of my posts, so here is the ‘tl;dr’ version:

Continued exponential economic growth is impossible, but the S-curve of slowing growth followed by a steady state is not the only alternative. If the goal is maintaining GDP at the highest possible level, then the S-curve is the best case scenario, but in today’s world that isn’t necessarily desirable or even possible.
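To make the contrast concrete, here is an illustrative sketch of an exponential path against a logistic S-curve. The growth rate and the carrying capacity are arbitrary assumptions for the picture, not values from Doughnut Economics:

```python
import math

# Illustrative comparison: unchecked exponential growth vs. an S-curve
# (logistic) that levels off at a carrying capacity K.
# r, K and g0 are arbitrary assumptions, not values from the book.
r, K, g0 = 0.03, 100.0, 10.0  # 3% growth, ceiling of 100, starting level 10

def exponential(t):
    """Growth with no ceiling: compounds without bound."""
    return g0 * math.exp(r * t)

def logistic(t):
    """S-curve: starts out exponential, flattens toward K."""
    return K / (1 + ((K - g0) / g0) * math.exp(-r * t))

for t in (0, 50, 100, 200):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# The exponential path grows without limit; the logistic path levels
# off near K - the "steady state" end of the S-curve.
```

Raworth’s concluding chapter, discussed below, essentially asks which of these paths a rich economy should aim for; my argument is that the menu contains more than these two options.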

The central metaphor

Full disclosure: for as long as I can remember, the doughnut has been my least favourite among refined-sugar-white-flour-and-grease confections. So try as I might to be unbiased, I was no doubt predisposed to react critically to Raworth’s title metaphor.

What is the Doughnut? As Raworth explains, the Doughnut is the picture that emerged when she sketched a “safe space” between the Social Foundation necessary for prosperity, and the Ecological Ceiling beyond which we should not go.

Source: Doughnut Economics, page 38

There are many good things to be said about this picture. It affords a prominent place to both the social factors and the ecological factors which are essential to prosperity, but which are omitted from many orthodox economic models. The picture also restores ethics, and the choosing of goals, to central roles in economics.

Particularly given Raworth’s extensive background in development economics, it is easy to understand the appeal of this diagram.

But I agree with Ugo Bardi (here and here) that there is no particular reason the diagram should be circular – Shortfall, Social Foundation, Safe and Just Space, Ecological Ceiling and Overshoot would have the same meaning if arranged in horizontal layers rather than in concentric circles.

From the standpoint of economic analysis, I find it unhelpful to include a range of quite dissimilar factors all at the same level in the centre of the diagram. A society could have adequate energy, water and food without having good housing and health care – but you couldn’t have good housing and health care without energy, water and food. So some of these factors are clearly preconditions for others.

Likewise, some of the factors in the centre of the diagram are clearly and directly related to “overshoot” in the outer ring, while others are not. Excessive consumption of energy, water, or food often leads to ecological overshoot, but you can’t say the same about “excessive” gender equality, political voice, or peace and justice.

Beyond these quibbles with the Doughnut diagram, I further agree with Bardi that a failure to incorporate biophysical economics is the major weakness of Doughnut Economics. In spite of her acknowledgment of the pioneering work of Herman Daly, and a brief but lucid discussion of the work of Robert Ayres and Benjamin Warr showing that fossil fuels have been critical for the past century’s GDP growth, Raworth does not include energy supply as a basic determining factor in economic development.

Economists as spin doctors

Raworth makes clear that key doctrines of economic orthodoxy often obscure rather than illuminate economic reality. Thus economists in rich countries extol the virtues of free trade, though their own countries relied on protectionism to nurture their industrial base.

Likewise standard economic modeling starts with a reductionist “homo economicus” whose decisions are always based on rational pursuit of self-interest – even though behavioral science shows that real people are not consistently rational, and are motivated by co-operation as much as by self-interest. Various studies indicate, however, that economics students and professors show a greater-than-average degree of self-interest. And for those who are already wealthy but striving to become wealthier still, it is comforting to believe that everyone is similarly self-interested, and that their self-interest works to the good of all.

When considering a principle of mainstream economics, then, it makes sense to ask: what truths does this principle hide, and for whose benefit?

Unfortunately, when it comes to GDP growth as the accepted measure of a healthy economy, Raworth leaves out an important part of the story.

The concept of Gross Domestic Product has its roots in the 1930s, when statisticians were looking for ways to quantify economic activity, making temporal trends easier to discern. Simon Kuznets developed a way to calculate Gross National Product – the total of all income generated worldwide by US residents.

As Raworth stresses, Kuznets himself was clear that his national income tally was a very limited way of measuring an economy.

“Emphasising that national income captured only the market value of goods and services produced in an economy, he pointed out that it therefore excluded the enormous value of goods and services produced by and for households, and by society in the course of daily life. … And since national income is a flow measure (recording only the amount of income generated each year), Kuznets saw that it needed to be complemented by a stock measure, accounting for the wealth from which it was generated ….” (Doughnut Economics, page 34; emphasis mine)

The distinction between flows and stocks is crucial. Imagine a simple agrarian nation which uses destructive farming methods to work its rich land. For a number of years it may earn increasingly high income – the flow – though its wealth-giving topsoil – the stock – is rapidly eroding. Is this country getting richer or poorer? Measured by GDP alone, this economy is healthy as long as current income grows; no matter that the topsoil, and future prospects, are blowing away in the wind.
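The point can be made concrete with a toy model. The growth and erosion rates are illustrative figures only:

```python
# Toy model of the flow/stock distinction: a farm economy whose annual
# income (the flow) grows 5% a year while destructive methods erode 3%
# of its topsoil (the stock) each year. Figures are illustrative only.
income, topsoil = 100.0, 1000.0
for year in range(20):
    income *= 1.05    # GDP-style flow measure keeps rising...
    topsoil *= 0.97   # ...while the wealth it depends on erodes away

print(f"Income after 20 years:  {income:.0f}")   # → 265 (up ~165%)
print(f"Topsoil after 20 years: {topsoil:.0f}")  # → 544 (down ~46%)
```

By the flow measure alone this economy looks healthier every year, right up until the stock that generates the flow is gone.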

In the years immediately preceding and following World War II, GDP became the primary measure of economic health, and it became political and economic orthodoxy that GDP should grow every year. (To date no western leader has ever campaigned by promising “In my first year I will increase GDP by 3%, in my second year by 2%, in my third year it will grow by 1%, and by my fourth year I will have tamed GDP growth to 0!”)

What truth does this reliance on GDP hide, and for whose benefit? The answers are fairly obvious, in my humble opinion: a myopic focus on GDP obscured the inevitability of resource depletion, for the benefit of the fossil fuel and automotive interests who dominated the US economy in the mid-twentieth century.

For context, in 1955 the top ten US corporations by number of employees included: General Motors, Chrysler, Standard Oil of New Jersey, Amoco, Goodyear and Firestone. (Source: 24/7 Wall St)

In 1960, the top ten largest US companies by revenue included General Motors, Exxon, Ford, Mobil, Gulf Oil, Texaco, and Chrysler. (Fortune 500)

These companies, plus the steel companies that made sheet metal for cars and the construction interests building the rapidly-growing network of roads, were clear beneficiaries of a new way of life that consumed ever-greater quantities of fossil fuels.

In the decades after World War II, the US industrial complex threw its efforts into rapid exploitation of energy reserves, along with mass production of machines that would burn that energy as fast as it could be pulled out of the ground. This transformation was not a simple result of “the invisible hand of the free market”; it relied on the enthusiastic collaboration of every level of government, from local zoning boards, to metropolitan transit authorities, to state and federal transportation planners.

But way back then, was it politically necessary to distract people from the inevitability of resource depletion?

The Peak Oil movement in the 1930s

From the very beginnings of the petroleum age, there were prominent voices who saw clearly that exponential growth in use of a finite commodity could not go on indefinitely.

One such voice was William Jevons, now known particularly for the “Jevons Paradox”. In 1865 he argued that since coal provided vastly more usable energy than industry had previously been able to harness, and since this new-found power was the very foundation of modern industrial civilization, it was particularly important to a nation to prudently manage supplies:

“Describing the novel social experience that coal and steam power had created, the experience that today we would call ‘exponential growth’, in which practically infinite values are reached in finite time, Jevons showed how quickly even very large stores of coal might be depleted.” (Timothy Mitchell, Carbon Democracy, page 129)

In the 1920s petroleum was the new miracle energy source, but thoughtful geologists and economists alike realized that as a finite commodity, petroleum could not fuel infinite growth.

Marion King Hubbert was a student in 1926, but more than sixty years later he still recalled the eye-opening lesson he received when a professor asked pupils to consider the implications of ongoing rapid increases in the consumption of coal and oil resources.

As Mason Inman relates in his excellent biography of Hubbert,

“When a quantity grows by a constant percentage each year, its history forms a straight line on a semilogarithmic graph. Hubbert plotted the points for coal, year after year, and found a fairly straight line that persisted for several decades: a continual growth rate of around 6 percent a year. At that rate, the production doubled about every dozen years. When he looked at this graph, it was obvious to him that such rapid growth could persist for decades – his graph showed that had already happened – but couldn’t continue forever.” (The Oracle of Oil, 2016, pg 19)
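Hubbert’s observation is easy to verify: on a semilog plot, constant-percentage growth is a straight line, and the doubling time follows directly from the growth rate. A minimal sketch:

```python
import math

# Growth at a constant fraction r per year: q(t) = q0 * (1 + r)**t.
# Taking logs: log q(t) = log q0 + t * log(1 + r) -- a straight line in t,
# which is exactly what Hubbert saw on his semilogarithmic graph.
def doubling_time(r):
    """Years for a quantity growing at annual rate r to double."""
    return math.log(2) / math.log(1 + r)

# Coal production growing ~6% a year doubles about every dozen years:
print(round(doubling_time(0.06), 1))  # -> 11.9
```

At 6 percent the doubling time is just under twelve years – “about every dozen years,” as Inman puts it.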

Hubbert soon learned that there were many others who shared his concerns. This thinking coalesced in the 1930s in a very popular movement known as Technocracy. They argued that wealth depended primarily not on the circulation of money, but on the flow of energy.

The leaders of Technocracy, including Hubbert, were soon speaking to packed houses and were featured in cover stories in leading magazines. Hubbert was also tasked with producing a study guide that interested people could work through at home.

In the years prior to the Great Depression, people had become accustomed to economic growth of about 5% per year. Hubbert wanted people to realize it made no sense to take that kind of growth for granted.

“It has come to be naively expected by our business men and their apologists, the economists, that such a rate of growth was somehow inherent in the industrial processes,” Hubbert wrote. But since Earth and its physical resources are finite, he said, infinite growth is an impossibility.

In short, Technocracy pointed out that the fossil fuel age was likely to be a flash in the pan, historically speaking – unless the nation’s fuel reserves were managed carefully by engineers who understood energy efficiency and depletion.

Without sensible accounting and allocation of the true sources of a nation’s wealth – its energy reserves – private corporations would rake in massive profits for a few decades and two or three generations of Americans might prosper, but in the longer term the nation would be “burning its capital”.

Full speed ahead

After the convulsions of the Depression and World War II, the US emerged with the same leading corporations in an even more dominant position. Now the US had control, or at least major influence, not only over rich domestic fossil fuel reserves, but also over the much greater reserves in the Middle East. And as the world’s greatest military and financial power, the US was in a position to set the terms of trade.

For fossil fuel corporations the major problem was that oil was temporarily too cheap. It came flowing out of wells so easily and in such quantity that it was often difficult to keep the price up. It was in their interests that economies consume oil at a faster rate than ever before, and that the rate of consumption would speed up each year.

Fortunately for these interests, a new theory of economics had emerged just in time.

In this new theory, economists should not worry about measuring the exhaustion of resources. In Timothy Mitchell’s words, “Economics became instead a science of money.”

The great thing about money supply was that, unlike water or land or oil, the quantity of money could grow exponentially forever. And as long as one didn’t look too far backwards or forwards, it was easy to imagine that energy resources would prove no barrier. After all, for several decades, the price of oil had been dropping.

“So although increasing quantities of energy were consumed, the cost of energy did not appear to represent a limit to economic growth. … Oil could be treated as something inexhaustible. Its cost included no calculation for the exhaustion of reserves. The growth of the economy, measured in terms of GNP, had no need to account for the depletion of energy resources.” (Carbon Democracy, pg 140)

GDP was thus installed as the supreme measure of an economy, with continuous GDP growth the unquestionable political goal.

A few voices dissented, of course. Hubbert warned in the mid-1950s that the US would hit the peak of its conventional fossil fuel production by the early 1970s, a prediction that proved correct. But large quantities of cheap oil remained in the Middle East. Additional new finds in Alaska and the North Sea helped to buy another couple of decades for the oil economy (though these fields are also now in decline).

Thanks to the persistent work of a small number of researchers who called themselves “ecological economists”, a movement grew to account for stocks of resources, in addition to tallying income flows in the GDP. By the early 1990s, the US Bureau of Economic Analysis gave its blessing to this effort.

In April 1994 the Bureau published a first set of tables called Integrated Environmental-Economic System of Accounts (IEESA).

The official effort was short-lived indeed. As described in Beyond GDP,

“progress toward integrated environmental-economic accounting in the US came to a screeching halt immediately after the first IEESA tables were published. The US Congress responded swiftly and negatively. The House report that accompanied the next appropriations bill explicitly forbade the BEA from spending any additional resources to develop or extend the integrated environmental and economic accounting methodology ….” (Beyond GDP, by Heun, Carbajales-Dale, Haney and Roselius, 2016)

All the way through Fiscal Year 2002, appropriations bills made sure this outbreak of ecological economics was nipped in the bud. The bills stated,

“The Committee continues the prohibition on use of funds under this appropriation, or under the Census Bureau appropriation accounts, to carry out the Integrated Environmental-Economic Accounting or ‘Green GDP’ initiative.” (quoted in Beyond GDP)

One can only guess that, when it came to contributing to Congressional campaign funds, the struggling fossil fuel interests had somehow managed to outspend the deep-pocketed biophysical economists lobby.

S-curves and other paths

With that lengthy detour complete, we are ready to rejoin Raworth and Doughnut Economics.

The final chapter is entitled “Be Agnostic About Growth: from growth addicted to growth agnostic”.

This sounds like a significant improvement over current economic orthodoxy – but I found this section weak in several ways.

First, it is unclear just what it is that we are to be agnostic about. While Raworth has made clear earlier in the book why GDP is an incomplete and misleading measure of an economy, in the final chapter GDP growth is nevertheless used as the only significant measure of economic growth. Are we to be agnostic about “GDP growth”, which might well be meaningless anyway? Or should we be agnostic about “economic growth”, which might be something quite different and quite a bit more essential – especially to the hundreds of millions of people still living without basic necessities?

Second, Raworth may be agnostic about growth, but she is not agnostic about degrowth. (She has discussed elsewhere why she can’t bring herself to use the word “degrowth”.) True, she remarks at one point that “I mean agnostic in the sense of designing an economy that promotes human prosperity whether GDP is going up, down, or holding steady.” Yet in the pictures she draws and in the ensuing discussion, there is no clear recognition either that degrowth might be desirable, or that degrowth might be forced on us by biophysical realities.

She includes two graphs of possible paths for economic growth – measured here simply by GDP.

Source: Doughnut Economics, page 210 and page 214

As she notes, the first graph shows GDP increasing at a steady annual percentage. While politicians would like us to believe such growth is both possible and desirable, the graph – which quickly becomes a near-vertical climb – is seldom presented in economics textbooks, as it is clearly unrealistic.

The second graph shows GDP growing slowly at first, then picking up speed, and then leveling off into a high but steady state with no further growth. This path for growth is commonly seen and recognized in ecology. The S-curve was also recognized by pre-20th-century economists, including Adam Smith and John Stuart Mill, as the ideal for a healthy economy.

I would concur that an S-curve which lands smoothly on a high plateau is an ideal outcome. But can we take for granted that this outcome is still possible? And do these two paths – continued exponential growth or an S-curve – really exhaust the conceptual possibilities that we should consider?

On the contrary, we can look back 80 years to the Technocracy Study Course for an illustration of varied and contrasting paths of economic growth and degrowth.

Source: The Oracle of Oil, page 58

M. King Hubbert produced this set of graphs to illustrate what can be expected with various key commodities on which a modern industrial economy depends – and by extension, what might happen with the economy as a whole.

While pure exponential growth is impossible, the S-curve may work for a dependably renewable resource, or a renewable-resource-based economy. However, the next possibility – a rise, a peak, a decline, and then a leveling off – is also a common scenario. For example, a society may harvest increasing amounts of wood until the regenerating power of the forests is exceeded; the harvest must then drop before any production plateau can be established.

The bell curve, which starts at zero, climbs to a high peak, and drops back to zero, could characterize an economy based purely on a non-renewable resource such as fossil fuels. Hopefully this “decline to zero” will remain a theoretical conception, since no society to date has run 100% on a non-renewable resource. Nevertheless our fossil-fuel-based industrial society will face a severe decline unless we can build a new energy system on a global scale, in very short order.
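For readers who want to reproduce these shapes, here is a minimal sketch; the growth rates, midpoints, and totals are arbitrary, chosen only to illustrate the curves Hubbert drew:

```python
import math

def exponential(t, r=0.06):
    # Pure exponential growth -- the path that cannot continue forever.
    return math.exp(r * t)

def s_curve(t, k=100.0, r=0.1, t_mid=50.0):
    # Logistic S-curve: growth that levels off at a carrying capacity k,
    # the shape of a dependably renewable-resource economy.
    return k / (1 + math.exp(-r * (t - t_mid)))

def bell_curve(t, q_total=1000.0, r=0.1, t_mid=50.0):
    # Hubbert-style bell: annual production from a finite stock q_total.
    # It is the time-derivative of a logistic, peaking at t_mid and
    # falling back toward zero as the stock is exhausted.
    e = math.exp(-r * (t - t_mid))
    return q_total * r * e / (1 + e) ** 2

# The area under the bell approximates the total recoverable stock:
produced = sum(bell_curve(t) for t in range(200))
print(round(produced))  # close to q_total = 1000
```

The key feature of the bell curve is that its total area is fixed by the size of the resource: a higher peak only means a steeper decline afterward.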

This range of economic decline scenarios is not really represented in Doughnut Economics. That may have something to do with the design of the title metaphor.

While ecological overshoot, on the outside of the doughnut, represents things we should not do, the diagram doesn’t have a way of representing the things we can not do.

We should not continue to burn large quantities of fossil fuel because that will destabilize the climate that our children and grandchildren inherit. But once our cheaply accessible fossil fuels are used up, then we can not consume energy at the same frenetic pace that today’s wealthy populations take for granted.

The same principle applies to many essential economic resources. As long as there is significant fertility left in farmland, we can choose to farm the land with methods that produce a high annual return even though they gradually strip away the topsoil. But once the topsoil is badly depleted, then we no longer have a choice to continue production at the same level – we simply need to take the time to let the land recover.

In other words, these biophysical realities are more fundamental than any choices we can make – they set hard limits on which choices remain open to us.

The S-curve economy may be the best-case scenario, an outcome which could in principle provide global prosperity with a minimum of system disruption. But with each passing year during which our economy is based overwhelmingly on rapidly depleting non-renewable resources, the smooth S-curve becomes a less likely outcome.

If some degree of economic decline is unavoidable, then clear-sighted planning for that decline can help us make the transition a just and peaceful one.

If we really want to think like 21st century economists, don’t we need to openly face the possibility of economic decline?

 

Top photo: North Dakota State Highway 22, June 2014. (click here for larger view)

Guns, energy, and the coin of the realm

Also published at Resilience.org.

While US debt climbs to incomprehensible heights, US banking authorities continue to pump new money into the economy. How can they do it? David Graeber sees a simple explanation:

“There’s a reason why the wizard has such a strange capacity to create money out of nothing. Behind him, there’s a man with a gun.” (Debt: The First 5,000 Years, Melville House, 2013, pg 364)

In part one of this series, we looked at the extent of violence in the “American Century” – the period since World War II in which the US has been the number one superpower, and in which US garrisons have ringed the world. In part two we looked at the role of energy supplies in propelling the US to power, the rapid drawdown of energy supplies in the US post-WWII, and the more recent explosion of US debt.

In this concluding installment we’ll look at the links between military power and financial power.

A new set of financial institutions arose at the end of World War II, and for obvious reasons the US was ‘first among equals’ in setting the rules. Not only was the US in military occupation of Germany and Japan, but the US also had the financial capital to help shattered countries –whether on the war’s winning or losing sides – in reconstructing their infrastructures and restarting their economies.

The US was also able to offer military protection to many countries including previous mortal enemies. This meant that these countries could avoid large military outlays – but also that their elites were in no position to challenge US supremacy.

That being said, there were challenges both large and small in dozens of nations, particularly from the grass roots. The US exercised political power, both soft and hard, in attempts to influence the directions of scores of countries around the world. Planting of media reports, surreptitious aid to favoured electoral candidates, dirty tricks to discredit candidates seen as threatening, military aid and training to dictatorships and police forces who could put down movements for social justice, planning and helping to implement coups, and full-fledged military invasion – this range of intervention techniques resulted in hundreds of thousands, if not millions, of deaths. Cataloguing the bloody side of US “leadership of the free world” is the task taken on so ably by John Dower in The Violent American Century.

Dollars for oil

One of the rules of the game grew in importance with each passing decade. In Timothy Mitchell’s words,

“Under the arrangements that governed the international oil trade, the commodity was sold in the currency not of the country where it was produced, nor of the place where it was consumed, but of the international companies that controlled production. ‘Sterling oil’, as it was known (principally oil from Iran), was traded in British pounds, but the bulk of global sales were in ‘dollar oil’.” (Carbon Democracy, Verso, 2013, pg 111)

As David Graeber’s Debt explains in detail, the ability to force people to acquire and use the ruler’s currency has, throughout history, been a key mechanism for extracting tribute from subject populations.

In today’s global economy, that is why the pricing of oil in dollars has been so important for the US. Again in Timothy Mitchell’s words:

“Europe and other regions had to accumulate dollars, hold them and then return them to the United States in payment for oil. Inflation in the United States slowly eroded the value of the dollar, so that when these countries purchased oil, the dollars they used were worth less than their value when they acquired them. These seigniorage privileges, as they are called, enabled Washington to extract a tax from every other country in the world ….” (Carbon Democracy, pg 120)
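The mechanism Mitchell describes reduces to simple arithmetic. A hedged illustration, with a hypothetical inflation rate and holding period:

```python
# A country earns dollars from exports, holds them for a year while
# US inflation runs at (a hypothetical) 5%, then spends them on oil.
dollars_held = 100.0
inflation = 0.05

# Purchasing power when finally spent, measured in year-earlier dollars:
real_value = dollars_held / (1 + inflation)

# The gap is the implicit "tax" collected through seigniorage.
implicit_tax = dollars_held - real_value
print(round(implicit_tax, 2))  # -> 4.76
```

Multiplied across every dollar held abroad, year after year, that small percentage becomes a substantial transfer to the currency issuer.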

As Greg Grandin explains, the oil-US dollar relationship grew in importance even as OPEC countries were able to force big price increases:

“With every rise in the price of oil, oil-importing countries had to borrow more to meet their energy needs. With every petrodollar placed in New York banks, the value of the US currency increased, and with it the value of the dollar-denominated debt that poor countries owed to those banks.” (“Down From The Mountain”, London Review of Books, June 19, 2017)

But the process did take on another important twist after US domestic oil production peaked and imports from Saudi Arabia soared in the 1970s. Although the oil trade continued to support the value of the US dollar, the US was now sending a lot more of those dollars to oil exporting countries. The Saudis, in particular, accumulated US dollars so fast there wasn’t a productive way for them to circulate these dollars back into the US by purchasing US-made goods. The burgeoning US exports of munitions provided a solution. Mitchell explains:

“As the producer states gradually forced the major oil companies to share with them more of the profits from oil, increasing quantities of sterling and dollars flowed to the Middle East. To maintain the balance of payments and the viability of the international financial system, Britain and the United States needed a mechanism for these currency flows to be returned. … Arms were particularly suited to this task of financial recycling, for their acquisition was not limited by their usefulness. The dovetailing of the production of petroleum and the manufacture of arms made oil and militarism increasingly interdependent.” (Carbon Democracy, pg 155-156)

He adds, “The real value of US arms exports more than doubled between 1967 and 1975, with most of the new market in the Middle East.”

An F-15 Eagle aircraft of the Royal Saudi Air Force takes off during Operation Desert Shield, 1991. (Source: Wikimedia Commons)

Fast forward to today. Imported oil is a critical factor in the US economy, in spite of a supply blip from fracking. US industry leads the world in the export of weapons; the top three buyers, and five of the top ten buyers, are in the Middle East. (Source: CNN, May 25, 2016) Yet US arms sales are dwarfed by US military expenditures, which are roughly double in real terms what they were in the 1960s. (Source: Time, July 16, 2013)

Finally, US national debt, in 1983 dollars, is about 10 times as high as it was from 1950 to 1980. In other words the US government, along with its banking and military complexes, has been living far beyond its means (making bankruptcy king Donald Trump a fitting figurehead). (Source: Stephen Bloch, Adelphi University)

Yet the game goes on. As David Graeber sees it,

“American imperial power is based on a debt that will never – can never – be repaid. Its national debt has become a promise, not just to its own people, but to the nations of the entire world, that everyone knows will not be kept.

“At the same time, U.S. policy was to insist that those countries relying on U.S. treasury bonds as their reserve currency behaved in exactly the opposite way: observing tight money policies and scrupulously repaying their debts ….” (Debt, pg 367)

We’ll close with two speculations on how the “American century” may come to an end.

US supremacy rests on interrelated dominance in military power, financial power, and influence over fossil fuel energy markets. At present the US financial system can create ever larger sums of money, and the rest of the world may have no immediately preferable options than to continue buying US debt. But just as you can’t eat money, you can’t burn it in an electricity generator, a diesel truck, or a bomber flying sorties to a distant land. So no amount of financial wizardry will sustain the current outsized industrial economy or its military subsection, once prime fossil fuel sources have been tapped out.

On the other hand, suppose low-carbon renewable energy technologies improve so rapidly that they can replace fossil fuels within a few decades. This would be a momentous energy transition, and might also lead to a momentous transition in geopolitics.

In recent years, and especially under the Trump administration, the US is ceding renewable energy technology leadership to other countries, especially China. If many countries free themselves from fossil-fuel dependence, and they no longer need US dollars to purchase their energy needs, a pillar of US supremacy will fall.

Top photo: Commemorative silver dollar sold by the US Mint, 2012.

Alternative Geologies: Trump’s “America First Energy Plan”

Also published at Resilience.org.

Donald Trump’s official Energy Plan envisions cheap fossil fuel, profitable fossil fuel and abundant fossil fuel. The evidence shows that from now on, only two of those three goals can be met – briefly – at any one time.

While many of the Trump administration’s “alternative facts” have been roundly and rightly ridiculed, the myths in the America First Energy Plan are still widely accepted and promoted by mainstream media.

The dream of a great America which is energy independent, an America in which oil companies make money and pay taxes, and an America in which gas is still cheap, is fondly nurtured by the major business media and by many politicians of both parties.

The America First Energy Plan expresses this dream clearly:

The Trump Administration is committed to energy policies that lower costs for hardworking Americans and maximize the use of American resources, freeing us from dependence on foreign oil.

And further:

Sound energy policy begins with the recognition that we have vast untapped domestic energy reserves right here in America. The Trump Administration will embrace the shale oil and gas revolution to bring jobs and prosperity to millions of Americans. … We will use the revenues from energy production to rebuild our roads, schools, bridges and public infrastructure. Less expensive energy will be a big boost to American agriculture, as well.
– www.whitehouse.gov/america-first-energy

This dream harkens back to a time when fossil fuel energy was indeed plentiful and cheap, when profitable oil companies did pay taxes to fund public infrastructure, and the US was energy independent – that is, when Donald Trump was still a boy who had not yet managed a single company into bankruptcy.

To add to the “flashback to the ’50s” mood, Trump’s plan doesn’t mention renewable energy, solar power, and wind turbines – it’s all fossil fuel all the way.

Nostalgia for energy independence

Let’s look at the “energy independence” myth in context. It has been more than 50 years since the US produced as much oil as it consumed.

Here’s a graph of US oil consumption and production since 1966. (Figures are from the BP Statistical Review of World Energy, via ycharts.com.)

Gap between US oil consumption and production – from stats on ycharts.com (click here for larger version)

Even at the height of the fracking boom in 2014, according to BP’s figures Americans were burning 7 million barrels per day more oil than was being produced domestically. (Note: the US Energy Information Administration shows net oil imports at about 5 million barrels/day in 2014 – still a big chunk of consumption.)

OK, so the US hasn’t been “energy independent” in oil for generations, and is not close to that goal now.

But if Americans Drill, Baby, Drill, isn’t it possible that great new fields could be discovered?

Well … oil companies in the US and around the world ramped up their exploration programs dramatically during the past 40 years – and came up with very little new oil, and what little they found was very expensive.

It’s difficult to find estimates of actual new oil discoveries in the US – though it’s easy to find news of one imaginary discovery.

When I google “new oil discoveries in US”, most of the top links go to articles with totally bogus headlines, in totally mainstream media, from November 2016.

For example:

CNN: “Mammoth Texas oil discovery biggest ever in USA”

USA Today: “Largest oil deposit ever found in U.S. discovered in Texas”

The Guardian: “Huge deposit of untapped oil could be largest ever discovered in US”

Business Insider: “The largest oil deposit ever found in America was just discovered in Texas”

All these stories are based on a November 15, 2016 announcement by the United States Geological Survey – but the USGS claim was a far cry from the oil gushers conjured up in mass-media headlines.

The USGS wasn’t talking about a new oil field, but about one that has been drilled and tapped for decades. It merely estimated that there might be 20 billion more barrels of tight oil (oil trapped in shale) remaining in the field. The USGS announcement further specified that this estimated oil “consists of undiscovered, technically recoverable resources”. (Emphasis in USGS statement). In other words, if and when it is discovered, it will likely be technically possible to extract it, if the cost of extraction is no object.

The dwindling pace of oil discovery

We’ll come back to the issues of “technically recoverable” and “cost of extraction” later. First let’s take a realistic look at the pace of new oil discoveries.

Bloomberg sums it up in an article and graph from August, 2016:

Graph from Bloomberg.com

This chart is restricted to “conventional oil” – that is, the oil that can be pumped straight out of the ground, or which comes streaming out under its own pressure once the well is drilled. That’s the kind of oil that fueled the 20th century – but the glory days of discovery ended by the early 1970s.

While it is difficult to find good estimates of ongoing oil exploration expenditures, we do have estimates of “upstream capital spending”. This larger category includes not only the cost of exploration, but the capital outlays needed in developing new discoveries through to production.

Exploration and development costs must be funded by oil companies or by lenders, and the more companies rely on expensive wells such as deep off-shore wells or fracked wells, the less money is available for new exploration.

Over the past 20 years companies have been increasingly reliant on (a) fracked oil and gas wells, which suck up huge amounts of capital, and (b) exploration in ever-more-difficult environments such as the deep sea, the Arctic, and countries with volatile social situations.

As Julie Wilson of Wood Mackenzie forecast in Sept 2016, “Over the next three years or more, exploration will be smaller, leaner, more efficient and generally lower-risk. The biggest issue exploration has faced recently is the difficulty in commercializing discoveries—turning resources into reserves.”

Do oil companies choose to explore in more difficult environments just because they love a costly challenge? Or is it because their highly skilled geologists believe most of the oil deposits in easier environments have already been tapped?

The following chart from Barclays Global Survey shows the steeply rising trend in upstream capital spending over the past 20 years.

Graph from Energy Fuse Chart of the Week, Sept 30, 2016

 

Between the two charts above – “Oil Discoveries Lowest Since 1947”, and “Global Upstream Capital Spending” – there is overlap for the years 1985 to 2014. I took the numbers from these charts, averaged them into five-year running averages to smooth out year-to-year volatility, and plotted them together along with global oil production for the same years.
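The smoothing step described above is a centered running average; a minimal sketch (the annual figures here are invented, purely to show the mechanics):

```python
# Centered running average: each output value is the mean of a
# `window`-year span of annual values, discarding the edges where
# a full window is not available.
def running_average(values, window=5):
    half = window // 2
    out = []
    for i in range(half, len(values) - half):
        chunk = values[i - half : i + half + 1]
        out.append(sum(chunk) / window)
    return out

annual = [30, 10, 25, 15, 20, 40, 10]   # hypothetical discoveries per year
print(running_average(annual))          # -> [20.0, 22.0, 22.0]
```

The trade-off is losing two years at each end of the series in exchange for a curve that shows the trend rather than the year-to-year noise.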

Based on Wood Mackenzie figures for new oil discoveries, Barclays Global Survey figures for upstream capital expenditures, and world oil production figures from the US Energy Information Administration (click here for larger version)

This chart highlights the predicament faced by societies reliant on petroleum. It has been decades since we found as much new conventional oil in a year as we burned – so the supplies of cheap oil are being rapidly depleted. The trend has not been changed by the fracking boom in the US – which has involved oil resources that had been known for decades, resources which are costly to extract, and which has only amounted to about 5% of world production at the high point of the boom.

Yet while our natural capital in the form of conventional oil reserves is dwindling, the financial capital at play has risen steeply. In the 10 year period from 2005, upstream capital spending nearly tripled from $200 billion to almost $600 billion, while oil production climbed only about 15% and new conventional oil discoveries averaged out to no significant growth at all.

Is doubling down on this bet a sound business plan for a country? Will prosperity be assured by investing exponentially greater financial capital into the reliance on ever more expensive oil reserves, because the industry simply can’t find significant quantities of cheaper reserves? That fool’s bargain is a good summary of Trump’s all-fossil-fuel “energy independence” plan.

(The Canadian government’s implicit national energy plan is not significantly different, as the Trudeau government continues the previous Harper government’s promotion of tar sands extraction as an essential engine of “growth” in the Canadian economy.)

To jump back from global trends to a specific example, we can consider the previously mentioned “discovery” of 20 billion barrels of unconventional oil in the Permian basin of west Texas. Mainstream media articles exclaimed that this oil was worth $900 billion. As geologist Art Berman points out, that valuation is simply 20 billion barrels times the market price last November of about $45/barrel. But he adds that based on today’s extraction costs for unconventional oil in that field, it would cost $1.4 trillion to get this oil out of the ground. That works out to about $70 per barrel, so at today’s prices each barrel of that oil would represent a $25 loss by the time it got to the surface.
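Berman’s point is straightforward arithmetic on the figures quoted above:

```python
barrels = 20e9            # USGS estimate of tight oil in the field
price = 45.0              # market price per barrel, November 2016
extraction_cost = 1.4e12  # Berman's estimated total cost to extract it

headline_value = barrels * price                     # the "$900 billion"
loss_per_barrel = extraction_cost / barrels - price  # cost minus price
print(headline_value / 1e9, loss_per_barrel)  # -> 900.0 25.0
```

The headline number prices the oil as if it were free to extract; once the extraction cost is counted, every barrel is under water at current prices.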

Two out of three

To close, let’s look again at the three goals of Trump’s America First Energy Plan:
• Abundant fossil fuel
• Profitable fossil fuel
• Cheap fossil fuel

With remaining resources increasingly represented by unconventional oil such as that in the Permian basin of Texas, there is indeed abundant fossil fuel – but it’s very expensive to get. Therefore if oil companies are to remain profitable, oil has to be more expensive – that is, there can be abundant fossil fuel and profitable fossil fuel, but then the fuel cannot be cheap (and the economy will hit the skids). Or there can be abundant fossil fuel at low prices, but oil companies will lose money hand-over-fist (a situation which cannot last long).

It’s a bit harder to imagine, but there can also be fossil fuel which is both profitable to extract and cheap enough for economies to afford – it just won’t be abundant. That would require scaling back production/consumption to the remaining easy-to-extract conventional fossil fuels, and a reduction in overall demand so that those limited supplies aren’t immediately bid out of a comfortable price range. For that reduction in demand to occur, there would have to be some combination of dramatic reduction in energy use per capita and a rapid increase in deployment of renewable energies.

A rapid decrease in demand for oil is anathema to Trumpian fossil-fuel cheerleaders, but it is far more realistic than their own dream of cheap, profitable, abundant fossil fuel forever.

Top photo: composite of Donald Trump in a lake of oil spilled by the Lakeview Gusher, California, 1910 (click here for larger version). The Lakeview Gusher was the largest on-land oil spill in the US. It occurred in the Midway-Sunset oil field, which was discovered in 1894. In 2006 this field remained California’s largest producing field, though more than 80% of the estimated recoverable reserves had been extracted. (Source: California Department of Conservation, 2009 Annual Report of the State Oil & Gas Supervisor)

Energy From Waste, or Waste From Energy? A look at our local incinerator

Also published at Resilience.org.

Is it an economic proposition to drive up and down streets gathering up bags of plastic fuel for an electricity generator?

Biking along the Lake Ontario shoreline one autumn afternoon, I passed the new and just-barely operational Durham-York Energy Centre and a question popped into mind. If this incinerator produces a lot of electricity, where are all the wires?

The question was prompted in part by the facility’s location right next to the Darlington Nuclear Generating Station. Forests of towers and great streams of high-voltage power lines spread out in three directions from the nuclear station, but there is no obvious visible evidence of major electrical output from the incinerator.

So just how much electricity does the Durham-York Energy Centre produce? Does it produce as much energy as it consumes? In other words, is it accurate to refer to the incinerator as an “energy from waste” facility, or is it a “waste from energy” plant? The first question is easy to answer, the second takes a lot of calculation, and the third is a matter of interpretation.

Before we get into those questions, here’s a bit of background.

The Durham-York Energy Centre is located about an hour’s drive east of Toronto on the shore of Lake Ontario, and was built at a cost of about $300 million. It is designed to take 140,000 tonnes per year of non-recyclable and non-compostable household garbage, burn it, and use the heat to power an electric generator. The garbage comes from the jurisdictions of adjacent regions, Durham and York (which, like so many towns and counties in Ontario, share names with places in England).

The generator powered by the incinerator is rated at 14 megawatts net, while the generators at Darlington Nuclear Station, taken together, are rated at 3500 megawatts net. The incinerator produces 1/250th the electricity that the nuclear plant produces. That explains why there is no dramatically visible connection between the incinerator and the provincial electrical grid.

In other terms, the facility produces surplus power equivalent to the needs of 10,000 homes. Given that Durham and York regions have fast-growing populations – more than 1.6 million at the 2011 census – the power output of this facility is not regionally significant.
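The arithmetic behind these comparisons is simple enough to sketch in Python. (The ~80% capacity factor used here is an assumption, consistent with the appendix calculation later in this article.)

```python
# Back-of-envelope check of the power figures cited above.
incinerator_mw = 14      # net rating of the incinerator's generator
darlington_mw = 3500     # combined net rating of Darlington's reactors

ratio = darlington_mw / incinerator_mw
print(f"Darlington produces about {ratio:.0f} times as much power")  # ~250

# Assuming roughly 80% capacity factor, and the Ontario average of
# 10,000 kWh per home per year:
annual_kwh = incinerator_mw * 1000 * 24 * 365 * 0.80
homes = annual_kwh / 10_000
print(f"Enough surplus for roughly {homes:,.0f} homes")
```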

A small cluster of transformers is part of the Durham-York Energy Centre.

Energy Return on Energy Invested

But does the facility produce more energy than it uses? That’s not so easy to determine. A full analysis of Energy Return On Energy Invested (EROEI) would require data from many different sources. I decided to approach the question by looking at just one facet of the issue:

Is the energy output of the generator greater than the energy consumed by the trucks which haul the garbage to the site?

Let’s begin with a look at the “fuel” for the incinerator. Initial testing of the facility showed better than expected energy output due to the “high quality of the garbage”, according to Cliff Curtis, commissioner of works for Durham Region (quoted in the Toronto Star). Because most of the paper, cardboard, glass bottles, metal cans, recyclable plastic containers, and organic material is picked up separately and sent to recycling or composting plants, the remaining garbage is primarily plastic film or foam. (Much of this, too, is technically recyclable, but in current market conditions that recycling would be carried out at a financial loss.)

Inflammatory material

If you were lucky enough to grow up in a time and a place where building fires was a common childhood pastime, you know that plastic bags and styrofoam burn readily and create a lot of heat. A moment’s consideration of basic chemistry backs up that observation.

Our common plastics are themselves a highly processed form of petroleum. One of the major characteristics of our industrial civilization is that we have learned how to suck finite reserves of oil from the deepest recesses of the earth, process that oil in highly sophisticated ways, mold it into endlessly versatile – but still cheap! – types of packaging, use the packaging once, and then throw the solidified petroleum into the garbage.

If instead of burying the plastic garbage in a landfill, we burn it, we capture some of the energy content of that original petroleum. There’s a key problem, though. As opposed to a petroleum or gas well, which provides huge quantities of energy in one location, our plastic “fuel” is light-weight and dispersed through every city, town, village and rural area.

The question thus becomes: is it an economic proposition to drive up and down every street gathering up bags of plastic fuel for an electricity generator?

The light, dispersed nature of the cargo has a direct impact on garbage truck design, and therefore on the number of loads it takes to haul a given number of tonnes of garbage.

Because these trucks must navigate narrow residential streets they must have short wheelbases. And because they need to compact the garbage as they go, they have to carry additional heavy machinery to do the compaction. The result is a low payload:

Long-haul trucks and their contents can weigh 80,000 pounds. However, the shorter wheelbase of garbage and recycling trucks results in a much lower legal weight – usually around 51,000 pounds. Since these trucks weigh about 33,000 pounds empty, they have a legal payload of about nine tons. (Source: How Green Was My Garbage Truck)

By my calculations, residential garbage trucks picking up mostly light packaging will be “full” with a load weighing about 6.8 tonnes. (The appendix to this article lists sources and shows the calculations.)

At 6.8 tonnes per load, it will require over 20,000 garbage truck loads to gather the 140,000 tonnes burned each year by the Durham-York Energy Centre.

How many kilometers will those trucks travel? Working from a detailed study of garbage pickup energy consumption in Hamilton, Ontario, I estimated that in a medium-density area, an average garbage truck route will be about 45 km. Truck fuel economy during the route is very poor, since there is constant stopping and starting plus frequent idling while workers grab and empty the garbage cans.

There is additional traveling from the base depot to the start of each route, from the end of the route to the drop-off point, and back to the depot.

I used the following map to make a conservative estimate of total kilometers.

Google map of York and Durham Region boundaries, with location of incinerator.

Because most of the garbage delivered to the incinerator comes from Durham Region, and the population of both Durham Region and York Region are heavily weighted to their southern and western portions, I picked a spot in Whitby as an “average” starting point. From that circled “X” to the other “X” (the incinerator location) is 30 kilometers. Using that central location as the starting and ending point for trips, I estimated 105 km total for each load. (45 km on the pickup route, 30 km to the incinerator, and 30 km back to the starting point).

Due to their weight and to their frequent stops, garbage trucks get poor fuel economy. I calculated an average of 0.96 liters per kilometer.

The result: our fleet of trucks would haul 20,570 loads per year, travel about 2,160,000 kilometers, and burn just over 2 million liters of diesel fuel.

Comparing diesel to electricity

How does the energy content of the diesel fuel compare to the energy output of the incinerator’s generator? Here the calculations are simpler though the numbers get large.

There are 3412 BTUs in a kilowatt-hour of electricity, and about 36,670 BTUs in a liter of diesel fuel.

If the generator produces enough electricity for 10,000 homes, and these homes use the Ontario average of 10,000 kilowatt-hours per year, then the generator’s output is 100,000,000 kWh per year.

Converted to BTUs, the 100,000,000 kWh equal about 341 billion BTUs.

The diesel fuel burned by the garbage trucks, on the other hand, has a total energy content of about 76 billion BTUs.

That answers our initial question: does the incinerator produce more energy than the garbage trucks consume in fuel? Yes it does, by a factor of about 4.5.
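The core comparison can be reproduced in a few lines of Python, using the conversion factors and totals quoted above:

```python
# Re-running the article's energy comparison (all inputs from the text).
BTU_PER_KWH = 3412           # BTU in one kilowatt-hour
BTU_PER_L_DIESEL = 36_670    # BTU in one liter of diesel fuel

generator_kwh = 10_000 * 10_000    # 10,000 homes x 10,000 kWh/year
diesel_liters = 2_079_627          # total from the appendix calculation

output_btu = generator_kwh * BTU_PER_KWH        # ~341 billion BTU
input_btu = diesel_liters * BTU_PER_L_DIESEL    # ~76 billion BTU

print(f"Generator output: {output_btu / 1e9:.0f} billion BTU")
print(f"Truck fuel input: {input_btu / 1e9:.0f} billion BTU")
print(f"Ratio: {output_btu / input_btu:.1f}")   # ~4.5
```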

If we had tallied all the energy consumed by this operation, then we could say it had an Energy Return On Energy Invested ratio of about 4.5 – comparable to the bottom end of economically viable fossil fuel extraction operations such as Canadian tar sands mining. But of course we have considered just one energy input, the fuel burned by the trucks.

If we added in the energy required to build and maintain the fleet of garbage trucks, plus an appropriate share of the energy required to maintain our roads (which are greatly impacted by weighty trucks), plus the energy used to build the $300 million incinerator/generator complex, the EROEI would be much lower, perhaps below 1. In other words, there is little or no energy return in the business of driving around picking up household garbage to fuel a generator.

Energy from waste, or waste from energy

Finally, our third question: is this facility best referred to as “Energy From Waste” or “Waste From Energy”?

Looking at the big picture, “Waste From Energy” is the best descriptor. We take highly valuable and finite energy sources in the form of petroleum, consume a lot of that energy to create plastic packaging, ship that packaging to every household via a network of stores, and then use a lot more energy to re-collect the plastic so that we can burn it. The small amount of usable energy we get at the last stage is inconsequential.

From a municipal waste management perspective, however, things might look quite different. In our society people believe they have a god-given right to acquire a steady stream of plastic-packaged goods, and a god-given right to have someone else come and pick up their resulting garbage.

Thus municipal governments are expected to pay for a fleet of garbage trucks, and find some way to dispose of all the garbage. If they can burn that garbage and recapture a modest amount of energy in the form of electricity, isn’t that a better proposition than hauling it to expensive landfill sites which inevitably run short of capacity?

Looked at from within that limited perspective, “Energy From Waste” is a fair description of the process. (Whether incineration is a good idea still depends, of course, on the safety of the emissions from modern garbage incinerators – another controversial issue.)

But if we want to seriously reduce our waste, the place to focus is not the last link in the chain – waste disposal. The big problem is our dependence on a steady stream of products produced from valuable fossil fuels, which cannot practically be re-used or even recycled, but only down-cycled once or twice before they end up as garbage.

Top photo: Durham-York Energy Centre viewed from south east. 

APPENDIX – Sources and Calculations

Capacity and Fuel Economy of Garbage Trucks

There are many factors which determine the capacity and fuel economy of garbage trucks, including: type of truck (front-loading, rear-loading, trucks with hoists for large containers vs. trucks which are loaded by hand by workers picking up individual bags); type of route (high density urban areas with large businesses or apartment complex vs. low-density rural areas); and type of garbage (mixed waste including heavy glass, metal and wet organics vs. light but bulky plastics and foam).

Although I sent an email inquiry to Durham Waste Department asking about capacity and route lengths of garbage trucks, I didn’t receive a response. So I looked for published studies which could provide figures that seemed applicable to Durham Region.

A major source was the paper “Fuel consumption estimation for kerbside municipal solid waste (MSW) collection activities”, in Waste Management & Research, 2010, accessed via www.sagepub.com.

This study found that “Within the ‘At route’ stage, on average, the normal garbage truck had to travel approximately 71.9 km in the low-density areas while the route length in high-density areas is approximately 25 km.” Since Durham Region is a mix of older dense urban areas, newer medium-density urban sprawl, and large rural areas, I estimated an average “medium-density area route” of 45 km.

The same study found an average fuel economy of .335 liters/kilometer for garbage trucks when they were traveling from depot to the beginning of a route. The authors found that fuel economy in the “At Route” portion (with frequent stops, starts, and idling) was 1.6 L/km for high-density areas, and 2.0 L/km in low-density areas; I split the difference and used 1.8 L/km as the “At Route” fuel consumption.

As to the volumes of trucks and the weight of the garbage, I based my estimates on figures in “The Workhorses of Waste”, published by MSW Management Magazine and WIH Resource Group. This article states: “Rear-end loader capacities range from 11 cubic yards to 31 cubic yards, with 25 cubic yards being typical.”

Since rear-end loader trucks are the ones I usually see in residential neighborhoods, I used 25 cubic yards as the average volume capacity.

The same article discusses the varying weight factors:

The municipal solid waste deposited at a landfill has a density of 550 to over 650 pounds per cubic yard (approximately 20 to 25 pounds per cubic foot). This is the result of compaction within the truck during collection operations as the truck’s hydraulic blades compress waste that has a typical density of 10 to 15 pounds per cubic foot at the curbside. The in-vehicle compaction effort should approximately double the density and half the volume of the collected waste. However, these values are rough averages only and can vary considerably given the irregular and heterogeneous nature of municipal solid waste.

In Durham Region the heavier paper, glass, metal and wet organics are picked up separately and hauled to recycling depots, so it seems reasonable to assume that the remaining garbage hauled to the incinerator would not be at the dense end of the “550 to over 650 pounds per cubic yard” range. I used what seems like a conservative estimate of 600 pounds per cubic yard.

(I am aware that in some cases garbage may be off-loaded at transfer stations, further compacted, and then loaded onto much larger trucks for the next stage of transportation. This would impact the fuel economy per tonne in transportation, but would involve additional fuel in loading and unloading. I would not expect that the overall fuel use would be dramatically different. In any case, I decided to keep the calculations (relatively) simple and so I assumed that one type of truck would pick up all the garbage and deliver it to the final drop-off.)

OK, now the calculations:

Number of truckloads

25 cubic yard load X 600 pounds / cubic yard = 15000 pounds per load

15000 pounds ÷ 2204 lbs per tonne = 6.805 tonnes per load

140,000 tonnes burned by incinerator ÷ 6.805 tonnes per load = 20,570 garbage truck loads

Fuel burned:

45 km per “At Route” portion X 20,570 loads = 925,650 km “At Route”

1.8 L/km fuel consumption “At Route” x 925,650 km = 1,666,170 liters

60 km per load traveling to and from incinerator

60 km x 20,570 loads = 1,234,200 km traveling

.335 L/km travelling fuel consumption X 1,234,200 km = 413,457 liters

1,666,170 liters + 413,457 liters = 2,079,627 liters total fuel used by garbage trucks

As a check on the reasonableness of this estimate, I calculated the average fuel economy from the above figures:

20,570 loads x 105 km per load = 2,159,850 km per year

2,079,627 liters fuel ÷ 2,159,850 km = .9629 L/km
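For readers who want to verify the arithmetic, here is the appendix calculation rendered as a short Python script. Inputs are the figures quoted above; the mpg conversion at the end anticipates the Washington Post comparison.

```python
# Reproducing the appendix arithmetic end to end.
LBS_PER_TONNE = 2204

load_lbs = 25 * 600                     # 25 yd^3 x 600 lb/yd^3 = 15,000 lb
load_tonnes = load_lbs / LBS_PER_TONNE  # ~6.805 tonnes per load
loads = 140_000 / load_tonnes           # ~20,570 loads per year

route_km = 45 * loads                   # "at route" collection distance
travel_km = 60 * loads                  # to the incinerator and back

route_l = 1.8 * route_km                # 1.8 L/km while collecting
travel_l = 0.335 * travel_km            # 0.335 L/km while traveling
total_l = route_l + travel_l            # ~2.08 million liters

avg_l_per_km = total_l / (route_km + travel_km)   # ~0.96 L/km

# Sanity check against the Washington Post's 2-3 mpg figure:
# 2.5 miles per US gallon, converted to liters per kilometer.
mpg_check_l_per_km = 3.785 / (2.5 * 1.609)        # ~0.94 L/km
```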

This compares closely with a figure published by the Washington Post, which said municipal garbage trucks get just 2-3 mpg. The middle of that range, 2.5 miles per US gallon, equals about 0.94 L/km.

Electricity output of the generator powered by the incinerator

With a rated output of 14 megawatts, the generator could produce about 122,640 megawatt-hours of electricity per year – if it ran at 100% capacity, every hour of the year. (14,000 kW X 24 hours per day X 365 days = 122,640,000 kWh.) That’s clearly unrealistic.

However, the generator’s operators say it puts out enough electricity for 10,000 homes. The Ontario government says the average residential electricity consumption is 10,000 kWh.

10,000 homes X 10,000 kWh per year = 100,000,000 kWh per year.

This figure represents about 80% of the maximum rated capacity of the incinerator’s generator, which sounds like a reasonable output, so that’s the figure I used.
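As a final check, the implied capacity factor can be computed directly from those two figures:

```python
# Implied capacity factor of the incinerator's 14 MW generator.
rated_kw = 14_000
max_kwh = rated_kw * 24 * 365     # 122,640,000 kWh at 100% uptime
actual_kwh = 10_000 * 10_000      # 10,000 homes x 10,000 kWh/year each

capacity_factor = actual_kwh / max_kwh
print(f"Implied capacity factor: {capacity_factor:.0%}")  # ~82%
```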

Fake news, failed states

Also published at Resilience.org.

Many of the violent conflicts raging today can only be understood if we look at the interplay between climate change, the shrinking of cheap energy supplies, and a dominant economic model that refuses to acknowledge physical limits.

That is the message of Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, a thought-provoking new book by Nafeez Mosaddeq Ahmed. Violent conflicts are likely to spread to all continents within the next 30 years, Ahmed says, unless a realistic understanding of economics takes hold at a grass-roots level and at a nation-state policy-making level.

The book is only 94 pages (plus an extensive and valuable bibliography), but the author packs in a coherent theoretical framework as well as lucid case studies of ten countries and regions.

As part of the Springer Briefs In Energy/Energy Analysis series edited by Charles Hall, it is no surprise that Failing States, Collapsing Systems builds on a solid grounding in biophysical economics. The first few chapters are fairly dense, as Ahmed explains his view of global political/economic structures as complex adaptive systems inescapably embedded in biophysical processes.

The adaptive functions of these systems, however, are failing due in part to what we might summarize with four-letter words: “fake news”.

“inaccurate, misleading or partial knowledge bears a particularly central role in cognitive failures pertaining to the most powerful prevailing human political, economic and cultural structures, which is inhibiting the adaptive structural transformation urgently required to avert collapse.” (Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence, by Nafeez Mosaddeq Ahmed, Springer, 2017, page 13)

We’ll return to the failures of our public information systems. But first let’s have a quick look at some of the case studies, in which the explanatory value of Ahmed’s complex systems model really comes through.

In discussing the rise of ISIS in the context of the war in Syria and Iraq, Western media tend to focus almost exclusively on political and religious divisions which are shoehorned into a “war on terror” framework. There is also an occasional mention of the early effects of climate change. While not discounting any of these factors, Ahmed says that it is also crucial to look at shrinking supplies of cheap energy.

“Prior to the onset of war, the Syrian state was experiencing declining oil revenues, driven by the peak of its conventional oil production in 1996. Even before the war, the country’s rate of oil production had plummeted by nearly half, from a peak of just under 610,000 barrels per day (bpd) to approximately 385,000 bpd in 2010.” (Failing States, Collapsing Systems, page 48)

Similarly, Yemen’s oil production peaked in 2001, and had dropped more than 75% by 2014.

While these governments tried to cope with climate change effects including water and food shortages, their oil-export-dependent budgets were shrinking. The result was the slashing of basic social service spending when local populations were most in need.

That’s bad enough, but the responses of local and international governments, guided by “inaccurate, misleading or partial knowledge”, make a bad situation worse:

“While the ‘war on terror’ geopolitical crisis-structure constitutes a conventional ‘security’ response to the militarized symptoms of HSD [Human Systems Destabilization] (comprising the increase in regional Islamist militancy), it is failing to slow or even meaningfully address deeper ESD [Environmental System Disruption] processes that have rendered traditional industrialized state power in these countries increasingly untenable. Instead, the three cases emphasized – Syria, Iraq, and Yemen – illustrate that the regional geopolitical instability induced via HSD has itself hindered efforts to respond to deeper ESD processes, generating instability and stagnation across water, energy and food production industries.” (Failing States, Collapsing Systems, page 59)

This pattern – militarized responses to crises that beget more crises – is not new:

“A 2013 RAND Corp analysis examined the frequency of US military interventions from 1940 to 2010 and came to the startling conclusion: not only that the overall frequency of US interventions has increased, but that intervention itself increased the probability of an ensuing cluster of interventions.” (Failing States, Collapsing Systems, page 43)

Ahmed’s discussions of Syria, Iraq, Yemen, Nigeria and Egypt are bolstered by the benefits of hindsight. His examination of Saudi Arabia looks a little way into the future, and what he foresees is sobering.

He discusses studies that show Saudi Arabia’s oil production is likely to peak as soon as ten years from now. Yet the date of the peak is only one key factor, because the country’s steadily increasing internal demand for energy means there is steadily less oil left for export.

For Saudi Arabia the economic crunch may be severe and rapid: “with net oil revenues declining to zero – potentially within just 15 years – Saudi Arabia’s capacity to finance continued food imports will be in question.” For a population that relies on subsidized imports for 80% of its food, empty government coffers would mean a life-and-death crisis.

But a Saudi Arabia which uses up all its oil internally would have major implications for other countries as well, in particular China and India.

“like India, China faces the problem that as we near 2030, net exports from the Middle East will track toward zero at an accelerating rate. Precisely at the point when India and China’s economic growth is projected to require significantly higher imports of oil from the Middle East, due to their own rising domestic energy consumption requirement, these critical energy sources will become increasingly unavailable on global markets.” (Failing States, Collapsing Systems, page 74)

Petroleum production in Europe has also peaked, while in North America, conventional oil production peaked decades ago, and the recent fossil fuel boomlet has come from expensive, hard-to-extract shale gas, shale oil, and tar sands bitumen. For both Europe and North America, Ahmed forecasts, the time is fast approaching when affordable high-energy fuels are no longer available from Russia or the Middle East. Without successful adaptive responses, the result will be a cascade of collapsing systems:

“Well before 2050, this study suggests, systemic state-failure will have given way to the irreversible demise of neoliberal finance capitalism as we know it.” (Failing States, Collapsing Systems, page 88)

Are such outcomes inescapable? By no means, Ahmed says, but adequate adaptive responses to our developing predicaments are unlikely without a recognition that our economies remain inescapably embedded in biophysical processes. Unfortunately, there are powerful forces working to prevent the type of understanding which could guide us to solutions:

“vested interests in the global fossil fuel and agribusiness system are actively attempting to control information flows to continue to deny full understanding in order to perpetuate their own power and privilege.” (Failing States, Collapsing Systems, page 92)

In the next installment, Fake News as Official Policy, we’ll look at the deep roots of this misinformation and ask what it will take to stem the tide.

Top photo: Flying over the Trans-Arabian Pipeline, 1950. From Wikimedia.org.