“Warning. Data Inadequate.”

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part three
Also published on Resilience.

“The Navy revealed the embryo of an electronic computer today,” announced a New York Times article, “that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”1

A few paragraphs into the article, “the Navy” was quoted as saying the new “perceptron” would be the first non-living mechanism “capable of receiving, recognizing and identifying its surroundings without any human training or control.”

This example of AI hype wasn’t the first and won’t be the last, but it is a bit dated. To be precise, the Times story was published on July 8, 1958.

Due to its incorporation of a simple “neural network” loosely analogous to the human brain, the perceptron of 1958 is recognized as a forerunner of today’s most successful “artificial intelligence” projects – from facial recognition systems to text extruders like ChatGPT. It’s worth considering this early device in some detail.

In particular, what about the claim that the perceptron could identify its surroundings “without any human training or control”? More than sixty years on, the descendants of the perceptron have “learned” a great deal, and can now identify, describe and even transform millions of images. But that “learning” has involved not only billions of transistors and trillions of watt-hours of electricity, but also millions of hours of labour in “human training and control.”

Seeing is not perceiving

When we look at a real-world object – for example, a tree – sensors in our eyes pass messages through a network of neurons and through various specialized areas of the brain. Eventually, assuming we are old enough to have learned what a tree looks like, and both our eyes and the required parts of our brains are functioning well, we might say “I see a tree.” In short, our eyes see a configuration of light, our neural network processes that input, and the result is that our brains perceive and identify a tree.

Accomplishing the perception with electronic computing, it turns out, is no easy feat.

The perceptron invented by Dr. Frank Rosenblatt in the 1950s used a 20 pixel by 20 pixel image sensor, paired with an IBM 704 computer. Let’s look at some simple images, and how a perceptron might process the data to produce a perception. 

Images created by the author.

In the illustration at left above, what the camera “sees” at the most basic level is a column of pixels that are “on”, with all the other pixels “off”. However, if we train the computer by giving it nothing more than labelled images of the numerals from 0 to 9, the perceptron can recognize the input as matching the numeral “1”. If we then add training data in the form of labelled images of the characters in the Latin-script alphabet in a sans serif font, the perceptron can determine that it matches, equally well, the numeral “1”, the lower-case letter “l”, or an upper-case letter “I”.

The figure at right is considerably more complex. Here our perceptron is still working with a low-resolution grid, but pixels can be not only “on” or “off” – black or white – but various shades of grey. To complicate things further, suppose more training data has been added, in the form of hand-written letters and numerals, plus printed letters and numerals in an oblique sans serif font. The perceptron might now determine the figure is a numeral “1”, a lower-case “l”, or an upper-case “I”, either hand-written or printed in an oblique font, each with equal probability. The perceptron is learning how to be an optical character recognition (OCR) system, though to be very good at the task it would need the ability to use context to rank the probabilities of a numeral “1”, a lower-case “l”, or an upper-case “I”.
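To make that matching process concrete, here is a minimal sketch in Python. It is emphatically not Rosenblatt’s design – the 5×5 grid (his sensor was 20×20), the hand-set templates, and the function names are invented for illustration – but it shows how a stored pattern per label, scored against incoming pixels, produces exactly the kind of three-way tie described above.

```python
# A toy sketch, not Rosenblatt's hardware: hand-set templates stand in for
# learned weights, and the 5x5 grid is far smaller than the real 20x20 sensor.

GRID = 5  # low-resolution sensor for the example

def score(pixels, weights):
    """Weighted sum of pixel intensities: the perceptron's evidence for one class."""
    return sum(p * w for p, w in zip(pixels, weights))

def classify(pixels, classes):
    """Rank candidate labels by score, best match first."""
    return sorted(classes.items(), key=lambda kv: score(pixels, kv[1]), reverse=True)

# The input: a single vertical stroke down the centre column ("on" = 1.0).
image = [1.0 if col == 2 else 0.0 for row in range(GRID) for col in range(GRID)]

# Three classes whose stored patterns are all a centre column, so the stroke
# matches "1", "l", and "I" equally well -- a three-way tie.
column = [1.0 if col == 2 else 0.0 for row in range(GRID) for col in range(GRID)]
classes = {"numeral 1": column, "lower-case l": column, "upper-case I": column}

for label, template in classify(image, classes):
    print(label, score(image, template))  # identical scores: ambiguous input
```

Breaking the tie is where context – and far more training data – comes in.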

The possibilities multiply infinitely when we ask the perceptron about real-world objects. In the figure below, a bit of context, in the form of a visual ground, is added to the images. 

Images created by the author.

Depending, again, on the labelled training data already input to the computer, the perceptron may “see” the image at left as a tall tower, a bare tree trunk, or the silhouette of a person against a bright horizon. The perceptron might see, on the right, a leaning tree or a leaning building – perhaps the Leaning Tower of Pisa. With more training images and with added context in the input image – shapes of other buildings, for example – the perceptron might output with high statistical confidence that the figure is actually the Leaning Tower of Leeuwarden.

Today’s perceptrons can and do, with widely varying degrees of accuracy and reliability, identify and name faces in crowds, label the emotions shown by someone in a recorded job interview, analyse images from a surveillance drone and indicate that a person’s activities and surroundings match the “signature” of terrorist operations, or identify a crime scene by comparing an unlabelled image with photos of known settings from around the world. Whether right or wrong, the systems’ perceptions sometimes have critical consequences: people can be monitored, hired, fired, arrested – or executed in an instant by a US Air Force Reaper drone.

As we will discuss below, these capabilities have been developed with the aid of millions of hours of poorly-paid or unpaid human labour.

The Times article of 1958, however, described Dr. Rosenblatt’s invention this way: “the machine would be the first device to think as the human brain. As do human beings, Perceptron will make mistakes at first, but will grow wiser as it gains experience ….” The kernel of truth in that claim lies in the concept of a neural network.

Rosenblatt told the Times reporter “he could explain why the machine learned only in highly technical terms. But he said the computer had undergone a ‘self-induced change in the wiring diagram.’”

I can empathize with that Times reporter. I still hope to find a person sufficiently intelligent to explain the machine learning process so clearly that even a simpleton like me can fully understand. However, New Yorker magazine writers in 1958 made a good attempt. As quoted in Matteo Pasquinelli’s book The Eye of the Master, the authors wrote:

“If a triangle is held up to the perceptron’s eye, the association units connected with the eye pick up the image of the triangle and convey it along a random succession of lines to the response units, where the image is registered. The next time the triangle is held up to the eye, its image will travel along the path already travelled by the earlier image. Significantly, once a particular response has been established, all the connections leading to that response are strengthened, and if a triangle of a different size and shape is held up to the perceptron, its image will be passed along the track that the first triangle took.”2

With hundreds, thousands, millions and eventually billions of steps in the perception process, the computer gets better and better at interpreting visual inputs.
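The “strengthening” the New Yorker writers described corresponds, in modern terms, to the classic perceptron weight-update rule. The sketch below is illustrative – the toy input pattern, function name, and learning rate are my inventions – but the update step itself is the standard rule: connections carrying an active input toward a wrong response are adjusted until the right response is established.

```python
# A sketch of the "strengthening" the New Yorker described: the classic
# perceptron learning rule. Inputs, labels, and the learning rate below
# are illustrative, not Rosenblatt's actual parameters.

def train_step(weights, bias, pixels, target, rate=0.1):
    """Nudge weights toward connections that produce the correct response."""
    activation = sum(w * p for w, p in zip(weights, pixels)) + bias
    output = 1 if activation > 0 else 0
    error = target - output  # 0 if correct; +/-1 if the response must change
    # Strengthen (or weaken) each connection in proportion to its input.
    new_weights = [w + rate * error * p for w, p in zip(weights, pixels)]
    new_bias = bias + rate * error
    return new_weights, new_bias

# Repeated presentations of a "triangle" (here just a toy bit pattern)
# entrench the path to the "triangle" response, as the quote describes.
weights, bias = [0.0, 0.0, 0.0, 0.0], 0.0
triangle = [1, 0, 1, 1]
for _ in range(10):
    weights, bias = train_step(weights, bias, triangle, target=1)
print(weights, bias)  # connections carrying the triangle's pixels are strengthened
```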

Yet this improvement in machine perception comes at a high ecological cost. A September 2021 article entitled “Deep Learning’s Diminishing Returns” explained:

“[I]n 2012 AlexNet, the model that first showed the power of training deep-learning systems on graphics processing units (GPUs), was trained for five to six days using two GPUs. By 2018, another model, NASNet-A, had cut the error rate of AlexNet in half, but it used more than 1,000 times as much computing to achieve this.”

The authors concluded that, “Like the situation that Rosenblatt faced at the dawn of neural networks, deep learning is today becoming constrained by the available computational tools.”3

The steep increase in the computing demands of AI is illustrated in a graph by Anil Ananthaswamy.

“The Drive to Bigger AI Models” shows that AI models used for language and image generation have grown in size by several orders of magnitude since 2010. Graphic from “In AI, is Bigger Better?”, by Anil Ananthaswamy, Nature, 9 March 2023.

Behold the Mechanical Turk

In the decades since Rosenblatt built the first perceptron, there were periods when progress in this field seemed stalled. Additional theoretical advances in machine learning, a many orders-of-magnitude increase in computer processing capability, and vast quantities of training data were all prerequisites for today’s headline-making AI systems. In Atlas of AI, Kate Crawford gives a fascinating account of the struggle to acquire that data.

Up to the 1980s, artificial intelligence researchers didn’t have access to large quantities of digitized text or digitized images, and the type of machine learning that makes news today was not yet possible. The lengthy antitrust proceedings against IBM provided an unexpected boost to AI research, in the form of a hundred million digital words from legal proceedings. In the early 2000s, investigations of Enron collected more than half a million email messages sent among Enron employees. This provided text exchanges in everyday English, though Crawford notes the wording “represented the gender, race, and professional skews of those 158 workers.”

And the data floodgates were just beginning to open. As Crawford describes the change,

“The internet, in so many ways, changed everything; it came to be seen in the AI research field as something akin to a natural resource, there for the taking. As more people began to upload their images to websites, to photo-sharing services, and ultimately to social media platforms, the pillaging began in earnest. Suddenly, training sets could reach a size that scientists in the 1980s could never have imagined.”4

It took two decades for that data flood to become a tsunami. Even then, although images were often labelled and classified for free by social media users, the labels and classifications were not always consistent or even correct. There remained a need for humans to look at millions of images and create or check the labels and classifications.

Developers of the image database ImageNet collected 14 million images and eventually organized them into over twenty thousand categories. They initially hired students in the US for labelling work, but concluded that even at $10/hour, this work force would quickly exhaust the budget.

Enter the Mechanical Turk.

The original Mechanical Turk was a chess-playing scam, set up in 1770 by a Hungarian inventor. An apparently autonomous mechanical human figure, dressed in the Ottoman fashion of the day, moved chess pieces and could beat most human chess players. Decades went by before it was revealed that a skilled human chess player was concealed inside the machine for each exhibition, controlling all the motions.

In the early 2000s, Amazon developed a web platform by which AI developers, among others, could contract gig workers for many tasks that were ostensibly being done by artificial intelligence. These tasks might include, for example, labelling and classifying photographic images, or making judgements about outputs from AI-powered chat experiments. In a rare fit of honesty, Amazon labelled the process “artificial artificial intelligence”5 and launched its service, Amazon Mechanical Turk, in 2005.

Screenshot taken 3 February 2024, from the opening page at mturk.com.

Crawford writes,

“ImageNet would become, for a time, the world’s largest academic user of Amazon’s Mechanical Turk, deploying an army of piecemeal workers to sort an average of fifty images a minute into thousands of categories.”6

Chloe Xiang described this organization of work for Motherboard in an article entitled “AI Isn’t Artificial or Intelligent”:

“[There is a] large labor force powering AI, doing jobs that include looking through large datasets to label images, filter NSFW content, and annotate objects in images and videos. These tasks, deemed rote and unglamorous for many in-house developers, are often outsourced to gig workers and workers who largely live in South Asia and Africa ….”7

Laura Forlano, Associate Professor of Design at Illinois Institute of Technology, told Xiang “what human labor is compensating for is essentially a lot of gaps in the way that the systems work.”

Xiang concluded,

“Like other global supply chains, the AI pipeline is greatly imbalanced. Developing countries in the Global South are powering the development of AI systems by doing often low-wage beta testing, data annotating and labeling, and content moderation jobs, while countries in the Global North are the centers of power benefiting from this work.”

In a study published in late 2022, Kelle Howson and Hannah Johnston described why “platform capitalism”, as embodied in Mechanical Turk, is an ideal framework for exploitation, given that workers bear nearly all the costs while contractors take no responsibility for working conditions. The platforms are able to enroll workers from many countries in large numbers, so that workers are constantly low-balling to compete for ultra-short-term contracts. Contractors are also able to declare that the work submitted is “unsatisfactory” and therefore will not be paid, knowing the workers have no effective recourse and can be replaced by other workers for the next task. Workers are given an estimated “time to complete” before accepting a task, but if the work turns out to require two or three times as many hours, the workers are still only paid for the hours specified in the initial estimate.8

A survey of 700 cloudworkers (“independent contractors,” in the fictive lingo of the gig-work platforms) found that about 34% of the time they spent on these platforms was unpaid. “One key outcome of these manifestations of platform power is pervasive unpaid labour and wage theft in the platform economy,” Howson and Johnston wrote.9 From the standpoint of major AI ventures at the top of the extraction pyramid, pervasive wage theft is not a bug in the system; it is a feature.

The apparently dazzling brilliance of AI-model creators and semiconductor engineers gets the headlines in western media. But without low-paid or unpaid work by workers in the Global South, “AI systems won’t function,” Crawford writes. “The technical AI research community relies on cheap, crowd-sourced labor for many tasks that can’t be done by machines.”10

Whether vacuuming up data that has been created by the creative labour of hundreds of millions of people, or relying on tens of thousands of low-paid workers to refine the perception process for reputedly super-intelligent machines, the AI value chain is another example of extractivism.

“AI image and text generation is pure primitive accumulation,” James Bridle writes, “expropriation of labour from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners.”11

“All seven emotions”

New AI implementations don’t usually start with a clean slate, Crawford says – they typically borrow classification systems from earlier projects.

“The underlying semantic structure of ImageNet,” Crawford writes, “was imported from WordNet, a database of word classifications first developed at Princeton University’s Cognitive Science Laboratory in 1985 and funded by the U.S. Office of Naval Research.”12

But classification systems are unavoidably political when it comes to slotting people into categories. In the ImageNet groupings of pictures of humans, Crawford says, “we see many assumptions and stereotypes, including race, gender, age, and ability.”

She explains,

“In ImageNet the category ‘human body’ falls under the branch Natural Object → Body → Human Body. Its subcategories include ‘male body,’ ‘person,’ ‘juvenile body,’ ‘adult body,’ and ‘female body.’ The ‘adult body’ category contains the subclasses ‘adult female body’ and ‘adult male body.’ There is an implicit assumption here that only ‘male’ and ‘female’ bodies are recognized as ‘natural.’”13

Readers may have noticed that US military agencies were important funders of some key early AI research: Frank Rosenblatt’s perceptron in the 1950s, and the WordNet classification scheme in the 1980s, were both funded by the US Navy.

For the past six decades, the US Department of Defense has also been interested in systems that might detect and measure the movements of muscles in the human face, and in so doing, identify emotions. Crawford writes, “Once the theory emerged that it is possible to assess internal states by measuring facial movements and the technology was developed to measure them, people willingly adopted the underlying premise. The theory fit what the tools could do.”14

Several major corporations now market services with roots in this military-funded research into machine recognition of human emotion – even though, as many people have insisted, the emotions people express on their faces don’t always match the emotions they are feeling inside.

Affectiva is a corporate venture spun out of the Media Lab at Massachusetts Institute of Technology. On their website they claim “Affectiva created and defined the new technology category of Emotion AI, and evangelized its many uses across industries.” The opening page of affectiva.com spins their mission as “Humanizing Technology with Emotion AI.”

Who might want to contract services for “Emotion AI”? Media companies, perhaps, want to “optimize content and media spend by measuring consumer emotional responses to videos, ads, movies and TV shows – unobtrusively and at scale.” Auto insurance companies, perhaps, might want to keep their (mechanical) eyes on you while you drive: “Using in-cabin cameras our AI can detect the state, emotions, and reactions of drivers and other occupants in the context of a vehicle environment, as well as their activities and the objects they use. Are they distracted, tired, happy, or angry?”

Affectiva’s capabilities, the company says, draw on “the world’s largest emotion database of more than 80,000 ads and more than 14.7 million faces analyzed in 90 countries.”15 As reported by The Guardian, the videos are screened by workers in Cairo, “who watch the footage and translate facial expressions to corresponding emotions.”16

There is a slight problem: there is no clear and generally accepted definition of an emotion, nor general agreement on just how many emotions there might be. But “emotion AI” companies don’t let those quibbles get in the way of business.

Amazon’s Rekognition service announced in 2019 “we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’)” – but they were proud to have “added a new emotion: ‘Fear’.”17
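For a sense of how casually these judgments are packaged for developers, here is a rough sketch of a call to Rekognition’s face-analysis service using Amazon’s boto3 library. The image file name is hypothetical, and the response fields shown follow the API’s general documented shape rather than a verified transcript.

```python
# A sketch of how a developer consumes Rekognition's emotion labels via the
# AWS boto3 library (credentials must already be configured). The file name
# is hypothetical; the response fields reflect the API's documented shape.
import boto3

client = boto3.client("rekognition")

with open("job_interview_frame.jpg", "rb") as f:  # hypothetical image
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # "ALL" includes the Emotions estimates
    )

for face in response["FaceDetails"]:
    # Each detected face gets a ranked list of emotion guesses with confidences.
    for emotion in face["Emotions"]:
        print(emotion["Type"], round(emotion["Confidence"], 1))
```

A few lines of code, and every face in a video frame comes back tagged “HAPPY” or “FEAR” with a confidence score – whatever the person behind the face is actually feeling.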

Facial- and emotion-recognition systems, with deep roots in military and intelligence agency research, are now widely employed not only by these agencies but also by local police departments. Nor is their use confined to governments: they serve a wide range of purposes in the corporate world. Their production and operation likewise cross public-private lines; though much of the initial research was government-funded, commercialization now allows corporate interests to sell the resulting services to public and private clients around the world.

What is the likely impact of these AI-aided surveillance tools? Dan McQuillan sees it this way:

“We can confidently say that the overall impact of AI in the world will be gendered and skewed with respect to social class, not only because of biased data but because engines of classification are inseparable from systems of power.”18

In our next installment we’ll see that biases in data sources and classification schemes are reflected in the outputs of the GPT large language model.


Image at top of post: The Senture computer server facility in London, Ky, on July 14, 2011, photo by US Department of Agriculture, public domain, accessed on flickr.

Title credit: the title of this post quotes a lyric of “Data Inadequate”, from the 1998 album Live at Glastonbury by Banco de Gaia.


Notes

1 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

2 “Rival,” by Harding Mason, D. Stewart, and Brendan Gill, The New Yorker, November 28, 1958. Quoted by Matteo Pasquinelli in The Eye of the Master: A Social History of Artificial Intelligence, Verso Books, October 2023, page 137.

3 “Deep Learning’s Diminishing Returns,” by Neil C. Thompson, Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso, IEEE Spectrum, 24 September 2021.

4 Crawford, Kate, Atlas of AI, Yale University Press, 2021.

5 This phrase is cited by Elizabeth Stevens and attributed to Jeff Bezos, in “The mechanical Turk: a short history of ‘artificial artificial intelligence’,” Cultural Studies, 8 March 2022.

6 Crawford, Atlas of AI.

7 Chloe Xiang, “AI Isn’t Artificial or Intelligent: How AI innovation is powered by underpaid workers in foreign countries,” Motherboard, 6 December 2022.

8 Kelle Howson and Hannah Johnston, “Unpaid labour and territorial extraction in digital value networks,” Global Network, 26 October 2022.

9 Howson and Johnston, “Unpaid labour and territorial extraction in digital value networks.”

10 Crawford, Atlas of AI.

11 James Bridle, “The Stupidity of AI”, The Guardian, 16 Mar 2023.

12 Crawford, Atlas of AI.

13 Crawford, Atlas of AI.

14 Crawford, Atlas of AI.

15 Quotes from Affectiva taken from www.affectiva.com on 5 February 2024.

16 Oscar Schwarz, “Don’t look now: why you should be worried about machines reading your emotions,” The Guardian, 6 March 2019.

17 From Amazon Web Services Rekognition website, accessed on 5 February 2024; italics added.

18 Dan McQuillan, “Post-Humanism, Mutual Aid,” in AI for Everyone? Critical Perspectives, University of Westminster Press, 2021.

Bodies, Minds, and the Artificial Intelligence Industrial Complex

Also published on Resilience.

This year may or may not be the year the latest wave of AI-hype crests and subsides. But let’s hope this is the year mass media slow their feverish speculation about the future dangers of Artificial Intelligence, and focus instead on the clear and present, right-now dangers of the Artificial Intelligence Industrial Complex.

Lost in most sensational stories about Artificial Intelligence is that AI does not and cannot exist on its own, any more than other minds, including human minds, can exist independent of bodies. These bodies have evolved through billions of years of coping with physical needs, and intelligence is linked to and inescapably shaped by these physical realities.

What we call Artificial Intelligence is likewise shaped by physical realities. Computing infrastructure necessarily reflects the properties of physical materials that are available to be formed into computing machines. The infrastructure is shaped by the types of energy and the amounts of energy that can be devoted to building and running the computing machines. The tasks assigned to AI reflect those aspects of physical realities that we can measure and abstract into “data” with current tools. Last but certainly not least, AI is shaped by the needs and desires of all the human bodies and minds that make up the Artificial Intelligence Industrial Complex.

As Kate Crawford wrote in Atlas of AI,

“AI can seem like a spectral force — as disembodied computation — but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.”1

The metaphors we use for high-tech phenomena influence how we think of these phenomena. Take, for example, “the Cloud”. When we store a photo “in the Cloud”, we imagine that photo floating in the ether, simultaneously everywhere and nowhere, unconnected to earth-bound reality.

But as Steven Gonzalez Monserrate reminded us, “The Cloud is Material”. The Cloud is tens of thousands of kilometers of data cables, tens of thousands of server CPUs in server farms, hydroelectric and wind-turbine and coal-fired and nuclear generating stations, satellites, cell-phone towers, hundreds of millions of desktop computers and smartphones, plus all the people working to make and maintain the machinery: “the Cloud is not only material, but is also an ecological force.”2

It is possible to imagine “the Cloud” without an Artificial Intelligence Industrial Complex, but the AIIC, at least in its recent news-making forms, could not exist without the Cloud.

The AIIC relies on the Cloud as a source of massive volumes of data used to train Large Language Models and image recognition models. It relies on the Cloud to sign up thousands of low-paid gig workers for work on crucial tasks in refining those models. It relies on the Cloud to rent out computing power to researchers and to sell AI services. And it relies on the Cloud to funnel profits into the accounts of the small number of huge corporations at the top of the AI pyramid.

So it’s crucial that we reimagine both the Cloud and AI to escape from mythological nebulous abstractions, and come to terms with the physical, energetic, flesh-and-blood realities. In Crawford’s words,

“[W]e need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it.”3

Through a series of posts we’ll take a deeper look at key aspects of the Artificial Intelligence Industrial Complex, including:

  • the AI industry’s voracious and growing appetite for energy and physical resources;
  • the AI industry’s insatiable need for data, the types and sources of data, and the continuing reliance on low-paid workers to make that data useful to corporations;
  • the biases that come with the data and with the classification of that data, which both reflect and reinforce current social inequalities;
  • AI’s deep roots in corporate efforts to measure, control, and more effectively extract surplus value from human labour;
  • the prospect of “superintelligence”, or an AI that is capable of destroying humanity while living on without us;
  • the results of AI “falling into the wrong hands” – that is, into the hands of the major corporations that dominate AI, and which, as part of our corporate-driven economy, are driving straight towards the cliff of ecological suicide.

One thing this series will not attempt is providing a definition of “Artificial Intelligence”, because there is no workable single definition. The phrase “artificial intelligence” has come into and out of favour as different approaches prove more or less promising, and many computer scientists in recent decades have preferred to avoid the phrase altogether. Different programming and modeling techniques have shown useful benefits and drawbacks for different purposes, but it remains debatable whether any of these results are indications of intelligence.

Yet “artificial intelligence” keeps its hold on the imaginations of the public, journalists, and venture capitalists. Matteo Pasquinelli cites a popular Twitter quip that sums it up this way:

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”4

Computers, be they boxes on desktops or the phones in pockets, are the most complex of tools to come into common daily use. And the computer network we call the Cloud is the most complex socio-technical system in history. It’s easy to become lost in the detail of any one of a billion parts in that system, but it’s important to also zoom out from time to time to take a global view.

The Artificial Intelligence Industrial Complex sits at the apex of a pyramid of industrial organization. In the next installment we’ll look at the vast physical needs of that complex.


Notes

1 Kate Crawford, Atlas of AI, Yale University Press, 2021.

2 Steven Gonzalez Monserrate, “The Cloud is Material: Environmental Impacts of Computation and Data Storage,” MIT Schwarzman College of Computing, January 2022.

3 Crawford, Atlas of AI, Yale University Press, 2021.

4 Quoted by Matteo Pasquinelli in “How a Machine Learns and Fails – A Grammar of Error for Artificial Intelligence,” Spheres, November 2019.


Image at top of post: Margaret Henschel in Intel wafer fabrication plant, photo by Carol M. Highsmith, part of a collection placed in the public domain by the photographer and donated to the Library of Congress.

A road map that misses some turns

A review of No Miracles Needed

Also published on Resilience.

Mark Jacobson’s new book, greeted with hosannas by some leading environmentalists, is full of good ideas – but the whole is less than the sum of its parts.

No Miracles Needed, by Mark Z. Jacobson, published by Cambridge University Press, Feb 2023. 437 pages.

The book is No Miracles Needed: How Today’s Technology Can Save Our Climate and Clean Our Air (Cambridge University Press, Feb 2023).

Jacobson’s argument is both simple and sweeping: We can transition our entire global economy to renewable energy sources, using existing technologies, fast enough to reduce annual carbon dioxide emissions by at least 80% by 2030, and by 100% by 2050. Furthermore, we can do all this while avoiding any major economic disruption such as a drop in annual GDP growth, a rise in unemployment, or any drop in creature comforts. But wait – there’s more! In so doing, we will also completely eliminate pollution.

Just don’t tell Jacobson that this future sounds miraculous.

The energy transition technologies we need – based on Wind, Water and Solar power, abbreviated to WWS – are already commercially available, Jacobson insists. He contrasts the technologies he favors with “miracle technologies” such as geoengineering, Carbon Capture, Utilization, and Storage (CCUS), or Direct Air Capture of carbon dioxide (DAC). These latter technologies, he argues, are unneeded, unproven, expensive, and will take far too long to implement at scale; we shouldn’t waste our time on such schemes.

The final chapter helps to understand both the hits and misses of the previous chapters. In “My Journey”, a teenage Jacobson visits the smog-cloaked cities of southern California and quickly becomes aware of the damaging health effects of air pollution:

“I decided then and there, that when I grew up, I wanted to understand and try to solve this avoidable air pollution problem, which affects so many people. I knew what I wanted to do for my career.” (No Miracles Needed, page 342)

His early academic work focused on the damages of air pollution to human health. Over time, he realized that the problem of global warming emissions was closely related. The increasingly sophisticated computer models he developed were designed to elucidate the interplay between greenhouse gas emissions, and the particulate emissions from combustion that cause so much sickness and death.

These modeling efforts won increasing recognition and attracted a range of expert collaborators. Over the past 20 years, Jacobson’s work moved beyond academia into political advocacy. “My Journey” describes the growth of an organization capable of developing detailed energy transition plans for presentation to US governors, senators, and CEOs of major tech companies. Eventually that led to Jacobson’s publication of transition road maps for states, countries, and the globe – road maps that have been widely praised and widely criticized.

In my reading, Jacobson’s personal journey casts light on key features of No Miracles Needed in two ways. First, there is a singular focus on air pollution, to the omission or dismissal of other types of pollution. Second, it’s not likely Jacobson would have received repeat audiences with leading politicians and business people if he challenged the mainstream orthodox view that GDP can and must continue to grow.

Jacobson’s road map, then, is based on the assumption that all consumer products and services will continue to be produced in steadily growing quantities – but they’ll all be WWS based.

Does he prove that a rapid transition is a realistic scenario? Not in this book.

Hits and misses

Jacobson gives us brief but marvelously lucid descriptions of many WWS generating technologies, plus storage technologies that will smooth the intermittent supply of wind- and sun-based energy. He also goes into considerable detail about the chemistry of solar panels, the physics of electricity generation, and the amount of energy loss associated with each type of storage and transmission.

These sections are aimed at a lay readership and they succeed admirably. There is more background detail, however, than is needed to explain the book’s central thesis.

The transition road map, on the other hand, is not explained in much detail. There are many references to the scientific papers in which Jacobson outlines his road maps. A reader of No Miracles Needed can take Jacobson’s word that the model is a suitable representation, or can track down and read his articles in academic journals – but the needed details are not in this book.

Jacobson explains why, at the level of a device such as a car or a heat pump, electric energy is far more efficient in producing motion or heat than is an internal combustion engine or a gas furnace. Less convincingly, he argues that electric technologies are far more energy-efficient than combustion for the production of industrial heat – while nevertheless conceding that some WWS technologies needed for industrial heat are, at best, in prototype stages.

Yet Jacobson expresses serene confidence that hard-to-electrify technologies, including some industrial processes and long-haul aviation, will be successfully transitioning to WWS processes – perhaps including green hydrogen fuel cells, but not hydrogen combustion – by 2035.

The confidence in complex global projections is often jarring. For example, Jacobson tells us repeatedly that the fully WWS energy system of 2050 “reduces end-use energy requirements by 56.4 percent” (pages 271, 275).1 The expressed precision notwithstanding, nobody yet knows the precise mix of storage types, generation types, and transmission types – each with its own degree of energy efficiency – that will constitute a future WWS global system. What we should take from Jacobson’s statements is that, based on the subset of factors and assumptions – from an almost infinitely complex global energy ecosystem – which Jacobson has included in his model, the calculated outcome is a 56% end-use energy reduction.

Canada’s Premiers visit Muskrat Falls dam construction site, 2015. Photo courtesy of Government of Newfoundland and Labrador; CC BY-NC-ND 2.0 license, via Flickr.

Also jarring is the almost total disregard of any type of pollution other than that which comes from fossil fuel combustion. Jacobson does briefly mention the particles that grind off the tires of all vehicles, including typically heavier EVs. But rather than concede that these particles are toxic and can harm human and ecosystem health, he merely notes that the relatively large particles “do not penetrate so deep into people’s lungs as combustion particles do.” (page 49)

He claims, without elaboration, that “Environmental damage due to lithium mining can be averted almost entirely.” (page 64) Near the end of the book, he states that “In a 2050 100 percent WWS world, WWS energy private costs equal WWS energy social costs because WWS eliminates all health and climate costs associated with energy.” (page 311; emphasis mine)

In a culture which holds continual economic growth to be sacred, it would be convenient to believe that business-as-usual can continue through 2050, with the only change required being a switch to WWS energy.

Imagine, then, that climate-changing emissions were the only critical flaw in the global economic system. Given that assumption, is Jacobson’s timetable for transition plausible?

No. First, Jacobson proposes that “by 2022”, no new power plants be built that use coal, methane, oil or biomass combustion; and that all new appliances for heating, drying and cooking in the residential and commercial sectors “should be powered by electricity, direct heat, and/or district heating.” (page 319) That deadline has passed, and products that rely on combustion continue to be made and sold. It is a mystery why Jacobson or his editors would retain a 2022 transition deadline in a book slated for publication in 2023.

Other sections of the timeline also strain credulity. “By 2023”, the timeline says, all new vehicles in the following categories should be either electric or hydrogen fuel-cell: rail locomotives, buses, nonroad vehicles for construction and agriculture, and light-duty on-road vehicles. This is now possible only in a purely theoretical sense. Batteries adequate for powering heavy-duty locomotives and tractors are not yet in production. Even if they were in production, and that production could be scaled up within a year, the charging infrastructure needed to quickly recharge massive tractor batteries could not be installed, almost overnight, at large farms or remote construction sites around the world.

While electric cars, pick-ups and vans now roll off assembly lines, the global auto industry is not even close to being ready to switch the entire product lineup to EV only. Unless, of course, they were to cut back auto production by 75% or more until production of EV motors, batteries, and charging equipment can scale up. Whether you think that’s a frightening prospect or a great idea, a drastic shrinkage in the auto industry would be a dramatic departure from a business-as-usual scenario.

What’s the harm, though, if Jacobson’s ambitious timeline is merely pushed back by two or three years?

If we were having this discussion in 2000 or 2010, pushing back the timeline by a few years would not be as consequential. But as Jacobson explains effectively in his outline of the climate crisis, we now need both drastic and immediate actions to keep cumulative carbon emissions low enough to avoid global climate catastrophe. His timeline is constructed with the goal of reducing carbon emissions by 80% by 2030, not because those are nice round figures, but because he (and many others) calculate that reductions of that scale and rapidity are truly needed. Even one or two more years of emissions at current rates may make the 1.5°C warming limit an impossible dream.

The picture is further complicated by a factor Jacobson mentions only in passing. He writes,

“During the transition, fossil fuels, bioenergy, and existing WWS technologies are needed to produce the new WWS infrastructure. … [A]s the fraction of WWS energy increases, conventional energy generation used to produce WWS infrastructure decreases, ultimately to zero. … In sum, the time-dependent transition to WWS infrastructure may result in a temporary increase in emissions before such emissions are eliminated.” (page 321; emphasis mine)

Others have explained this “temporary increase in emissions” at greater length. Assuming, as Jacobson does, that a “business-as-usual” economy keeps growing, the vast majority of goods and services will continue, in the short term, to be produced and/or operated using fossil fuels. If we embark on an intensive, global-scale, rapid build-out of WWS infrastructures at the same time, a substantial increment in fossil fuels will be needed to power all the additional mines, smelters, factories, container ships, trucks and cranes which build and install the myriad elements of a new energy infrastructure. If all goes well, that new energy infrastructure will eventually be large enough to power its own further growth, as well as to power production of all other goods and services that now rely on fossil energy.

Unless we accept a substantial decrease in non-transition-related industrial activity, however, the road that takes us to a full WWS destination must route us through a period of increased fossil fuel use and increased greenhouse gas emissions.

It would be great if Jacobson modeled this increase to give us some guidance how big this emissions bump might be, how long it might last, and therefore how important it might be to cumulative atmospheric carbon concentrations. There is no suggestion in this book that he has done that modeling. What should be clear, however, is that any bump in emissions at this late date increases the danger of moving past a climate tipping point – and this danger increases dramatically with every passing year.


1 In a tl;dr version of No Miracles Needed published recently in The Guardian, Jacobson says “Worldwide, in fact, the energy that people use goes down by over 56% with a WWS system.” (“‘No miracles needed’: Prof Mark Jacobson on how wind, sun and water can power the world”, 23 January 2023)



Photo at top of page by Romain Guy, 2009; public domain, CC0 1.0 license, via Flickr.

Profits of Utopia

Also published on Resilience.

What led to the twentieth century’s rapid economic growth? And what are the prospects for that kind of growth to return?

Slouching Towards Utopia: An Economic History of the Twentieth Century was published by Basic Books, Sept 2022; 605 pages.

Taken together, two new books go a long way toward answering the first of those questions.

J. Bradford DeLong intends his Slouching Towards Utopia to be a “grand narrative” of what he calls “the long twentieth century”.

Mark Stoll summarizes his book Profit as “a history of capitalism that seeks to explain both how capitalism changed the natural world and how the environment shaped capitalism.”

By far the longer of the two books, DeLong’s tome primarily concerns the years from 1870 to 2010. Stoll’s slimmer volume goes back thousands of years, though the bulk of his coverage concerns the past seven centuries.

Both books are well organized and well written. Both make valuable contributions to an understanding of our current situation. In my opinion Stoll casts a clearer light on the key problems we now face.

Although neither book explicitly addresses the prospects for future prosperity, Stoll’s concluding verdict offers a faint hope.

Let’s start with Slouching Towards Utopia. J. Bradford DeLong, a professor of economics at the University of California, Berkeley, describes “the long twentieth century” – from 1870 to 2010 – as “the first century in which the most important historical thread was what anyone would call the economic one, for it was the century that saw us end our near-universal dire material poverty.” (Slouching Towards Utopia, page 2; emphasis mine) Unfortunately that is as close as he gets in this book to defining just what he means by “economics”.

On the other hand he does tell us what “political economics” means:

“There is a big difference between the economic and the political economic. The latter term refers to the methods by which people collectively decide how they are going to organize the rules of the game within which economic life takes place.” (page 85; emphasis in original)

Discussions of the political economics of the long twentieth century, in my opinion, account for most of the bulk and most of the value in this book.

DeLong weaves into his narratives frequent – but also clear and concise – explanations of the work of John Maynard Keynes, Friedrich Hayek, and Karl Polanyi. These three very different theorists responded to, and helped bring about, major changes in “the rules of the game within which economic life takes place”.

DeLong uses their work to good effect in explaining how policymakers and economic elites navigated and tried to influence the changing currents of market fundamentalism, authoritarian collectivism, social democracy, the New Deal, and neoliberalism.

With each swing of the political economic pendulum, the industrial, capitalist societies either slowed, or sped up, the advance “towards utopia” – a society in which all people, regardless of class, race, or sex, enjoy prosperity, human rights and a reasonably fair share of the society’s wealth.

DeLong and Stoll present similar perspectives on the “Thirty Glorious Years” from the mid-1940s to the mid-1970s, and a similarly dim view of the widespread turn to neoliberalism since then.

They also agree that while a “market economy” plays an important role in generating prosperity, a “market society” rapidly veers into disaster. That is because the market economy, left to its own devices, exacerbates inequalities so severely that social cohesion falls apart. The market must be governed by social democracy, and not the other way around.

DeLong provides one tragic example:

“With unequal distribution, a market economy will generate extraordinarily cruel outcomes. If my wealth consists entirely of my ability to work with my hands in someone else’s fields, and if the rains do not come, so that my ability to work with my hands has no productive market value, then the market will starve me to death – as it did to millions of people in Bengal in 1942 and 1943.” (Slouching Towards Utopia, p 332)

Profit: An Environmental History was published by Polity Books, January 2023; 280 pages.

In DeLong’s and Stoll’s narratives, during the period following World War II “the rules of the economic game” in industrialized countries were set in a way that promoted widespread prosperity and rising wealth for nearly all classes, without a concomitant rise in inequality.

As a result, economic growth during that period was far higher than it had been from 1870 to 1940, before the widespread influence of social democracy, and far higher than it has been since about 1975 during the neoliberal era.

During the Thirty Glorious Years, incomes from the factory floor to the CEO’s office rose at roughly the same rate. Public funding of advanced education, an income for retired workers, unemployment insurance, strong labor unions, and (in countries more civilized than the US) public health insurance – these social democratic features ensured that a large and growing number of people could continue to buy the ever-increasing output of the consumer economy. High marginal tax rates ensured that government war debts would be retired without cutting off the purchasing power of lower and middle classes.

Stoll explains that long-time General Motors chairman Alfred Sloan played a key role in the transition to a consumer economy. Under his leadership GM pioneered a line-up ranging from economy cars to luxury cars; the practice of regularly introducing new models whose primary features were differences in fashion; heavy spending on advertising to promote the constantly-changing lineup; and auto financing which allowed consumers to buy new cars without first saving up the purchase price.

By then the world’s largest corporation, GM flourished during the social democratic heyday of the Thirty Glorious Years. But in Stoll’s narrative, executives like Alfred Sloan couldn’t resist meddling with the very conditions that had made their version of capitalism so successful:

“There was a worm in the apple of postwar prosperity, growing out of sight until it appeared in triumph in the late 1970s. The regulations and government activism of the New Deal … so alarmed certain wealthy corporate leaders, Alfred Sloan among them, that they began to develop a propaganda network to promote weak government and low taxes.” (Profit, page 176)

This propaganda network achieved hegemony in the 1980s as Ronald Reagan and Margaret Thatcher took the helm in the US and the UK. DeLong and Stoll concur that the victory of neoliberalism resulted in a substantial drop in the economic growth rate, along with a rapid growth in inequality. As DeLong puts it, the previous generation’s swift march towards utopia slowed to a crawl.

DeLong and Stoll, then, share a great deal when it comes to political economics – the political rules that govern how economic wealth is distributed.

On the question of how that economic wealth is generated, however, DeLong is weak and Stoll makes a better guide.

DeLong introduces his discussion of the long twentieth century with the observation that between 1870 and 2010, economic growth far outstripped population growth for the first time in human history. What led to that economic acceleration? There were three key factors, DeLong says:

“Things changed starting around 1870. Then we got the institutions for organization and research and the technologies – we got full globalization, the industrial research laboratory, and the modern corporation. These were the keys. These unlocked the gate that had previously kept humanity in dire poverty.” (Slouching Towards Utopia, p. 3)

Thomas Edison’s research lab in West Orange, New Jersey. Cropped from photo by Anita Gould, 2010, CC BY-SA 2.0 license, via Flickr.

These may have been necessary conditions for a burst of economic growth, but were they sufficient? If they were sufficient, then why should we believe that the long twentieth century is conclusively over? Since DeLong’s three keys are still in place, and if only the misguided leadership of neoliberalism has spoiled the party, would it not be possible that a swing of the political economic pendulum could restore the conditions for rapid economic growth?

Indeed, in one of DeLong’s few remarks directly addressing the future he says “there is every reason to believe prosperity will continue to grow at an exponential rate in the centuries to come.” (page 11)

Stoll, by contrast, deals with the economy as inescapably embedded in the natural environment, and he emphasizes the revolutionary leap forward in energy production in the second half of the 19th century.

Energy and environment

Stoll’s title and subtitle are apt – Profit: An Environmental History. He says that “economic activity has always degraded environments” (p. 6) and he provides examples from ancient history as well as from the present.

Economic development in this presentation is “the long human endeavor to use resources more intensively.” (p. 7) In every era, tapping energy sources has been key.

European civilization reached for the resources of other regions in the late medieval era. Technological developments such as improved ocean-going vessels allowed incipient imperialism, but additional energy sources were also essential. Stoll explains that the Venetian, Genoese and Portuguese traders who pioneered a new stage of capitalism all relied in part on the slave trade:

“By the late fifteenth century, slaves made up over ten percent of the population of Lisbon, Seville, Barcelona, and Valencia and remained common in southern coastal Portugal and Spain for another century or two.” (p. 40)

The slave trade went into high gear after Columbus chanced across the Americas. That is because, even after they had confiscated two huge continents rich in resources, European imperial powers still relied on the consumption of other humans’ lives as an economic input:

“Free-labor colonies all failed to make much profit and most failed altogether. Colonizers resorted to slavery to people colonies and make them pay. For this reason Africans would outnumber Europeans in the Americas until the 1840s.” (p. 47)

While the conditions of slavery in Brazil were “appallingly brutal”, Stoll writes, Northern Europeans made slavery even more severe. As a result, “Conditions in slave plantations were so grueling and harsh that birthrates trailed deaths in most European plantation colonies.” (p. 49)

‘Shipping Sugar’ from William Clark’s ‘Ten views in the island of Antigua’ (Thomas Clay, London, 1823). Public domain image via Picryl.com.

Clearly, then, huge numbers of enslaved workers played a major and fundamental role in rising European wealth between 1500 and 1800. It is perhaps no coincidence that in the 19th century, as slavery was being outlawed in colonial empires, European industries were learning how to make effective use of a new energy source: coal. By the end of that century, the fossil fuel economy had begun its meteoric climb.

Rapid increases in scientific knowledge, aided by such organizations as modern research laboratories, certainly played a role in commercializing methods of harnessing the energy in coal and oil. Yet this technological knowhow on its own, without abundant quantities of readily-extracted coal and oil, would not have led to an explosion of economic growth.

Where DeLong is content to list “three keys to economic growth” that omit fossil fuels, Stoll adds a fourth key – not merely the technology to use fossil fuels, but the material availability of those fuels.

By 1900, coal-powered engines had transformed factories, mines, ocean transportation via steamships, land transportation via railroads, and the beginnings of electrical grids. The machinery of industry could supply more goods than most people had ever thought they might want, a development Stoll explains as a transition from an industrial economy to a consumer economy.

Coal, however, could not have powered the car culture that swept across North America before World War II, and across the rest of the industrialized world after the War. To shift the consumer economy into overdrive, an even richer and more flexible energy source was needed: petroleum.

By 1972, Stoll notes, the global demand for petroleum was five-and-a-half times as great as in 1949.

Like DeLong, Stoll marks the high point of the economic growth rate at about 1970. And like DeLong, he sees the onset of neoliberalism as one factor slowing and eventually stalling the consumer economy. Unlike DeLong, however, Stoll also emphasizes the importance of energy sources in this trajectory. In the period leading up to 1970 net energy availability was skyrocketing, making rapid economic growth achievable. After 1970 net energy availability grew more slowly, and increasing amounts of energy had to be used up in the process of finding and extracting energy. In other words, the Energy Return on Energy Invested, which increased rapidly between 1870 and 1970, peaked and started to decline over recent decades.
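For readers new to the term: Energy Return on Energy Invested is conventionally defined as a simple ratio (the notation below is mine, not Stoll’s):

```latex
% A standard formulation of the ratio Stoll invokes:
\[
\mathrm{EROEI} \;=\; \frac{E_{\text{delivered}}}{E_{\text{invested}}}
\]
```

As that ratio falls toward 1, nearly all the energy obtained is consumed in obtaining it, leaving less and less surplus to power the rest of the economy.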

This gradual turnaround in net energy, along with the pervasive influence of neoliberal ideologies, contributed to the faltering of economic growth. The rich got richer at an even faster pace, but most of society gained little or no ground.

Stoll pays close attention to the kinds of resources needed to produce economic growth – the inputs. He also emphasizes the anti-goods that our economies turn out on the other end, be they toxic wastes from mining and smelting, petroleum spills, smog, pervasive plastic garbage, or climate-disrupting carbon dioxide emissions.

Stoll writes, 

“The relentless, rising torrent of consumer goods that gives Amazon.com its apt name places unabating demand on extractive industries for resources and energy. Another ‘Amazon River’ of waste flows into the air, water, and land.” (Profit, p. 197)

Can the juggernaut be turned around before it destroys both our society and our ecological life-support systems, and can a fair, sustainable economy take its place? On this question, Stoll’s generally excellent book disappoints.

While he appears to criticize the late-twentieth century environmental movement for not daring to challenge capitalism itself, in Profit’s closing pages he throws cold water on any notion that capitalism could be replaced.

“Capitalism … is rooted in human nature and human history. These deep roots, some of which go back to our remotest ancestors, make capitalism resilient and adaptable to time and circumstances, so that the capitalism of one time and place is not that of another. These roots also make it extraordinarily difficult to replace.” (Profit, p. 253)

He writes that “however much it might spare wildlife and clean the land, water, and air, we stop the machinery of consumer capitalism at our peril.” (p. 254) If we are to avoid terrible social and economic unrest and suffering, we must accept that “we are captives on this accelerating merry-go-round of consumer capitalism.” (p. 256)

It’s essential to curb the power of big corporations and switch to renewable energy sources, he says. But in a concluding hint at the so-far non-existent phenomenon of “absolute decoupling”, he writes,

“The only requirement to keep consumer capitalism running is to keep as much money flowing into as many pockets as possible. The challenge may be to do so with as little demand for resources as possible.” (Profit, p. 256)

Are all these transformations possible, and can they happen in time? Stoll’s final paragraph says “We can only hope it will be possible.” Given the rest of his compelling narrative, that seems a faint hope indeed.

* * *

Coming next: another new book approaches the entanglements of environment and economics with a very different perspective, telling us with cheerful certainty that we can indeed switch the industrial economy to clean, renewable energies, rapidly, fully, and with no miracles needed.



Image at top of page: ‘The Express Train’, by Charles Parsons, 1859, published by Currier and Ives. Image donated to Wikimedia Commons by Metropolitan Museum of Art.

 

Segregation, block by block

Also published on Resilience

Is the purpose of zoning to ensure that towns and cities develop according to a rational plan? Does zoning protect the natural environment? Does zoning help promote affordable housing? Does zoning protect residents from the air pollution, noise pollution, and dangers of industrial complexes or busy highways?

To begin to answer these questions, consider this example from M. Nolan Gray’s new book Arbitrary Lines:

“It remains zoning ‘best practice’ that single-family residential districts should be ‘buffered’ from bothersome industrial and commercial districts by multifamily residential districts. This reflects zoning’s modus operandi of protecting single-family houses at all costs, but it makes no sense from a land-use compatibility perspective. While a handful of generally more affluent homeowners may be better off, it comes at the cost of many hundreds more less affluent residents suffering a lower quality of life.” (M. Nolan Gray, page 138)

Arbitrary Lines by M. Nolan Gray is published by Island Press, June 2022.

The intensification of inequality, Gray argues, is not an inadvertent side-effect of zoning, but its central purpose.

If you are interested in affordable housing, housing equity, environmental justice, reduction of carbon emissions, adequate public transit, or streets that are safe for walking and cycling, Arbitrary Lines is an excellent resource for understanding how American cities got the way they are and how they might be changed for the better. (The book doesn’t discuss Canada, but much of Gray’s argument seems readily applicable to Canadian cities and suburbs.)

In part one and part two of this series, we looked at the complex matrix of causes that explain why “accidents”, far from being randomly distributed, happen disproportionately to disadvantaged people. In There Are No Accidents Jessie Singer writes, “Accidents are the predictable result of unequal power in every form – physical and systemic. Across the United States, all the places where a person is most likely to die by accident are poor. America’s safest corners are all wealthy.” (Singer, page 13)

Gray does not deal directly with traffic accidents, or mortality due in whole or part to contaminants from pollution sources close to poor neighbourhoods. His lucid explanation of zoning, however, helps us understand one key mechanism by which disadvantaged people are confined to unhealthy, dangerous, unpleasant places to live.

‘Technocratic apartheid’

Zoning codes in the US today make no mention of race, but Gray traces the history of zoning back to explicitly racist goals. In the early 20th century, he says, zoning laws were adopted most commonly in southern cities for the express purpose of enforcing racial segregation. As courts became less tolerant of open racism, they nonetheless put a stamp of approval on economic segregation. Given the skewed distribution of wealth, economic segregation usually produced or preserved de facto racial segregation as well.

The central feature and overriding purpose of zoning was to restrict the best housing districts to affluent people. Zoning accomplishes this in two ways. First, in large areas of cities and especially of suburbs the only housing allowed is single-family housing, one house per lot. Second, minimum lot sizes and minimum floor space sizes ensure that homes are larger and more expensive than they would be if left to the “free market”.

The result, across vast swaths of urban America, is that low-density residential areas have been mandated to remain low-density. People who can’t afford to buy a house, but have the means to rent an apartment, are unsubtly told to look in other parts of town.

Gray terms this segregation “a kind of technocratic apartheid,” and notes that “Combined with other planning initiatives, zoning largely succeeded in preserving segregation where it existed and instituting segregation where it didn’t.” (Gray, page 81) He cites one study that found “over 80 percent of all large metropolitan areas in the US were more racially segregated in 2019 than they were in 1990. Today, racial segregation is most acute not in the South but in the Midwest and mid-Atlantic regions.” (Gray, page 169)

Public transit? The numbers don’t add up.

From an environmental and transportation equity point of view, a major effect of zoning is that it makes good public transit unfeasible in most urban areas. Gray explains:

“There is a reasonable consensus among transportation planners that a city needs densities of at least seven dwelling units per acre to support the absolute baseline of transit: a bus that stops every thirty minutes. To get more reliable service, like bus rapid transit or light-rail service, a city needs … approximately fifteen units per acre. The standard detached single-family residential district—which forms the basis of zoning and remains mapped in the vast majority of most cities—supports a maximum density of approximately five dwelling units per acre. That is to say, zoning makes efficient transit effectively illegal in large swaths of our cities, to say nothing of our suburbs.” (Gray, page 101)
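To make the arithmetic in that passage concrete, here is a minimal sketch in Python. The density thresholds come from the quoted passage; the function and its labels are my own illustration, not anything from the book:

```python
# Density thresholds for transit viability, as cited by Gray.
# The classification function is an illustrative sketch, not from the book.

def transit_supported(dwellings_per_acre: float) -> str:
    """Return the best level of transit a residential density can support."""
    if dwellings_per_acre >= 15:
        return "bus rapid transit or light rail"
    if dwellings_per_acre >= 7:
        return "baseline bus service (one bus every thirty minutes)"
    return "no viable fixed-route transit"

# Standard detached single-family zoning tops out near 5 units per acre:
for density in (5, 7, 15):
    print(f"{density:>2} units/acre -> {transit_supported(density)}")
```

At the roughly five-units-per-acre ceiling of standard single-family zoning, even the baseline half-hourly bus falls below the threshold.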

Coupled with the nearly ubiquitous adoption of rules mandating more parking space than would otherwise be built, the single-family housing and minimum lot size provisions of zoning are a disaster both for affordable housing and for environmentally-friendly housing. Typical American zoning, Gray says, “assumes universal car ownership and prohibits efficient apartment living. But it also just plain wastes space: if you didn’t know any better, you might be forgiven for thinking that your local zoning ordinance was carefully calibrated to use up as much land as possible.” (Gray, page 96)

Zoning regimes came into wide use in the mid-twentieth century and became notably stricter in the 1970s. In Gray’s view the current housing affordability crisis is the result of cities spending “the past fifty years using zoning to prevent new housing supply from meeting demand.” This succeeded in boosting values of properties owned by the already affluent, but eventually housing affordability became a problem not only for those at the bottom of the housing market but for most Americans. That is one impetus, Gray explains, for a recent movement to curb the worst features of zoning. While this movement is a welcome development, Gray argues zoning should be abolished, not merely reformed. Near the end of Arbitrary Lines, he explains many other planning and regulatory frameworks that can do much more good and much less harm than zoning.

There is one part of his argument that I found shaky. He believes that the abolition of zoning will restore economic growth by promoting movement to the “most productive” cities, and that “there is no reason to believe that there is an upper bound to the potential innovation that could come from growing cities.” (Gray, page 72) At root the argument is based on his acceptance that income is “a useful proxy for productivity” – a dubious proposition in my view. That issue aside, Arbitrary Lines is well researched, well illustrated, well reasoned and well written.

The book is detailed and wide-ranging, but unlike a typical big-city zoning document it is never boring or obscure. For environmentalists and urban justice activists Arbitrary Lines is highly recommended.


Image at top of page: detail from Winnipeg zoning map, 1947, accessed via Wikimedia Commons.

Around the world in a shopping cart

Also posted on Resilience.

Christopher Mims had just embarked on his study of the global retail supply chain when the Covid-19 pandemic broke out. Quickly, he found, affluent consumers redoubled their efforts at the very activity Mims was investigating:

“Confronted by the stark reality of their powerlessness to do anything else and primed by a lifetime of consumerism into thinking the answer to the existential dread at the core of their being is to buy more stuff, Americans, along with everyone else on Earth with the means to do so, will go shopping.” (pages 6-7; all quotes here are from Arriving Today)

Arriving Today is published by Harper Collins, September 2021.

More than ever, shopping during the pandemic meant shopping online. That added complications to the global logistics systems Mims was studying, and added another strand to the story he weaves in Arriving Today: From Factory to Front Door – Why Everything Has Changed About How and What We Buy. (Harper Collins, 2021)

The book traces the movements of a single, typical online purchase – a USB charger – from the time it leaves a factory in Vietnam until it’s delivered to a buyer in the US. Sounds simple enough – but it’s an immensely complicated story, which Mims tells very well.

In the process he dives into the history and present of containerized shipping; working conditions for sailors, longshoremen, truckers, and warehouse employees; why items are scattered around a “fulfillment center” in the same way data files are scattered around on a computer drive; the great difficulty in teaching a robot to pick up soft packages wrapped in plastic film; and why no supercomputer can calculate the single best route for a UPS driver to take in making a hundred or more deliveries in the course of an average day.

How long can this system continue to swallow more resources, more small businesses, more lives? If there is a major weakness in Mims’ treatment, it is in suggesting that the online retail juggernaut must, inevitably, continue to grow indefinitely.

A key issue that is absent from the book is the energy cost of the global supply chain. Mims devotes a great deal of attention, however, to the brutal working conditions and relentless exploitation of working people in many segments of the delivery system. At the very least, this evidence should lead one to wonder when a tipping point will be reached. When, for example, might workers or voters be driven to organize an effective counterforce to insatiably acquisitive billionaires like Jeff Bezos? When, more grimly, might the portion of the population with discretionary income become so small they can no longer prop up the consumer economy?

“Taylorism – the dominant ideology of the modern world”

The unifying thread in Mims’ presentation is this: “Taylorism” – the early 20th-century management practice of breaking down factory work into discrete movements that can be “rationalized” for greater company profits – has now turned many more sectors into assembly lines. Today, Mims writes, “the walls of the factory have dissolved. Every day, more and more of what we do, how we consume, even how we think, has become part of the factory system.”

The factory system, in Mims’ telling, now stretches across oceans and continents. It finds its clearest expression in facilities owned by Amazon or run according to Amazon’s management practices. In Amazon’s sorting, packing and shipping facilities, what makes the company “particularly Darwinian” is the floating rate that constantly and coldly passes judgment on employees.

With warehouse work divided into discrete, measurable and countable tasks, management algorithms constantly track the number of operations completed by each worker. Those who perform in the bottom 25% are routinely fired and replaced. As a result, Mims writes, “most workers in an Amazon warehouse are constantly in danger of losing their jobs, and they know it.”

There is no paid sick leave, so cash-strapped employees often have no choice but to work even when injured or sick. (Free coffee and free ibuprofen are made available to help them work through fatigue or pain.) But if ill health causes a drop in performance they won’t “make the rate” and they will be fired. Those who are exceptionally physically fit, and who seldom get sick, are still likely to be worn down by the relentless pace eventually.

To replace workers, Mims says, “the company has all but abandoned interviewing new hires.” Screening and training new employees can be expensive processes, but they are processes in which Amazon invests little. New employees are constantly dropped into the stream, where they simply sink or swim:

“Everyone I talked to about their first months at Amazon said that the attrition rate they witnessed was greater than 50 percent in the first two months.” (page 209)

Some companies might regard high employee turnover as a huge liability. For Amazon, Mims explains, high turnover is not a bug, it’s a feature. The turnover allows the company “to grab only the most able-bodied members of America’s workforce” (page 235) and to constantly replace them with new employees who haven’t yet gotten sick or injured.

If that weren’t enough, the high turnover benefits Amazon in another important way: “it makes it almost impossible for workers to unionize.” (page 210) 

UPS trucks in Manhattan, 2010. Photo by Jeremy Vandel, licensed under Creative Commons Attribution-Non Commercial license.

The last mile

“[Amazon’s] relentless measurement, drive for efficiency, loose hiring standards, and moving targets for hourly rates are the perfect system for ingesting as many people as possible and discarding all but the most physically fit.” (pages 235-236)

As Amazon’s share of retail shopping grows and it Taylorizes its warehousing, there is another big link in the supply chain where the company sees an opportunity to slash worker compensation and boost corporate profits.

Until recently, transportation of packages between sorting centers, and along the “last mile” to customers’ doorsteps, has been handled by a wide array of trucking companies. One of the biggest of these companies, UPS, is a throwback to a day when most truck drivers were unionized, well paid, and received benefits like paid sick days, company health insurance, and pensions.

A driver for UPS is well trained, often very experienced, and learns to “go from stopping their truck to getting a package out of it in nine seconds.” (page 271) But a full-time driver for UPS also makes more than $30/hour plus benefits. Jeff Bezos, who increased his wealth by $65 billion in the first year of the pandemic, covets the paycheque of that UPS driver, along with the paycheque of anyone else in the supply chain whose job, if it can’t be robotized, could be turned over to a minimum-wage gig worker, aka “independent contractor”.

UPS and FedEx, Mims writes, together have 80 per cent of the US package delivery business. FedEx, along with nearly all other parcel-delivery companies, pays roughly minimum wage, with minimal benefits. Care to guess which company Amazon would like to emulate?

Indeed, since 2018 Amazon itself has roared into the delivery business. “By the middle of 2020s,” Mims writes, “Amazon Logistics … is projected to take the number one spot from UPS.” (page 252)

Citing the research of Brandeis University professor David Weil, Mims concludes:

“Everything about Amazon’s decision to hire delivery companies that hire drivers, rather than hiring those drivers directly, is about pushing down wages, eliminating workplace protections, evading liability in the event of accidents, avoiding workplace litigation, eliminating the expense of benefits, and eliminating the possibility of drivers ever unionizing ….” (page 278)

In the last sentence of his book, Mims cites the 100 billion packages per year now shipped through the online retail supply chain, and he exhorts us to “imagine a future in which that number has doubled or tripled; imagine a future in which it is the way virtually every finished object gets anywhere.” (page 288)

Let’s imagine: Factory jobs in every sector will have moved to the lowest-wage countries with adequate industrial capabilities. Formerly well-paid factory workers in Rust Belt towns will compete for Amazon warehouse jobs that offer them minimum wage, for as many months as their bodies can sustain the constantly accelerating pace of simple repetitive tasks. Robots will have replaced human wage-earners wherever possible. And last mile delivery drivers will take orders from Amazon but receive their meager paycheques from other companies whose names most of us will never see.

In that paradise of capitalist productivity, who besides Jeff Bezos will still have enough income to fill their shopping carts?


Image at top: Your Cart is Full, composed by Bart Hawkins Kreps from public domain graphics.

‘This is a key conversation to have.’

This afternoon Post Carbon Institute announced the release of the new book Energy Transition and Economic Sufficiency. That brings to fruition a project more than two-and-a-half years in the making.

Cover of Energy Transition and Economic Sufficiency

In May 2019, I received an email from Clifford Cobb, editor of the American Journal of Economics and Sociology. He asked if I would consider serving as Guest Editor for an issue of the Journal, addressing “problems of transition to a world of climate instability and rising energy prices.” I said “yes” – and then, month by month, learned how difficult it can be to assemble a book-length collection of essays. In July 2020 the collection was published by Wiley and made accessible to academic readers around the world.

It had always been a goal, however, to also release this collection as a printed volume, for the general public, at an accessible price. With the help of the Post Carbon Institute that plan is now realized. On their website you can download the book’s Introduction – which sets the context and gives an overview of each chapter – at no cost; download the entire book in PDF format for only US$9.99; or find online retailers around the world to buy the print edition of the book.

Advance praise for Energy Transition and Economic Sufficiency:

“Energy descent is crucial to stopping climate and ecological breakdown. This is a key conversation to have.” – Peter Kalmus, climate scientist, author of Being The Change

“This lively and insightful collection is highly significant for identifying key trends in transitioning to low-energy futures.” – Anitra Nelson, author of Small is Necessary

“The contributors to this volume have done us a tremendous service.” – Richard Heinberg, Senior Fellow, Post Carbon Institute, author of Power: Limits and Prospects for Human Survival

“For those already applying permaculture in their lives and livelihoods, this collection of essays is affirmation that we are on the right track for creative adaption to a world of less. This book helps fill the conceptual black hole that still prevails in academia, media, business and politics.” – David Holmgren, co-originator of Permaculture, author of RetroSuburbia

“The contributors explain why it is time to stop thinking so much about efficiency and start thinking about sufficiency: how much do we really need? What’s the best tool to do the job? What is enough? They describe a future that is not just sustainable but is regenerative, and where there is enough for everyone living in a low-carbon world.” – Lloyd Alter, Design Editor at treehugger.com and author of Living the 1.5 Degree Lifestyle: Why Individual Climate Action Matters More Than Ever


Some sources for the print edition:

In North America, Barnes & Noble

In Britain, Blackwell’s and Waterstones

In Australia, Booktopia

Worldwide, from Amazon

Colonialism, climate crisis, and the forever wars

Also published on Resilience.

Two rounds of negotiation take centre stage, about halfway through Amitav Ghosh’s new masterwork The Nutmeg’s Curse: Parables for a Planet in Crisis.

In one, US State Department and Pentagon officials win agreement that carbon emissions connected with the military are to be kept out of the Kyoto Protocol – an omission that has been preserved in international climate agreements to this day.

At the opposite end of the global power hierarchy, Khokon, a refugee from the Kishoreganj district of Bangladesh, has engaged in desperate negotiations simply to stay alive. His family’s low-lying land had been flooded for six months, followed by long droughts, hailstorms, and unseasonal downpours. The environmental degradation was followed by political depredations, as well-connected people seized increasingly scarce arable land including part of Khokon’s family’s farm. Eventually there was no better option than to sell some land and send Khokon to France – but he was quickly deported back to Bangladesh. There was no paid employment for him so after seven months of hopelessness, 

“his family sold the rest of their land and paid another agent to send him abroad again. Dubai was Khokon’s chosen destination, and he paid accordingly; but the agent cheated him and he ended up in Libya instead. For the next several years he had to endure enslavement, beatings, extortion, and torture. But somehow he managed to save up enough money to pay traffickers to send him from Libya to Sicily in a ramshackle boat.” (all quoted material in this article is from The Nutmeg’s Curse by Amitav Ghosh, published by University of Chicago Press, October 2021)

Khokon was penniless, traumatized – but unlike many others he survived the voyage. Assisted by support groups for refugees and by relatives, he was able to stay in Italy and get a job at a warehouse in Parma.

How are these two sets of negotiations related? In Ghosh’s telling, the well-connected lobbyists meeting in posh board rooms, and the refugees simply trying to stay alive, each understand in their own ways how the climate crisis is intertwined with the global power structure.

The strategists at the Pentagon are fully aware that the climate crisis is a serious challenge. Yet their own ability to consume fossil fuels must not be called into question, even though the US military consumes more fossil fuel than any other organization in the world. Their own carbon emissions are not negotiable, because fossil fuel dominance is both the enabling force and the purpose of the vast web of military bases, aircraft carriers, bombers, missiles and drones through which the US exerts influence over global trade. In Ghosh’s words,

“The job of the world’s dominant military establishments is precisely to defend the most important drivers of climate change—the carbon economy and the systems of extraction, production, and consumption that it supports. Nor can these establishments be expected to address the unseen drivers of the planetary crisis, such as inequities of class, race, and geopolitical power: their very mission is to preserve the hierarchies that favor the status quo.”

Likewise, Ghosh explains, the refugees he meets in the camps around the Mediterranean are keenly aware of the realities of climate change – but they don’t think of themselves as climate refugees. If unstable weather conditions were the only challenge they faced, after all, they could simply buy a first-class ticket and fly to a comfortable new home in another country.

“What migrants like Khokon know, on the other hand, is that every aspect of their plight is rooted in unyielding, intractable, and historically rooted forms of class and racial injustice. …They know that the processes that have displaced them are embedded in very old and deeply entrenched social relationships of power, national and international.”

The exclusion of military emissions, at the very outset of international climate talks, has contributed to a tendency to see the climate crisis as a techno-economic problem. Ghosh’s purpose in The Nutmeg’s Curse is to show that the climate crisis has roots as deep and as old as settler colonialism.

The conquest of Jacatra by the VOC in 1619. J.P. Coen decided that Jacatra, later Batavia, would be a suitable base on Java for the VOC (Vereenigde Oost Indische Compagnie, aka the Dutch East India Company). After the conquest the whole city was razed to the ground, built anew, and renamed Batavia. (File accessed via Wikimedia Commons.)

Terms of trade

“Selamon is a village in the Banda archipelago, a tiny cluster of islands at the far southeastern end of the Indian Ocean,” Ghosh writes in the book’s opening paragraphs. This village and this cluster of islands played an important role in global history due to the presence of an unusual tree – the tree that produces nutmeg and mace.

Nutmeg had been traded in many countries for many centuries, and was one of the substances most sought after and valued in Renaissance Europe. The search for nutmeg’s origins was a key driver of the wave of European explorations which eventually chanced upon the Americas.

When traders from the Dutch East India Company arrived in the Banda Islands, they quickly understood that they could multiply their profits. Trading in nutmeg was a good business, to be sure, but it would be much better if the Dutch had a tight monopoly. There was just one problem: the Bandas were already inhabited by skilled growers and traders, who had no desire to limit their business opportunities by selling only to one buyer.

The solution to the problem was simple and brutal, but was not unusual in the annals of colonialism: the Bandanese people had to be exterminated, so the Dutch could bring in slaves to harvest nutmegs, take sole control of the world-wide nutmeg trade, and sell the product for whatever the market would bear. This transfer of power took place in the early 17th century, and the profits fueled a burst of commercial and artistic development in The Netherlands which is known as The Golden Age.

“There are innumerable books on the art of the Dutch Golden Age,” Ghosh writes, but “few indeed are those that mention the Banda genocide.” He finds the story in obscure archives, told in the words of the very people who carried out the massacres. Even at the distance of four centuries, the events in Banda in April 1621 make for nightmare-inducing reading. And the events in Banda were not unique – they were part of a widespread pattern.

About the same time as the Banda massacres, Sir Francis Bacon wrote that there are “nations that are outlawed and proscribed by the law of nature and nations, or by the immediate commandment of God.” It is only right, Bacon continued, that “godly and civilized nations”, when encountering such outlawed nations, should “cut them off from the face of the earth” (quoted by Ghosh from Bacon’s An Advertisement Touching An Holy War). This call to genocide, Ghosh says, was echoed by other European “Enlightenment” figures – and enacted all too frequently through the centuries of colonial conquest and domination.

European elites also began to tell themselves that the meaning, the very reason for existence, of all the world was to become resources for human industry. Those who believed the contrary – that the land and seas, plants and animals, had their own stories and their own spirits – were clearly unfit for survival:

“To believe that the Earth was anything more than an inanimate resource was to declare oneself a superstitious savage—and that, in turn, was tantamount to placing oneself on the waiting list for extinction or extermination. Vitalism, savagery, and extinction were a series in which each term implied the next.” 

Several centuries of frenzied extractivism have followed, with increasingly severe costs to earth’s ecosystems, deadly results for the indigenous peoples who were colonized, and exponential growth in wealth for the colonizers. By the time European industries learned how to exploit fossil fuels, the pattern of insatiable consumption was well established.

Today the spice trade is a minuscule part of international trade. The most valuable commodities in our era have been hydrocarbons. But these resources, too, are heavily concentrated in certain parts of the globe, and when exported must pass through a handful of maritime choke points including the Strait of Hormuz, the Strait of Malacca, and the southern tip and the Horn of Africa – “the exact locations,” Ghosh writes, “that European colonial powers fought over when the Indian Ocean’s most important commodities were cloves, nutmeg, and pepper.”

Today it is not the Dutch, nor the English, nor the Spanish, who rule the seas and set the terms of trade. But the basic order of colonialism remains, for now, intact:

“This empire may be under American control today, but it is the product of centuries of combined Western effort, going back to the 1500s.”

As in centuries past, preserving the dominant position of the empire results in immense loss of life outside the empire. In the cascading ecological catastrophes through the Middle East and South Asia, coupled with the vast numbers of civilian casualties categorized as “collateral damage”, Ghosh hears many echoes from centuries past. The “forever wars” in Iraq, Afghanistan, Somalia, and many other countries have their analogues through the long centuries of European conquest in Africa, the Americas and Asia.

The “surly bonds of earth” – or “all our relations”

The Nutmeg’s Curse is a very big book considering it weighs in at a relatively modest 336 pages. In exploring his theme Ghosh dives into Greek mythology, contemporary geopolitics, classic Dutch literature, American popular culture, and the history of botanical science, all in addition to his primary focus, the colonization of several continents over several centuries. His gift for both narrative and exposition makes The Nutmeg’s Curse compulsively readable.

One area in which his explanations fall short, in my view, is Ghosh’s discussion of the socio-technical ramifications of energy transition. He accepts and repeats, with little apparent critique, two viewpoints that have been influential in US media in recent years: one, that since the onset of fracking the US has become energy self-sufficient, with no need for hydrocarbon imports; and two, that the technologies for a seamless transition from hydrocarbons to renewable energies are already available. But these arguments play a relatively minor role in the great sweep of The Nutmeg’s Curse.

The story Ghosh tells is often appalling, sickening in its portrayal of human cruelty, and frightening in what it says about the daunting challenges we face to achieve a just world through coming decades. It is also enlightening and, in the end, hopeful.

Consider these lines from a poem by Canadian-American pilot John Gillespie Magee, written shortly before his death in World War II:

“Oh, I have slipped the surly bonds of earth
And danced the skies on laughter-silvered wings.”

Magee “was almost instantly canonized as the American poet of World War II,” Ghosh writes. The lines soon appeared on headstones throughout the United States; they were used in the midnight sign-offs of many television stations; a copy of the poem was deposited on the moon in 1971; and Ronald Reagan recited them to dramatic effect after the space shuttle Challenger disaster. But Ghosh asks us to consider:

“What exactly is ‘surly’ about the Earth’s bonds? [W]hy should the planet be thought of as a home from which humans would be fortunate to escape?”

The deep-seated disdain for the earth was not a mere mid-twentieth-century fad. Ghosh finds the same sentiment expressed in stark terms, for example, in the work of Alfred, Lord Tennyson, “perhaps the most celebrated English poet of the late nineteenth century.” But it is an unfortunately logical outcome of a perspective that sees all the Earth, that sees Nature – soil, minerals, plants, animals, and even people – as resources to be consumed for the profit of those clever enough to dominate.

Today, Ghosh says, this earth-disdaining ethos of domination has expanded well beyond traditional colonial powers. With the global hegemony of neo-liberal economics, ruling parties in Brazil, India and China are eagerly joining the extractivist project; that is one key reason why rain forests are shrinking so rapidly, and why half of all carbon emissions from the entire industrial age have happened in just the past thirty years.

In the face of all this destruction, where can one find hope? Perhaps here, Ghosh writes: a revival of vitalist beliefs, with deep love for the sacredness of earthly spaces, is spreading in many countries. In many cases led by indigenous peoples, this vitalist revival is at the forefront of environmental struggles. He notes the legal victories, from New Zealand to South America, “that Indigenous peoples around the world have won in recent years, precisely on vitalist grounds, by underscoring the sacredness of mountains, rivers, and forests, and by highlighting the ties of kinship by which they are bound to humans.” He is inspired by Native American resistance movements which honour “the familial instinct to protect ‘all our relatives’—that is to say, the entire spectrum of nonhuman kin, including rivers, mountains, animals, and the spirits of the land.”

Is it naïve, wishful thinking, or even anti-scientific, to find hope in loving “all our relatives”? Ghosh asks that question too, and we’ll close with his answer:

“Is this magical thinking? Perhaps—but no more so than the idea of colonizing Mars; or the belief, now enshrined in the Paris Agreement, that a new technology for removing vast amounts of carbon from the atmosphere will magically appear in the not-too-distant future.

“The difference is that a vitalist mass movement, because it depends not on billionaires or technology, but on the proven resources of the human spirit, may actually be magical enough to change hearts and minds across the world.”


Photo at top of page: A Dutch men-of-war and small vessels in a breeze, by Dutch Golden Age painter Lieve Verschuier (1627–1686). Now in National Museum of Warsaw. Accessed at Wikimedia Commons.

Your gas tank is not an oil well. Your battery will not be a power plant.

Also published on Resilience.

My car comes with an amazing energy-storage, demand-management-and-supply system; perhaps you’ve heard of it. It’s called the “gas tank”.

Thanks to this revolutionary feature, if I get home and the electric grid is down, I can siphon gas out of the tank and power up a generator. In a more urgent energy crunch, I can siphon out some gas, throw it on a woodpile, and get a really hot fire going in seconds. If a friend across town has no power, I can even drive over there, siphon out some fuel, and run a generator to provide power in an alternate location. It’s beautiful! I can shift energy provision and consumption both temporally and spatially.

There is one minor drawback, to be sure. If I siphon the fuel out of the tank then I can’t actually drive the car, at least not more than a few kilometers to the nearest fuel station. But let’s not let that limitation cast a shadow over this revolutionary technology. If this flexible load-management system were widely adopted, and there were cars everywhere, think how smoothly our society could run!

These thoughts come to mind when I hear someone rhapsodize about the second coming of the electric car. Recently, for example, a Grist headline proclaimed that “Your Electric Vehicle Could Become a Mini Power Plant. And that could make the electrical grid work better for everyone.” (June 21, 2021)

Stephen Peake, in Renewable Energy: Ten Short Lessons (review here) wrote that “new fleets of electric vehicles parked overnight could become another mass source of electricity storage and supply.” (emphasis mine)

One more example: an October 2020 article at the World Economic Forum says that “When electric vehicles are integrated into a city’s energy system, the battery can provide power to the grid when the sun is down or the wind isn’t blowing.”

The key to this supply-and-demand magic is “bidirectional charging” – the electric vehicles of the near future will have the equivalent of a gas tank with a built-in siphon. Thus their capacious batteries will not only be able to quickly suck power out of the grid, but also to empty themselves out again to provide juice for other purposes.

But allow me this skeptical observation: electric cars do not have huge batteries because their drivers want to offer aid to the “smart grid”. Electric car batteries are huge because cars are huge consumers of energy.

(True, electric cars don’t consume quite as much energy as internal-combustion cars of similar class and weight – but they consume a whole lot more energy per passenger-kilometer than intelligently routed electric buses, trains, or, especially, electric-assisted bicycles.)

And let’s be clear: neither an electric vehicle nor its battery provides any “energy supply”. The car itself is a pure energy suck. The battery is just an energy storage device – it can store a finite quantity of energy from another source, and output that energy as required, but it does not produce energy.

As with internal-combustion powered cars, when the tank/battery is drained for a purpose other than driving, then the car ceases to be a functional car until refueled.

Still, there are some niche scenarios where vehicle batteries really might offer a significant advantage to grid supply management. The Grist article begins with one such scenario: three yellow school buses which run on battery power through the school year, and serve as a battery bank while parked for the summer months. If all 8,000 school buses in the local utility service area were EVs, the article notes, their fully-charged batteries “could collectively supply more than 100 megawatts of power to the grid for short periods — or nearly 1 percent of Con Ed’s peak summer power demand.”
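Dividing out the quoted figures shows why the contribution lasts only “for short periods”. A back-of-envelope sketch, in which the fleet size and power figures come from the article and the roughly 150 kWh of usable battery per bus is my assumption:

```python
# Rough check of the school-bus scenario quoted above.

buses = 8_000
fleet_power_mw = 100     # figure quoted from the Grist article
battery_kwh = 150        # assumed usable capacity per electric school bus

per_bus_kw = fleet_power_mw * 1_000 / buses
hours_per_full_charge = battery_kwh / per_bus_kw

print(f"{per_bus_kw:.1f} kW per bus")                     # 12.5 kW
print(f"{hours_per_full_charge:.0f} hours at that rate")  # ~12 hours
```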

When parked for the summer, electric school buses would not need to be charged and ready to drive first thing every weekday morning. So they could indeed be used simply as (terribly expensive) battery cases for two or three months each year.

OK, but … let’s be careful about singing the praises of school buses. This might be a slippery slope. If big buses catch on, soon Americans might start taking their kids to school in giant pick-up trucks!

Of course I jest – that horse has already left the barn. The top three selling vehicles in the US, it may surprise people from elsewhere to learn, are pick-up trucks that dwarf the pick-ups used by farmers and some tradespeople in previous generations. (It will not surprise Canadians, who play second fiddle to no-one in car culture madness. Canadians tend to buy even larger, heavier, more powerful, and more expensive trucks than Americans do.)

The boom in overgrown pick-ups has not come about because North Americans are farming and logging in record numbers – nor even because, as one wag put it, a 4X8 sheet of plywood has gotten so much bigger in recent years. Yet urban streets, parking lots, and suburban driveways are now crowded with hulking four-door, four-wheel-drive, spotlessly clean limousine-trucks. Those vehicles, regardless of their freight-carrying or freight-pulling capacity, are mostly used to carry one or two people around urbanized areas.

If we are foolish enough to attempt electrification of this fleet, it will take an awesome amount of battery power. And as you might expect, car culture celebrants are already proclaiming what a boon this will be for energy transition.

A pre-production promo video for Ford’s F-150 Lightning electric pick-up truck gets to the critical issue first: the Lightning will accelerate from 0 – 60 mph (0 – 97 km/hr) “in the mid-4-second range”. But wait, there’s more, the ad promises: the battery can “off-board” enough power to run a home “for about three days”.

Keep that in mind when you start seeing big electric pick-up trucks on the road: each one, in just a few hours of highway driving, will use as much power as a typical American home uses in three days.

Keep it in mind, too, when you see a new bank of solar panels going up in a field or on a warehouse roof: the installation might output enough electricity each day to power 100 pickup trucks for a few hours each – or 300 homes for the whole day.
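Those comparisons are easy to check on the back of an envelope. In the sketch below, the ~98 kWh battery is Ford’s published figure for the standard-range Lightning; the 30 kWh/day average home, the 2 miles/kWh highway efficiency, and the 9 MWh/day solar installation are my assumptions:

```python
# Consistency check for the truck-vs-home comparisons above.

battery_kwh = 98              # standard-range F-150 Lightning pack
home_kwh_per_day = 30         # assumed average American household use
highway_miles_per_kwh = 2.0   # assumed highway efficiency
highway_speed_mph = 65

days_of_home_power = battery_kwh / home_kwh_per_day
hours_to_drain = battery_kwh * highway_miles_per_kwh / highway_speed_mph

print(f"{days_of_home_power:.1f} days of home power per charge")    # ~3.3
print(f"{hours_to_drain:.1f} hours of highway driving per charge")  # ~3.0

farm_kwh_per_day = 9_000      # hypothetical solar installation output
print(f"{farm_kwh_per_day / battery_kwh:.0f} truck charges per day")   # ~92
print(f"{farm_kwh_per_day / home_kwh_per_day:.0f} homes for the day")  # 300
```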

Given that we won’t have enough renewably produced electricity to power existing homes, schools, stores and industries for decades, is it really a good idea to devote a big share of it, right at the outset, to building and charging limousine-trucks? Are the huge batteries required by these vehicles actually features, or are they bugs?

Granted, an electric car battery can provide a modest degree of grid load-levelling capability in some situations. It can be drained back into the grid during some peak-power-demand periods such as early evening in the heat of summer – as long as it can be recharged in time for the morning commute. That’s not nothing. And if we’re determined to keep our society moving by using big cars and trucks, that means we’ll have a huge aggregated battery capacity sitting in parking spots for most of each day. In that scenario, sure, there will be a modest degree of load-levelling capacity in those parked vehicles.

But perhaps there is a better way to add load-levelling capacity to the grid – a better way than producing huge, heavy vehicles, each containing one battery, which suck that power back out whenever they’re driven, spread brake dust and worn tire particles through the environment, significantly increase the danger to vulnerable road users, and generate huge upfront emissions of carbon dioxide during their manufacture.

If it’s really load-levelling we’re after, for the same money and resources we could build a far greater number of batteries, and skip building expensive casings in the form of cars and pick-ups.

Other factors being equal, an electric car is modestly more environmentally friendly than an internal-combustion car. (How’s that for damning with faint praise?) But if we’re ready for a serious response to the climate emergency, we should be rapidly curtailing both the manufacture and use of cars, and making the remaining vehicles only as big and heavy as they actually need to be. The remaining small cars won’t collectively contain such a huge battery capacity, to be sure, but we can then address the difficult problems of grid load management in a more intelligent, efficient and direct fashion.


Illustration at top of post: Energy Utopia, composite by Bart Hawkins Kreps from public domain images.

Sunshine, wind, tides and worldwatts

A review of Renewable Energy: Ten Short Lessons

Also published on Resilience

Fun physics fact: water carries so much more kinetic energy than air that “A tidal current of 3 knots has the same energy density as a steady wind stream at 29 knots (a fair old blow).”
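That comparison follows from the standard formula for the kinetic power density of a moving fluid, P = ½ρv³ per square metre of flow cross-section. A quick Python check, using standard textbook densities for seawater and air:

```python
# Kinetic power density of a moving fluid: P = 0.5 * rho * v**3 (W/m^2).

KNOT = 0.5144  # metres per second

def power_density_w_m2(rho_kg_m3: float, speed_knots: float) -> float:
    v = speed_knots * KNOT
    return 0.5 * rho_kg_m3 * v ** 3

print(f"Seawater at 3 knots: {power_density_w_m2(1025, 3):,.0f} W/m^2")   # ~1,900
print(f"Air at 29 knots:     {power_density_w_m2(1.225, 29):,.0f} W/m^2") # ~2,000
```

Because density enters the formula linearly while speed enters as a cube, water roughly 800 times denser than air matches the wind’s power at about a tenth of the speed.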

And consider this: “Ninety-nine per cent of planet Earth is hotter than 1,000 °C (1,832 °F). The earth is, in fact, a giant leaky heat battery.”

Stephen Peake uses these bits of information and many more to lucidly outline the physical bases of renewable energy sources, including solar and wind energy, geothermal energy, wave energy and tidal current energy. But the book also touches on the complex relationship between the physics of renewable energy and the role energy plays in human society – and the results aren’t always enlightening.

Peake takes on a formidable task in Renewable Energy: Ten Short Lessons. The book is part of the “Pocket Einstein” series from Johns Hopkins University Press (or from Michael O’Mara Books in Britain). He has less than 200 small-format pages in which to cover both the need for and the prospects for a transition to 100% renewable energy.

Key to his method is the concept of a “worldwatt” – “the rate at which the world uses all forms of primary energy.” Peake estimates the rate of energy flow around the world from various potential renewable energy sources. Not surprisingly, he finds that the theoretically available renewable energy sources are far greater than all energy currently harnessed – primarily from fossil fuels – by the global economy.
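For scale, one worldwatt currently works out to something like 18 terawatts. A back-of-envelope sketch (the ~580 EJ/year figure for world primary energy use is my approximation for circa 2020):

```python
# Estimating one "worldwatt": the average rate of global primary energy use.

EJ = 1e18                          # joules per exajoule
SECONDS_PER_YEAR = 365.25 * 24 * 3600

world_primary_energy_j = 580 * EJ  # approximate annual total, circa 2020
worldwatt_tw = world_primary_energy_j / SECONDS_PER_YEAR / 1e12

print(f"1 worldwatt ~ {worldwatt_tw:.0f} TW")  # ~18 TW
```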

But how do we get from estimates of theoretically available energy, to estimates of how much of that energy is practically and economically available? Here Peake’s book isn’t much help. He asks us to accept this summation:

“Taking a conservative mid-estimate of the numbers in the literature, we see that the global technical potential of different renewable sources adds up to 46 worldwatts. There is a definite and reasonable prospect of humans harnessing 1 worldwatt from 100 per cent renewable energy in the future.” (page 31)

But he offers no evidence or rationale for the conclusion that getting 1 worldwatt from renewable sources is a “reasonable prospect”, nor how near or far “in the future” that might occur.

A skeptic might well dismiss the book as renewable energy boosterism, noting a cheery optimism from the opening pages: “There is an exciting, renewable, electric, peaceful, prosperous, safer future just up ahead.” Others might say such optimism is the most helpful position one can take, given that we have no choice but to switch to a renewable energy way of life, ASAP, if we want human presence on earth to last much longer.

Yet a cheerfully pro-renewable energy position can easily shade into a cheerful pro-consumptionist stance – the belief that renewable energies can quickly become the driving force of our current industrial economies, with little change in living standards and no end to economic growth.

Peake briefly introduces a key concept for assessing which renewable energy sources will be economically viable, and in what quantities: Energy Return On Energy Invested (EROEI). He explains that as we exploit more difficult energy sources, the EROEI goes down:

“As wind turbines have become larger and moved offshore, the EROEI ratio for wind over a twenty-year lifetime has declined from around 20:1 in the early 2000s to as low as 15:1 in recent years for some offshore wind farms.” (page 84)

Affordable renewable energy, in other words, doesn’t always “scale up”. The greater the total energy demanded by society, the more we will be impelled to site wind turbines and solar panels in areas beyond the “sweet spots” for Energy Return On Energy Invested. Peake’s book would be stronger if he used this recognition to give better context to statements such as “Renewable electricity is now cheaper than fossil electricity …” (in the book’s opening paragraph), and “solar is now the cheapest electricity in history” (page 70).
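The net-energy arithmetic behind that concern is simple but unforgiving: of each unit of gross energy an installation delivers, 1/EROEI had to be spent obtaining it, so society’s net share is 1 - 1/EROEI. A short sketch (the formula is standard; the 20:1 and 15:1 ratios are Peake’s wind figures, with lower ratios added for comparison):

```python
# Net energy available to society as a function of EROEI.

def net_fraction(eroei: float) -> float:
    """Fraction of gross energy left after paying the energy cost of supply."""
    return 1 - 1 / eroei

for ratio in (20, 15, 5, 2):
    print(f"EROEI {ratio:>2}:1 -> {net_fraction(ratio):.1%} net")

# EROEI 20:1 -> 95.0% net
# EROEI 15:1 -> 93.3% net
# EROEI  5:1 -> 80.0% net
# EROEI  2:1 -> 50.0% net
```

The slide from 20:1 to 15:1 costs society less than two percentage points of net energy; the worry is the steepening of the curve below roughly 5:1, which is why siting installations beyond the “sweet spots” matters.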

While Peake expresses confidence that a prosperous renewable energy world is just ahead, he doesn’t directly engage with the issue of how, or how much, affluent lifestyles may need to change. The closest he comes to grappling with this contentious issue is in his discussion of energy waste:

“We need to stop wasting all forms of energy, including clean renewable sources of heat and electricity. The sooner we shrink our total overall demand for energy, the sooner renewables will be able to provide 100 per cent of the energy we need to power our zero-carbon economies.” (page 141)

Near the end of the book, in brief remarks about electric cars, Peake makes some curious statements about EVs:

“Millions of [electric vehicles] will need charging from the network. This presents both a challenge and an opportunity in terms of managing the network load.” (page 130, emphasis mine)

And a few pages later:

“In the future, new fleets of electric vehicles parked overnight could become another mass source of electricity storage and supply.” (page 134, emphasis mine)

In my next post I’ll take up this concept of the electric vehicle as energy storage, supply and load management resource.

In conclusion, Renewable Energy: Ten Short Lessons is a valuable primer on the physics of renewable energy, but isn’t a lot of help in establishing whether or not the existing world economy can be successfully transitioned to zero-carbon energy.


Photo at top of page: Wind Turbines near Grevelingenmeer, province of Zeeland, Netherlands.