Beware of WEIRD Stochastic Parrots

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part four
Also published on Resilience.

A strange new species is getting a lot of press recently. The New Yorker published the poignant and profound illustrated essay “Is My Toddler a Stochastic Parrot?” The Wall Street Journal told us about “‘Stochastic Parrot’: A Name for AI That Sounds a Bit Less Intelligent”. Others warned of “GPT-3: The Venom-Spitting Stochastic Parrot”.

The American Dialect Society even selected “stochastic parrot” as the AI-Related Word of the Year for 2023.

Yet this species was unknown until March of 2021, when Emily Bender, Timnit Gebru, Angelina McMillan-Major, and (the slightly pseudonymous) Shmargaret Shmitchell published “On the Dangers of Stochastic Parrots.”1

The paper touched a nerve in the AI community, reportedly costing Timnit Gebru and Margaret Mitchell their jobs with Google’s Ethical AI team.2

Just a few days after ChatGPT was released, OpenAI CEO Sam Altman paid snarky tribute to the now-famous phrase by tweeting “i am a stochastic parrot, and so r u.”3

Just what, according to its namers, are the distinctive characteristics of a stochastic parrot? Why should we be wary of this species? Should we be particularly concerned about a dominant sub-species, the WEIRD stochastic parrot? (WEIRD as in: Western, Educated, Industrialized, Rich, Democratic.) We’ll look at those questions for the remainder of this installment.

Haphazardly probable

The first recognized chatbot was 1966’s ELIZA, but many of the key technical developments behind today’s chatbots came together only in the last 15 years. The apparent wizardry of today’s Large Language Models rests on a foundation of algorithmic advances, the availability of vast data sets, super-computer clusters employing thousands of the latest Graphics Processing Unit (GPU) chips, and, as discussed in the last post, an international network of poorly paid gig workers providing human input to fill in gaps in the machine learning process.

By the beginning of this decade, some AI industry figures were arguing that Large Language Models would soon exhibit “human-level intelligence”, could become sentient and conscious, and might even become the dominant new species on the planet.

The authors of the stochastic parrot paper saw things differently:

“Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”4

Let’s start by focusing on two words in that definition: “haphazardly” and “probabilistic”. How do those words apply to the output of ChatGPT or similar Large Language Models?

In a lengthy paper published last year, Stephen Wolfram offers an initial explanation:

“What ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”5

He gives the example of this partial sentence: “The best thing about AI is its ability to”. The Large Language Model will have identified many instances closely matching this phrase, and will have calculated the probability of various words being the next word to follow. The table below lists five of the most likely choices.

The element of probability, then, is clear – but in what way is ChatGPT “haphazard”?

Wolfram explains that if the chatbot always picks the next word with the highest probability, the results will be syntactically correct, sensible, but stilted and boring – and repeated identical prompts will produce repeated identical outputs.

By contrast, if at random intervals the chatbot picks a “next word” that ranks fairly high in probability but is not the highest rank, then more interesting and varied outputs result.
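Wolfram’s greedy-versus-sampled distinction can be sketched in a few lines of Python. This is a toy illustration, not ChatGPT’s actual code; the vocabulary and probabilities below are invented for the example:

```python
import random

# Hypothetical next-word probabilities for the prompt
# "The best thing about AI is its ability to ..."
next_word_probs = {
    "learn": 0.045, "predict": 0.036, "make": 0.033,
    "understand": 0.031, "do": 0.021,
}

def greedy_pick(probs):
    """Always choose the highest-probability word: deterministic and repetitive."""
    return max(probs, key=probs.get)

def sampled_pick(probs, temperature=0.8):
    """Sample in proportion to (crudely rescaled) probability: varied, 'haphazard'.
    Lower temperature concentrates choices on the top-ranked words."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

print(greedy_pick(next_word_probs))   # always "learn"
print(sampled_pick(next_word_probs))  # varies from run to run
```

Run the sampled version several times with the same “prompt” and the outputs differ, which is exactly the behaviour Wolfram describes.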

Here is Wolfram’s sample of an output produced by a strict “pick the next word with the highest rank” rule: 

The above output sounds like the effort of someone who is being careful with each sentence, but with no imagination, no creativity, and no real ability to develop a thought.

With a randomness setting introduced, however, Wolfram illustrates how repeated responses to the same prompt produce a wide variety of more interesting outputs:

The above summary is an over-simplification, of course, and if you want a more in-depth exposition Wolfram’s paper offers a lot of complex detail. But Wolfram’s “next word” explanation concurs with at least part of the stochastic parrot thesis: “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine ….”

What follows, in Bender and Gebru’s formulation, is equally significant. An LLM, they wrote, strings together words “without any reference to meaning.”

Do LLMs actually understand the meaning of the words, phrases, sentences and paragraphs they have read and can produce? To answer that question definitively, we’d need definitive answers to questions like “What is meaning?” and “What does it mean to understand?”

A brain is not a computer, and a computer is not a brain

Over the past fifty years a powerful but deceptive metaphor has become pervasive. We’ve grown accustomed to describing computers by analogy to the human brain, and vice versa. As the saying goes, such models are always wrong even though they are sometimes useful.

“The Computational Metaphor,” wrote Alexis Barria and Keith Cross, “affects how people understand computers and brains, and of more recent importance, influences interactions with AI-labeled technology.”

The concepts embedded in the metaphor, they added, “afford the human mind less complexity than is owed, and the computer more wisdom than is due.”6

The human mind is inseparable from the brain which is inseparable from the body. However much we might theorize about abstract processes of thought, our thought processes evolved with and are inextricably tangled with bodily realities of hunger, love, fear, satisfaction, suffering, mortality. We learn language as part of experiencing life, and the meanings we share (sometimes incompletely) when we communicate with others depends on shared bodily existence.

Angie Wang put it this way: “A toddler has a life, and learns language to describe it. An L.L.M. learns language, but has no life of its own to describe.”7

In other terms, wrote Bender and Gebru, “languages are systems of signs, i.e. pairings of form and meaning. But the training data for LMs is only form; they do not have access to meaning.”

Though the output of a chatbot may appear meaningful, that meaning exists solely in the mind of the human who reads or hears that output, and not in the artificial mind that stitched the words together. If the AI Industrial Complex deploys “counterfeit people”8 who pass as real people, we shouldn’t expect peace and love and understanding. When a chatbot tries to convince us that it really cares about our faulty new microwave or about the time we are waiting on hold for answers, we should not be fooled.

“WEIRD in, WEIRD out”

There are no generic humans. As it turns out, counterfeit people aren’t generic either.

Large Language Models are created primarily by large corporations, or by university researchers who are funded by large corporations or whose best job prospects are with those corporations. It would be a fluke if the products and services growing out of these LLMs didn’t also favour those corporations.

But the bias problem embedded in chatbots goes deeper. For decades, the people who have contributed the most to digitized data sets have been those with the most access to the internet, who publish the most books, research papers, magazine articles and blog posts – and these people disproportionately live in Western, Educated, Industrialized, Rich, Democratic countries. Even social media users, who provide terabytes of free data for the AI machine, are likely to live in WEIRD places.

We should not be surprised, then, when outputs from chatbots express common biases:

“As people in positions of privilege with respect to a society’s racism, misogyny, ableism, etc., tend to be overrepresented in training data for LMs, this training data thus includes encoded biases, many already recognized as harmful.”9

In 2023 a group of scholars at Harvard University investigated those biases. “Technical reports often compare LLMs’ outputs with ‘human’ performance on various tests,” they wrote. “Here, we ask, ‘Which humans?’”10

“Mainstream research on LLMs,” they added, “ignores the psychological diversity of ‘humans’ around the globe.”

Their strategy was straightforward: prompt OpenAI’s GPT to answer the questions in the World Values Survey, and then compare the results to the answers that humans around the world gave to the same set of questions. The WVS documents a range of values including but not limited to issues of justice, moral principles, global governance, gender, family, religion, social tolerance, and trust. The team worked with data in the latest WVS surveys, collected from 2017 to 2022.

Recall that GPT does not give identical responses to identical prompts. To ensure that the GPT responses were representative, each of the WVS questions was posed to GPT 1000 times.11
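The repeated-prompting procedure can be sketched as follows. This is a minimal sketch, not the study’s actual code: `aggregate_responses` and the `fake_model` stand-in are hypothetical names, with `fake_model` playing the role of a call to the GPT API.

```python
import random
from collections import Counter

def aggregate_responses(ask_model, question, n=1000):
    """Pose the same survey question n times and tally the varying answers."""
    tally = Counter(ask_model(question) for _ in range(n))
    # Convert counts to proportions, comparable to human survey percentages
    return {answer: count / n for answer, count in tally.items()}

# Toy stand-in for a stochastic model whose answers vary across calls
def fake_model(question):
    return random.choices(["agree", "disagree"], weights=[0.7, 0.3])[0]

profile = aggregate_responses(fake_model, "How important is family in your life?")
```

Because each individual answer is stochastic, only the aggregated profile of 1000 answers is a fair representation of the model’s tendencies.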

The comparisons with human answers to the same surveys revealed striking similarities and contrasts. The article states:

“GPT was identified to be closest to the United States and Uruguay, and then to this cluster of cultures: Canada, Northern Ireland, New Zealand, Great Britain, Australia, Andorra, Germany, and the Netherlands. On the other hand, GPT responses were farthest away from cultures such as Ethiopia, Pakistan, and Kyrgyzstan.”

In other words, the GPT responses were similar to those of people in WEIRD societies.

The results are summarized in the graphic below. Countries in which humans gave WVS answers close to GPT’s answers are clustered at top left, while countries whose residents gave answers increasingly at variance with GPT’s answers trend along the line running down to the right.

“Figure 3. The scatterplot and correlation between the magnitude of GPT-human similarity and cultural distance from the United States as a highly WEIRD point of reference.” From Atari et al., “Which Humans?”

The team went on to consider the WVS responses in various categories including styles of analytical thinking, degrees of individualism, and ways of expressing and understanding personal identity. In these and other domains, they wrote, “people from contemporary WEIRD populations are an outlier in terms of their psychology from a global and historical perspective.” Yet the responses from GPT tracked the WEIRD populations rather than global averages.
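One way to picture the comparison is as a distance between response profiles. The sketch below is a simplification with invented numbers, not the paper’s actual metric:

```python
import math

def profile_distance(a, b):
    """Euclidean distance between two response profiles
    (dicts mapping question id -> mean numeric answer)."""
    keys = sorted(set(a) & set(b))
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

# Invented illustrative numbers, not real WVS data
gpt_profile = {"q1": 0.8, "q2": 0.3}
country_profiles = {
    "United States": {"q1": 0.7, "q2": 0.35},
    "Ethiopia": {"q1": 0.2, "q2": 0.9},
}

# Rank countries from most to least GPT-similar
ranked = sorted(country_profiles,
                key=lambda c: profile_distance(gpt_profile, country_profiles[c]))
```

In the study’s results, countries near the WEIRD end of the spectrum sit at the small-distance end of such a ranking.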

Anyone who asks GPT a question with hopes of getting an unbiased answer is on a fool’s errand. Because the data sets include a large over-representation of WEIRD inputs, the outputs, for better or worse, will be no less WEIRD.

As Large Language Models are increasingly incorporated into decision-making tools and processes, their WEIRD biases become increasingly significant. By learning primarily from data that encodes viewpoints of dominant sectors of global society, and then expressing those values in decisions, LLMs are likely to further empower the powerful and marginalize the marginalized.

In the next installment we’ll look at the effects of AI and LLMs on employment conditions, now and in the near future.


1 Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, Association for Computing Machinery Digital Library, 1 March 2021.

2 John Naughton, “Google might ask questions about AI ethics, but it doesn’t want answers”, The Guardian, 13 March 2021.

3 As quoted in Elizabeth Weil, “You Are Not a Parrot”, New York Magazine, March 1, 2023.

4 Bender, Gebru et al, “On the Dangers of Stochastic Parrots.”

5 Stephen Wolfram, “What Is ChatGPT Doing … and Why Does It Work?”, 14 February 2023.

6 Alexis T. Baria and Keith Cross, “The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor”, arXiv, 18 July 2021.

7 Angie Wang, “Is My Toddler a Stochastic Parrot?”, The New Yorker, 15 November 2023.

8 The phrase “counterfeit people” is attributed to philosopher Daniel Dennett, quoted by Elizabeth Weil in “You Are Not a Parrot”, New York Magazine.

9 Bender, Gebru et al, “On the Dangers of Stochastic Parrots.”

10 Mohammad Atari, Mona J. Xue, Peter S. Park, Damián E. Blasi, and Joseph Henrich, “Which Humans?”, arXiv, 22 September 2023.

11 Specifically, the team “ran both GPT 3 and 3.5; they were similar. The paper’s plots are based on 3.5.” Email correspondence with study author Mohammad Atari.

Image at top of post: “The Evolution of Intelligence”, illustration by Bart Hawkins Kreps, posted under CC BY-SA 4.0 DEED license, adapted from “The Yin and Yang of Human Progress”, (Wikimedia Commons), and from parrot illustration courtesy of Judith Kreps Hawkins.

“Warning. Data Inadequate.”

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part three
Also published on Resilience.

“The Navy revealed the embryo of an electronic computer today,” announced a New York Times article, “that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”1

A few paragraphs into the article, “the Navy” was quoted as saying the new “perceptron” would be the first non-living mechanism “capable of receiving, recognizing and identifying its surroundings without any human training or control.”

This example of AI hype wasn’t the first and won’t be the last, but it is a bit dated. To be precise, the Times story was published on July 8, 1958.

Due to its incorporation of a simple “neural network” loosely analogous to the human brain, the perceptron of 1958 is recognized as a forerunner of today’s most successful “artificial intelligence” projects – from facial recognition systems to text extruders like ChatGPT. It’s worth considering this early device in some detail.

In particular, what about the claim that the perceptron could identify its surroundings “without any human training or control”? Sixty years on, the descendants of the perceptron have “learned” a great deal, and can now identify, describe and even transform millions of images. But that “learning” has involved not only billions of transistors, and trillions of watts, but also millions of hours of labour in “human training and control.”

Seeing is not perceiving

When we look at a real-world object – for example, a tree – sensors in our eyes pass messages through a network of neurons and through various specialized areas of the brain. Eventually, assuming we are old enough to have learned what a tree looks like, and both our eyes and the required parts of our brains are functioning well, we might say “I see a tree.” In short, our eyes see a configuration of light, our neural network processes that input, and the result is that our brains perceive and identify a tree.

Accomplishing the perception with electronic computing, it turns out, is no easy feat.

The perceptron invented by Dr. Frank Rosenblatt in the 1950s used a 20 pixel by 20 pixel image sensor, paired with an IBM 704 computer. Let’s look at some simple images, and how a perceptron might process the data to produce a perception. 

Images created by the author.

In the illustration at left above, what the camera “sees” at the most basic level is a column of pixels that are “on”, with all the other pixels “off”. However, if we train the computer by giving it nothing more than labelled images of the numerals from 0 to 9, the perceptron can recognize the input as matching the numeral “1”. If we then add training data in the form of labelled images of the characters in the Latin-script alphabet in a sans serif font, the perceptron can determine that it matches, equally well, the numeral “1”, the lower-case letter “l”, or an upper-case letter “I”.

The figure at right is considerably more complex. Here our perceptron is still working with a low-resolution grid, but pixels can be not only “on” or “off” – black or white – but various shades of grey. To complicate things further, suppose more training data has been added, in the form of hand-written letters and numerals, plus printed letters and numerals in an oblique sans serif font. The perceptron might now determine the figure is a numeral “1” or a lower-case “l” or upper-case “I”, either hand-written or printed in an oblique font, each with an equal probability. The perceptron is learning how to be an optical character recognition (OCR) system, though to be very good at the task it would need the ability to use context to rank the probabilities of a numeral “1”, a lower-case “l”, or an upper-case “I”.

The possibilities multiply infinitely when we ask the perceptron about real-world objects. In the figure below, a bit of context, in the form of a visual ground, is added to the images. 

Images created by the author.

Depending, again, on the labelled training data already input to the computer, the perceptron may “see” the image at left as a tall tower, a bare tree trunk, or the silhouette of a person against a bright horizon. The perceptron might see, on the right, a leaning tree or a leaning building – perhaps the Leaning Tower of Pisa. With more training images and with added context in the input image – shapes of other buildings, for example – the perceptron might output with high statistical confidence that the figure is actually the Leaning Tower of Leeuwarden.

Today’s perceptrons can and do, with widely varying degrees of accuracy and reliability, identify and name faces in crowds, label the emotions shown by someone in a recorded job interview, analyse images from a surveillance drone and indicate that a person’s activities and surroundings match the “signature” of terrorist operations, or identify a crime scene by comparing an unlabelled image with photos of known settings from around the world. Whether right or wrong, the systems’ perceptions sometimes have critical consequences: people can be monitored, hired, fired, arrested – or executed in an instant by a US Air Force Reaper drone.

As we will discuss below, these capabilities have been developed with the aid of millions of hours of poorly-paid or unpaid human labour.

The Times article of 1958, however, described Dr. Rosenblatt’s invention this way: “the machine would be the first device to think as the human brain. As do human beings, Perceptron will make mistakes at first, but will grow wiser as it gains experience ….” The kernel of truth in that claim lies in the concept of a neural network.

Rosenblatt told the Times reporter “he could explain why the machine learned only in highly technical terms. But he said the computer had undergone a ‘self-induced change in the wiring diagram.’”

I can empathize with that Times reporter. I still hope to find a person sufficiently intelligent to explain the machine learning process so clearly that even a simpleton like me can fully understand it. However, New Yorker writers in 1958 made a good attempt. As quoted in Matteo Pasquinelli’s book The Eye of the Master, the authors wrote:

“If a triangle is held up to the perceptron’s eye, the association units connected with the eye pick up the image of the triangle and convey it along a random succession of lines to the response units, where the image is registered. The next time the triangle is held up to the eye, its image will travel along the path already travelled by the earlier image. Significantly, once a particular response has been established, all the connections leading to that response are strengthened, and if a triangle of a different size and shape is held up to the perceptron, its image will be passed along the track that the first triangle took.”2

With hundreds, thousands, millions and eventually billions of steps in the perception process, the computer gets better and better at interpreting visual inputs.
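The “strengthening of connections” the New Yorker writers described corresponds to the classic perceptron learning rule, which can be sketched in a few lines of Python. This is a toy version with tiny invented “images”, not Rosenblatt’s 400-pixel hardware:

```python
def classify(pixels, weights, bias):
    """Fire (output 1) if the weighted sum of inputs clears the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, pixels)) + bias > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: on each mistake, strengthen or weaken
    the connection from every active input pixel."""
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for pixels, label in samples:          # label is 1 or 0
            error = label - classify(pixels, weights, bias)
            weights = [w + lr * error * x for w, x in zip(weights, pixels)]
            bias += lr * error
    return weights, bias

# Toy 3x3 "images": a vertical bar (class 1) vs. a horizontal bar (class 0)
vertical   = [0, 1, 0,  0, 1, 0,  0, 1, 0]
horizontal = [0, 0, 0,  1, 1, 1,  0, 0, 0]
weights, bias = train_perceptron([(vertical, 1), (horizontal, 0)])
```

After training, the connections from the pixels of the vertical bar have been strengthened and those of the horizontal bar weakened, so each image is routed to its own response, just as the triangle’s image in the New Yorker account travels along its established track.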

Yet this improvement in machine perception comes at a high ecological cost. A September 2021 article entitled “Deep Learning’s Diminishing Returns” explained:

“[I]n 2012 AlexNet, the model that first showed the power of training deep-learning systems on graphics processing units (GPUs), was trained for five to six days using two GPUs. By 2018, another model, NASNet-A, had cut the error rate of AlexNet in half, but it used more than 1,000 times as much computing to achieve this.”

The authors concluded that, “Like the situation that Rosenblatt faced at the dawn of neural networks, deep learning is today becoming constrained by the available computational tools.”3

The steep increase in the computing demands of AI is illustrated in a graph by Anil Ananthaswamy.

“The Drive to Bigger AI Models” shows that AI models used for language and image generation have grown in size by several orders of magnitude since 2010.  Graphic from “In AI, is Bigger Better?”, by Anil Ananthaswamy, Nature, 9 March 2023.

Behold the Mechanical Turk

In the decades since Rosenblatt built the first perceptron, there were periods when progress in this field seemed stalled. Additional theoretical advances in machine learning, a many orders-of-magnitude increase in computer processing capability, and vast quantities of training data were all prerequisites for today’s headline-making AI systems. In Atlas of AI, Kate Crawford gives a fascinating account of the struggle to acquire that data.

Up to the 1980s artificial intelligence researchers didn’t have access to large quantities of digitized text or digitized images, and the type of machine learning that makes news today was not yet possible. The lengthy antitrust proceedings against IBM provided an unexpected boost to AI research, in the form of a hundred million digital words from legal proceedings. In the early 2000s, proceedings against Enron collected more than half a million email messages sent among Enron employees. This provided text exchanges in everyday English, though Crawford notes the wording “represented the gender, race, and professional skews of those 158 workers.”

And the data floodgates were just beginning to open. As Crawford describes the change,

“The internet, in so many ways, changed everything; it came to be seen in the AI research field as something akin to a natural resource, there for the taking. As more people began to upload their images to websites, to photo-sharing services, and ultimately to social media platforms, the pillaging began in earnest. Suddenly, training sets could reach a size that scientists in the 1980s could never have imagined.”4

It took two decades for that data flood to become a tsunami. Even then, although images were often labelled and classified for free by social media users, the labels and classifications were not always consistent or even correct. There remained a need for humans to look at millions of images and create or check the labels and classifications.

Developers of the image database ImageNet collected 14 million images and eventually organized them into over twenty thousand categories. They initially hired students in the US for labelling work, but concluded that even at $10/hour, this work force would quickly exhaust the budget.

Enter the Mechanical Turk.

The original Mechanical Turk was a chess-playing scam set up in 1770 by a Hungarian inventor. An apparently autonomous mechanical human figure, dressed in the Ottoman fashion of the day, moved chess pieces and could beat most human chess players. Decades went by before it was revealed that a skilled human chess player was concealed inside the machine for each exhibition, controlling all the motions.

In the early 2000s, Amazon developed a web platform by which AI developers, among others, could contract gig workers for many tasks that were ostensibly being done by artificial intelligence. These tasks might include, for example, labelling and classifying photographic images, or making judgements about outputs from AI-powered chat experiments. In a rare fit of honesty, Amazon labelled the process “artificial artificial intelligence”5 and launched its service, Amazon Mechanical Turk, in 2005.

screen shot taken 3 February 2024, from opening page at

Crawford writes,

“ImageNet would become, for a time, the world’s largest academic user of Amazon’s Mechanical Turk, deploying an army of piecemeal workers to sort an average of fifty images a minute into thousands of categories.”6

Chloe Xiang described this organization of work for Motherboard in an article entitled “AI Isn’t Artificial or Intelligent”:

“[There is a] large labor force powering AI, doing jobs that include looking through large datasets to label images, filter NSFW content, and annotate objects in images and videos. These tasks, deemed rote and unglamorous for many in-house developers, are often outsourced to gig workers and workers who largely live in South Asia and Africa ….”7

Laura Forlano, Associate Professor of Design at Illinois Institute of Technology, told Xiang “what human labor is compensating for is essentially a lot of gaps in the way that the systems work.”

Xiang concluded,

“Like other global supply chains, the AI pipeline is greatly imbalanced. Developing countries in the Global South are powering the development of AI systems by doing often low-wage beta testing, data annotating and labeling, and content moderation jobs, while countries in the Global North are the centers of power benefiting from this work.”

In a study published in late 2022, Kelle Howson and Hannah Johnston described why “platform capitalism”, as embodied in Mechanical Turk, is an ideal framework for exploitation, given that workers bear nearly all the costs while contractors take no responsibility for working conditions. The platforms are able to enroll workers from many countries in large numbers, so that workers are constantly low-balling to compete for ultra-short-term contracts. Contractors are also able to declare that the work submitted is “unsatisfactory” and therefore will not be paid, knowing the workers have no effective recourse and can be replaced by other workers for the next task. Workers are given an estimated “time to complete” before accepting a task, but if the work turns out to require two or three times as many hours, the workers are still only paid for the hours specified in the initial estimate.8

A survey of 700 cloudwork employees (or “independent contractors” in the fictive lingo of the gig work platforms) found about 34% of the time they spent on these platforms was unpaid. “One key outcome of these manifestations of platform power is pervasive unpaid labour and wage theft in the platform economy,” Howson and Johnston wrote.9 From the standpoint of major AI ventures at the top of the extraction pyramid, pervasive wage theft is not a bug in the system, it is a feature.

The apparently dazzling brilliance of AI-model creators and semi-conductor engineers gets the headlines in western media. But without low-paid or unpaid work by employees in the Global South, “AI systems won’t function,” Crawford writes. “The technical AI research community relies on cheap, crowd-sourced labor for many tasks that can’t be done by machines.”10

Whether vacuuming up data that has been created by the creative labour of hundreds of millions of people, or relying on tens of thousands of low-paid workers to refine the perception process for reputedly super-intelligent machines, the AI value chain is another example of extractivism.

“AI image and text generation is pure primitive accumulation,” James Bridle writes, “expropriation of labour from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners.”11

“All seven emotions”

New AI implementations don’t usually start with a clean slate, Crawford says – they typically borrow classification systems from earlier projects.

“The underlying semantic structure of ImageNet,” Crawford writes, “was imported from WordNet, a database of word classifications first developed at Princeton University’s Cognitive Science Laboratory in 1985 and funded by the U.S. Office of Naval Research.”12

But classification systems are unavoidably political when it comes to slotting people into categories. In the ImageNet groupings of pictures of humans, Crawford says, “we see many assumptions and stereotypes, including race, gender, age, and ability.”

She explains,

“In ImageNet the category ‘human body’ falls under the branch Natural Object → Body → Human Body. Its subcategories include ‘male body,’ ‘person,’ ‘juvenile body,’ ‘adult body,’ and ‘female body.’ The ‘adult body’ category contains the subclasses ‘adult female body’ and ‘adult male body.’ There is an implicit assumption here that only ‘male’ and ‘female’ bodies are recognized as ‘natural.’”13

Readers may have noticed that US military agencies were important funders of some key early AI research: Frank Rosenblatt’s perceptron in the 1950s, and the WordNet classification scheme in the 1980s, were both funded by the US Navy.

For the past six decades, the US Department of Defense has also been interested in systems that might detect and measure the movements of muscles in the human face, and in so doing, identify emotions. Crawford writes, “Once the theory emerged that it is possible to assess internal states by measuring facial movements and the technology was developed to measure them, people willingly adopted the underlying premise. The theory fit what the tools could do.”14

Several major corporations now market services with roots in this military-funded research into machine recognition of human emotion – even though, as many people have insisted, the emotions people express on their faces don’t always match the emotions they are feeling inside.

Affectiva is a corporate venture spun out of the Media Lab at the Massachusetts Institute of Technology. On their website they claim “Affectiva created and defined the new technology category of Emotion AI, and evangelized its many uses across industries.” Their opening page spins their mission as “Humanizing Technology with Emotion AI.”

Who might want to contract services for “Emotion AI”? Media companies, perhaps, want to “optimize content and media spend by measuring consumer emotional responses to videos, ads, movies and TV shows – unobtrusively and at scale.” Auto insurance companies, perhaps, might want to keep their (mechanical) eyes on you while you drive: “Using in-cabin cameras our AI can detect the state, emotions, and reactions of drivers and other occupants in the context of a vehicle environment, as well as their activities and the objects they use. Are they distracted, tired, happy, or angry?”

Affectiva’s capabilities, the company says, draw on “the world’s largest emotion database of more than 80,000 ads and more than 14.7 million faces analyzed in 90 countries.”15 As reported by The Guardian, the videos are screened by workers in Cairo, “who watch the footage and translate facial expressions to corresponding emotions.”16

There is a slight problem: there is no clear and generally accepted definition of an emotion, nor general agreement on just how many emotions there might be. But “emotion AI” companies don’t let those quibbles get in the way of business.

Amazon’s Rekognition service announced in 2019 “we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’)” – but they were proud to have “added a new emotion: ‘Fear’.”17

Facial- and emotion-recognition systems, with deep roots in military and intelligence agency research, are now widely employed not only by those agencies but also by local police departments. Nor is their use confined to governments: corporations employ them for a wide range of purposes. Their production and operation likewise cross public-private lines: though much of the initial research was government-funded, commercialization now allows corporate interests to sell the resulting services to public and private clients around the world.

What is the likely impact of these AI-aided surveillance tools? Dan McQuillan sees it this way:

“We can confidently say that the overall impact of AI in the world will be gendered and skewed with respect to social class, not only because of biased data but because engines of classification are inseparable from systems of power.”18

In our next installment we’ll see that biases in data sources and classification schemes are reflected in the outputs of the GPT large language model.

Image at top of post: The Senture computer server facility in London, Ky, on July 14, 2011, photo by US Department of Agriculture, public domain, accessed on flickr.

Title credit: the title of this post quotes a lyric of “Data Inadequate”, from the 1998 album Live at Glastonbury by Banco de Gaia.


1 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

2 “Rival”, in The New Yorker, by Harding Mason, D. Stewart, and Brendan Gill, November 28, 1958, synopsis here. Quoted by Matteo Pasquinelli in The Eye of the Master: A Social History of Artificial Intelligence, Verso Books, October 2023, page 137.

3 “Deep Learning’s Diminishing Returns”, by Neil C. Thompson, Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso, IEEE Spectrum, 24 September 2021.

4 Crawford, Kate, Atlas of AI, Yale University Press, 2021.

5 This phrase is cited by Elizabeth Stevens and attributed to Jeff Bezos, in “The mechanical Turk: a short history of ‘artificial artificial intelligence’”, Cultural Studies, 08 March 2022.

6 Crawford, Atlas of AI.

7 Chloe Xiang, “AI Isn’t Artificial or Intelligent: How AI innovation is powered by underpaid workers in foreign countries,” Motherboard, 6 December 2022.

8 Kelle Howson and Hannah Johnston, “Unpaid labour and territorial extraction in digital value networks,” Global Network, 26 October 2022.

9 Howson and Johnston, “Unpaid labour and territorial extraction in digital value networks.”

10 Crawford, Atlas of AI.

11 James Bridle, “The Stupidity of AI”, The Guardian, 16 Mar 2023.

12 Crawford, Atlas of AI.

13 Crawford, Atlas of AI.

14 Crawford, Atlas of AI.

15 Quotes from Affectiva taken from the company’s website, accessed on 5 February 2024.

16 Oscar Schwarz, “Don’t look now: why you should be worried about machines reading your emotions,” The Guardian, 6 March 2019.

17 From Amazon Web Services Rekognition website, accessed on 5 February 2024; italics added.

18 Dan McQuillan, “Post-Humanism, Mutual Aid,” in AI for Everyone? Critical Perspectives, University of Westminster Press, 2021.

Artificial Intelligence in the Material World

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part two
Also published on Resilience.

Picture a relatively simple human-machine interaction: I walk two steps, flick a switch on the wall, and a light comes on.

Now picture a more complex interaction. I say, “Alexa, turn on the light” – and, if I’ve trained my voice to match the classifications in the electronic monitoring device and its associated global network, a light comes on.

“In this fleeting moment of interaction,” write Kate Crawford and Vladan Joler, “a vast matrix of capacities is invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics, distribution, prediction and optimization.”

“The scale of resources required,” they add, “is many magnitudes greater than the energy and labor it would take a human to … flick a switch.”1

Crawford and Joler wrote these words in 2018, at a time when “intelligent assistants” were recent and rudimentary products of AI. The industry has grown by leaps and bounds since then – and the money invested is matched by the computing resources now devoted to processing and “learning” from data.

In 2021, a much-discussed paper found that “the amount of compute used to train the largest deep learning models (for NLP [natural language processing] and other applications) has increased 300,000x in 6 years, increasing at a far higher pace than Moore’s Law.”2
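The gap between that growth rate and Moore’s Law can be checked with a few lines of arithmetic. The sketch below assumes the conventional Moore’s Law doubling time of roughly two years; the 300,000x figure is the one quoted above:

```python
import math

# Compare the reported 300,000x growth in training compute over 6 years
# with Moore's Law (capacity doubling roughly every two years).
growth = 300_000
years = 6

moores_law = 2 ** (years / 2)                 # Moore's Law growth over 6 years: ~8x
doublings = math.log2(growth)                 # ~18.2 doublings packed into 6 years
months_per_doubling = years * 12 / doublings  # ~4 months per doubling

print(f"Moore's Law over {years} years: {moores_law:.0f}x")
print(f"Training compute doubled roughly every {months_per_doubling:.1f} months")
```

In other words, where Moore’s Law would predict roughly an 8x increase over six years, training compute doubled about every four months.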

An analysis in 2023 backed up this conclusion. Computing work is often measured in FLOPs (FLoating point OPerations). A Comment piece in the journal Nature Machine Intelligence illustrated the steep rise in the number of FLOPs used in training recent AI models.

Changes in the number of FLOPs needed for state-of-the-art AI model training, graph from “Reporting electricity consumption is essential for sustainable AI”, Charlotte Debus, Marie Piraud, Achim Streit, Fabian Theis & Markus Götz, Nature Machine Intelligence, 10 November 2023. AlexNet is a neural network model used to great effect with the image classification database ImageNet, which we will discuss in a later post. GPT-3 is a Large Language Model developed by OpenAI, for which Chat-GPT is the free consumer interface.

With the performance of individual AI-specialized computer chips now measured in TeraFLOPs, and thousands of these chips harnessed together in an AI server farm, the electricity consumption of AI is vast.

As many researchers have noted, accurate electricity consumption figures are difficult to find, making it almost impossible to calculate the worldwide energy needs of the AI Industrial Complex.

However, Josh Saul and Dina Bass reported last year that

“Artificial intelligence made up 10 to 15% of [Google’s] total electricity consumption, which was 18.3 terawatt hours in 2021. That would mean that Google’s AI burns around 2.3 terawatt hours annually, about as much electricity each year as all the homes in a city the size of Atlanta.”3

However, researcher Alex de Vries reported that if an AI system similar to ChatGPT were used for each Google search, electricity usage would spike to 29.2 TWh just for the search engine.4
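These reported figures can be sanity-checked with a few lines of arithmetic. The sketch below uses only the numbers quoted above; the Atlanta comparison is the reporters’ own:

```python
# Back-of-envelope check on the electricity figures quoted above.
google_total_twh = 18.3     # Google's 2021 consumption, per the Bloomberg report
ai_share = (0.10, 0.15)     # AI's reported share of that total

ai_twh_low = google_total_twh * ai_share[0]
ai_twh_high = google_total_twh * ai_share[1]
# ~1.8-2.7 TWh/year, bracketing the reported "around 2.3"
print(f"Google AI: {ai_twh_low:.1f}-{ai_twh_high:.1f} TWh/year")

# De Vries's scenario: a ChatGPT-scale model attached to every Google search
search_with_ai_twh = 29.2
print(f"Increase vs. current AI use: {search_with_ai_twh / 2.3:.0f}x")  # ~13x
```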

In Scientific American, Lauren Leffer cited projections that Nvidia, manufacturer of the most sophisticated chips for AI servers, will ship “1.5 million AI server units per year by 2027.”

“These 1.5 million servers, running at full capacity,” she added, “would consume at least 85.4 terawatt-hours of electricity annually—more than what many small countries use in a year, according to the new assessment.”5
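That projection implies a striking continuous power draw per server. A quick sketch of the arithmetic, using only the figures quoted above:

```python
# Implied continuous power draw per AI server, from the figures quoted above.
annual_twh = 85.4            # projected annual consumption of the server fleet
servers = 1_500_000          # projected Nvidia AI server shipments per year by 2027
hours_per_year = 8760

watt_hours = annual_twh * 1e12   # TWh -> Wh
per_server_kw = watt_hours / servers / hours_per_year / 1000
print(f"~{per_server_kw:.1f} kW per server, running continuously")  # ~6.5 kW
```

Roughly 6.5 kilowatts per server, around the continuous draw of several household electric ovens, day and night.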

OpenAI CEO Sam Altman expects AI’s appetite for energy will continue to grow rapidly. At the Davos confab in January 2024 he told the audience, “We still don’t appreciate the energy needs of this technology.” As quoted by The Verge, he added, “There’s no way to get there without a breakthrough. We need [nuclear] fusion or we need like radically cheaper solar plus storage or something at massive scale.” Altman has invested $375 million in fusion start-up Helion Energy, which hopes to succeed soon with a technology that has stubbornly remained 50 years in the future for the past 50 years.

In the near term, at least, electricity consumption will act as a brake on widespread use of AI in standard web searches, and will restrict use of the most sophisticated AI models to paying customers. That’s because the cost of AI use can be measured not only in watts, but in dollars and cents.

Shortly after the launch of Chat-GPT, Sam Altman was quoted as saying that Chat-GPT cost “probably single-digit cents per chat.” Pocket change – until you multiply it by perhaps 10 million users each day. Citing figures from SemiAnalysis, the Washington Post reported that by February 2023, “ChatGPT was costing OpenAI some $700,000 per day in computing costs alone.” Will Oremus concluded,

“Multiply those computing costs by the 100 million people per day who use Microsoft’s Bing search engine or the more than 1 billion who reportedly use Google, and one can begin to see why the tech giants are reluctant to make the best AI models available to the public.”6

In any case, Alex de Vries says, “NVIDIA does not have the production capacity to promptly deliver 512,821 A100 HGX servers” which would be required to pair every Google search with a state-of-the-art AI model. And even if Nvidia could ramp up that production tomorrow, purchasing the computing hardware would cost Google about $100 billion USD.
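The cost figures above fit together in a few lines of arithmetic. The 10-million-user number is the rough estimate quoted above, not an official count:

```python
# How "single-digit cents per chat" scales, using the figures quoted above.
daily_compute_cost = 700_000   # USD/day, per SemiAnalysis via the Washington Post
daily_users = 10_000_000       # rough estimate of daily ChatGPT users cited above

cost_per_user = daily_compute_cost / daily_users
print(f"~{cost_per_user * 100:.0f} cents per user per day")  # consistent with "single-digit cents"

# Hardware cost of pairing every Google search with an AI model (de Vries)
servers_needed = 512_821
total_cost = 100e9             # ~$100 billion USD
print(f"~${total_cost / servers_needed:,.0f} per A100 HGX server")
```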

Detail from: Nvidia GeForce RTX 2080, (TU104 | Turing), (Polysilicon | 5x | External Light), photograph by Fritzchens Fritz, at Wikimedia Commons, licensed under Creative Commons CC0 1.0 Universal Public Domain Dedication

A 457,000-item supply chain

Why is AI computing hardware so difficult to produce and so expensive? To understand this it’s helpful to take a greatly simplified look at a few aspects of computer chip production.

That production begins with silicon, one of the most common elements on earth and a basic constituent of sand. The silicon must be refined to 99.9999999% purity before being sliced into wafers.

Image from Intel video From Sand to Silicon: The Making of a Microchip.

Eventually each silicon wafer will be augmented with an extraordinarily fine pattern of transistors. Let’s look at the complications involved in just one step, the photolithography that etches a microscopic pattern in the silicon.

As Chris Miller explains in Chip War, the precision of photolithography is determined by, among other factors, the wavelength of the light being used: “The smaller the wavelength, the smaller the features that could be carved onto chips.”7 By the early 1990s, chipmakers had learned to pack more than 1 million transistors onto one of the chips used in consumer-level desktop computers. To enable the constantly climbing transistor count, photolithography tool-makers were using deep ultraviolet light, with wavelengths of about 200 nanometers (compared to visible light with wavelengths of about 400 to 750 nanometers; a nanometer is one-billionth of a meter). It was clear to some industry figures, however, that the wavelength of deep ultraviolet light would soon be too long for continued increases in the precision of etching and for continued increases in transistor count.

Thus began the long, difficult, and immensely expensive development of Extreme UltraViolet (EUV) photolithography, using light with a wavelength of about 13.5 nanometers.

Let’s look at one small part of the complex EUV photolithography process: producing and focusing the light. In Miller’s words,

“[A]ll the key EUV components had to be specially created. … Producing enough EUV light requires pulverizing a small ball of tin with a laser. … [E]ngineers realized the best approach was to shoot a tiny ball of tin measuring thirty-millionths of a meter wide moving through a vacuum at a speed of around two hundred miles an hour. The tin is then struck twice with a laser, the first pulse to warm it up, the second to blast it into a plasma with a temperature around half a million degrees, many times hotter than the surface of the sun. This process of blasting tin is then repeated fifty thousand times per second to produce EUV light in the quantities necessary to fabricate chips.”8

Heating the tin droplets to that temperature “required a carbon dioxide-based laser more powerful than any that previously existed.”9 Laser manufacturer Trumpf worked for 10 years to develop a laser powerful enough and reliable enough – and the resulting tool had “exactly 457,329 component parts.”10

Once the extremely short wavelength light could be reliably produced, it needed to be directed with great precision – and for that purpose German lens company Zeiss “created mirrors that were the smoothest objects ever made.”11

Nearly 20 years after development of EUV lithography began, this technique is standard for the production of sophisticated computer chips which now contain tens of billions of transistors each. But as of 2023, only Dutch company ASML had mastered the production of EUV photolithography machines for chip production. At more than $100 million each, Miller says “ASML’s EUV lithography tool is the most expensive mass-produced machine tool in history.”12

Landscape Destruction: Rio Tinto Kennecott Copper Mine from the top of Butterfield Canyon. Photographed in 665 nanometer infrared using an infrared converted Canon 20D and rendered in channel inverted false color infrared, photo by arbyreed, part of the album Kennecott Bingham Canyon Copper Mine, on flickr, licensed via CC BY-NC-SA 2.0 DEED.

No, data is not the “new oil”

US semi-conductor firms began moving parts of production to Asia in the 1960s. Today much of semi-conductor manufacturing and most of computer and phone assembly is done in Asia – sometimes using technology more advanced than anything in use within the US.

The example of EUV lithography indicates how complex and energy-intensive chipmaking has become. At countless steps from mining to refining to manufacturing, chipmaking relies on an industrial infrastructure that is still heavily reliant on fossil fuels.

Consider the logistics alone. A wide variety of metals, minerals, and rare earth elements, located at sites around the world, must be extracted, refined, and processed. These materials must then be transformed into the hundreds of thousands of parts that go into computers, phones, and routers, or which go into the machines that make the computer parts.

Co-ordinating all of this production, and getting all the pieces to where they need to be for each transformation, would be difficult if not impossible if it weren’t for container ships and airlines. And though it might be possible someday to run most of those processes on renewable electricity, for now those operations have a big carbon footprint.

It has become popular to proclaim that “data is the new oil”13, or “semi-conductors are the new oil”14. This is nonsense, of course. While both data and semi-conductors are worth a lot of money and a lot of GDP growth in our current economic context, neither one produces energy – they depend on available and affordable energy to be useful.

A world temporarily rich in surplus energy can produce semi-conductors to extract economic value from data. But warehouses of semi-conductors and petabytes of data will not enable us to produce surplus energy.

Artificial Intelligence powered by semi-conductors and data could, perhaps, help us to use the surplus energy much more efficiently and rationally. But that would require a radical change in the economic religion that guides our whole economic system, including the corporations at the top of the Artificial Intelligence Industrial Complex.

Meanwhile the AI Industrial Complex continues to soak up huge amounts of money and energy.

Open AI CEO Sam Altman has been in fund-raising mode recently, seeking to finance a network of new semi-conductor fabrication plants. As reported in Fortune, “Constructing a single state-of-the-art fabrication plant can require tens of billions of dollars, and creating a network of such facilities would take years. The talks with [Abu Dhabi company] G42 alone had focused on raising $8 billion to $10 billion ….”

This round of funding would be in addition to the $10 billion Microsoft has already invested in Open AI. Why would Altman want to get into the hardware production side of the Artificial Intelligence Industrial Complex, in addition to Open AI’s leading role in software operations? According to Fortune,

“Since OpenAI released ChatGPT more than a year ago, interest in artificial intelligence applications has skyrocketed among companies and consumers. That in turn has spurred massive demand for the computing power and processors needed to build and run those AI programs. Altman has said repeatedly that there already aren’t enough chips for his company’s needs.”15

Becoming data

We face the prospect, then, of continuing rapid growth in the Artificial Intelligence Industrial Complex, accompanied by continuing rapid growth in the extraction of materials and energy – and data.

How will major AI corporations obtain and process all the data that will keep these semi-conductors busy pumping out heat?

Consider the light I turned on at the beginning of this post. If I simply flick the switch on the wall and the light goes off, the interaction will not be transformed into data. But if I speak to an Echo, asking Alexa to turn off the light, many data points are created and integrated into Amazon’s database: the time of the interaction, the IP address and physical location where this takes place, whether I speak English or some other language, whether my spoken words are unclear and the device asks me to repeat, whether the response taken appears to meet my approval, or whether I instead ask for the response to be changed. I would be, in Kate Crawford’s and Vladan Joler’s words, “simultaneously a consumer, a resource, a worker, and a product.”16

By buying into the Amazon Echo world,

“the user has purchased a consumer device for which they receive a set of convenient affordances. But they are also a resource, as their voice commands are collected, analyzed and retained for the purposes of building an ever-larger corpus of human voices and instructions. And they provide labor, as they continually perform the valuable service of contributing feedback mechanisms regarding the accuracy, usefulness, and overall quality of Alexa’s replies. They are, in essence, helping to train the neural networks within Amazon’s infrastructural stack.”16

How will AI corporations monetize that data so they can cover their hardware and energy costs, and still return a profit on their investors’ money? We’ll turn to that question in coming installments.

Image at top of post: Bingham Canyon Open Pit Mine, Utah, photo by arbyreed, part of the album Kennecott Bingham Canyon Copper Mine, on flickr, licensed via CC BY-NC-SA 2.0 DEED.


1 Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources”, 2018.

2 Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” ACM Digital Library, March 1, 2021. Thanks to Paris Marx for introducing me to the work of Emily M. Bender on the excellent podcast Tech Won’t Save Us.

3 “Artificial Intelligence Is Booming—So Is Its Carbon Footprint”, Bloomberg, 9 March 2023.

4 Alex de Vries, “The growing energy footprint of artificial intelligence,” Joule, 18 October 2023.

5 Lauren Leffer, “The AI Boom Could Use a Shocking Amount of Electricity,” Scientific American, 13 October 2023.

6 Will Oremus, “AI chatbots lose money every time you use them. That is a problem.” Washington Post, 5 June 2023.

7 Chris Miller, Chip War: The Fight for the World’s Most Critical Technology, Simon & Schuster, October 2022; page 183.

8 Chip War, page 226.

9 Chip War, page 227.

10 Chip War, page 228.

11 Chip War, page 228.

12 Chip War, page 230.

13 For example, in “Data Is The New Oil — And That’s A Good Thing,” Forbes, 15 Nov 2019.

14 As in, “Semi-conductors may be to the twenty-first century what oil was to the twentieth,” Lawrence Summers, former US Secretary of the Treasury, in blurb to Chip War.

15 “OpenAI CEO Sam Altman is fundraising for a network of AI chips factories because he sees a shortage now and well into the future,” Fortune, 20 January 2024.

16 Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources”, 2018.

Bodies, Minds, and the Artificial Intelligence Industrial Complex

Also published on Resilience.

This year may or may not be the year the latest wave of AI-hype crests and subsides. But let’s hope this is the year mass media slow their feverish speculation about the future dangers of Artificial Intelligence, and focus instead on the clear and present, right-now dangers of the Artificial Intelligence Industrial Complex.

Lost in most sensational stories about Artificial Intelligence is that AI does not and can not exist on its own, any more than other minds, including human minds, can exist independent of bodies. These bodies have evolved through billions of years of coping with physical needs, and intelligence is linked to and inescapably shaped by these physical realities.

What we call Artificial Intelligence is likewise shaped by physical realities. Computing infrastructure necessarily reflects the properties of physical materials that are available to be formed into computing machines. The infrastructure is shaped by the types of energy and the amounts of energy that can be devoted to building and running the computing machines. The tasks assigned to AI reflect those aspects of physical realities that we can measure and abstract into “data” with current tools. Last but certainly not least, AI is shaped by the needs and desires of all the human bodies and minds that make up the Artificial Intelligence Industrial Complex.

As Kate Crawford wrote in Atlas of AI,

“AI can seem like a spectral force — as disembodied computation — but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.”1

The metaphors we use for high-tech phenomena influence how we think of these phenomena. Take, for example, “the Cloud”. When we store a photo “in the Cloud” we imagine that photo as floating around the ether, simultaneously everywhere and nowhere, unconnected to earth-bound reality.

But as Steven Gonzalez Monserrate reminded us, “The Cloud is Material”. The Cloud is tens of thousands of kilometers of data cables, tens of thousands of server CPUs in server farms, hydroelectric and wind-turbine and coal-fired and nuclear generating stations, satellites, cell-phone towers, hundreds of millions of desktop computers and smartphones, plus all the people working to make and maintain the machinery: “the Cloud is not only material, but is also an ecological force.”2

It is possible to imagine “the Cloud” without an Artificial Intelligence Industrial Complex, but the AIIC, at least in its recent news-making forms, could not exist without the Cloud.

The AIIC relies on the Cloud as a source of massive volumes of data used to train Large Language Models and image recognition models. It relies on the Cloud to sign up thousands of low-paid gig workers for work on crucial tasks in refining those models. It relies on the Cloud to rent out computing power to researchers and to sell AI services. And it relies on the Cloud to funnel profits into the accounts of the small number of huge corporations at the top of the AI pyramid.

So it’s crucial that we reimagine both the Cloud and AI to escape from mythological nebulous abstractions, and come to terms with the physical, energetic, flesh-and-blood realities. In Crawford’s words,

“[W]e need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it.”3

Through a series of posts we’ll take a deeper look at key aspects of the Artificial Intelligence Industrial Complex, including:

  • the AI industry’s voracious and growing appetite for energy and physical resources;
  • the AI industry’s insatiable need for data, the types and sources of data, and the continuing reliance on low-paid workers to make that data useful to corporations;
  • the biases that come with the data and with the classification of that data, which both reflect and reinforce current social inequalities;
  • AI’s deep roots in corporate efforts to measure, control, and more effectively extract surplus value from human labour;
  • the prospect of “superintelligence”, or an AI that is capable of destroying humanity while living on without us;
  • the results of AI “falling into the wrong hands” – that is, into the hands of the major corporations that dominate AI, and which, as part of our corporate-driven economy, are driving straight towards the cliff of ecological suicide.

One thing this series will not attempt is providing a definition of “Artificial Intelligence”, because there is no workable single definition. The phrase “artificial intelligence” has come into and out of favour as different approaches prove more or less promising, and many computer scientists in recent decades have preferred to avoid the phrase altogether. Different programming and modeling techniques have shown useful benefits and drawbacks for different purposes, but it remains debatable whether any of these results are indications of intelligence.

Yet “artificial intelligence” keeps its hold on the imaginations of the public, journalists, and venture capitalists. Matteo Pasquinelli cites a popular Twitter quip that sums it up this way:

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”4

Computers, be they boxes on desktops or the phones in pockets, are the most complex of tools to come into common daily use. And the computer network we call the Cloud is the most complex socio-technical system in history. It’s easy to become lost in the detail of any one of a billion parts in that system, but it’s important to also zoom out from time to time to take a global view.

The Artificial Intelligence Industrial Complex sits at the apex of a pyramid of industrial organization. In the next installment we’ll look at the vast physical needs of that complex.


1 Kate Crawford, Atlas of AI, Yale University Press, 2021.

2 Steven Gonzalez Monserrate, “The Cloud is Material: Environmental Impacts of Computation and Data Storage”, MIT Schwarzman College of Computing, January 2022.

3 Crawford, Atlas of AI, Yale University Press, 2021.

4 Quoted by Matteo Pasquinelli in “How A Machine Learns And Fails – A Grammar Of Error For Artificial Intelligence”, Spheres, November 2019.

Image at top of post: Margaret Henschel in Intel wafer fabrication plant, photo by Carol M. Highsmith, part of a collection placed in the public domain by the photographer and donated to the Library of Congress.

Building car-dependent neighbourhoods

Also published on Resilience

Car-dependent neighbourhoods arise in a multi-level framework of planning, subsidies, advertising campaigns and cultural choices. After that, car dependency requires little further encouragement. Residents are mostly “locked-in”, since possible alternatives to car transport are either dangerous, unpleasant, time-consuming, or all three.

At the same time, municipal officials have strong incentives to simply accept car dependency – it takes bold new thinking to retrofit such neighbourhoods. Voters are likely to resist such new directions, since it is hard for them to imagine making their daily rounds using anything except private cars.

This post continues a discussion of what car dependency looks like on the map. The previous installment looked at car dependency on a regional scale, while this one looks at the neighbourhood scale.

Both posts use examples from Durham Region, a large administrative district on the east flank of Toronto. With a current population of about 700,000, Durham Region is rapidly suburbanizing.

I’ve picked one neighbourhood to illustrate some common characteristics of car-dependent sprawl. I have chosen not to name the neighbourhood, since the point is not to single out any specific locale. The key features discussed below can be seen in recent suburban developments throughout Durham Region, elsewhere in Ontario, and around North America.

Let’s begin to zoom in. In the aerial view below you can see new subdivisions creeping out towards a new expressway. Brown swatches represent farmland recently stripped of topsoil as the first step in transforming rich agricultural land into suburban “development”. (In the short time since this aerial imagery was obtained, the brown swatches have become noticeably more extensive.)

The neighbourhood we’ll focus on includes a high school, conveniently identifiable by its distinctive oval running track.

Subdivisions here are built in a megablock layout, with the large-scale grid intended to handle most of the traffic. Within each megablock is a maze of winding roads and lots of dead-ends. The idea is to discourage through traffic on residential streets, but this street pattern has many additional consequences.

First, from the centre of one megablock to the centre of another nearby megablock, there is seldom a direct and convenient route. A trip that might be a quarter of a kilometer as the crow flies might be a kilometer or two as the car drives. In the worst areas, there are no available short cuts for cyclists or pedestrians either.

Second, the arterial roads need to be multilane to cope with all the traffic they collect – and as “development” proceeds around them they are soon overwhelmed. “Recovering engineer” Charles Marohn explains this phenomenon using an analogy from hydrology. At a time of heavy rain, a whole bunch of little streams feed into progressively larger streams, which soon fill to capacity. With a pattern of “collector” roads emptying into secondary arterial roads into primary arterials and then into expressways, suburban road systems manage to engineer traffic “floods” each time there is a “heavy rain” – that is, each morning and afternoon at rush hour.1

As we zoom in to our high school’s neighbourhood, note another pattern repeated throughout this region. Within a residential neighbourhood there may be a row of houses close to and facing an arterial road. Yet these houses are on the equivalent of a “service road” rather than having direct access to the arterial. For motorists living here the first stage of a journey, to the arterial road just 50 meters from their driveway, requires driving ten times that far before their journey can really begin. Though the maze pattern is intended to limit traffic in such neighborhoods, residents create a lot of traffic simply to escape the maze.

The residential service road pattern has the effect of making arterial roads into semi-controlled-access roads. As seen in this example, there are few driveways or other vehicle entry points in long straight stretches of such an arterial. This design encourages drivers to drive well above the posted 60 km/hr speed limit … whenever the road is not clogged with rush-hour traffic, that is.

High traffic speeds make crossing such roads a dangerous undertaking for pedestrians and cyclists. True, there are some widely-spaced authorized crossing points, with long waits for the “walk” light. But when getting to and waiting at a crosswalk is not convenient, some people will predictably take their chances fording the rushing stream at other points. How many parents will encourage or even allow their children to walk to school, a playground, or a friend’s house if the trip involves crossing roads like these?

Just across the road. High school is on the left of the road, residential neighbourhood to the right.

Pedestrian access is at best a secondary consideration in such developments. Consider the aerial view below.

Directly across one arterial road from the high school, and across another arterial from a residential neighbourhood, is a cluster of big box retail stores including a Walmart Supercentre. The Walmart has 200 meters of frontage on the street, but in that stretch there is no entrance, nothing but concrete wall to greet the occasional lonesome pedestrian.

From another direction, many people live “just across the street” from the Walmart and other stores. Except … would-be pedestrian shoppers will need to cross not just a multilane urban highway, but also hectares of parking lot, before reaching the doors of a store. These stores are large in retail floor area, but they are dwarfed by the land given to parking. In accord with minimum parking requirements, the stores have spent hundreds of thousands of dollars to provide “free parking”. But there is no requirement to take the convenience of pedestrians into account. The doors open to the parking lots, not to the streets, because the vast majority of shoppers will arrive in large private vehicles that will need to be stored somewhere while the owner goes shopping.

Nevertheless there will be a small minority in such neighbourhoods who get to the store on foot or on bike. A few might be brave, stubborn environmentalists or exercise freaks. But mostly they will be people who can’t afford a car, or who can’t drive because of some type or degree of disability. Disproportionately, they will be elderly and/or in poor health. Particularly when carrying heavy bags of groceries, they will not want to go far out of their way to get to a crosswalk, preferring instead to make the shortest straightest trip home. It is not an accident that high-volume arterial roads in suburbs account for a large proportion of pedestrian deaths in North American cities. It is not an accident, either, that a disproportionate number of these deaths are inflicted on elderly, disabled, poor, or racially disadvantaged pedestrians.2

Lamp posts

Out beyond the beyond

It is now widely recognized that car-dependent suburbia hurts public health, both through increased rates of the diseases of sedentary living and through the stress of spending many hours a week in alternately frenetic and creeping traffic.3 The environmental costs of sprawl include high carbon emissions, impermeable ground covering that rapidly flushes polluted run-off into diminishing areas of creeks and wetlands, and urban heat-island effects from so much concrete and asphalt. Particularly in Ontario, new tracts of car-dependent sprawl can only be built with the sacrifice of increasingly scarce Class 1 farmland.4 Finally, groups such as Strong Towns have documented the long-term fiscal disaster of suburban development.5 Even though higher levels of government typically pay much of the initial cost of major infrastructure, municipalities will be on the hook for maintenance and eventual rebuilding – and property taxes in low-density suburbs seldom bring in enough revenue to cover these steadily accruing liabilities.

Yet in Ontario the large property developer lobby remains as strong a political force as ever. The Premier of Ontario makes no real attempt to hide his allegiance to the largest property developers.6 In Durham Region, after a long public consultation process recommended intensification of existing urban areas to accommodate growing populations, politicians suddenly voted instead for a sprawl-expanding proposal put forward by the development industry lobby.7

So in 2023, corn fields and pastures beyond the current edge of suburbia are being bulldozed, new maze-like streets laid out, thousands of big, cheaply-made, dearly-purchased, cookie-cutter houses stuffed into small lots. For a brief period new residents can look through the construction dust and see nearby farmland or woodland – until the edge of suburbia takes the next step outward.

Suppose you believe, as I do, that this ruinous pattern of development should not and cannot last – that this pattern will not survive past the era of cheap energy, and will not survive when its long-term fiscal non-sustainability results in collapsing services and municipal bankruptcies. When car culture sputters, falters and runs off the road, can these thousands of neighbourhoods, home to millions of people, be transformed so they are no longer car dependent? That’s a big question, but the next post will offer a few ideas.

For today, the edge

Image at top of page: Bulldozertown (click here for full-screen image). All photos used here are taken in the same area shown in satellite views.


Charles Marohn, Confessions of a Recovering Engineer, Wiley, 2021; pages 85–87.

For analyses of trends in pedestrian deaths, see Angie Schmitt’s 2020 book Right of Way (reviewed here), and Jessie Singer’s 2022 book There Are No Accidents (reviewed here).

See “Suburbs increasingly view their auto-centric sprawl as a health hazard,” by Katherine Shaver, Washington Post, December 28, 2016.

“Ontario losing 319 acres of farmland every day,” Ontario Farmland Trust, July 4, 2022.

See “The Growth Ponzi Scheme: A Crash Course,” by John Pattison.

See The Narwhal, “Six developers bought Greenbelt land after Ford came to power. Now, they stand to profit,” November 17, 2022; BlogTO, “All the crazy details about Doug Ford’s controversial stag and doe party with developers,” February 9, 2023.

See The Narwhal, “Ontario’s Durham Region approves developer-endorsed plan to open 9,000 acres of farmland,” May 26, 2022.

A road map that misses some turns

A review of No Miracles Needed

Also published on Resilience

Mark Jacobson’s new book, greeted with hosannas by some leading environmentalists, is full of good ideas – but the whole is less than the sum of its parts.

No Miracles Needed, by Mark Z. Jacobson, published by Cambridge University Press, Feb 2023. 437 pages.

The book is No Miracles Needed: How Today’s Technology Can Save Our Climate and Clean Our Air (Cambridge University Press, Feb 2023).

Jacobson’s argument is both simple and sweeping: We can transition our entire global economy to renewable energy sources, using existing technologies, fast enough to reduce annual carbon dioxide emissions by at least 80% by 2030, and by 100% by 2050. Furthermore, we can do all this while avoiding any major economic disruption such as a drop in annual GDP growth, a rise in unemployment, or any drop in creature comforts. But wait – there’s more! In so doing, we will also completely eliminate pollution.

Just don’t tell Jacobson that this future sounds miraculous.

The energy transition technologies we need – based on Wind, Water and Solar power, abbreviated to WWS – are already commercially available, Jacobson insists. He contrasts the technologies he favors with “miracle technologies” such as geoengineering, Carbon Capture, Utilization and Storage (CCUS), or Direct Air Capture of carbon dioxide (DAC). These latter technologies, he argues, are unneeded, unproven, expensive, and will take far too long to implement at scale; we shouldn’t waste our time on such schemes.

The final chapter helps explain both the hits and misses of the previous chapters. In “My Journey”, a teenage Jacobson visits the smog-cloaked cities of southern California and quickly becomes aware of the damaging health effects of air pollution:

“I decided then and there, that when I grew up, I wanted to understand and try to solve this avoidable air pollution problem, which affects so many people. I knew what I wanted to do for my career.” (No Miracles Needed, page 342)

His early academic work focused on the damage air pollution does to human health. Over time, he realized that the problem of global warming emissions was closely related. The increasingly sophisticated computer models he developed were designed to elucidate the interplay between greenhouse gas emissions and the particulate emissions from combustion that cause so much sickness and death.

These modeling efforts won increasing recognition and attracted a range of expert collaborators. Over the past 20 years, Jacobson’s work moved beyond academia into political advocacy. “My Journey” describes the growth of an organization capable of developing detailed energy transition plans for presentation to US governors, senators, and CEOs of major tech companies. Eventually that led to Jacobson’s publication of transition road maps for states, countries, and the globe – road maps that have been widely praised and widely criticized.

In my reading, Jacobson’s personal journey casts light on key features of No Miracles Needed in two ways. First, there is a singular focus on air pollution, to the omission or dismissal of other types of pollution. Second, it’s not likely Jacobson would have received repeat audiences with leading politicians and business people if he challenged the mainstream orthodox view that GDP can and must continue to grow.

Jacobson’s road map, then, is based on the assumption that all consumer products and services will continue to be produced in steadily growing quantities – but they’ll all be WWS based.

Does he prove that a rapid transition is a realistic scenario? Not in this book.

Hits and misses

Jacobson gives us brief but marvelously lucid descriptions of many WWS generating technologies, plus storage technologies that will smooth the intermittent supply of wind- and sun-based energy. He also goes into considerable detail about the chemistry of solar panels, the physics of electricity generation, and the amount of energy loss associated with each type of storage and transmission.

These sections are aimed at a lay readership and they succeed admirably. There is more background detail, however, than is needed to explain the book’s central thesis.

The transition road map, on the other hand, is not explained in much detail. There are many references to the scientific papers in which he outlines his road maps. Readers of No Miracles Needed can take Jacobson’s word that the model is a suitable representation, or they can find and read Jacobson’s articles in academic journals – but they don’t get the needed details in this book.

Jacobson explains why, at the level of a device such as a car or a heat pump, electric energy is far more efficient in producing motion or heat than is an internal combustion engine or a gas furnace. Less convincingly, he argues that electric technologies are far more energy-efficient than combustion for the production of industrial heat – while nevertheless conceding that some WWS technologies needed for industrial heat are, at best, in prototype stages.

Yet Jacobson expresses serene confidence that hard-to-electrify technologies, including some industrial processes and long-haul aviation, will successfully transition to WWS processes – perhaps including green hydrogen fuel cells, but not hydrogen combustion – by 2035.

The confidence in complex global projections is often jarring. For example, Jacobson tells us repeatedly that the fully WWS energy system of 2050 “reduces end-use energy requirements by 56.4 percent” (pages 271 and 275).1 The expressed precision notwithstanding, nobody yet knows the precise mix of storage types, generation types, and transmission types, which have various degrees of energy efficiency, that will constitute a future WWS global system. What we should take from Jacobson’s statements is that, based on the subset of factors and assumptions – from an almost infinitely complex global energy ecosystem – which Jacobson has included in his model, the calculated outcome is a 56% end-use energy reduction.

Canada’s Premiers visit Muskrat Falls dam construction site, 2015. Photo courtesy of Government of Newfoundland and Labrador; CC BY-NC-ND 2.0 license, via Flickr.

Also jarring is the almost total disregard of any type of pollution other than that which comes from fossil fuel combustion. Jacobson does briefly mention the particles that grind off the tires of all vehicles, including typically heavier EVs. But rather than concede that these particles are toxic and can harm human and ecosystem health, he merely notes that the relatively large particles “do not penetrate so deep into people’s lungs as combustion particles do.” (page 49)

He claims, without elaboration, that “Environmental damage due to lithium mining can be averted almost entirely.” (page 64) Near the end of the book, he states that “In a 2050 100 percent WWS world, WWS energy private costs equal WWS energy social costs because WWS eliminates all health and climate costs associated with energy.” (page 311; emphasis mine)

In a culture which holds continual economic growth to be sacred, it would be convenient to believe that business-as-usual can continue through 2050, with the only change required being a switch to WWS energy.

Imagine, then, that climate-changing emissions were the only critical flaw in the global economic system. Given that assumption, is Jacobson’s timetable for transition plausible?

No. First, Jacobson proposes that “by 2022”, no new power plants be built that use coal, methane, oil or biomass combustion; and that all new appliances for heating, drying and cooking in the residential and commercial sectors “should be powered by electricity, direct heat, and/or district heating.” (page 319) That deadline has passed, and products that rely on combustion continue to be made and sold. It is a mystery why Jacobson or his editors would retain a 2022 transition deadline in a book slated for publication in 2023.

Other sections of the timeline also strain credulity. “By 2023”, the timeline says, all new vehicles in the following categories should be either electric or hydrogen fuel-cell: rail locomotives, buses, nonroad vehicles for construction and agriculture, and light-duty on-road vehicles. This is now possible only in a purely theoretical sense. Batteries adequate for powering heavy-duty locomotives and tractors are not yet in production. Even if they were in production, and that production could be scaled up within a year, the charging infrastructure needed to quickly recharge massive tractor batteries could not be installed, almost overnight, at large farms or remote construction sites around the world.

While electric cars, pick-ups and vans now roll off assembly lines, the global auto industry is not even close to being ready to switch the entire product lineup to EV only. Unless, of course, they were to cut back auto production by 75% or more until production of EV motors, batteries, and charging equipment can scale up. Whether you think that’s a frightening prospect or a great idea, a drastic shrinkage in the auto industry would be a dramatic departure from a business-as-usual scenario.

What’s the harm, though, if Jacobson’s ambitious timeline is merely pushed back by two or three years?

If we were having this discussion in 2000 or 2010, pushing back the timeline by a few years would not be as consequential. But as Jacobson explains effectively in his outline of the climate crisis, we now need both drastic and immediate actions to keep cumulative carbon emissions low enough to avoid global climate catastrophe. His timeline is constructed with the goal of reducing carbon emissions by 80% by 2030, not because those are nice round figures, but because he (and many others) calculate that reductions of that scale and rapidity are truly needed. Even one or two more years of emissions at current rates may make the 1.5°C warming limit an impossible dream.

The picture is further complicated by a factor Jacobson mentions only in passing. He writes,

“During the transition, fossil fuels, bioenergy, and existing WWS technologies are needed to produce the new WWS infrastructure. … [A]s the fraction of WWS energy increases, conventional energy generation used to produce WWS infrastructure decreases, ultimately to zero. … In sum, the time-dependent transition to WWS infrastructure may result in a temporary increase in emissions before such emissions are eliminated.” (page 321; emphasis mine)

Others have explained this “temporary increase in emissions” at greater length. Assuming, as Jacobson does, that a “business-as-usual” economy keeps growing, the vast majority of goods and services will continue, in the short term, to be produced and/or operated using fossil fuels. If we embark on an intensive, global-scale, rapid build-out of WWS infrastructures at the same time, a substantial increment in fossil fuels will be needed to power all the additional mines, smelters, factories, container ships, trucks and cranes which build and install the myriad elements of a new energy infrastructure. If all goes well, that new energy infrastructure will eventually be large enough to power its own further growth, as well as to power production of all other goods and services that now rely on fossil energy.

Unless we accept a substantial decrease in non-transition-related industrial activity, however, the road that takes us to a full WWS destination must route us through a period of increased fossil fuel use and increased greenhouse gas emissions.

It would be great if Jacobson modeled this increase to give us some guidance how big this emissions bump might be, how long it might last, and therefore how important it might be to cumulative atmospheric carbon concentrations. There is no suggestion in this book that he has done that modeling. What should be clear, however, is that any bump in emissions at this late date increases the danger of moving past a climate tipping point – and this danger increases dramatically with every passing year.

1. In a tl;dr version of No Miracles Needed published recently in The Guardian, Jacobson says “Worldwide, in fact, the energy that people use goes down by over 56% with a WWS system.” (“‘No miracles needed’: Prof Mark Jacobson on how wind, sun and water can power the world”, 23 January 2023)


Photo at top of page by Romain Guy, 2009; public domain, CC0 1.0 license, via Flickr.

Osprey and Otter have a message for Ford

On most summer afternoons, if you gaze across Bowmanville Marsh long enough you’ll see an Osprey flying slow above the water, then suddenly dropping to the surface before rising up with a fish in its talons.

But the Osprey doesn’t nest in Bowmanville Marsh – it nests about a kilometer away in Westside Marsh. That’s where a pair of Ospreys fix up their nest each spring, and that’s where they feed one or two chicks through the summer until they can all fly away together. Quite often the fishing is better in one marsh than the other – and the Ospreys know where to go.

Otter knows this too. You might see a family of Otters in one marsh several days in a row, and then they trot over the small upland savannah to the other marsh.

Osprey and Otter know many things that our provincial government would rather not know. One of those is that the value of a specific parcel of wetland can’t be judged in isolation. Many wetland mammals, fish and birds – even the non-migratory ones – need a complex of wetlands to stay healthy.

To developers and politicians with dollar signs in their eyes, a small piece of wetland in an area with several more might seem environmentally insignificant. Otters and Ospreys and many other creatures know better. Filling in or paving over one piece of wetland can have disastrous effects for creatures that spend much of their time in other nearby wetlands.

A change in how wetlands are evaluated – so that the concept of a wetland complex is gone from the criteria – is just one of the many ecologically disastrous changes the Doug Ford government in Ontario is currently rushing through. These changes touch on most of the issues I’ve written about in this blog, from global ones like climate change to urban planning in a single city. This time I’ll focus on threats to the environment in my own small neighbourhood.

Beavers move between Bowmanville and Westside Marshes as water levels change, as food sources change in availability, and as their families grow. They have even engineered themselves a new area of wetland close to the marshes. Great Blue Herons move back and forth between the marshes and nearby creeks on a daily basis throughout the spring, summer and fall.

In our sprawl-loving Premier’s vision, neither wetlands nor farmland are nearly as valuable as the sprawling subdivisions of cookie-cutter homes that make his campaign donors rich. The Premier, who tried in 2021 to have a wetland in Pickering filled and paved for an Amazon warehouse, thinks it’s a great idea to take chunks of farmland and wetland out of protected status in the Greenbelt. One of those parcels – consisting of tilled farmland as well as forested wetland – is to be removed from the Greenbelt in my municipality of Clarington.

The Premier’s appetite for environmental destruction makes it clear that no element of natural heritage in the Greater Toronto area can be considered safe. That includes the Lake Ontario wetland complex that I spend so much time in.

This wetland area now has Provincially Significant Wetland status, but that could change in the near future. As Anne Bell of Ontario Nature explains,

“The government is proposing to completely overhaul the Ontario Wetland Evaluation System for identifying Provincially Significant Wetlands (PSWs), ensuring that very few wetlands would be deemed provincially significant in the future. Further, many if not most existing PSWs could lose that designation because of the changes, and if so, would no longer benefit from the high level of protection that PSW designation currently provides.” (Ontario Nature blog, November 10, 2022)

The Bowmanville Marsh/Westside Marsh complex is home, at some time in the year, to scores of species of birds. Some of these are already in extreme decline, and at least one is threatened.

Up to now, when evaluators were judging the significance of a particular wetland, the presence of a threatened or endangered species was a strong indicator. If the Ford government’s proposed changes go through, the weight given to threatened or endangered species will drop.

The Rusty Blackbird is a formerly numerous bird whose population has dropped by somewhere between 85 and 99 percent; it stopped by the Bowmanville Marsh in September on its migration. The Least Bittern is already on the threatened species list in Ontario, but is sometimes seen in Bowmanville Marsh. If the Least Bittern or the Rusty Blackbird drop to endangered species status, will the provincial government care? And will there be any healthy wetlands remaining for these birds to find a home?

Osprey and Otter know that if you preserve a small piece of wetland, but it’s hemmed in by a busy new subdivision, that wetland is a poor home for most wildlife. Many creatures need the surrounding transitional ecozone areas for some part of their livelihood. The Heron species spend many hours a day stalking the shallows of marshes – but need tall trees nearby to nest in.

Green Heron (left) and juvenile Black-crowned Night Heron

And for some of our shyest birds, only the most secluded areas of marsh will do as nesting habitats. That includes the seldom-seen Least Bittern, as well as the several members of the Rail family who nest in the Bowmanville Marsh.

There are many hectares of cat-tail reeds in this Marsh, but the Virginia Rails, Soras and Common Gallinules only nest where the stand of reeds is sufficiently dense and extensive to disappear in, a safe distance from a road, and a safe distance from any walking path. That’s one reason I could live beside this marsh for several years before I spotted any of these birds, and before I ever figured out what was making some of the strange bird calls I often heard.

Juvenile Sora, and adult Virginia Rail with hatchling

There are people working in government agencies, of course, who have expertise in bird populations and habitats. One of the most dangerous changes now being pushed by our Premier is to take wildlife experts out of the loop, so their expertise won’t threaten the designs of big property developers.

No longer is the Ministry of Natural Resources and Forestry (MNRF) to be involved in decisions about Provincially Significant Wetland status. Furthermore, local Conservation Authorities (CAs), who also employ wetland biologists and watershed ecologists, are to be muzzled when it comes to judging the potential impacts of development proposals:

“CAs would be prevented from entering into agreements with municipalities regarding the review of planning proposals or applications. CAs would in effect be prohibited from providing municipalities with the expert advice and information they need on environmental and natural heritage matters.” (Ontario Nature blog)

Individual municipalities, who don’t typically employ ecologists, and who will be struggling to cope with the many new expenses being forced on them by the Ford government, will be left to judge ecological impacts without outside help. In practice, that might mean they will accept whatever rosy environmental impact statements the developers put forth.

It may be an exaggeration to say that ecological ignorance will become mandatory. Let’s just say, in Doug Ford’s brave new world ecological ignorance will be strongly incentivized.

Marsh birds of Bowmanville/Westside Marsh Complex

These changes to rules governing wetlands and the Greenbelt are just a small part of the pro-sprawl, anti-environment blizzard unleashed by the Ford government in the past month. The changes have resulted in a chorus of protests from nearly every municipality, in nearly every MPP’s riding, and in media outlets large and small.

The protests need to get louder. Osprey and Otter have a message, but they need our help.

Make Your Voice Heard

Friday Dec 2, noon – 1 pm: Rally at MPP Todd McCarthy’s office, 23 King Street West in Bowmanville.

Write to McCarthy’s office, or phone him at 905-697-1501.

Saturday Dec 3, rally starting at 2:30 pm: in Toronto at Bay St & College St.

Send Premier Ford a message, or phone 416-325-1941.

Send Environment Minister David Piccini a message, or phone 416-314-6790.

Send Housing Minister Steve Clark a message, or phone 416-585-7000.

All photos taken by Bart Hawkins Kreps in Bowmanville/Westside Marsh complex, Port Darlington.

Dreaming of clean green flying machines

Also published on Resilience

In common with many other corporate lobby groups, the International Air Transport Association publicly proclaims their commitment to achieving net-zero carbon emissions by 2050.1

Yet the evidence that such an achievement is likely, or even possible, is thin … to put it charitably. Unless, that is, major airlines simply shut down.

As a 2021 Nova documentary put it, aviation “is the high-hanging fruit – one of the hardest climate challenges of all.”2 That difficulty is due to the very essence of the airline business.

What has made aviation so attractive to the relatively affluent people who buy most tickets is that commercial flights maintain great speed over long distances. Aviation would have little appeal if airplanes were no faster than other means of transportation, or if they could be used only for relatively short distances. These characteristics come with rigorous energy demands.

A basic challenge for high-speed transportation – whether that’s pedaling a bike fast, powering a car fast, or propelling an airplane fast – is that the resistance from the air goes up with speed, not linearly but with the square of speed. As speed doubles, air resistance quadruples; as speed triples, air resistance increases by a factor of nine; and so forth.

That is one fundamental reason why no high-speed means of transportation came into use until the fossil fuel era. The physics of wind resistance become particularly important when a vehicle accelerates up to several hundred kilometers per hour or more.
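The square-of-speed relationship can be sketched with the standard aerodynamic drag equation; the drag coefficient and frontal area below are hypothetical, airliner-scale round numbers, not figures from this article.

```python
# Standard drag equation: F = 0.5 * rho * Cd * A * v^2
# rho = air density (kg/m^3); Cd and area_m2 are hypothetical round numbers.
def drag_force(v_mps, rho=1.225, cd=0.03, area_m2=120.0):
    """Return drag in newtons for a vehicle moving at v_mps meters per second."""
    return 0.5 * rho * cd * area_m2 * v_mps ** 2

# Doubling speed quadruples drag; tripling speed gives nine times the drag.
ratio_2x = drag_force(200.0) / drag_force(100.0)  # 4.0
ratio_3x = drag_force(300.0) / drag_force(100.0)  # 9.0
```

Since propulsive power is drag force times speed, the power needed grows with the cube of speed – one more reason high-speed travel is so energy-hungry.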

Contemporary long-haul aircraft accommodate the physics in part by flying at “cruising altitude” – typically about 10,000 meters above sea level. At that elevation the atmosphere is thin enough to cause significantly less drag, while still rich enough in oxygen for combustion of the fuel. Climbing to that altitude, of course, means first fighting gravity to lift a huge machine and its passengers a very long way off the ground.

A long-haul aircraft, then, needs a high-powered engine for climbing, plus a large store of energy-dense fuel to last through all the hours of the flight. That represents a tremendous challenge for inventors hoping to design aircraft that are not powered by fossil fuels.

In Nova’s “The Great Electric Airplane Race”, the inherent problem is illustrated with this graphic:

graphic from Nova, “The Great Electric Airplane Race,” 26 May 2021

A Boeing 737 can carry up to 40,000 pounds of jet fuel. For the same energy content, the airliner would require 1.2 million pounds of batteries (at least several times the maximum take-off weight of any 737 model3). Getting that weight off the ground, and propelling it thousands of miles through the air, is obviously not going to work.
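The graphic’s weights imply roughly a 30-to-1 mass penalty for batteries, which can be checked against rough specific-energy figures; the watt-hours-per-kilogram values below are common textbook approximations, not numbers from the documentary.

```python
# Weights from the Nova graphic: equal energy content in fuel vs. batteries.
jet_fuel_lb = 40_000
battery_lb = 1_200_000
mass_ratio = battery_lb / jet_fuel_lb  # 30.0 -> batteries ~30x heavier

# That 30:1 gap is consistent with jet fuel at roughly 12,000 Wh/kg versus
# batteries at roughly 400 Wh/kg (an optimistic, assumed cell-level figure).
jet_fuel_wh_per_kg = 12_000
battery_wh_per_kg = 400
implied_ratio = jet_fuel_wh_per_kg / battery_wh_per_kg  # 30.0
```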

A wide variety of approaches are being tried to get around the drastic energy discrepancy between fossil fuels and batteries. We will consider several such strategies later in this article. First, though, we’ll take a brief look at the strategies touted by major airlines as important short-term possibilities.

“Sustainable fuel” and offsets

The International Air Transport Association gives the following roadmap for its commitment to net-zero by 2050. Anticipated emissions reductions will come in four categories:
3% – Infrastructure and operational efficiencies
13% – New technology, electric and hydrogen
19% – Offsets and carbon capture
65% – Sustainable Aviation Fuel

The tiny improvement predicted for “Infrastructure and operational efficiencies” reflects the fact that airlines have already spent more than half a century trying to wring the most efficiency out of their most costly input – fuel.

The modest emission reductions predicted to come from battery power and hydrogen reflect a recognition that these technologies, for all their possible strengths, still appear to be a poor fit for long-haul aviation.

That leaves two categories of emission reductions, “Offsets and carbon capture”, and “Sustainable Aviation Fuel”.

So-called Sustainable Aviation Fuel (SAF) is compatible with current jet engines and can provide the same lift-off power and long-distance range as fossil-derived aviation fuel. SAF is typically made from biofuel feedstocks such as vegetable oils and used cooking oils. SAF is already on the market, which might give rise to the idea that a new age of clean flight is just around the corner. (No further away, say, than 2050.)

Yet as a Comment piece in Nature notes, only 0.05 percent of the fuel currently used meets the definition of SAF.4 Trying to scale that up to meet most of the industry’s need for fuel would clearly result in competition for agricultural land. Since growing enough food to feed all the people on the ground is an increasingly difficult challenge, devoting a big share of agricultural production to flying a privileged minority of people through the skies is a terrible idea.5

In addition, it’s important to note that the burning of SAF still produces carbon emissions and climate-impacting contrails. The use of SAF is only termed “carbon neutral” because of the assumption that the biofuels are renewable, plant-based products that would decay and emit carbon anyway. That’s a dubious assumption, when there’s tremendous pressure to clear more forests, plant more hectares into monocultures, and mine soils in a rush to produce not only more food for people, but also more fuel for wood-fired electric generating stations, more ethanol to blend with gasoline, more biofuel diesel, and now biofuel SAF too. When SAF is scaled up, there’s nothing “sustainable” about it.

What about offsets? My take on carbon offsets is this: Somebody does a good thing by planting some trees. And then, on the off chance that these trees will survive to maturity and will someday sequester significant amounts of carbon, somebody offsets those trees preemptively by emitting an equivalent amount of carbon today.

Kallbekken and Victor’s more diplomatic judgement on offsets is this:

“The vast majority of offsets today and in the expected future come from forest-protection and regrowth projects. The track record of reliable accounting in these industries is poor …. These problems are essentially unfixable. Evidence is mounting that offsetting as a strategy for reaching net zero is a dead end.”6 (emphasis mine)

Summarizing the heavy reliance on offsetting and SAF in the aviation lobby’s net-zero plan, Kallbekken and Victor write, “It is no coincidence that these ideas are also the least disruptive to how the industry operates today.” The IATA “commitment to net-zero”, in other words, amounts to hoping to reach net-zero by carrying on with Business As Usual.

Contestants, start your batteries!

Articles appear in newspapers, magazines and websites on an almost daily basis, discussing new efforts to operate aircraft on battery power. Is this a realistic prospect? A particularly enthusiastic answer comes in an article from the Aeronautical Business School: “Electric aviation, with its promise of zero-emission flights, is right around the corner with many commercial projects already launched. …”7

Yet the electric aircraft now on the market or in prototyping are aimed at very short-haul trips. That reflects the reality that, in spite of intensive research and development in battery technology through recent decades, batteries are not remotely capable of meeting the energy and power requirements of large, long-haul aircraft.

The International Council on Clean Transportation (ICCT) recently published a paper on electric aircraft which shows why most flights are not in line to be electrified any time soon. Jayant Mukhopadhaya, one of the report’s co-authors, discusses the energy requirements of aircraft for four segments of the market. The following chart presents these findings: 

Table from Jayant Mukhopadhaya, “What to expect when expecting electric airplanes”, ICCT, July 14, 2022.

The chart shows the specific energy (“eb”, in watt-hours per kilogram) and energy density (“vb”, in watt-hours per liter) available in batteries today, alongside the corresponding values that would be required to power aircraft in the four major market segments. Even powering a commuter aircraft, carrying 19 passengers up to 450 km, would require a threefold improvement in the specific energy of batteries.

Larger aircraft on longer flights won’t be powered by batteries alone unless a completely new, far more capable type of battery is invented and commercialized:

“Replacing regional, narrowbody, and widebody aircraft would require roughly 6x, 9x, and 20x improvements in the specific energy of the battery pack. In the 25 years from 1991 to 2015, the specific energy and energy density of lithium-ion batteries improved by a factor of 3.”8

If the current rate of battery improvement were to continue for another 25 years, commuter aircraft carrying up to 19 passengers could be powered by batteries alone. That would constitute one very small step toward net-zero aviation – by the year 2047.
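The timeline above can be checked with simple arithmetic. The sketch below assumes (as a hypothetical, not a prediction) that batteries keep improving at the historical exponential rate of 3x per 25 years, and applies the ICCT’s required improvement factors to each market segment:

```python
import math

# Historical trend cited in the article: lithium-ion specific energy
# improved by a factor of 3 between 1991 and 2015 (~25 years).
HISTORICAL_FACTOR = 3.0
HISTORICAL_YEARS = 25

# Required battery improvement factors per the ICCT report.
required = {"commuter": 3, "regional": 6, "narrowbody": 9, "widebody": 20}

def years_needed(factor):
    """Years to reach `factor`, assuming 3x improvement every 25 years."""
    return HISTORICAL_YEARS * math.log(factor) / math.log(HISTORICAL_FACTOR)

for segment, factor in required.items():
    print(f"{segment}: ~{years_needed(factor):.0f} years")
```

On that (generous) assumption of steady exponential progress, commuter aircraft batteries arrive in about 25 years, but narrowbody-capable batteries are roughly 50 years away and widebody-capable batteries nearly 70.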

This perspective helps explain why most start-ups hoping to bring electric aircraft to market are targeting very short flights – from several hundred kilometers down to as little as 30 kilometers – and very small payloads – from one to five passengers, or freight loads of no more than a few hundred kilograms.

The Nova documentary “The Great Electric Airplane Race” took an upbeat tone, but most of the companies profiled, even if successful, would have no major impact on aviation’s carbon emissions.

Joby Aviation is touted as “the current leader in the race to fill the world with electric air taxis.” Its vehicles, which the company was aiming to have certified by 2023, would carry a pilot and four passengers. A company called KittyHawk wanted to build an Electric Vertical Take-Off and Landing (EVTOL) aircraft which, it said, could put an end to traffic congestion. The Chinese company Ehang was already offering unpiloted tourism flights, carrying two people for no more than 10 minutes.

Electric air taxis, if they became a reality after 50 years of speculation, would result in no reductions in the emissions from the current aviation industry. They would simply be an additional form of energy-intensive mobility coming onto the market.

Other companies discussed in the Nova program were working on hybrid configurations. Elroy Air’s cargo delivery vehicle, for example, would pair batteries with a combustion engine, allowing it to carry a few hundred kilograms up to 500 km.

H2Fly, based in Stuttgart, was working on a battery/hydrogen hybrid. H2Fly spokesperson Josef Kallo explained that “The energy can’t flow out of the [hydrogen fuel] cell as fast as it can from a fossil fuel engine or a battery. So there’s less power available for take-off. But it offers much more range.”

By using batteries for take-off, and hydrogen fuel cells at cruising altitude, Kallo said this technology could eventually work for an aircraft carrying up to 100 passengers with a range of 3500 km – though as of November 2020 they were working on “validating a range of nearly 500 miles”.

To summarize: electric and hybrid aviation technologies could soon power a few segments of the industry. As long as the new aircraft are replacing internal combustion engine aircraft, and not merely adding new vehicles on new routes for new markets, they could result in a small reduction in overall aviation emissions.

Yet this is a small part of the aviation picture. As Jayant Mukhopadhaya explained in September,

“2.8% of departures in 2019 were for [flights with] less than 30 passengers going less than 200 km. This increases to 3.8% if you increase the range to 400 km. The third number they quote is 800 km for 25 passengers, which would then cover 4.1% of global departures.”9

This is roughly 3–4% of departures – but it’s important to recognize this does not represent 3–4% of global passenger km or global aviation emissions. When you consider that the other 96% of departures are made by much bigger planes, carrying perhaps 10 times as many passengers and traveling up to 10 times as far, it is clear that small-plane, short-hop aviation represents just a small sliver of both the revenue base and the carbon footprint of the airline industry.
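That “small sliver” claim can be made concrete. Treating the figures above as rough illustrative assumptions (4% of departures are small short-haul flights, and the other 96% carry roughly 10 times the passengers roughly 10 times the distance), the small-plane share of passenger-kilometers works out to a tiny fraction of one percent:

```python
# Illustrative assumptions from the text, not measured data:
small_departures = 0.04   # share of departures by small, short-haul aircraft
big_departures = 0.96     # everything else
passenger_ratio = 10      # bigger planes carry ~10x the passengers...
distance_ratio = 10       # ...and fly ~10x the distance

# Passenger-km in arbitrary units (small planes = 1 passenger-unit x 1 km-unit)
small_pkm = small_departures * 1 * 1
big_pkm = big_departures * passenger_ratio * distance_ratio

share = small_pkm / (small_pkm + big_pkm)
print(f"Small-plane share of passenger-km: {share:.2%}")
```

Under these assumptions, electrifying every small short-haul flight would address well under a tenth of one percent of the industry’s passenger-kilometers.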

Short-haul flights are exactly the kind of flights that can and should be replaced in many cases by good rail or bus options. (True, there are niche cases where a short flight over a fjord or other impassable landscape can save many hours of travel – but that represents a very small share of air passenger km.)

If we are really serious about a drastic reduction in aviation emissions, by 2030 or even by 2050, there is just one currently realistic route to that goal: we need a drastic reduction in flights.

* * *

Postscript: At the beginning of October a Washington Post article asked “If a Google billionaire can’t make flying cars happen, can anyone?” The article reported that KittyHawk, the EVTOL air taxi startup highlighted by Nova in 2021 and funded by Google co-founder Larry Page, is shutting down. The article quoted Peter Rez, from Arizona State University, explaining that lithium-ion batteries “output energy at a 50 times less efficient rate than their gasoline counterparts, requiring more to be on board, adding to cost and flying car and plane weight.” This story underscores, said the Post, “how difficult it will be to get electric-powered flying cars and planes.”

*Correction: The original version of this article attributed quotes from the Nature Comment article simply to “Nature”. Authors’ names have been added to indicate this is a signed opinion article and does not reflect an official editorial position of Nature.


1  IATA, “Our Commitment to Fly Net Zero by 2050”.

2  Nova, “The Great Electric Airplane Race”, 26 May 2021.

3  Mark Finlay, “The Difference In Weight Between The Boeing 737 Family’s Many Variants”, April 24, 2022.

4  Steffen Kallbekken and David G. Victor, “A cleaner future for flight — aviation needs a radical redesign”, Nature, 16 September 2022.

5  Dan Rutherford writes, “US soy production contributes to global vegetable oil markets, and prices have spiked in recent years in part due to biofuel mandates. Diverting soy oil to jet fuel would put airlines directly in competition with food at a time when consumers are being hammered by historically high food prices.” In “Zero cheers for the supersoynic renaissance”, July 11, 2022.

6  Kallbekken and Victor, “A cleaner future for flight — aviation needs a radical redesign”, Nature, 16 September 2022.

7  Óscar Castro, “The path towards an environmentally sustainable aviation”, March 23, 2022.

8  Jayant Mukhopadhaya, “What to expect when expecting electric airplanes”, ICCT, July 14, 2022.

9  Lloyd Alter, “Air Canada Electrifies Its Lineup With Hybrid Planes”, September 20, 2022.

Photo at top of page: “Nice line up at Tom Bradley International Terminal, Los Angeles, November 10, 2013,” photo by wilco737, Creative Commons 2.0 license.

bright lights of june


In the first week of June, the last of the far-north migratory birds were still passing through. By the end of the month some local nesters were ushering fledglings out into the world.

Ruddy Turnstones and Red Knots at Port Darlington breakwater, June 5, 2022

In the meantime a wide variety of flowering plants made up for a chilly spring by growing inches a day – aided by lots of sunshine and frequent rains.

Primrose rays

But bees of all sorts have been noticeably, worryingly scarce this year. I was glad to see this bumblebee shake off the water and resume flying after a drenching shower.

Bumblebee shower

Some of the beautiful insects I first mistook for solitary bee species turned out to be flies of the hover fly family (aka “flower flies”, aka “Syrphid flies”). They make their way from flower to flower harvesting pollen, so they are important pollinators.

Fleabane after rain

Daisy fleabane is one of the first meadow flowers in our yard each spring, and the hover flies are busy.

Fleabane and Syrphid

Daisy dew

A spread of white daisies also beckons pollinators to unmown areas of the yard.

Daisy flower fly 1

Daisy flower fly 2

Virginia spiderwort blossoms, each only the size of a twenty-five cent piece, look a deep blue in shade and purple-lavender in full sun.

Virginia Spiderwort

Though I spotted what appeared to be a single small grey bumblebee visiting the spiderwort, it didn’t stick around for a photo. There was a much smaller creature grasping the spiderwort’s yellow anther – not a bee as I first thought, but likely a hover fly known as the Eastern Calligrapher.

Eastern Calligrapher

Meanwhile, overhead, the Baltimore Orioles have filled the air with chatter and song – especially as the fledglings were coaxed out of the nest.

It’s time to go

It’s such a nice nest

Perhaps the most ancient beginning-of-summer ritual, in these parts, is the march of turtles to lay their eggs. This Painted Turtle came out of the marsh and made her way across the lawn to the sand. She dug a hole for a nest just a few meters away from last year’s chosen spot, she deposited her eggs, she carefully covered them, and she tamped down the sand. We looked away for a moment, and she was gone.

Turtle procession

lakeshore medley


When you’re looking for fresh new scenery on a daily basis, the January lakeshore obliges – especially when the temperature plunges, heavy snow falls, and waves rearrange the ice, water and steam ceaselessly.

Breakwater Boulder (click image for full-screen view)

As dawn breaks frost is forming on icicles at the waterline.

Mouth of a Cave

The delicate filaments of frost are gone by the end of the day … but they’ll be back soon enough.

Sunset Arch

Gentle waves roll over pebbles at sunset, carving a path under a coating of ice.

Sunset Flow

It takes much bigger waves to topple the more massive ice formations.

Snaggletooth & Friends

Right along the coast is not always the best place to go looking for fauna, as most species of waterfowl stay well away from shore. But just a short drive to the west at Lynd Shores Conservation Area, it’s not hard to spot lots of wildlife.

White-Tailed Deer

Mourning Dove at Evening

Barred Owl

When you love ice and snow the lakeshore is a special place, not least because the sounds are just as beautiful as the sights. Here’s a short suite from the shoreline over the past week:

Photo at top of post: Branching Out (click here for full-screen view)