Counting the here-and-now costs of climate change

A review of Slow Burn: The Hidden Costs of a Warming World

Also published on Resilience.

R. Jisung Park takes us into a thought experiment. Suppose we shift attention away from the prospect of coming climate catastrophes – out-of-control wildfires, big rises in sea levels, stalling of ocean circulation currents – and we focus instead on the ways that rising temperatures are already having daily impacts on people’s lives around the world.

Might these less dramatic and less obvious global-heating costs also provide ample rationale for concerted emissions reductions?

Slow Burn by R. Jisung Park is published by Princeton University Press, April 2024.

Park is an environmental and labour economist at the University of Pennsylvania. In Slow Burn, he takes a careful look at a wide variety of recent research efforts, some of which he participated in. He reports results in several major areas: the effect of hotter days on education and learning; the effect of hotter days on human morbidity and mortality; the increase in workplace accidents during hotter weather; and the increase in conflict and violence as hot days become more frequent.

In each of these areas, he says, the harms are measurable and substantial. And in another theme that winds through each chapter, he notes that the harms of global heating fall disproportionately on the poorest people both internationally and within nations. Unless adaptation measures reflect climate justice concerns, he says, global heating will exacerbate already deadly inequalities.

Even where the effect seems obvious – many people die during heat waves – it’s not a simple matter to quantify the increased mortality. For one thing, Park notes, very cold days as well as very hot days lead to increases in mortality. In some countries (including Canada) a reduction in very cold days will result in a decrease in mortality, which may offset the rise in deaths during heat waves.

We also learn about forward mortality displacement, “where the number of deaths immediately caused by a period of high temperatures is at least partially offset by a reduction in the number of deaths in the period immediately following the hot day or days.” (Slow Burn, p 85) 

After accounting for such complicating factors, a consortium of researchers has estimated the heat-mortality relationship through the end of this century, for 40 countries representing 55 percent of global population. Park summarizes their results:

“The Climate Impact Lab researchers estimate that, without any adaptation (so, simply extrapolating current dose-response relationships into a warmer future), climate change is likely to increase mortality rates by 221 per 100,000 people. … But adaptation is projected to reduce this figure by almost two-thirds: from 221 per 100,000 to seventy-three per 100,000. The bulk of this – 78 percent of the difference – comes from higher incomes.” (pp 198-199)

Let’s look at these estimates from several angles. First, to put the lower estimate of 73 additional deaths per 100,000 people in perspective, Park notes an increase in mortality of this magnitude would be six times larger than the US annual death toll from automobile crashes, and roughly tw0-thirds the US death toll from COVID-19 in 2020. An increase in mortality of 73 per 100,000 is a big number.

Second, it seems logical that people will try to adapt to more and more severe heat waves. If they have the means, they will install or augment their air-conditioning systems, or perhaps they’ll buy homes in cooler areas. But why should anyone have confidence that most people will have higher incomes by 2100, and therefore be in a better position to adapt to heat? Isn’t it just as plausible that most people will have less income and less ability to spend money on adaptation?

Third, Park notes that inequality is already evident in heat-mortality relationships. A single day with average temperature of 90°F (32.2°C) or higher increases the annual mortality in South Asian countries by 1 percent – ten times the heat-mortality increase that the United States experiences. Yet within the United States, there is also a large difference in heat-mortality rates between rich and poor neighbourhoods.

Even in homes that have air-conditioning (globally, only about 30%), low-income people often can’t afford to run the air-conditioners enough to counteract severe heat. “Everyone uses more energy on very hot and very cold days,” Park writes. “But the poor, who have less slack in their budgets, respond more sparingly.” (p 191)

A study in California found a marked increase in utility disconnections due to delinquent payments following heat waves. A cash-strapped household, then, faces an awful choice: don’t turn up the air-conditioner even when it’s baking hot inside, and suffer the ill effects; or turn it up, get through one heat wave, but risk disconnection unless it’s possible to cut back on other important expenses in order to pay the high electric bill.

(As if to underline the point, a headline I spotted as I finished this review reported surges in predatory payday loans following extreme weather.)

The drastic adaptation measure of relocation also depends on socio-economic status. Climate refugees crossing borders get a lot of news coverage, and there’s good reason to expect this issue will grow in prominence. Yet Park finds that “the numerical majority of climate-induced refugees are likely to be those who do not have the wherewithal to make it to an international border.” (p 141) As time goes on and the financial inequities of global heating increase, it may be true that even fewer refugees have the means to get to another country: “recent studies find that gradually rising temperatures may actually reduce the rate of migration in many poorer countries.” (p 141)

Slow Burn is weak on the issue of multiple compounding factors as they will interact over several decades. It’s one thing to measure current heat-mortality rates, but quite another to project that these rates will rise linearly with temperatures 30 or 60 years from now. Suppose, as seems plausible, that a steep rise in 30°C or hotter days is accompanied by reduced food supplies due to lower yields, higher basic food prices, increased severe storms that destroy or damage many homes, and less reliable electricity grids due to storms and periods of high demand. Wouldn’t we expect, then, that the 73-per-100,000-people annual heat-related deaths estimated by the Climate Impact Lab would be a serious underestimate?

Park also writes that due to rising incomes, “most places will be significantly better able to deal with climate change in the future.” (p 229) As for efforts at reducing emissions, in Park’s opinion “it seems reasonable to suppose that thanks in part to pledged and actual emissions cuts achieved in the past few decades, the likelihood of truly disastrous warming may have declined nontrivially.” (p 218) If you don’t share his faith in economic growth, and if you lack confidence that pledged emissions cuts will be made actual, some paragraphs in Slow Burn will come across as wishful thinking.

Yet on the book’s two primary themes – that climate change is already causing major and documentable harms to populations around the world, and that climate justice concerns must be at the forefront of adaptation efforts – Park marshalls strong evidence to present a compelling case.

A fragile frankenstein

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part eight
Also published on Resilience.

Is there an imminent danger that artificial intelligence will leap-frog human intelligence, go rogue, and either eliminate or enslave the human race?

You won’t find an answer to this question in an expert consensus, because there is none.

Consider the contrasting views of Geoffrey Hinton and Yann LeCun. When they and their colleague Yoshua Bengio were awarded the 2018 Turing Prize, the three were widely praised as the “godfathers of AI.”

“The techniques the trio developed in the 1990s and 2000s,” James Vincent wrote, “enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies ….”1

Yet Hinton and LeCun don’t see eye to eye on some key issues.

Hinton made news in the spring of 2023 with his highly-publicized resignation from Google. He stepped away from the company because he had become convinced AI has become an existential threat to humanity, and he felt the need to speak out freely about this danger.

In Hinton’s view, artificial intelligence is racing ahead of human intelligence and that’s not good news: “There are very few examples of a more intelligent thing being controlled by a less intelligent thing.”2

LeCun now heads Meta’s AI division while also teaching New York University. He voices a more skeptical perspective on the threat from AI. As reported last month,

“[LeCun] believes the widespread fear that powerful A.I. models are dangerous is largely imaginary, because current A.I. technology is nowhere near human-level intelligence—not even cat-level intelligence.”3

As we dive deeper into these diverging judgements, we’ll look at a deceptively simple question: What is intelligence good for?

But here’s a spoiler alert: after reading scores of articles and books on AI over the past year, I’ve found I share the viewpoint of computer scientist Jaron Lanier.

In a New Yorker article last May Lanier wrote “The most pragmatic position is to think of A.I. as a tool, not a creature.”4 (emphasis mine) He repeated this formulation more recently:

“We usually prefer to treat A.I. systems as giant impenetrable continuities. Perhaps, to some degree, there’s a resistance to demystifying what we do because we want to approach it mystically. The usual terminology, starting with the phrase ‘artificial intelligence’ itself, is all about the idea that we are making new creatures instead of new tools.”5

This tool might be designed and operated badly or for nefarious purposes, Lanier says, perhaps even in ways that could cause our own and many other species’ extinction. Yet as a tool made and used by humans, the harm would best be attributed to humans and not to the tool.

Common senses

How might we compare different manifestations of intelligence? For many years Hinton thought electronic neural networks were a poor imitation of the human brain. But he told Will Douglas Heaven last year that he now thinks the AI neural networks have turned out to be better than human brains in important respects. While the largest AI neural networks are still small compared to human brains, they make better use of their connections:

“Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”6

Compared to people, Hinton says, the new Large Language Models learn new tasks extremely quickly.

LeCun argues that in spite of a relatively small number of neurons and connections in its brain, a cat is far smarter than the leading AI systems:

“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs. That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans.”7

I’ve turned to a dear friend, who happens to be a cat, for further insight. When we go out for our walks together, each at one end of a leash, I notice how carefully Embers sniffs this bush, that plank, or a spot on the ground where another animal appears to have scratched. I notice how his ears turn and twitch in the wind, how he sniffs and listens before proceeding over a hill.

Embers knows hunger: he once disappeared for four months and came back emaciated and full of worms. He knows where mice might be found, and he knows it can be worth a long wait in tall grass, with ears carefully focused, until a determined pounce may yield a meal. He knows anger and fear: he has been ambushed by a larger cat, suffering injuries that took long painful weeks to heal. He knows that a strong wind, or the roar of crashing waves, make it impossible for him to determine if danger lurks just behind that next bush, and so he turns away in nervous agitation and heads back to a place where he feels safe.

Embers’ ability to “understand the physical world, plan complex actions, do some level of reasoning,” it seems to me, is deeply rooted in his experience of hunger, satiety, cold, warmth, fear, anger, love, comfort. His curiosity, too, is rooted in this sensory knowledge, as is his will – his deep determination to get out and explore his surroundings every morning and every evening. Both his will and his knowledge are rooted in biology. And given that we homo sapiens are no less biological, our own will and our own knowledge also have roots in biology.

For all their abilities to manipulate and reassemble fragments of information, however, I’ve come across nothing to indicate that any AI system will experience similar depths of sensory knowledge, and nothing to indicate they will develop wills or motivations of their own.

In other words, AI systems are not creatures, they are tools.

The elevation of abstraction

“Bodies matter to minds,” writes James Bridle. “The way we perceive and act in the world is shaped by the limbs, senses and contexts we possess and inhabit.”8

However, our human ability to conceive of things, not in their bodily connectedness but in their imagined separateness, has been the facet of intelligence at the center of much recent technological progress. Bridle writes:

“Historically, scientific progress has been measured by its ability to construct reductive frameworks for the classification of the natural world …. This perceived advancement of knowledge has involved a long process of abstraction and isolation, of cleaving one thing from another in a constant search for the atomic basis of everything ….”9

The ability to abstract, to separate into classifications, to simplify, to measure the effects of specific causes in isolation from other causes, has led to sweeping civilizational changes.

When electronic computing pioneers began to dream of “artificial intelligence”, Bridle says, they were thinking of intelligence primarily as “what humans do.” Even more narrowly, they were thinking of intelligence as something separated from and abstracted from bodies, as an imagined pure process of thought.

More narrowly still, the AI tools that have received most of the funding have been tools that are useful to corporate intelligence – the kinds that can be monetized, that can be made profitable, that can extract economic value for the benefit of corporations.

The resulting tools can be used in impressively useful ways – and as discussed in previous posts in this series, in dangerous and harmful ways. To the point of this post, however, we ask instead: Could artificially intelligent tools ever become creatures in their own right? And if they did, could they survive, thrive, take over the entire world, and conquer or eliminate biology-based creatures?

Last June, economist Blair Fix published a succinct takedown of the potential threat of a rogue artificial intelligence. 

“Humans love to talk about ‘intelligence’,” Fix wrote, “because we’re convinced we possess more of it than any other species. And that may be true. But in evolutionary terms, it’s also irrelevant. You see, evolution does not care about ‘intelligence’. It cares about competence — the ability to survive and reproduce.”

Living creatures, he argued, must know how to acquire and digest food. From nematodes to homo sapiens we have the ability, quite beyond our conscious intelligence, to digest the food we need. But AI machines, for all their data-manipulating capacity, lack the most basic ability to care for themselves. In Fix’s words,

“Today’s machines may be ‘intelligent’, but they have none of the core competencies that make life robust. We design their metabolism (which is brittle) and we spoon feed them energy. Without our constant care and attention, these machines will do what all non-living matter does — wither against the forces of entropy.”10

Our “thinking machines”, like us, have their own bodily needs. Their needs, however, are vastly more complex and particular than ours are.

Humans, born as almost totally dependent creatures, can digest necessary nourishment from day one, and as we grow we rapidly develop the abilities to draw nourishment from a wide range of foods.

AI machines, on the other hand, are born and remain totally dependent on a single pure form of energy that only exists as produced through a sophisticated industrial complex: electricity, of a reliably steady and specific voltage and power. Learning to understand, manage and provide that sort of energy supply took almost all of human history to date.

Could the human-created AI tools learn to take over every step of their own vast global supply chains, thereby providing their own necessities of “life”, autonomously manufacturing more of their own kind, and escaping any dependence on human industry? Fix doesn’t think so:

“The gap between a savant program like ChatGPT and a robust, self-replicating machine is monumental. Let ChatGPT ‘loose’ in the wild and one outcome is guaranteed: the machine will go extinct.”

Some people have argued that today’s AI bots, or especially tomorrow’s bots, can quickly learn all they need to know to care and provide for themselves. After all, they can inhale the entire contents of the internet and, some say, can quickly learn the combined lessons of every scientific specialty.

But, as my elders used to tell me long before I became one of them, “book learning will only get you so far.” In the hypothetical case of an AI-bot striving for autonomy, digesting all the information on the internet would not grant assurance of survival.

It’s important, first, to recall that the science of robotics is nowhere near as developed as the science of AI. (See the previous post, Watching work, for a discussion of this issue.) Even if the AI-bot could both manipulate and understand all the science and engineering information needed to keep the artificial intelligence industrial complex running, that complex also requires a huge labour force of people with long experience in a vast array of physical skills.

“As consumers, we’re used to thinking of services like electricity, cellular networks, and online platforms as fully automated,” Timothy B. Lee wrote in Slate last year. “But they’re not. They’re extremely complex and have a large staff of people constantly fixing things as they break. If everyone at Google, Amazon, AT&T, and Verizon died, the internet would quickly grind to a halt—and so would any superintelligent A.I. connected to it.”11

In order to rapidly dispense with the need for a human labour force, a rogue cohort of AI-bots would need a sudden quantum leap in robotics. The AI-bots would need to be able to manipulate every type of data, but also every type of physical object. Lee summarizes the obstacles:

“Today there are far fewer industrial robots in the world than human workers, and the vast majority of them are special-purpose robots designed to do a specific job at a specific factory. There are few if any robots with the agility and manual dexterity to fix overhead power lines or underground fiber-optic cables, drive delivery trucks, replace failing servers, and so forth. Robots also need human beings to repair them when they break, so without people the robots would eventually stop functioning too.”

The information available on the internet, vast as it is, has a lot of holes. How many companies have thoroughly documented all of their institutional knowledge, such that an AI-bot could simply inhale all the knowledge essential to each company’s functions? To dispense with the human labour force, the AI-bot would need such documentation for every company that occupies every significant niche in the artificial intelligence industrial complex.

It seems clear, then, that a hypothetical AI overlord could not afford to get rid of a human work force, certainly not in a short time frame. And unless it could dispense with that labour force very soon, it would also need farmers, food distributors, caregivers, parents to raise and teachers to educate the next generation of workers – in short, it would need human society writ large.

But could it take full control of this global workforce and society by some combination of guile or force?

Lee doesn’t think so. “Human beings are social creatures,” he writes. “We trust longtime friends more than strangers, and we are more likely to trust people we perceive as similar to ourselves. In-person conversations tend to be more persuasive than phone calls or emails. A superintelligent A.I. would have no friends or family and would be incapable of having an in-person conversation with anybody.”

It’s easy to imagine a rogue AI tricking some people some of the time, just as AI-enhanced extortion scams can fool many people into handing over money or passwords. But a would-be AI overlord would need to manipulate and control all of the people involved in keeping the industrial supply chain operating smoothly, regardless of the myriad possibilities for sabotage.

Tools and their dangerous users

A frequently discussed scenario is that AI could speed up the development of new and more lethal chemical poisons, new and more lethal microbes, and new, more lethal, and remotely-targeted munitions. All of these scenarios are plausible. And all of these scenarios, to the extent that they come true, will represent further increments in our already advanced capacities to threaten all life and to risk human extinction.

At the beginning of the computer age, after all, humans invented and then constructed enough nuclear weapons to wipe out all human life. Decades ago, we started producing new lethal chemicals on a massive scale, and spreading them with abandon throughout the global ecosystem. We have only a sketchy understanding of how all these chemicals interact with existing life forms, or with new life forms we may spawn through genetic engineering.

There are already many examples of how effective AI can be as a tool for disinformation campaigns. This is a further increment in the progression of new tools which were quickly put to use for disinformation. From the dawn of writing, to the development of low-cost printed materials, to the early days of broadcast media, each technological extension of our intelligence has been used to fan genocidal flames of fear and hatred.

We are already living with, and possibly dying with, the results of a decades-long, devastatingly successful disinformation project, the well-funded campaign by fossil fuel corporations to confuse people about the climate impacts of their own lucrative products.

AI is likely to introduce new wrinkles to all these dangerous trends. But with or without AI, we have the proven capacity to ruin our own world.

And if we drive ourselves to extinction, the AI-bots we have created will also die, as soon as the power lines break and the batteries run down.


Notes

1 James Vincent, “‘Godfathers of AI’ honored with Turing Award, the Nobel Prize of computing,” The Verge, 27 March 2019.

2 As quoted by Timothy B. Lee in “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 9 May 2023.

3 Sissi Cao, “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” Observer, 15 February 2024.

4 Jaron Lanier, “There is No A.I.,” New Yorker, 20 April 2023.

5 Jaron Lanier, “How to Picture A.I.,” New Yorker, 1 March 2024.

6 Quoted in “Geoffrey Hinton tells us why he’s now scared of the tech he helped build,” by Will Douglas Heaven, MIT Technology Review, 2 May 2023.

7 Quoted in “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” by Sissi Cao, Observer, 15 February 2024.

8 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Picador MacMillan, 2022; page 38.

9 Bridle, Ways of Being, page 100.

10 Blair Fix, “No, AI Does Not Pose an Existential Risk to Humanity,” Economics From the Top Down, 10 June 2023.

11 Timothy B. Lee, “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 2 May 2023.


Illustration at top of post: Fragile Frankenstein, by Bart Hawkins Kreps, from: “Artificial Neural Network with Chip,” by Liam Huang, Creative Commons license, accessed via flickr; “Native wild and dangerous animals,” print by Johann Theodor de Bry, 1602, public domain, accessed at Look and Learn; drawing of robot courtesy of Judith Kreps Hawkins.

The existential threat of artificial stupidity

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part seven
Also published on Resilience.

One headline about artificial intelligence gave me a rueful laugh the first few times I saw it.

With minor variations headline writers have posed the question, “What if AI falls into the wrong hands?”

But AI is already in the wrong hands. AI is in the hands of a small cadre of ultra-rich influencers affiliated with corporations and governments, organizations which collectively are driving us straight towards a cliff of ecological destruction.

This does not mean, of course, that every person working on the development of artificial intelligence is a menace, nor that every use of artificial intelligence will be destructive.

But we need to be clear about the socio-economic forces behind the AI boom. Otherwise we may buy the illusion that our linear, perpetual-growth-at-all-costs economic system has somehow given birth to a magically sustainable electronic saviour.

The artificial intelligence industrial complex is an astronomically expensive enterprise, pushing its primary proponents to rapidly implement monetized applications. As we will see, those monetized applications are either already in widespread use, or are being promised as just around the corner. First, though, we’ll look at why AI is likely to be substantially controlled by those with the deepest pockets.

“The same twenty-five billionaires”

CNN host Fareed Zakaria asked the question “What happens if AI gets into the wrong hands?” in a segment in January. Interviewing Mustafa Suleyman, Inflection AI founder and Google DeepMind co-founder, Zakaria framed the issue this way:

“You have kind of a cozy elite of a few of you guys. It’s remarkable how few of you there are, and you all know each other. You’re all funded by the same twenty-five billionaires. But once you have a real open source revolution, which is inevitable … then it’s out there, and everyone can do it.”1

Some of this is true. OpenAI was co-founded by Sam Altman and Elon Musk. Their partnership didn’t last long and Musk has founded a competitor, x.AI. OpenAI has received $10 billion from Microsoft, while Amazon has invested $4 billion and Alphabet (Google) has invested $300 million in AI startup Anthropic. Year-old company Inflection AI has received $1.3 billion from Microsoft and chip-maker Nvidia.2

Meanwhile Mark Zuckerberg says Meta’s biggest area of investment is now AI, and the company is expected to spend about $9 billion this year just to buy chips for its AI computer network.3 Companies including Apple, Amazon, and Alphabet are also investing heavily in AI divisions within their respective corporate structures.

Microsoft, Amazon and Alphabet all earn revenue from their web services divisions which crunch data for many other corporations. Nvidia sells the chips that power the most computation-intensive AI applications.

But whether an AI startup rents computer power in the “cloud”, or builds its own supercomputer complex, creating and training new AI models is expensive. As Fortune reported in January, 

“Creating an end-to-end model from scratch is massively resource intensive and requires deep expertise, whereas plugging into OpenAI or Anthropic’s API is as simple as it gets. This has prompted a massive shift from an AI landscape that was ‘model-forward’ to one that’s ‘product-forward,’ where companies are primarily tapping existing models and skipping right to the product roadmap.”4

The huge expense of building AI models also has implications for claims about “open source” code. As Cory Doctorow has explained,

“Not only is the material that ‘open AI’ companies publish insufficient for reproducing their products, even if those gaps were plugged, the resource burden required to do so is so intense that only the largest companies could do so.”5

Doctorow’s aim in the above-cited article was to debunk the claim that the AI complex is democratising access to its products and services. Yet this analysis also has implications for Fareed Zaharia’s fears of unaffiliated rogue actors doing terrible things with AI.

Individuals or small organizations may indeed use a major company’s AI engine to create deepfakes and spread disinformation, or perhaps even to design dangerously mutated organisms. Yet the owners of the AI models determine who has access to which models and under which terms. Thus unaffiliated actors can be barred from using particular models, or charged sufficiently high fees that using a given AI engine is not feasible.

So while the danger from unaffiliated rogue actors is real, I think the more serious danger is from the owners and funders of large AI enterprises. In other words, the biggest dangers come not from those into whose hands AI might fall, but from those whose hands are already all over AI.

Command and control

As discussed earlier in this series, the US military funded some of the earliest foundational projects in artificial intelligence, including the “perceptron” in 19566 and WordNet semantic database beginning in 1985.7

To this day military and intelligence agencies remain major revenue sources for AI companies. Kate Crawford writes that the intentions and methods of intelligence agencies continue to shape the AI industrial complex:

“The AI and algorithmic systems used by the state, from the military to the municipal level, reveal a covert philosophy of en masse infrastructural command and control via a combination of extractive data techniques, targeting logics, and surveillance.”8

As Crawford points out, the goals and methods of high-level intelligence agencies “have spread to many other state functions, from local law enforcement to allocating benefits.” China-made surveillance cameras, for example, were installed in New Jersey and paid for under a COVID relief program.9 Artificial intelligence bots can enforce austerity policies by screening – and disallowing – applications for government aid. Facial-recognition cameras and software, meanwhile, are spreading rapidly and making it easier for police forces to monitor people who dare to attend political protests.

There is nothing radically new, of course, in the use of electronic communications tools for surveillance. Eleven years ago, Edward Snowden famously revealed the expansive plans of the “Five Eyes” intelligence agencies to monitor all internet communications.10 Decades earlier, intelligence agencies were eagerly tapping undersea communications cables.11

Increasingly important, however, is the partnership between private corporations and state agencies – a partnership that extends beyond communications companies to include energy corporations.

This public/private partnership has placed particular emphasis on suppressing activists who fight against expansions of fossil fuel infrastructure. To cite three North American examples, police and corporate teams have worked together to surveil and jail opponents of the Line 3 tar sands pipeline in Minnesota,12 protestors of the Northern Gateway pipeline in British Columbia,13 and Water Protectors trying to block a pipeline through the Standing Rock Reservation in North Dakota.14

The use of enhanced surveillance techniques in support of fossil fuel infrastructure expansions has particular relevance to the artificial intelligence industrial complex, because that complex has a fierce appetite for stupendous quantities of energy.

Upping the demand for energy

“Smashed through the forest, gouged into the soil, exploded in the grey light of dawn,” wrote James Bridle, “are the tooth- and claw-marks of Artificial Intelligence, at the exact point where it meets the earth.”

Bridle was describing sudden changes in the landscape of north-west Greece after the Spanish oil company Repsol was granted permission to drill exploratory oil wells. Repsol teamed up with IBM’s Watson division “to leverage cognitive technologies that will help transform the oil and gas industry.”

IBM was not alone in finding paying customers for nascent AI among fossil fuel companies. In 2018 Google welcomed oil companies to its Cloud Next conference, and in 2019 Microsoft hosted the Oil and Gas Leadership Summit in Houston. Not to be outdone, Amazon has eagerly courted petroleum prospectors for its cloud infrastructure.

As Bridle writes, the intent of the oil companies and their partners includes “extracting every last drop of oil from under the earth” – regardless of the fact that if we burn all the oil already discovered we will push the climate system past catastrophic tipping points. “What sort of intelligence seeks not merely to support but to escalate and optimize such madness?”

The madness, though, is eminently logical:

“Driven by the logic of contemporary capitalism and the energy requirements of computation itself, the deepest need of an AI in the present era is the fuel for its own expansion. What it needs is oil, and it increasingly knows where to find it.”15

AI runs on electricity, not oil, you might say. But as discussed at greater length in Part Two of this series, the mining, refining, manufacturing and shipping of all the components of AI servers remains reliant on the fossil-fueled industrial supply chain. Furthermore, the electricity that powers the data-gathering cloud is also, in many countries, produced in coal- or gas-fired generators.

Could artificial intelligence be used to speed a transition away from reliance on fossil fuels? In theory perhaps it could. But in the real world, the rapid growth of AI is making the transition away from fossil fuels an even more daunting challenge.

“Utility projections for the amount of power they will need over the next five years have nearly doubled and are expected to grow,” Evan Halper reported in the Washington Post earlier this month. Why the sudden spike?

“A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing.”

The jump in demand from AI is in addition to – and greatly complicates – the move to electrify home heating and car-dependent transportation:

“It is all happening at the same time the energy transition is steering large numbers of Americans to rely on the power grid to fuel vehicles, heat pumps, induction stoves and all manner of other household appliances that previously ran on fossil fuels.”

The effort to maintain and increase overall energy consumption, while paying lip-service to transition away from fossil fuels, is having a predictable outcome: “The situation … threatens to stifle the transition to cleaner energy, as utility executives lobby to delay the retirement of fossil fuel plants and bring more online.”16

The motive forces of the artificial industrial intelligence complex, then, include the extension of surveillance, and the extension of climate- and biodiversity-destroying fossil fuel extraction and combustion. But many of those data centres are devoted to a task that is also central to contemporary capitalism: the promotion of consumerism.

Thou shalt consume more today than yesterday

As of March 13, 2024, both Alphabet (parent of Google) and Meta (parent of Facebook) ranked among the world’s ten biggest corporations as measured by either market capitalization or earnings.17 Yet to an average computer user these companies are familiar primarily for supposedly “free” services including Google Search, Gmail, Youtube, Facebook and Instagram.

These services play an important role in the circulation of money, of course – their function is to encourage people to spend more money than they otherwise would, for all types of goods or services, whether or not they actually need or even desire more goods and services. This function is accomplished through the most elaborate surveillance infrastructures yet invented, harnessed to an advertising industry that uses the surveillance data to better target ads and to better sell products.

This role in extending consumerism is a fundamental element of the artificial intelligence industrial complex.

In 2011, former Facebook employee Jeff Hammerbacher summed it up: “The best minds of my generation are thinking about how to make people click ads. That sucks.”18

Working together, many of the world’s most skilled behavioural scientists, software engineers and hardware engineers devote themselves to nudging people to spend more time online looking at their phones, tablets and computers, clicking ads, and feeding the data stream.

We should not be surprised that the companies most involved in this “knowledge revolution” are assiduously promoting their AI divisions. As noted earlier, both Google and Facebook are heavily invested in AI. And Open AI, funded by Microsoft and famous for making ChatGPT almost a household name, is looking at ways to make  their investment pay off.

By early 2023, Open AI’s partnership with “strategy and digital application delivery” company Bain had signed up its first customer: The Coca-Cola Company.19

The pioneering effort to improve the marketing of sugar water was hailed by Zack Kass, Head of Go-To-Market at OpenAI: “Coca-Cola’s vision for the adoption of OpenAI’s technology is the most ambitious we have seen of any consumer products company ….”

On its website, Bain proclaimed:

“We’ve helped Coca-Cola become the first company in the world to combine GPT-4 and DALL-E for a new AI-driven content creation platform. ‘Create Real Magic’ puts the power of generative AI in consumers’ hands, and is one example of how we’re helping the company augment its world-class brands, marketing, and consumer experiences in industry-leading ways.”20

The new AI, clearly, has the same motive as the old “slow AI” which is corporate intelligence. While a corporation has been declared a legal person, and therefore might be expected to have a mind, this mind is a severely limited, sociopathic entity with only one controlling motive – the need to increase profits year after year with no end. (This is not to imply that all or most employees of a corporation are equally single-minded, but any noble motives  they may have must remain subordinate to the profit-maximizing legal charter of the corporation.) To the extent that AI is governed by corporations, we should expect that AI will retain a singular, sociopathic fixation with increasing profits.

Artificial intelligence, then, represents an existential threat to humanity not because of its newness, but because it perpetuates the corporate imperative which was already leading to ecological disaster and civilizational collapse.

But should we fear that artificial intelligence threatens us in other ways? Could AI break free from human control, supersede all human intelligence, and either dispose of us or enslave us? That will be the subject of the next installment.


Notes

1  GPS Web Extra: What happens if AI gets into the wrong hands?”, CNN, 7 January 2024.

2 Mark Sweney, “Elon Musk’s AI startup seeks to raise $1bn in equity,” The Guardian, 6 December 2023.

3 Jonathan Vanian, “Mark Zuckerberg indicates Meta is spending billions of dollars on Nvidia AI chips,” CNBC, 18 January 2024.

4 Fortune Eye On AI newsletter, 25 January 2024.

5 Cory Doctorow, “‘Open’ ‘AI’ isn’t”, Pluralistic, 18 August 2023.

6 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

7 “WordNet,” on Scholarly Community Encyclopedia, accessed 11 March 2024.

8 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.

9 Jason Koehler, “New Jersey Used COVID Relief Funds to Buy Banned Chinese Surveillance Cameras,” 404 Media, 3 January 2024.

10 Glenn Greenwald, Ewen MacAskill and Laura Poitras, “Edward Snowden: the whistleblower behind the NSA surveillance revelations,” The Guardian, 11 June 2013.

11 The Creepy, Long-Standing Practice of Undersea Cable Tapping,” The Atlantic, Olga Kazhan, 16 July 2013

12 Alleen Brown, “Pipeline Giant Enbridge Uses Scoring System to Track Indigenous Opposition,” 23 January, 2022, part one of the seventeen-part series “Policing the Pipeline” in The Intercept.

13 Jeremy Hainsworth, “Spy agency CSIS allegedly gave oil companies surveillance data about pipeline protesters,” Vancouver Is Awesome, 8 July 2019.

14 Alleen Brown, Will Parrish, Alice Speri, “Leaked Documents Reveal Counterterrorism Tactics Used at Standing Rock to ‘Defeat Pipeline Insurgencies’”, The Intercept, 27 May 2017.

15 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Farrar, Straus and Giroux, 2023; pages 3–7.

16 Evan Halper, “Amid explosive demand, America is running out of power,” Washington Post, 7 March 2024.

17 Source: https://companiesmarketcap.com/, 13 March 2024.

18 As quoted in Fast Company, “Why Data God Jeffrey Hammerbacher Left Facebook To Found Cloudera,” 18 April 2013.

19 PRNewswire, “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI,” 21 February 2023.

20 Bain & Company website, accessed 13 March 2024.


Image at top of post by Bart Hawkins Kreps from public domain graphics.

Farming on screen

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part six
Also published on Resilience.

What does the future of farming look like? To some pundits the answer is clear: “Connected sensors, the Internet of Things, autonomous vehicles, robots, and big data analytics will be essential in effectively feeding tomorrow’s world. The future of agriculture will be smart, connected, and digital.”1

Proponents of artificial intelligence in agriculture argue that AI will be key to limiting or reversing biodiversity loss, reducing global warming emissions, and restoring resilience to ecosystems that are stressed by climate change.

There are many flavours of AI and thousands of potential applications for AI in agriculture. Some of them may indeed prove helpful in restoring parts of ecosystems.

But there are strong reasons to expect that AI in agriculture will be dominated by the same forces that have given the world a monoculture agri-industrial complex overwhelmingly dependent on fossil fuels. There are many reasons why we might expect that agri-industrial AI will lead to more biodiversity loss, more food insecurity, more socio-economic inequality, more climate vulnerability. To the extent that AI in agriculture bears fruit, many of these fruits are likely to be bitter.

Optimizing for yield

A branch of mathematics known as optimization has played a large role in the development of artificial intelligence. Author Coco Krumme, who earned a PhD in mathematics from MIT, traces optimization’s roots back hundreds of years and sees optimization in the development of contemporary agriculture.

In her book Optimal Illusions: The False Promise of Optimization, she writes,

“Embedded in the treachery of optimals is a deception. An optimization, whether it’s optimizing the value of an acre of land or the on-time arrival rate of an airline, often involves collapsing measurement into a single dimension, dollars or time or something else.”2

The “single dimensions” that serve as the building blocks of optimization are the result of useful, though simplistic, abstractions of the infinite complexities of our world. In agriculture, for example, how can we identify and describe the factors of soil fertility? One way would be to describe truly healthy soil as soil that contains a diverse microbial community, thriving among networks of fungal mycelia, plant roots, worms, and insect larvae. Another way would be to note that the soil contains sufficient amounts of at least several chemical elements including carbon, nitrogen, phosphorus, potassium. The second method is an incomplete abstraction, but it has the big advantage that it lends itself to easy quantification, calculation, and standardized testing. Coupled with the availability of similar simple quantified fertilizers, this method also allows for quick, “efficient,” yield-boosting soil amendments.

In deciding what are the optimal levels of certain soil nutrients, of course, we must also give an implicit or explicit answer to this question: “Optimal for what?” If the answer is, “optimal for soya production”, we are likely to get higher yields of soya – even if the soil is losing many of the attributes of health that we might observe through a less abstract lens. Krumme describes the gradual and eventual results of this supposedly scientific agriculture:

“It was easy to ignore, for a while, the costs: the chemicals harming human health, the machinery depleting soil, the fertilizer spewing into the downstream water supply.”3

The social costs were no less real than the environmental costs: most farmers, in countries where industrial agriculture took hold, were unable to keep up with the constant pressure to “go big or go home”. So they sold their land to the fewer remaining farmers who farmed bigger farms, and rural agricultural communities were hollowed out.

“But just look at those benefits!”, proponents of industrialized agriculture can say. Certainly yields per hectare of commodity crops climbed dramatically, and this food was raised by a smaller share of the work force.

The extent to which these changes are truly improvements is murky, however, when we look beyond the abstractions that go into the optimization models. We might want to believe that “if we don’t count it, it doesn’t count” – but that illusion won’t last forever.

Let’s start with social and economic factors. Coco Krumme quotes historian Paul Conkin on this trend in agricultural production: “Since 1950, labor productivity per hour of work in the non-farm sectors has increased 2.5 fold; in agriculture, 7-fold.”4

Yet a recent paper by Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause finds:

“Industrial farming discourse promotes the perception that there is a positive relationship—the larger the farm, the greater the productivity. Our objective is to demonstrate that based on the data at the centre of this debate, on average, small farms actually produce more food on less land ….”5

Here’s the nub of the problem: productivity statistics depend on what we count, and what we don’t count, when we tally input and output. Labour productivity in particular is usually calculated in reference to Gross Domestic Product, which is the sum of all monetary transactions.

Imagine this scenario, which has analogs all over the world. Suppose I pick a lot of apples, I trade a bushel of them with a neighbour, and I receive a piglet in return. The piglet eats leftover food scraps and weeds around the yard, while providing manure that fertilizes the vegetable garden. Several months later I butcher the pig and share the meat with another neighbour who has some chickens and who has been sharing the eggs. We all get delicious and nutritious food – but how much productivity is tallied? None, because none of these transactions are measured in dollars nor counted in GDP.

In many cases, of course, some inputs and outputs are counted while others are not. A smallholder might buy a few inputs such as feed grain, and might sell some products in a market “official” enough to be included in economic statistics. But much of the smallholder’s output will go to feeding immediate family or neighbours without leaving a trace in GDP.

If GDP had been counted when this scene was depicted, the sale of Spratt’s Pure Fibrine poultry feed may have been the only part of the operation that would “count”. Image: “Spratts patent “pure fibrine” poultry meal & poultry appliances”, from Wellcome Collection, circa 1880–1889, public domain.

Knezevic et al. write, “As farm size and farm revenue can generally be objectively measured, the productivist view has often used just those two data points to measure farm productivity.” However, other statisticians have put considerable effort into quantifying output in non-monetary terms, by estimating all agricultural output in terms of kilocalories.

This too is an abstraction, since a kilocalorie from sugar beets does not have the same nutritional impact as a kilocalorie from black beans or a kilocalorie from chicken – and farm output might include non-food values such as fibre for clothing, fuel for fireplaces, or animal draught power. Nevertheless, counting kilocalories instead of dollars or yuan makes possible more realistic estimates of how much food is produced by small farmers on the edge of the formal economy.

The proportions of global food supply produced on small vs. large farms is a matter of vigorous debate, and Knezevic et al. discuss some of widely discussed estimates. They defend their own estimate:

“[T]he data indicate that family farmers and smallholders account for 81% of production and food supply in kilocalories on 72% of the land. Large farms, defined as more than 200 hectares, account for only 15 and 13% of crop production and food supply by kilocalories, respectively, yet use 28% of the land.”6

They also argue that the smallest farms – 10 hectares (about 25 acres) or less – “provide more than 55% of the world’s kilocalories on about 40% of the land.” This has obvious importance in answering the question “How can we feed the world’s growing population?”7

Of equal importance to our discussion on the role of AI in agriculture, are these conclusions of Knezevic et al.: “industrialized and non-industrialized farming … come with markedly different knowledge systems,” and “smaller farms also have higher crop and non-crop biodiversity.”

Feeding the data machine

As discussed at length in previous installments, the types of artificial intelligence currently making waves require vast data sets. And in their paper advocating “Smart agriculture (SA)”, Jian Zhang et al. write, “The focus of SA is on data exploitation; this requires access to data, data analysis, and the application of the results over multiple (ideally, all) farm or ranch operations.”8

The data currently available from “precision farming” comes from large, well-capitalized farms that can afford tractors and combines equipped with GPS units, arrays of sensors tracking soil moisture, fertilizer and pesticide applications, and harvested quantities for each square meter. In the future envisioned by Zhang et al., this data collection process should expand dramatically through the incorporation of Internet of Things sensors on many more farms, plus a network allowing the funneling of information to centralized AI servers which will “learn” from data analysis, and which will then guide participating farms in achieving greater productivity at lower ecological cost. This in turn will require a 5G cellular network throughout agricultural areas.

Zhang et al. do not estimate the costs – in monetary terms, or in up-front carbon emissions and ecological damage during the manufacture, installation and operation of the data-crunching networks. An important question will be: will ecological benefits be equal to or greater than the ecological harms?

There is also good reason to doubt that the smallest farms – which produce a disproportionate share of global food supply – will be incorporated into this “smart agriculture”. Such infrastructure will have heavy upfront costs, and the companies that provide the equipment will want assurance that their client farmers will have enough cash outputs to make the capital investments profitable – if not for the farmers themselves, then at least for the big corporations marketing the technology.

A team of scholars writing in Nature Machine Intelligence concluded,

“[S]mall-scale farmers who cultivate 475 of approximately 570 million farms worldwide and feed large swaths of the so-called Global South are particularly likely to be excluded from AI-related benefits.”9

On the subject of what kind of data is available to AI systems, the team wrote,

“[T]ypical agricultural datasets have insufficiently considered polyculture techniques, such as forest farming and silvo-pasture. These techniques yield an array of food, fodder and fabric products while increasing soil fertility, controlling pests and maintaining agrobiodiversity.”

They noted that the small number of crops which dominate commodity crop markets – corn, wheat, rice, and soy in particular – also get the most research attention, while many crops important to subsistence farmers are little studied. Assuming that many of the small farmers remain outside the artificial intelligence agri-industrial complex, the data-gathering is likely to perpetuate and strengthen the hegemony of major commodities and major corporations.

Montreal Nutmeg. Today it’s easy to find images of hundreds varieties of fruit and vegetables that were popular more than a hundred years ago – but finding viable seeds or rootstock is another matter. Image: “Muskmelon, the largest in cultivation – new Montreal Nutmeg. This variety found only in Rice’s box of choice vegetables. 1887”, from Boston Public Library collection “Agriculture Trade Collection” on flickr.

Large-scale monoculture agriculture has already resulted in a scarcity of most traditional varieties of many grains, fruits and vegetables; the seed stocks that work best in the cash-crop nexus now have overwhelming market share. An AI that serves and is led by the same agribusiness interests is not likely, therefore, to preserve the crop diversity we will need to cope with an unstable climate and depleted ecosystems.

It’s marvellous that data servers can store and quickly access the entire genomes of so many species and sub-species. But it would be better if rare varieties are not only preserved but in active use, by communities who keep alive the particular knowledge of how these varieties respond to different weather, soil conditions, and horticultural techniques.

Finally, those small farmers who do step into the AI agri-complex will face new dangers:

“[A]s AI becomes indispensable for precision agriculture, … farmers will bring substantial croplands, pastures and hayfields under the influence of a few common ML [Machine Learning] platforms, consequently creating centralized points of failure, where deliberate attacks could cause disproportionate harm. [T]hese dynamics risk expanding the vulnerability of agrifood supply chains to cyberattacks, including ransomware and denial-of-service attacks, as well as interference with AI-driven machinery, such as self-driving tractors and combine harvesters, robot swarms for crop inspection, and autonomous sprayers.”10

The quantified gains in productivity due to efficiency, writes Coco Krumme, have come with many losses – and “we can think of these losses as the flip side of what we’ve gained from optimizing.” She adds,

“We’ll call [these losses], in brief: slack, place, and scale. Slack, or redundancy, cushions a system from outside shock. Place, or specific knowledge, distinguishes a farm and creates the diversity of practice that, ultimately, allows for both its evolution and preservation. And a sense of scale affords a connection between part and whole, between a farmer and the population his crop feeds.”11

AI-led “smart agriculture” may allow higher yields from major commodity crops, grown in monoculture fields on large farms all using the same machinery, the same chemicals, the same seeds and the same methods. Such agriculture is likely to earn continued profits for the major corporations already at the top of the complex, companies like John Deere, Bayer-Monsanto, and Cargill.

But in a world facing combined and manifold ecological, geopolitical and economic crises, it will be even more important to have agricultures with some redundancy to cushion from outside shock. We’ll need locally-specific knowledge of diverse food production practices. And we’ll need strong connections between local farmers and communities who are likely to depend on each other more than ever.

In that context, putting all our eggs in the artificial intelligence basket doesn’t sound like smart strategy.


Notes

1 Achieving the Rewards of Smart Agriculture,” by Jian Zhang, Dawn Trautman, Yingnan Liu, Chunguang Bi, Wei Chen, Lijun Ou, and Randy Goebel, Agronomy, 24 February 2024.

2 Coco Krumme, Optimal Illusions: The False Promise of Optimization, Riverhead Books, 2023, pg 181 A hat tip to Mark Hurst, whose podcast Techtonic introduced me to the work of Coco Krumme.

3 Optimal Illusions, pg 23.

4 Optimal Illusions, pg 25, quoting Paul Conkin, A Revolution Down on the Farm.

5 Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause, “Recalibrating Data on Farm Productivity: Why We Need Small Farms for Food Security,” Sustainability, 4 October 2023.

6 Knezevic et al., “Recalibrating the Data on Farm Productivity.”

7 Recommended reading: two farmer/writers who have conducted more thorough studies of the current and potential productivity of small farms are Chris Smaje and Gunnar Rundgren.

8 Zhang et al., “Achieving the Rewards of Smart Agriculture,” 24 February 2024.

Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh, “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities,” Nature Machine Intelligence, 23 February 2022.

10 Asaf Tzachor et al., “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities.”

11 Coco Krumme, Optimal Illusions, pg 34.


Image at top of post: “Alexander Frick, Jr. in his tractor/planter planting soybean seeds with the aid of precision agriculture systems and information,” in US Dep’t of Agriculture album “Frick Farms gain with Precision Agriculture and Level Fields”, photo for USDA by Lance Cheung, April 2021, public domain, accessed via flickr. 

Bodies, Minds, and the Artificial Intelligence Industrial Complex

Also published on Resilience.

This year may or may not be the year the latest wave of AI-hype crests and subsides. But let’s hope this is the year mass media slow their feverish speculation about the future dangers of Artificial Intelligence, and focus instead on the clear and present, right-now dangers of the Artificial Intelligence Industrial Complex.

Lost in most sensational stories about Artificial Intelligence is that AI does not and can not exist on its own, any more than other minds, including human minds, can exist independent of bodies. These bodies have evolved through billions of years of coping with physical needs, and intelligence is linked to and inescapably shaped by these physical realities.

What we call Artificial Intelligence is likewise shaped by physical realities. Computing infrastructure necessarily reflects the properties of physical materials that are available to be formed into computing machines. The infrastructure is shaped by the types of energy and the amounts of energy that can be devoted to building and running the computing machines. The tasks assigned to AI reflect those aspects of physical realities that we can measure and abstract into “data” with current tools. Last but certainly not least, AI is shaped by the needs and desires of all the human bodies and minds that make up the Artificial Intelligence Industrial Complex.

As Kate Crawford wrote in Atlas of AI,

“AI can seem like a spectral force — as disembodied computation — but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.”1

The metaphors we use for high-tech phenomena influence how we think of these phenomena. Take, for example, “the Cloud”. When we store a photo “in the Cloud” we imagine that photo as floating around the ether, simultaneously everywhere and nowhere, unconnected to earth-bound reality.

But as Steven Gonzalez Monserrate reminded us, “The Cloud is Material”. The Cloud is tens of thousands of kilometers of data cables, tens of thousands of server CPUs in server farms, hydroelectric and wind-turbine and coal-fired and nuclear generating stations, satellites, cell-phone towers, hundreds of millions of desktop computers and smartphones, plus all the people working to make and maintain the machinery: “the Cloud is not only material, but is also an ecological force.”2

It is possible to imagine “the Cloud” without an Artificial Intelligence Industrial Complex, but the AIIC, at least in its recent news-making forms, could not exist without the Cloud.

The AIIC relies on the Cloud as a source of massive volumes of data used to train Large Language Models and image recognition models. It relies on the Cloud to sign up thousands of low-paid gig workers for work on crucial tasks in refining those models. It relies on the Cloud to rent out computing power to researchers and to sell AI services. And it relies on the Cloud to funnel profits into the accounts of the small number of huge corporations at the top of the AI pyramid.

So it’s crucial that we reimagine both the Cloud and AI to escape from mythological nebulous abstractions, and come to terms with the physical, energetic, flesh-and-blood realities. In Crawford’s words,

“[W]e need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it.”3

Through a series of posts we’ll take a deeper look at key aspects of the Artificial Intelligence Industrial Complex, including:

  • the AI industry’s voracious and growing appetite for energy and physical resources;
  • the AI industry’s insatiable need for data, the types and sources of data, and the continuing reliance on low-paid workers to make that data useful to corporations;
  • the biases that come with the data and with the classification of that data, which both reflect and reinforce current social inequalities;
  • AI’s deep roots in corporate efforts to measure, control, and more effectively extract surplus value from human labour;
  • the prospect of “superintelligence”, or an AI that is capable of destroying humanity while living on without us;
  • the results of AI “falling into the wrong hands” – that is, into the hands of the major corporations that dominate AI, and which, as part of our corporate-driven economy, are driving straight towards the cliff of ecological suicide.

One thing this series will not attempt is providing a definition of “Artificial Intelligence”, because there is no workable single definition. The phrase “artificial intelligence” has come into and out of favour as different approaches prove more or less promising, and many computer scientists in recent decades have preferred to avoid the phrase altogether. Different programming and modeling techniques have shown useful benefits and drawbacks for different purposes, but it remains debatable whether any of these results are indications of intelligence.

Yet “artificial intelligence” keeps its hold on the imaginations of the public, journalists, and venture capitalists. Matteo Pasquinelli cites a popular Twitter quip that sums it up this way:

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”4

Computers, be they boxes on desktops or the phones in pockets, are the most complex of tools to come into common daily use. And the computer network we call the Cloud is the most complex socio-technical system in history. It’s easy to become lost in the detail of any one of a billion parts in that system, but it’s important to also zoom out from time to time to take a global view.

The Artificial Intelligence Industrial Complex sits at the apex of a pyramid of industrial organization. In the next installment we’ll look at the vast physical needs of that complex.


Notes

1 Kate Crawford, Atlas of AI, Yale University Press, 2021.

Steven Gonzalez Monserrate, “The Cloud is Material” Environmental Impacts of Computation and Data Storage”, MIT Schwarzman College of Computing, January 2022.

3 Crawford, Atlas of AI, Yale University Press, 2021.

Quoted by Mateo Pasquinelli in “How A Machine Learns And Fails – A Grammar Of Error For Artificial Intelligence”, Spheres, November 2019.


Image at top of post: Margaret Henschel in Intel wafer fabrication plant, photo by Carol M. Highsmith, part of a collection placed in the public domain by the photographer and donated to the Library of Congress.

How parking ate North American cities

Also published on Resilience

Forty-odd years ago when I moved from a small village to a big city, I got a lesson in urbanism from a cat who loved to roam. Navigating the streets late at night, he moved mostly under parked cars or in their shadows, intently watching and listening before quickly crossing an open lane of pavement. Parked cars helped him avoid many frightening hazards, including the horrible danger of cars that weren’t parked.

The lesson I learned was simple but naïve: the only good car is a parked car.

Yet as Henry Grabar’s new book makes abundantly clear, parking is far from a benign side-effect of car culture.

The consequences of car parking include the atrophy of many inner-city communities; a crisis of affordable housing; environmental damages including but not limited to greenhouse gas emissions; and the continued incentivization of suburban sprawl.

Paved Paradise is published by Penguin Random House, May 9, 2023

Grabar’s book is titled Paved Paradise: How Parking Explains the World. The subtitle is slightly hyperbolic, but Grabar writes that “I have been reporting on cities for more than a decade, and I have never seen another subject that is simultaneously so integral to the way things work and so overlooked.”

He illustrates his theme with stories from across the US, from New York to Los Angeles, from Chicago to Charlotte to Corvallis.

Paved Paradise is as entertaining as it is enlightening, and it should help ensure that parking starts to get the attention it deserves.

Consider these data points:

  • “By square footage, there is more housing for each car in the United States than there is housing for each person.” (page 71; all quotes in this article are from Paved Paradise)
  • “The parking scholar Todd Litman estimates it costs $4,400 to supply parking for each vehicle for a year, with drivers directly contributing just 20 percent of that – mostly in the form of mortgage payments on a home garage.” (p 81)
  • “Many American downtowns, such as Little Rock, Newport News, Buffalo, and Topeka, have more land devoted to parking than to buildings.” (p 75)
  • Parking scholar Donald Shoup estimated that in 1998, “there existed $12,000 in parking for every one of the country’s 208 million cars. Because of depreciation, the average value of each of those vehicles was just $5,500 …. Therefore, Shoup concluded, the parking stock cost twice as much as the actual vehicles themselves. (p 150)

How did American cities come to devote vast amounts of valuable real estate to car storage? Grabar goes back to basics: “Every trip must begin and end with a parking space ….” A driver needs a parking space at home, and another one at work, another one at the grocery store, and another one at the movie theatre. There are six times as many parking spaces in the US as there are cars, and the multiple is much higher in some cities.

This isn’t a crippling problem in sparsely populated areas – but most Americans live or work or shop in relatively crowded areas. As cars became the dominant mode of transportation the “parking problem” became an obsession. It took another 60 or 70 years for many urban planners to reluctantly conclude that the parking problem can not be solved by building more parking spaces.

By the dawn of the twenty-first century parking had eaten American cities. (And though Grabar limits his story to the US, parking has eaten Canadian cities too.)

Grabar found that “Just one in five cities zoned for parking in 1950. By 1970, 95 percent of U.S. cities with over twenty-five thousand people had made the parking spot as legally indispensable as the front door.” (p 69)

The Institute of Transportation Engineers theorized that every building “generated traffic”, and therefore every type of building should be required to provide at least a specified number of parking spaces. So-called “parking minimums” became a standard feature of the urban planning rulebook, with wide-ranging and long-lasting consequences.

Previously common building types could no longer be built in most areas of most American cities:

“Parking requirements helped trigger an extinction-level event for bite-size, infill apartment buildings …; the production of buildings with two to four units fell more than 90 percent between 1971 and 2021.” (p 180)

On a small lot, even if a duplex or quadplex was theoretically permitted, the required parking would eat up too much space or require the construction of unaffordable underground parking.

Commercial construction, too, was inexorably bent to the will of the parking god:

“Fast-food architecture – low-slung, compact structures on huge lots – is really the architecture of parking requirements. Buildings that repel each other like magnets of the same pole.” (p 181)

While suburban development was subsidized through vast expenditures on highways and multi-lane arterial roads, parking minimums were hollowing out urban cores. New retail developments and office complexes moved to urban edges where big tracts of land could be affordably devoted to “free” parking.

Coupled with separated land use rules – keeping workplaces away from residential or retail areas – parking minimums resulted in sprawling development. Fewer Americans lived within safe walking or cycling distance from work, school or stores. Since few people had a good alternative to driving, there needed to be lots of parking. Since new developments needed lots of extra land for that parking, they had to be built further apart – making people even more car-dependent.

As Grabar explains, the almost universal application of parking minimums does not indicate that there is no market for real estate with little or no parking. To the contrary, the combination of high demand and minimal supply means that neighbourhoods offering escape from car-dependency are priced out of reach of most Americans:

“The most expensive places to live in the country were, by and large, densely populated and walkable neighborhoods. If the market was sending a signal for more of anything, it was that.” (p 281)

Is the solution the elimination of minimum parking requirements? In some cases that has succeeded – but reversing a 70- or 80-year-old development pattern has proven more difficult in other areas. 

Resident parking on Wellington Street, South End, Boston, Massachusetts. Photo by Billy Wilson, September 2022, licensed through Creative Commons BY-NC 2.0, accessed at Flickr.

The high cost of free parking

Paved Paradise acknowledges an enormous debt to the work of UCLA professor Donald Shoup. Published in 2005, Shoup’s 773-page book The High Cost of Free Parking continues to make waves.

As Grabar explains, Shoup “rode his bicycle to work each day through the streets of Los Angeles,” and he “had the cutting perspective of an anthropologist in a foreign land.” (p 149)

While Americans get exercised about the high price they occasionally pay for parking, in fact most people park most of the time for “free.” Their parking space is paid for by tax dollars, or by store owners, or by landlords. Most of the cost of parking is shared between those who drive all the time and those who seldom or never use a car.

By Shoup’s calculations, “the annual American subsidy to parking was in the hundreds of billions of dollars.” Whether or not you had a car,

“You paid [for the parking subsidy] in the rent, in the check at the restaurant, in the collection box at church. It was hidden on your receipt from Foot Locker and buried in your local tax bill. You paid for parking with every breath of dirty air, in the flood damage from the rain that ran off the fields of asphalt, in the higher electricity bills from running an air conditioner through the urban heat-island effect, in the vanishing natural land on the outskirts of the city. But you almost never paid for it when you parked your car ….” (p 150)

Shoup’s book hit a nerve. Soon passionate “Shoupistas” were addressing city councils across the country. Some cities moved toward charging market prices for the valuable public real estate devoted to private car storage. Many cities also started to remove parking minimums from zoning codes, and some cities established parking maximums – upper limits on the number of parking spaces a developer was allowed to build.

In some cases the removal of parking minimums has had immediate positive effects. Los Angeles became a pioneer in doing away with parking minimums. A 2010 survey looked at downtown LA projects constructed following the removal of parking requirements. Without exception, Grabar writes, these projects “had constructed fewer parking spaces than would have been required by [the old] law. Developers built what buyers and renters wanted ….” (p 193) Projects which simply wouldn’t have been built under old parking rules came to market, offering buyers and tenants a range of more affordable options.

In other cities, though, the long habit of car-dependency was more tenacious. Grabar writes:

“Starting around 2015, parking minimums began to fall in city after city. But for every downtown LA, where parking-free architecture burst forth, there was another place where changing the law hadn’t changed much at all.” (p 213)

In neighbourhoods with few stores or employment prospects within a walking or cycling radius, and in cities with poor public transit, there remains a weak market for buildings with little or no parking. After generations of heavily subsidized, zoning-incentivized car-dependency,

“There were only so many American neighborhoods that even had the bones to support a car-free life …. Parking minimums were not the only thing standing between the status quo and the revival of vibrant, walkable cities.” (p 214)

There are many strands to car culture: streets that are unsafe for people outside a heavy armoured box; an acute shortage of affordable housing except at the far edges of cities; public transit that is non-existent or so infrequent that it can’t compete with driving; residential neighbourhoods that fail to provide work, shopping, or education opportunities close by. All of these factors, along with the historical provision of heavily subsidized parking, must be changed in tandem if we want safe, affordable, environmentally sustainable cities.

Though it is an exaggeration to say “parking explains the world”, Grabar makes it clear that you can’t explain the world of American cities without looking at parking.

In the meantime, sometimes it works to use parked cars to promote car-free ways of getting around. Grabar writes,

“One of [Janette] Sadik-Khan’s first steps as transportation commissioner was taking a trip to Copenhagen, where she borrowed an idea for New York: use the parked cars to protect the bike riders. By putting the bike lanes between the sidewalk and the parking lane, you had an instant wall between cyclists and speeding traffic. Cycling boomed; injuries fell ….” (p 256)

A street-wise cat I knew forty years ago would have understood.


Photo at top of page: Surface parking lot adjacent to Minneapolis Armory, adapted from photo by Zach Korb, August 2006. Licensed via Creative Commons BY-NC-2.0, accessed via Flickr. Part of his 116-photo series “Downtown Minneapolis Parking.”

A road map that misses some turns

A review of No Miracles Needed

Also published on Resilience

Mark Jacobson’s new book, greeted with hosannas by some leading environmentalists, is full of good ideas – but the whole is less than the sum of its parts.

No Miracles Needed, by Mark Z. Jacobson, published by Cambridge University Press, Feb 2023. 437 pages.

The book is No Miracles Needed: How Today’s Technology Can Save Our Climate and Clean Our Air (Cambridge University Press, Feb 2023).

Jacobson’s argument is both simple and sweeping: We can transition our entire global economy to renewable energy sources, using existing technologies, fast enough to reduce annual carbon dioxide emissions at least 80% by 2030, and 100% by 2050. Furthermore, we can do all this while avoiding any major economic disruption such as a drop in annual GDP growth, a rise in unemployment, or any drop in creature comforts. But wait – there’s more! In so doing, we will also completely eliminate pollution.

Just don’t tell Jacobson that this future sounds miraculous.

The energy transition technologies we need – based on Wind, Water and Solar power, abbreviated to WWS – are already commercially available, Jacobson insists. He contrasts the technologies he favors with “miracle technologies” such as geoengineering, Carbon Capture Storage and Utilization (CCUS), or Direct Air Capture of carbon dioxide (DAC). These latter technologies, he argues, are unneeded, unproven, expensive, and will take far too long to implement at scale; we shouldn’t waste our time on such schemes.  

The final chapter helps to understand both the hits and misses of the previous chapters. In “My Journey”, a teenage Jacobson visits the smog-cloaked cities of southern California and quickly becomes aware of the damaging health effects of air pollution:

“I decided then and there, that when I grew up, I wanted to understand and try to solve this avoidable air pollution problem, which affects so many people. I knew what I wanted to do for my career.” (No Miracles Needed, page 342)

His early academic work focused on the damages of air pollution to human health. Over time, he realized that the problem of global warming emissions was closely related. The increasingly sophisticated computer models he developed were designed to elucidate the interplay between greenhouse gas emissions, and the particulate emissions from combustion that cause so much sickness and death.

These modeling efforts won increasing recognition and attracted a range of expert collaborators. Over the past 20 years, Jacobson’s work moved beyond academia into political advocacy. “My Journey” describes the growth of an organization capable of developing detailed energy transition plans for presentation to US governors, senators, and CEOs of major tech companies. Eventually that led to Jacobson’s publication of transition road maps for states, countries, and the globe – road maps that have been widely praised and widely criticized.

In my reading, Jacobson’s personal journey casts light on key features of No Miracles Needed in two ways. First, there is a singular focus on air pollution, to the omission or dismissal of other types of pollution. Second, it’s not likely Jacobson would have received repeat audiences with leading politicians and business people if he challenged the mainstream orthodox view that GDP can and must continue to grow.

Jacobson’s road map, then, is based on the assumption that all consumer products and services will continue to be produced in steadily growing quantities – but they’ll all be WWS based.

Does he prove that a rapid transition is a realistic scenario? Not in this book.

Hits and misses

Jacobson gives us brief but marvelously lucid descriptions of many WWS generating technologies, plus storage technologies that will smooth the intermittent supply of wind- and sun-based energy. He also goes into considerable detail about the chemistry of solar panels, the physics of electricity generation, and the amount of energy loss associated with each type of storage and transmission.

These sections are aimed at a lay readership and they succeed admirably. There is more background detail, however, than is needed to explain the book’s central thesis.

The transition road map, on the other hand, is not explained in much detail. There are many references to scientific papers in which he outlines his road maps. A reader of No Miracles Needed can take Jacobson’s word that the model is a suitable representation, or you can find and read Jacobson’s articles in academic journals – but you don’t get the needed details in this book.

Jacobson explains why, at the level of a device such as a car or a heat pump, electric energy is far more efficient in producing motion or heat than is an internal combustion engine or a gas furnace. Less convincingly, he argues that electric technologies are far more energy-efficient than combustion for the production of industrial heat – while nevertheless conceding that some WWS technologies needed for industrial heat are, at best, in prototype stages.

Yet Jacobson expresses serene confidence that hard-to-electrify technologies, including some industrial processes and long-haul aviation, will be successfully transitioning to WWS processes – perhaps including green hydrogen fuel cells, but not hydrogen combustion – by 2035.

The confidence in complex global projections is often jarring. For example, Jacobson tells us repeatedly that the fully WWS energy system of 2050 “reduces end-use energy requirements by 56.4 percent” (page 271, 275).1 The expressed precision notwithstanding, nobody yet knows the precise mix of storage types, generation types, and transmission types, which have various degrees of energy efficiency, that will constitute a future WWS global system. What we should take from Jacobson’s statements is that, based on the subset of factors and assumptions – from an almost infinitely complex global energy ecosystem – which Jacobson has included in his model, the calculated outcome is a 56% end-use energy reduction.

Canada’s Premiers visit Muskrat Falls dam construction site, 2015. Photo courtesy of Government of Newfoundland and Labrador; CC BY-NC-ND 2.0 license, via Flickr.

Also jarring is the almost total disregard of any type of pollution other than that which comes from fossil fuel combustion. Jacobson does briefly mention the particles that grind off the tires of all vehicles, including typically heavier EVs. But rather than concede that these particles are toxic and can harm human and ecosystem health, he merely notes that the relatively large particles “do not penetrate so deep into people’s lungs as combustion particles do.” (page 49)

He claims, without elaboration, that “Environmental damage due to lithium mining can be averted almost entirely.” (page 64) Near the end of the book, he states that “In a 2050 100 percent WWS world, WWS energy private costs equal WWS energy social costs because WWS eliminates all health and climate costs associated with energy.” (page 311; emphasis mine)

In a culture which holds continual economic growth to be sacred, it would be convenient to believe that business-as-usual can continue through 2050, with the only change required being a switch to WWS energy.

Imagine, then, that climate-changing emissions were the only critical flaw in the global economic system. Given that assumption, is Jacobson’s timetable for transition plausible?

No. First, Jacobson proposes that “by 2022”, no new power plants be built that use coal, methane, oil or biomass combustion; and that all new appliances for heating, drying and cooking in the residential and commercial sectors “should be powered by electricity, direct heat, and/or district heating.” (page 319) That deadline has passed, and products that rely on combustion continue to be made and sold. It is a mystery why Jacobson or his editors would retain a 2022 transition deadline in a book slated for publication in 2023.

Other sections of the timeline also strain credulity. “By 2023”, the timeline says, all new vehicles in the following categories should be either electric or hydrogen fuel-cell: rail locomotives, buses, nonroad vehicles for construction and agriculture, and light-duty on-road vehicles. This is now possible only in a purely theoretical sense. Batteries adequate for powering heavy-duty locomotives and tractors are not yet in production. Even if they were in production, and that production could be scaled up within a year, the charging infrastructure needed to quickly recharge massive tractor batteries could not be installed, almost overnight, at large farms or remote construction sites around the world.

While electric cars, pick-ups and vans now roll off assembly lines, the global auto industry is not even close to being ready to switch the entire product lineup to EV only. Unless, of course, they were to cut back auto production by 75% or more until production of EV motors, batteries, and charging equipment can scale up. Whether you think that’s a frightening prospect or a great idea, a drastic shrinkage in the auto industry would be a dramatic departure from a business-as-usual scenario.

What’s the harm, though, if Jacobson’s ambitious timeline is merely pushed back by two or three years?

If we were having this discussion in 2000 or 2010, pushing back the timeline by a few years would not be as consequential. But as Jacobson explains effectively in his outline of the climate crisis, we now need both drastic and immediate actions to keep cumulative carbon emissions low enough to avoid global climate catastrophe. His timeline is constructed with the goal of reducing carbon emissions by 80% by 2030, not because those are nice round figures, but because he (and many others) calculate that reductions of that scale and rapidity are truly needed. Even one or two more years of emissions at current rates may make the 1.5°C warming limit an impossible dream.

The picture is further complicated by a factor Jacobson mentions only in passing. He writes,

“During the transition, fossil fuels, bioenergy, and existing WWS technologies are needed to produce the new WWS infrastructure. … [A]s the fraction of WWS energy increases, conventional energy generation used to produce WWS infrastructure decreases, ultimately to zero. … In sum, the time-dependent transition to WWS infrastructure may result in a temporary increase in emissions before such emissions are eliminated.” (page 321; emphasis mine)

Others have explained this “temporary increase in emissions” at greater length. Assuming, as Jacobson does, that a “business-as-usual” economy keeps growing, the vast majority of goods and services will continue, in the short term, to be produced and/or operated using fossil fuels. If we embark on an intensive, global-scale, rapid build-out of WWS infrastructures at the same time, a substantial increment in fossil fuels will be needed to power all the additional mines, smelters, factories, container ships, trucks and cranes which build and install the myriad elements of a new energy infrastructure. If all goes well, that new energy infrastructure will eventually be large enough to power its own further growth, as well as to power production of all other goods and services that now rely on fossil energy.

Unless we accept a substantial decrease in non-transition-related industrial activity, however, the road that takes us to a full WWS destination must route us through a period of increased fossil fuel use and increased greenhouse gas emissions.

It would be great if Jacobson modeled this increase to give us some guidance how big this emissions bump might be, how long it might last, and therefore how important it might be to cumulative atmospheric carbon concentrations. There is no suggestion in this book that he has done that modeling. What should be clear, however, is that any bump in emissions at this late date increases the danger of moving past a climate tipping point – and this danger increases dramatically with every passing year.


1In a tl;dr version of No Miracles Needed published recently in The Guardian, Jacobson says “Worldwide, in fact, the energy that people use goes down by over 56% with a WWS system.” (“‘No miracles needed’: Prof Mark Jacobson on how wind, sun and water can power the world”, 23 January 2023)

 


Photo at top of page by Romain Guy, 2009; public domain, CC0 1.0 license, via Flickr.

Osprey and Otter have a message for Ford

On most summer afternoons, if you gaze across Bowmanville Marsh long enough you’ll see an Osprey flying slow above the water, then suddenly dropping to the surface before rising up with a fish in its talons.

But the Osprey doesn’t nest in Bowmanville Marsh – it nests about a kilometer away in Westside Marsh. That’s where a pair of Ospreys fix up their nest each spring, and that’s where they feed one or two chicks through the summer until they can all fly away together. Quite often the fishing is better in one marsh than the other – and the Ospreys know where to go.

Otter knows this too. You might see a family of Otters in one marsh several days in a row, and then they trot over the small upland savannah to the other marsh.

Osprey and Otter know many things that our provincial government would rather not know. One of those is that the value of a specific parcel of wetland can’t be judged in isolation. Many wetland mammals, fish and birds – even the non-migratory ones – need a complex of wetlands to stay healthy.

To developers and politicians with dollar signs in their eyes, a small piece of wetland in an area with several more might seem environmentally insignificant. Otters and Ospreys and many other creatures know better. Filling in or paving over one piece of wetland can have disastrous effects for creatures that spend much of their time in other nearby wetlands.

A change in how wetlands are evaluated – so that the concept of a wetland complex is gone from the criteria – is just one of the many ecologically disastrous changes the Doug Ford government in Ontario is currently rushing through. These changes touch on most of the issues I’ve written about in this blog, from global ones like climate change to urban planning in a single city. This time I’ll focus on threats to the environment in my own small neighbourhood.

Beavers move between Bowmanville and Westside Marshes as water levels change, as food sources change in availability, and as their families grow. They have even engineered themselves a new area of wetland close to the marshes. Great Blue Herons move back and forth between the marshes and nearby creeks on a daily basis throughout the spring, summer and fall.

In our sprawl-loving Premier’s vision, neither wetlands nor farmland are nearly as valuable as the sprawling subdivisions of cookie-cutter homes that make his campaign donors rich. The Premier, who tried in 2021 to have a wetland in Pickering filled and paved for an Amazon warehouse, thinks it’s a great idea to take chunks of farmland and wetland out of protected status in the Greenbelt. One of those parcels – consisting of tilled farmland as well as forested wetland – is to be removed from the Greenbelt in my municipality of Clarington.

The Premier’s appetite for environmental destruction makes it clear that no element of natural heritage in the Greater Toronto area can be considered safe. That includes the Lake Ontario wetland complex that I spend so much time in.

This wetland area now has Provincially Significant Wetland status, but that could change in the near future. As Anne Bell of Ontario Nature explains,

“The government is proposing to completely overhaul the Ontario Wetland Evaluation System for identifying Provincially Significant Wetlands (PSWs), ensuring that very few wetlands would be deemed provincially significant in the future. Further, many if not most existing PSWs could lose that designation because of the changes, and if so, would no longer benefit from the high level of protection that PSW designation currently provides.” (Ontario Nature blog, November 10, 2022)

The Bowmanville Marsh/Westside Marsh complex is home, at some time in the year, to scores of species of birds. Some of these are already in extreme decline, and at least one is threatened.

Up to now, when evaluators were judging the significance of a particular wetland, the presence of a threatened or endangered species was a strong indicator. If the Ford government’s proposed changes go through, the weight given to threatened or endangered species will drop.

The Rusty Blackbird is a formerly numerous bird whose population has dropped somewhere between 85 – 99 percent; it stopped by the Bowmanville Marsh in September on its migration. The Least Bittern is already on the threatened species list in Ontario, but is sometimes seen in Bowmanville Marsh. If the Least Bittern or the Rusty Blackbird drop to endangered species status, will the provincial government care? And will there be any healthy wetlands remaining for these birds to find a home?

Osprey and Otter know that if you preserve a small piece of wetland, but it’s hemmed in by a busy new subdivision, that wetland is a poor home for most wildlife. Many creatures need the surrounding transitional ecozone areas for some part of their livelihood. The Heron species spend many hours a day stalking the shallows of marshes – but need tall trees nearby to nest in.

Green Heron (left) and juvenile Black-crowned Night Heron

And for some of our shyest birds, only the most secluded areas of marsh will do as nesting habitats. That includes the seldom-seen Least Bittern, as well as the several members of the Rail family who nest in the Bowmanville Marsh.

There are many hectares of cat-tail reeds in this Marsh, but the Virginia Rails, Soras and Common Gallinules only nest where the stand of reeds is sufficiently dense and extensive to disappear in, a safe distance from a road, and a safe distance from any walking path. That’s one reason I could live beside this marsh for several years before I spotted any of these birds, and before I ever figured out what was making some of the strange bird calls I often heard.

Juvenile Sora, and adult Virginia Rail with hatchling

There are people working in government agencies, of course, who have expertise in bird populations and habitats. One of the most dangerous changes now being pushed by our Premier is to take wildlife experts out of the loop, so their expertise won’t threaten the designs of big property developers.

No longer is the Ministry of Natural Resources and Forestry (MNRF) to be involved in decisions about Provincially Designated Wetland status. Furthermore, local Conservation Authorities (CAs), who also employ wetland biologists and watershed ecologists, are to be muzzled when it comes to judging the potential impacts of development proposals: 

“CAs would be prevented from entering into agreements with municipalities regarding the review of planning proposals or applications. CAs would in effect be prohibited from providing municipalities with the expert advice and information they need on environmental and natural heritage matters.” (Ontario Nature blog)

Individual municipalities, who don’t typically employ ecologists, and who will be struggling to cope with the many new expenses being forced on them by the Ford government, will be left to judge ecological impacts without outside help. In practice, that might mean they will accept whatever rosy environmental impact statements the developers put forth.

It may be an exaggeration to say that ecological ignorance will become mandatory. Let’s just say, in Doug Ford’s brave new world ecological ignorance will be strongly incentivized.

Marsh birds of Bowmanville/Westside Marsh Complex

These changes to rules governing wetlands and the Greenbelt are just a small part of the pro-sprawl, anti-environment blizzard unleashed by the Ford government in the past month. The changes have resulted in a chorus of protests from nearly every municipality, in nearly every MPP’s riding, and in media outlets large and small.

The protests need to get louder. Osprey and Otter have a message, but they need our help.


Make Your Voice Heard

Friday Dec 2, noon – 1 pm: Rally at MPP Todd McCarthy’s office, 23 King Street West in Bowmanville.

Write McCarthy at Todd.McCarthy@pc.ola.org, or phone him at 905-697-1501.

Saturday Dec 3, rally starting at 2:30 pm: in Toronto at Bay St & College St.

Send Premier Ford a message at: doug.fordco@pc.ola.org, 416-325-1941

Send Environment Minister David Piccini a message at: david.Piccini@pc.ola.org, 416-314-6790

Send Housing Minister Steve Clark a message at: Steve.Clark@pc.ola.org, 416-585-7000


All photos taken by Bart Hawkins Kreps in Bowmanville/Westside Marsh complex, Port Darlington.

Dreaming of clean green flying machines

Also published on Resilience

In common with many other corporate lobby groups, the International Air Transport Association publicly proclaims their commitment to achieving net-zero carbon emissions by 2050.1

Yet the evidence that such an achievement is likely, or even possible, is thin … to put it charitably. Unless, that is, major airlines simply shut down.

As a 2021 Nova documentary put it, aviation “is the high-hanging fruit – one of the hardest climate challenges of all.”2 That difficulty is due to the very essence of the airline business.

What has made aviation so attractive to the relatively affluent people who buy most tickets is that commercial flights maintain great speed over long distances. Aviation would have little appeal if airplanes were no faster than other means of transportation, or if they could be used only for relatively short distances. These characteristics come with rigorous energy demands.

A basic challenge for high-speed transportation – whether that’s pedaling a bike fast, powering a car fast, or propelling an airplane fast – is that the resistance from the air goes up with speed, not linearly but exponentially. As speed doubles, air resistance quadruples; as speed triples, air resistance increases by a factor of nine; and so forth.

That is one fundamental reason why no high-speed means of transportation came into use until the fossil fuel era. The physics of wind resistance become particularly important when a vehicle accelerates up to several hundred kilometers per hour or more.

Contemporary long-haul aircraft accommodate the physics in part by flying at “cruising altitude” – typically about 10,000 meters above sea level. At that elevation the atmosphere is thin enough to cause significantly less friction, while still rich enough in oxygen for combustion of the fuel. Climbing to that altitude, of course, means first fighting gravity to lift a huge machine and its passengers a very long way off the ground.

A long-haul aircraft, then, needs a high-powered engine for climbing, plus a large store of energy-dense fuel to last through all the hours of the flight. That represents a tremendous challenge for inventors hoping to design aircraft that are not powered by fossil fuels.

In Nova’s “The Great Electric Airplane Race”, the inherent problem is illustrated with this graphic:

graphic from Nova, “The Great Electric Airplane Race,” 26 May 2021

A Boeing 737 can carry up to 40,000 pounds of jet fuel. For the same energy content, the airliner would require 1.2 million pounds of batteries (at least several times the maximum take-off weight of any 737 model3). Getting that weight off the ground, and propelling it thousands of miles through the air, is obviously not going to work.

A wide variety of approaches are being tried to get around the drastic energy discrepancy between fossil fuels and batteries. We will consider several such strategies later in this article. First, though, we’ll take a brief look at the strategies touted by major airlines as important short-term possibilities.

“Sustainable fuel” and offsets

The International Air Transport Association gives the following roadmap for its commitment to net-zero by 2050. Anticipated emissions reductions will come in four categories:
3% – Infrastructure and operational efficiencies
13% – New technology, electric and hydrogen
19% – Offsets and carbon capture
65% – Sustainable Aviation Fuel

The tiny improvement predicted for “Infrastructure and operational efficiencies” reflects the fact that airlines have already spent more than half a century trying to wring the most efficiency out of their most costly input – fuel.

The modest emission reductions predicted to come from battery power and hydrogen reflects a recognition that these technologies, for all their possible strengths, still appear to be a poor fit for long-haul aviation.

That leaves two categories of emission reductions, “Offsets and carbon capture”, and “Sustainable Aviation Fuel”.

So-called Sustainable Aviation Fuel (SAF) is compatible with current jet engines and can provide the same lift-off power and long-distance range as fossil-derived aviation fuel. SAF is typically made from biofuel feedstocks such as vegetable oils and used cooking oils. SAF is already on the market, which might give rise to the idea that a new age of clean flight is just around the corner. (No further away, say, than 2050.)

Yet as a Comment piece in Nature* notes, only .05% of fuel currently used meets the definition of SAF.4 Trying to scale that up to meet most of the industry’s need for fuel would clearly result in competition for agricultural land. Since growing enough food to feed all the people on the ground is an increasingly difficult challenge, devoting a big share of agricultural production to flying a privileged minority of people through the skies is a terrible idea.5

In addition, it’s important to note that the burning of SAF still produces carbon emissions and climate-impacting contrails. The use of SAF is only termed “carbon neutral” because of the assumption that the biofuels are renewable, plant-based products that would decay and emit carbon anyway. That’s a dubious assumption, when there’s tremendous pressure to clear more forests, plant more hectares into monocultures, and mine soils in a rush to produce not only more food for people, but also more fuel for wood-fired electric generating stations, more ethanol to blend with gasoline, more biofuel diesel, and now biofuel SAF too. When SAF is scaled up, there’s nothing “sustainable” about it.

What about offsets? My take on carbon offsets is this: Somebody does a good thing by planting some trees. And then, on the off chance that these trees will survive to maturity and will someday sequester significant amounts of carbon, somebody offsets those trees preemptively by emitting an equivalent amount of carbon today.

Kallbekken and Victor’s more diplomatic judgement on offsets is this:

“The vast majority of offsets today and in the expected future come from forest-protection and regrowth projects. The track record of reliable accounting in these industries is poor …. These problems are essentially unfixable. Evidence is mounting that offsetting as a strategy for reaching net zero is a dead end.”6 (emphasis mine)

Summarizing the heavy reliance on offsetting and SAF in the aviation lobby’s net-zero plan, Kallbekken and Victor write “It is no coincidence that these ideas are also the least disruptive to how the industry operates today.” The IATA “commitment to net-zero”, basically, amounts to hoping to get to net-zero by carrying on with Business As Usual.

Contestants, start your batteries!

Articles appear in newspapers, magazines and websites on an almost daily basis, discussing new efforts to operate aircraft on battery power. Is this a realistic prospect? A particularly enthusiastic answer comes in an article from the Aeronautical Business School: “Electric aviation, with its promise of zero-emission flights, is right around the corner with many commercial projects already launched. …”7

Yet the electric aircraft now on the market or in prototyping are aimed at very short-haul trips. That reflects the reality that, in spite of intensive research and development in battery technology through recent decades, batteries are not remotely capable of meeting the energy and power requirements of large, long-haul aircraft.

The International Council on Clean Transportation (ICCT) recently published a paper on electric aircraft which shows why most flights are not in line to be electrified any time soon. Jayant Mukhopadhaya, one of the report’s co-authors, discusses the energy requirements of aircraft for four segments of the market. The following chart presents these findings: 

Table from Jayant Mukhopadhaya, “What to expect when expecting electric airplanes”, ICCT, July 14, 2022.

The chart shows the specific energy (“eb”, in Watt-hours per kilogram) and energy density (“vb”, in Watt-hours per liter) available in batteries today, plus the corresponding values that would be required to power aircraft in the four major market segments. Even powering a commuter aircraft, carrying 19 passengers up to 450 km, would require a 3-time improvement in specific energy of batteries.

Larger aircraft on longer flights won’t be powered by batteries alone unless there is a completely new, far more effective type of battery invented and commercialized:

“Replacing regional, narrowbody, and widebody aircraft would require roughly 6x, 9x, and 20x improvements in the specific energy of the battery pack. In the 25 years from 1991 to 2015, the specific energy and energy density of lithium-ion batteries improved by a factor of 3.”8

If the current rate of battery improvement were to continue for another 25 years, then, commuter aircraft carrying up to 19 passengers could be powered by batteries alone. That would constitute one very small step toward net-zero aviation – by the year 2047.

This perspective helps explain why most start-ups hoping to bring electric aircraft to market are targeting very short flights – from several hundred kilometers down to as little as 30 kilometers – and very small payloads – from one to five passengers, or freight loads of no more than a few hundred kilograms.

The Nova documentary “The Great Electric Airplane Race” took an upbeat tone, but most of the companies profiled, even if successful, would have no major impact on aviation’s carbon emissions.

Joby Aviation is touted as “the current leader in the race to fill the world with electric air taxis.” Their vehicles, which they were aiming to have certified by 2023, would carry a pilot and 4 passengers. A company called KittyHawk wanted to build an Electrical Vertical Take-Off and Landing (EVTOL) which they said could put an end to traffic congestion. The Chinese company Ehang was already offering unpiloted tourism flights, for two people and lasting no more than 10 minutes.

Electric air taxis, if they became a reality after 50 years of speculation, would result in no reductions in the emissions from the current aviation industry. They would simply be an additional form of energy-intensive mobility coming onto the market.

Other companies discussed in the Nova program were working on hybrid configurations. Elroy’s cargo delivery vehicle, for example, would have batteries plus a combustion engine, allowing it to carry a few hundred kilograms up to 500 km.

H2Fly, based in Stuttgart, was working on a battery/hydrogen hybrid. H2Fly spokesperson Joseph Kallo explained that “The energy can’t flow out of the [hydrogen fuel] cell as fast as it can from a fossil fuel engine or a battery. So there’s less power available for take-off. But it offers much more range.”

By using batteries for take-off, and hydrogen fuel cells at cruising altitude, Kallo said this technology could eventually work for an aircraft carrying up to 100 passengers with a range of 3500 km – though as of November 2020 they were working on “validating a range of nearly 500 miles”.

To summarize: electric and hybrid aviation technologies could soon power a few segments of the industry. As long as the new aircraft are replacing internal combustion engine aircraft, and not merely adding new vehicles on new routes for new markets, they could result in a small reduction in overall aviation emissions.

Yet this is a small part of the aviation picture. As Jayant Mukhopadhaya told treehugger.com in September,

“2.8% of departures in 2019 were for [flights with] less than 30 passengers going less than 200 km. This increases to 3.8% if you increase the range to 400 km. The third number they quote is 800 km for 25 passengers, which would then cover 4.1% of global departures.”9

This is roughly 3–4% of departures – but it’s important to recognize this does not represent 3–4% of global passenger km or global aviation emissions. When you consider that the other 96% of departures are made by much bigger planes, carrying perhaps 10 times as many passengers and traveling up to 10 times as far, it is clear that small-plane, short-hop aviation represents just a small sliver of both the revenue base and the carbon footprint of the airline industry.

Short-haul flights are exactly the kind of flights that can and should be replaced in many cases by good rail or bus options. (True, there are niche cases where a short flight over a fjord or other impassable landscape can save many hours of travel – but that represents a very small share of air passenger km.)

If we are really serious about a drastic reduction in aviation emissions, by 2030 or even by 2050, there is just one currently realistic route to that goal: we need a drastic reduction in flights.

* * *

Postscript: At the beginning of October a Washington Post article asked “If a Google billionaire can’t make flying cars happen, can anyone?” The article reported that KittyHawk, the EVTOL air taxi startup highlighted by Nova in 2021 and funded by Google co-founder Larry Page, is shutting down. The article quoted Peter Rez, from Arizona State University, explaining that lithium-ion batteries “output energy at a 50 times less efficient rate than their gasoline counterparts, requiring more to be on board, adding to cost and flying car and plane weight.” This story underscores, said the Post, “how difficult it will be to get electric-powered flying cars and planes.”

*Correction: The original version of this article attributed quotes from the Nature Comment article simply to “Nature”. Authors’ names have been added to indicate this is a signed opinion article and does not reflect an official editorial position of Nature.


Footnotes

IATA, “Our Commitment to Fly Net Zero by 2050”.

Nova, “The Great Electric Airplane Race” – 26 May 2021.

The Difference In Weight Between The Boeing 737 Family’s Many Variants”, by Mark Finlay, April 24, 2022.

4  Steffen Kallbekken and David G. Victor, Nature, “A cleaner future for flight — aviation needs a radical redesign”, 16 September 2022.

Dan Rutherford writes, “US soy production contributes to global vegetable oil markets, and prices have spiked in recent years in part due to biofuel mandates. Diverting soy oil to jet fuel would put airlines directly in competition with food at a time when consumers are being hammered by historically high food prices.” In “Zero cheers for the supersoynic renaissance”, July 11, 2022.

Kallbekken and Victor, Nature, “A cleaner future for flight — aviation needs a radical redesign”, 16 September 2022.

The path towards an environmentally sustainable aviation”, by Óscar Castro, March 23, 2022.

Jayant Mukhopadhaya, “What to expect when expecting electric airplanes”, ICCT, July 14, 2022.

Air Canada Electrifies Its Lineup With Hybrid Planes”, by Lloyd Alter, September 20, 2022.



Photo at top of page: “Nice line up at Tom Bradley International Terminal, Los Angeles, November 10, 2013,” photo by wilco737, Creative Commons 2.0 license, on
flickr.

Inequality, the climate crisis, and the frequent flier

ALSO PUBLISHED ON RESILIENCE

If we are to make rapid progress in reducing carbon emissions, and do so in an equitable way, does everybody need to give up flying?

No, not at all – because most people don’t fly anyway, and have never flown. And among those privileged enough to fly, only a small minority fly often.

If most people gave up flying that would have little impact on emissions – because most people fly seldom or never.

Yet major carbon emissions reductions need to happen within the next several years. That’s much faster than any revolutionary new aviation technologies can be developed, let alone rolled out on a large scale. The way to dramatically and quickly reduce aviation emissions is as simple as it is obvious: the small minority of people who fly frequently should give up most of their airline journeys.

We can see clearly where rapid progress might be made when we recognize the tight correlation between global wealth control and global emissions.

On a global scale, and also within most individual countries, both income and wealth are dramatically skewed in favour of a small percentage of the population.

In the same fashion, carbon dioxide emissions are dramatically skewed, as an overwhelming share of the emissions causing the climate crisis are due to the lifestyles of a small proportion of the population.

A relatively affluent minority of the world’s population takes nearly all of the world’s aviation journeys, and within that minority, a small percentage of people take by far the most flights.

Within that wealthiest and most polluting sliver of the world’s population, flying typically accounts for the biggest share of their generally outsize contributions to the climate crisis. This means that if they are to reduce their emissions to a level consistent with international climate accords, they will need to change flying from a frequent, routine practice into a rare, exceptional one – or stop flying altogether.

Yet in all the sectors that combine to steer our industrial societies, the people who have a significant share of influence typically belong to the frequent-flier club. That is true throughout the corporate world, in major news and entertainment media, in academia, in nearly every level of government in affluent countries, and among the socio-economic elites in non-affluent countries. In all these social sectors, it has become routine over the past 50 years to get on a plane and fly to some formerly distant place multiple times a year, whether for business or for leisure.

The preceding paragraphs outline a daunting list of topics to try to cover in one blog post. We’ll have help from some very useful graphs. Here goes ….

Follow the money

Since flying is an expensive habit, even in monetary terms, we would expect that most flying is done by the people with the most money. Here’s one way of visualizing who has the money:

Global income and wealth inequality, from the World Inequality Report 2022, by Lucas Chancel (lead author), Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, page 10.

As the chart above indicates, money is overwhelmingly concentrated in the hands of a small percentage of the global population – wealth is heavily skewed by class. And as the chart below indicates, money-making activities are overwhelmingly concentrated in some countries – wealth is heavily skewed by geography.

GDP per capita for selected regions and countries, 2010–2020, graph from Our World In Data based on World Bank data. The world average for 2020 was $16,608, while GDP per capita in wealthy countries was from 2.5 to about 4 times as high.

Ready for a surprise? You never woulda guessed, but carbon emissions are skewed in roughly the same ways.

Global Carbon Inequality, 2019, from the World Inequality Report 2022, by Lucas Chancel (lead author), Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, page 18.


Per capita emissions across the world, 2019, from the World Inequality Report 2022, by Lucas Chancel (lead author), Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, page 19.

The “Global Carbon Inequality” chart tells us that one half of the global population is responsible for only a small share, 12%, of global warming emissions. The other half is responsible for 88% of global warming emissions. And just 10% of the population is responsible for nearly half the emissions.
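Those group shares imply a stark per-person gap. A minimal sketch of the arithmetic, treating “nearly half” as 48% (my approximation, not a figure from the report):

```python
# Implied per-person emission gap, from the group shares quoted above.
# "Nearly half" for the top 10% is approximated here as 48%.

groups = {
    "bottom 50%": (0.50, 0.12),  # (population share, emissions share)
    "top 10%":    (0.10, 0.48),
}

# Per-person emissions index: emissions share divided by population share.
per_person = {name: emis / pop for name, (pop, emis) in groups.items()}

ratio = per_person["top 10%"] / per_person["bottom 50%"]
print(f"The top 10% emit roughly {ratio:.0f} times more per person "
      f"than the bottom 50%.")  # roughly 20x
```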

The “Per capita emissions across the world” chart shows the dramatic variance in emission levels across geographic regions. It might come as no surprise that both the top 10% and the middle 40% groups in North America leave most of their international rivals in a cloud of fossil fuel smoke, so to speak. Those who are modestly well off, or rich, in the US and Canada tend to live in big houses; drive a lot, often in big cars or “light trucks”; and travel by air frequently.

And in all areas of the world, the top 10% of emitters have per capita emissions far in excess of the middle 40% or lower 50% groups.

What does this mean for our collective hopes of slowing down the accelerating climate crisis? It means that most of the emission reductions must come from a relatively small share of the global population – particularly from the top 10% on a global scale, and to a lesser but still significant extent from the middle 40% within wealthy countries.

Consider this chart from the World Inequality Report.

Per capita emissions reduction requirements, US & France, from the World Inequality Report 2022, by Lucas Chancel (lead author), Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, page 128.

If we were to meet the emissions reduction targets set out for 2030 in the Paris Agreement in a fair and equitable way, the top 10% of people in the US would need to reduce their carbon footprints by 87%, and the middle 40% would need to reduce their carbon footprints by 54%. The lower 50% of the US population could actually increase their carbon footprints by 3% while being consistent with the Paris Agreement – if, that is, the upper 50% actually carried their fair share of the changes needed.
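To make the scale of those percentages concrete, here is a rough sketch. The current-footprint figures below are placeholder assumptions on my part, not numbers from the World Inequality Report; only the percentage changes come from the chart above:

```python
# Illustrative arithmetic for the US targets quoted above. The current
# per-person footprints are placeholder assumptions for illustration,
# not figures from the World Inequality Report; the percentage changes
# are the ones quoted above.

us_groups = {
    # group: (assumed current tCO2 per person per year, required change)
    "top 10%":    (55.0, -0.87),
    "middle 40%": (22.0, -0.54),
    "bottom 50%": (10.0, +0.03),
}

for group, (current, change) in us_groups.items():
    target = current * (1 + change)
    print(f"{group}: {current:.0f} t/yr now -> {target:.1f} t/yr allowed")
```

Whatever the exact starting figures, the effect of equitable targets is that per-person footprints across the three groups converge – which is exactly the point of framing the cuts this way.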

The story is much the same in France, with dramatic per capita emissions reductions needed from the top 50%.

For India and China, as shown below, the picture is significantly different.

Per capita emissions reduction requirements, India & China, from the World Inequality Report 2022, by Lucas Chancel (lead author), Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, page 129.

In both India and China, the upper 10% would need to dramatically reduce their carbon footprints to be consistent with the Paris Agreement. However, both the middle 40% and the lower 50% in those countries could dramatically increase their carbon footprints in the next eight years, if the Paris Agreement targets were not only to be met, but met in an equitable way.

Imagine for a moment that the small minority of people with large carbon footprints, both globally and within countries, made a serious effort at reducing those footprints. What aspect of their lifestyles would be the most logical place to start?

Here, after what might have seemed like a long detour, we get back to the airport.

Panorama from inside Edinburgh air traffic control room, Oct 2013, photo by NATS – UK Air Traffic Control, licensed via CC BY-NC-ND 2.0, accessed on Flickr.

A high-level view

In spite of steep increases in aviation emissions in recent decades, direct emissions from aviation are still a small slice of overall global warming emissions. At the same time, among the world’s affluent classes, per capita emissions from aviation alone are much higher than the total per capita emissions of most people in much of the world.

The explanation lies here: only a small proportion of the world’s population flies at all, and among those, a smaller proportion takes most of the flights, the longest flights, and the flights with the largest per-passenger carbon footprints.

Even within high-income countries, less than half the population gets on a plane in a given year, according to a recent article in Global Environmental Change.

And on a global scale, Tom Otley reported in 2020,

“The research says that the share of the world’s population travelling by air in 2018 was just 11 per cent, with at most 4 per cent taking international flights.” (Business Traveller)

Can we conclude that aviation emissions are spread evenly across that 11 per cent? That would be deeply misleading, because most of those 11% take just the occasional flight, while a much smaller group takes many flights.

As reported in the article “A few frequent flyers ‘dominate air travel’” on BBC News, here’s how a small share of flyers in selected countries keep airports busy:

“In the UK, 70% of flights are made by a wealthy 15% of the population …. [I]n the US, just 12% of people take two-thirds of flights. … Canada: 22% of the population takes 73% of flights …. The Netherlands: 8% of people takes 42% of flights. … China: 5% of households takes 40% of flights. … India: 1% of households takes 45% of flights.”
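One way to read those numbers is to convert each pair of percentages into an average flights-per-person ratio between the frequent-flier group and everyone else. A quick sketch, treating the US “two-thirds” as 66% and leaving out China and India, whose figures are per household:

```python
# Converting the BBC figures into average flights-per-person ratios:
# how many more flights the frequent-flier group takes, per person,
# than everyone else. ("Two-thirds" for the US is treated as 66%;
# China and India are omitted because their figures are per household.)

countries = {
    "UK":          (0.15, 0.70),  # (share of population, share of flights)
    "US":          (0.12, 0.66),
    "Canada":      (0.22, 0.73),
    "Netherlands": (0.08, 0.42),
}

for name, (pop, flights) in countries.items():
    frequent_rate = flights / pop            # flights per person, frequent group
    other_rate = (1 - flights) / (1 - pop)   # flights per person, everyone else
    print(f"{name}: roughly {frequent_rate / other_rate:.0f}x more flights "
          f"per person in the frequent-flier group")
```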

But wait – there’s more! Stefan Gössling and Andreas Humpe explain in “The global scale, distribution and growth of aviation”, “The share of the fuel used by these [frequent] air travelers is likely higher, as more frequent fliers will more often travel business or first class ….”

Flying in more luxurious fashion comes at a huge environmental cost:

“The International Council on Clean Transportation (ICCT) (2014) estimates that the carbon footprint of flying business class, first class, or in a large suite is 5.3, 9.2 or 14.8 times larger than for flying in economy class.” (Gössling and Humpe)

Due to the frequency of their flights, plus the more luxurious seating often favoured by those who can afford many flights, the most frequent 10% of fliers account for about half of all aviation emissions.
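Here is a sketch of how frequency and seat class compound. The class multipliers are the ICCT estimates quoted above; the one-tonne economy round trip is an illustrative assumption of mine, roughly the order of magnitude of a long-haul return flight:

```python
# Sketch of how flight frequency and seat class compound. The class
# multipliers are the ICCT estimates quoted above; the one-tonne
# economy round trip is an illustrative assumption, roughly the order
# of magnitude of a long-haul return flight.

ECONOMY_TONNES_PER_TRIP = 1.0

CLASS_MULTIPLIER = {
    "economy": 1.0,
    "business": 5.3,
    "first": 9.2,
    "large suite": 14.8,
}

def annual_flying_footprint(trips_per_year: int, seat_class: str) -> float:
    """Tonnes of CO2 per year from flying, under the assumptions above."""
    return trips_per_year * ECONOMY_TONNES_PER_TRIP * CLASS_MULTIPLIER[seat_class]

occasional = annual_flying_footprint(1, "economy")   # 1 tonne
frequent = annual_flying_footprint(10, "business")   # 53 tonnes
print(f"Occasional economy flier: {occasional:.0f} t/yr; "
      f"frequent business flier: {frequent:.0f} t/yr")
```

Ten business-class round trips a year put one person above fifty tonnes of CO2 from flying alone – several times the total annual footprint of even an average North American.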

Gössling and Humpe refer to these most frequent fliers as “super emitters”, noting that “[S]uper emitters may contribute to global warming at a rate 225,000 times higher than the global poor” who have almost no carbon footprint.

To summarize: aviation accounts for a relatively small percentage of overall global warming emissions, because flying is a privilege enjoyed almost exclusively by a small percentage of the affluent classes. Yet among these classes, aviation results in a large share of personal carbon footprints, especially if flying is a regular occurrence.

Our World In Data states it starkly: “Air travel dominates a frequent traveller’s individual contribution to climate change.”

The same report adds, “The average rich person emits tonnes of CO2 from flying each year – this is equivalent to the total carbon footprint of tens or hundreds of people in many countries of the world.” (emphasis mine)

If we recall some figures from earlier in this post, those individuals in the US whose carbon footprints rank in the top 10% will need to reduce those footprints by 87%, for fair compliance with the Paris Agreement.

For most of those in the very-high-carbon-emissions bracket, a drastic reduction in flying will be a necessary, though not sufficient, lifestyle change in any future that includes climate justice.

• • •

Not so fast, frequent fliers might protest. Aren’t you overlooking the possibility, perhaps even the probability, that in the near future we will have a flourishing airline industry powered by clean electricity or clean hydrogen?

That’s too complicated a subject for a blog post that’s quite long enough already.

One recurring theme in this series has been the distinction between device-level changes and system-level changes. A speedy, safe, ocean-jumping airliner that burns no fossil fuel, if such an airliner were to exist, would be a great example of a device-level change.

I don’t expect to see such an airliner making commercially viable trips within my lifetime. I’ll explain that skepticism in the next installment of this series on transportation.


Photo at top of page: Airbus airliners lined up at Chengdu, November 2015; photo by L.G. Liao, accessed at Wikimedia Commons.