A fragile Frankenstein

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part eight
Also published on Resilience.

Is there an imminent danger that artificial intelligence will leap-frog human intelligence, go rogue, and either eliminate or enslave the human race?

You won’t find an answer to this question in an expert consensus, because there is none.

Consider the contrasting views of Geoffrey Hinton and Yann LeCun. When they and their colleague Yoshua Bengio were awarded the 2018 Turing Award, the three were widely praised as the “godfathers of AI.”

“The techniques the trio developed in the 1990s and 2000s,” James Vincent wrote, “enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies ….”1

Yet Hinton and LeCun don’t see eye to eye on some key issues.

Hinton made news in the spring of 2023 with his highly publicized resignation from Google. He stepped away from the company because he had come to believe that AI poses an existential threat to humanity, and he wanted to be free to speak out about this danger.

In Hinton’s view, artificial intelligence is racing ahead of human intelligence and that’s not good news: “There are very few examples of a more intelligent thing being controlled by a less intelligent thing.”2

LeCun now heads Meta’s AI division while also teaching at New York University. He voices a more skeptical perspective on the threat from AI. As reported last month,

“[LeCun] believes the widespread fear that powerful A.I. models are dangerous is largely imaginary, because current A.I. technology is nowhere near human-level intelligence—not even cat-level intelligence.”3

As we dive deeper into these diverging judgements, we’ll look at a deceptively simple question: What is intelligence good for?

But here’s a spoiler alert: after reading scores of articles and books on AI over the past year, I’ve found I share the viewpoint of computer scientist Jaron Lanier.

In a New Yorker article last May Lanier wrote “The most pragmatic position is to think of A.I. as a tool, not a creature.”4 (emphasis mine) He repeated this formulation more recently:

“We usually prefer to treat A.I. systems as giant impenetrable continuities. Perhaps, to some degree, there’s a resistance to demystifying what we do because we want to approach it mystically. The usual terminology, starting with the phrase ‘artificial intelligence’ itself, is all about the idea that we are making new creatures instead of new tools.”5

This tool might be designed and operated badly or for nefarious purposes, Lanier says, perhaps even in ways that could cause our own and many other species’ extinction. Yet as a tool made and used by humans, the harm would best be attributed to humans and not to the tool.

Common senses

How might we compare different manifestations of intelligence? For many years Hinton thought electronic neural networks were a poor imitation of the human brain. But he told Will Douglas Heaven last year that he now thinks artificial neural networks have turned out to be better than human brains in important respects. While the largest AI neural networks are still small compared to human brains, they make better use of their connections:

“Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”6

Compared to people, Hinton says, the new large language models learn new tasks extremely quickly.

LeCun argues that in spite of a relatively small number of neurons and connections in its brain, a cat is far smarter than the leading AI systems:

“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs. That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans.”7

I’ve turned to a dear friend, who happens to be a cat, for further insight. When we go out for our walks together, each at one end of a leash, I notice how carefully Embers sniffs this bush, that plank, or a spot on the ground where another animal appears to have scratched. I notice how his ears turn and twitch in the wind, how he sniffs and listens before proceeding over a hill.

Embers knows hunger: he once disappeared for four months and came back emaciated and full of worms. He knows where mice might be found, and he knows it can be worth a long wait in tall grass, with ears carefully focused, until a determined pounce may yield a meal. He knows anger and fear: he has been ambushed by a larger cat, suffering injuries that took long painful weeks to heal. He knows that a strong wind, or the roar of crashing waves, make it impossible for him to determine if danger lurks just behind that next bush, and so he turns away in nervous agitation and heads back to a place where he feels safe.

Embers’ ability to “understand the physical world, plan complex actions, do some level of reasoning,” it seems to me, is deeply rooted in his experience of hunger, satiety, cold, warmth, fear, anger, love, comfort. His curiosity, too, is rooted in this sensory knowledge, as is his will – his deep determination to get out and explore his surroundings every morning and every evening. Both his will and his knowledge are rooted in biology. And given that we homo sapiens are no less biological, our own will and our own knowledge also have roots in biology.

For all their abilities to manipulate and reassemble fragments of information, however, I’ve come across nothing to indicate that any AI system will experience similar depths of sensory knowledge, and nothing to indicate they will develop wills or motivations of their own.

In other words, AI systems are not creatures, they are tools.

The elevation of abstraction

“Bodies matter to minds,” writes James Bridle. “The way we perceive and act in the world is shaped by the limbs, senses and contexts we possess and inhabit.”8

However, our human ability to conceive of things, not in their bodily connectedness but in their imagined separateness, has been the facet of intelligence at the center of much recent technological progress. Bridle writes:

“Historically, scientific progress has been measured by its ability to construct reductive frameworks for the classification of the natural world …. This perceived advancement of knowledge has involved a long process of abstraction and isolation, of cleaving one thing from another in a constant search for the atomic basis of everything ….”9

The ability to abstract, to separate into classifications, to simplify, to measure the effects of specific causes in isolation from other causes, has led to sweeping civilizational changes.

When electronic computing pioneers began to dream of “artificial intelligence”, Bridle says, they were thinking of intelligence primarily as “what humans do.” Even more narrowly, they were thinking of intelligence as something separated from and abstracted from bodies, as an imagined pure process of thought.

More narrowly still, the AI tools that have received most of the funding have been tools that are useful to corporate intelligence – the kinds that can be monetized, that can be made profitable, that can extract economic value for the benefit of corporations.

The resulting tools can be used in impressively useful ways – and as discussed in previous posts in this series, in dangerous and harmful ways. To the point of this post, however, we ask instead: Could artificially intelligent tools ever become creatures in their own right? And if they did, could they survive, thrive, take over the entire world, and conquer or eliminate biology-based creatures?

Last June, economist Blair Fix published a succinct takedown of the potential threat of a rogue artificial intelligence. 

“Humans love to talk about ‘intelligence’,” Fix wrote, “because we’re convinced we possess more of it than any other species. And that may be true. But in evolutionary terms, it’s also irrelevant. You see, evolution does not care about ‘intelligence’. It cares about competence — the ability to survive and reproduce.”

Living creatures, he argued, must know how to acquire and digest food. From nematodes to homo sapiens we have the ability, quite beyond our conscious intelligence, to digest the food we need. But AI machines, for all their data-manipulating capacity, lack the most basic ability to care for themselves. In Fix’s words,

“Today’s machines may be ‘intelligent’, but they have none of the core competencies that make life robust. We design their metabolism (which is brittle) and we spoon feed them energy. Without our constant care and attention, these machines will do what all non-living matter does — wither against the forces of entropy.”10

Our “thinking machines”, like us, have their own bodily needs. Their needs, however, are vastly more complex and particular than ours are.

Humans, born as almost totally dependent creatures, can digest necessary nourishment from day one, and as we grow we rapidly develop the abilities to draw nourishment from a wide range of foods.

AI machines, on the other hand, are born and remain totally dependent on a single pure form of energy that only exists as produced through a sophisticated industrial complex: electricity, of a reliably steady and specific voltage and power. Learning to understand, manage and provide that sort of energy supply took almost all of human history to date.

Could the human-created AI tools learn to take over every step of their own vast global supply chains, thereby providing their own necessities of “life”, autonomously manufacturing more of their own kind, and escaping any dependence on human industry? Fix doesn’t think so:

“The gap between a savant program like ChatGPT and a robust, self-replicating machine is monumental. Let ChatGPT ‘loose’ in the wild and one outcome is guaranteed: the machine will go extinct.”

Some people have argued that today’s AI bots, or especially tomorrow’s bots, can quickly learn all they need to know to care and provide for themselves. After all, they can inhale the entire contents of the internet and, some say, can quickly learn the combined lessons of every scientific specialty.

But, as my elders used to tell me long before I became one of them, “book learning will only get you so far.” In the hypothetical case of an AI-bot striving for autonomy, digesting all the information on the internet would not grant assurance of survival.

It’s important, first, to recall that the science of robotics is nowhere near as developed as the science of AI. (See the previous post, Watching work, for a discussion of this issue.) Even if the AI-bot could both manipulate and understand all the science and engineering information needed to keep the artificial intelligence industrial complex running, that complex also requires a huge labour force of people with long experience in a vast array of physical skills.

“As consumers, we’re used to thinking of services like electricity, cellular networks, and online platforms as fully automated,” Timothy B. Lee wrote in Slate last year. “But they’re not. They’re extremely complex and have a large staff of people constantly fixing things as they break. If everyone at Google, Amazon, AT&T, and Verizon died, the internet would quickly grind to a halt—and so would any superintelligent A.I. connected to it.”11

In order to rapidly dispense with the need for a human labour force, a rogue cohort of AI-bots would need a sudden quantum leap in robotics. The AI-bots would need to be able to manipulate not only every type of data, but also every type of physical object. Lee summarizes the obstacles:

“Today there are far fewer industrial robots in the world than human workers, and the vast majority of them are special-purpose robots designed to do a specific job at a specific factory. There are few if any robots with the agility and manual dexterity to fix overhead power lines or underground fiber-optic cables, drive delivery trucks, replace failing servers, and so forth. Robots also need human beings to repair them when they break, so without people the robots would eventually stop functioning too.”

The information available on the internet, vast as it is, has a lot of holes. How many companies have thoroughly documented all of their institutional knowledge, such that an AI-bot could simply inhale all the knowledge essential to each company’s functions? To dispense with the human labour force, the AI-bot would need such documentation for every company that occupies every significant niche in the artificial intelligence industrial complex.

It seems clear, then, that a hypothetical AI overlord could not afford to get rid of a human work force, certainly not in a short time frame. And unless it could dispense with that labour force very soon, it would also need farmers, food distributors, caregivers, parents to raise and teachers to educate the next generation of workers – in short, it would need human society writ large.

But could it take full control of this global workforce and society by some combination of guile and force?

Lee doesn’t think so. “Human beings are social creatures,” he writes. “We trust longtime friends more than strangers, and we are more likely to trust people we perceive as similar to ourselves. In-person conversations tend to be more persuasive than phone calls or emails. A superintelligent A.I. would have no friends or family and would be incapable of having an in-person conversation with anybody.”

It’s easy to imagine a rogue AI tricking some people some of the time, just as AI-enhanced extortion scams can fool many people into handing over money or passwords. But a would-be AI overlord would need to manipulate and control all of the people involved in keeping the industrial supply chain operating smoothly, regardless of the myriad possibilities for sabotage.

Tools and their dangerous users

A frequently discussed scenario is that AI could speed up the development of new and more lethal chemical poisons, new and more lethal microbes, and new, more lethal, and remotely-targeted munitions. All of these scenarios are plausible. And all of these scenarios, to the extent that they come true, will represent further increments in our already advanced capacities to threaten all life and to risk human extinction.

At the beginning of the computer age, after all, humans invented and then constructed enough nuclear weapons to wipe out all human life. Decades ago, we started producing new lethal chemicals on a massive scale, and spreading them with abandon throughout the global ecosystem. We have only a sketchy understanding of how all these chemicals interact with existing life forms, or with new life forms we may spawn through genetic engineering.

There are already many examples of how effective AI can be as a tool for disinformation campaigns. In this respect AI is only the latest in a long progression of new tools quickly put to use for disinformation. From the dawn of writing, to the development of low-cost printed materials, to the early days of broadcast media, each technological extension of our intelligence has been used to fan genocidal flames of fear and hatred.

We are already living with, and possibly dying with, the results of a decades-long, devastatingly successful disinformation project, the well-funded campaign by fossil fuel corporations to confuse people about the climate impacts of their own lucrative products.

AI is likely to introduce new wrinkles to all these dangerous trends. But with or without AI, we have the proven capacity to ruin our own world.

And if we drive ourselves to extinction, the AI-bots we have created will also die, as soon as the power lines break and the batteries run down.


Notes

1 James Vincent, “‘Godfathers of AI’ honored with Turing Award, the Nobel Prize of computing,” The Verge, 27 March 2019.

2 As quoted by Timothy B. Lee in “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 2 May 2023.

3 Sissi Cao, “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” Observer, 15 February 2024.

4 Jaron Lanier, “There is No A.I.,” New Yorker, 20 April 2023.

5 Jaron Lanier, “How to Picture A.I.,” New Yorker, 1 March 2024.

6 Quoted in “Geoffrey Hinton tells us why he’s now scared of the tech he helped build,” by Will Douglas Heaven, MIT Technology Review, 2 May 2023.

7 Quoted in “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” by Sissi Cao, Observer, 15 February 2024.

8 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Picador MacMillan, 2022; page 38.

9 Bridle, Ways of Being, page 100.

10 Blair Fix, “No, AI Does Not Pose an Existential Risk to Humanity,” Economics From the Top Down, 10 June 2023.

11 Timothy B. Lee, “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 2 May 2023.


Illustration at top of post: Fragile Frankenstein, by Bart Hawkins Kreps, from: “Artificial Neural Network with Chip,” by Liam Huang, Creative Commons license, accessed via flickr; “Native wild and dangerous animals,” print by Johann Theodor de Bry, 1602, public domain, accessed at Look and Learn; drawing of robot courtesy of Judith Kreps Hawkins.

The existential threat of artificial stupidity

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part seven
Also published on Resilience.

One headline about artificial intelligence gave me a rueful laugh the first few times I saw it.

With minor variations headline writers have posed the question, “What if AI falls into the wrong hands?”

But AI is already in the wrong hands. AI is in the hands of a small cadre of ultra-rich influencers affiliated with corporations and governments, organizations which collectively are driving us straight towards a cliff of ecological destruction.

This does not mean, of course, that every person working on the development of artificial intelligence is a menace, nor that every use of artificial intelligence will be destructive.

But we need to be clear about the socio-economic forces behind the AI boom. Otherwise we may buy the illusion that our linear, perpetual-growth-at-all-costs economic system has somehow given birth to a magically sustainable electronic saviour.

The artificial intelligence industrial complex is an astronomically expensive enterprise, pushing its primary proponents to rapidly implement monetized applications. As we will see, those monetized applications are either already in widespread use, or are being promised as just around the corner. First, though, we’ll look at why AI is likely to be substantially controlled by those with the deepest pockets.

“The same twenty-five billionaires”

CNN host Fareed Zakaria asked the question “What happens if AI gets into the wrong hands?” in a segment in January. Interviewing Mustafa Suleyman, Inflection AI founder and Google DeepMind co-founder, Zakaria framed the issue this way:

“You have kind of a cozy elite of a few of you guys. It’s remarkable how few of you there are, and you all know each other. You’re all funded by the same twenty-five billionaires. But once you have a real open source revolution, which is inevitable … then it’s out there, and everyone can do it.”1

Some of this is true. OpenAI was co-founded by Sam Altman and Elon Musk. Their partnership didn’t last long and Musk has founded a competitor, x.AI. OpenAI has received $10 billion from Microsoft, while Amazon has invested $4 billion and Alphabet (Google) has invested $300 million in AI startup Anthropic. Year-old company Inflection AI has received $1.3 billion from Microsoft and chip-maker Nvidia.2

Meanwhile Mark Zuckerberg says Meta’s biggest area of investment is now AI, and the company is expected to spend about $9 billion this year just to buy chips for its AI computer network.3 Companies including Apple, Amazon, and Alphabet are also investing heavily in AI divisions within their respective corporate structures.

Microsoft, Amazon and Alphabet all earn revenue from their web services divisions which crunch data for many other corporations. Nvidia sells the chips that power the most computation-intensive AI applications.

But whether an AI startup rents computer power in the “cloud”, or builds its own supercomputer complex, creating and training new AI models is expensive. As Fortune reported in January, 

“Creating an end-to-end model from scratch is massively resource intensive and requires deep expertise, whereas plugging into OpenAI or Anthropic’s API is as simple as it gets. This has prompted a massive shift from an AI landscape that was ‘model-forward’ to one that’s ‘product-forward,’ where companies are primarily tapping existing models and skipping right to the product roadmap.”4

The huge expense of building AI models also has implications for claims about “open source” code. As Cory Doctorow has explained,

“Not only is the material that ‘open AI’ companies publish insufficient for reproducing their products, even if those gaps were plugged, the resource burden required to do so is so intense that only the largest companies could do so.”5

Doctorow’s aim in the above-cited article was to debunk the claim that the AI complex is democratising access to its products and services. Yet this analysis also has implications for Fareed Zakaria’s fears of unaffiliated rogue actors doing terrible things with AI.

Individuals or small organizations may indeed use a major company’s AI engine to create deepfakes and spread disinformation, or perhaps even to design dangerously mutated organisms. Yet the owners of the AI models determine who has access to which models and under which terms. Thus unaffiliated actors can be barred from using particular models, or charged sufficiently high fees that using a given AI engine is not feasible.

So while the danger from unaffiliated rogue actors is real, I think the more serious danger is from the owners and funders of large AI enterprises. In other words, the biggest dangers come not from those into whose hands AI might fall, but from those whose hands are already all over AI.

Command and control

As discussed earlier in this series, the US military funded some of the earliest foundational projects in artificial intelligence, including the “perceptron” in 1956,6 and the WordNet semantic database beginning in 1985.7

To this day military and intelligence agencies remain major revenue sources for AI companies. Kate Crawford writes that the intentions and methods of intelligence agencies continue to shape the AI industrial complex:

“The AI and algorithmic systems used by the state, from the military to the municipal level, reveal a covert philosophy of en masse infrastructural command and control via a combination of extractive data techniques, targeting logics, and surveillance.”8

As Crawford points out, the goals and methods of high-level intelligence agencies “have spread to many other state functions, from local law enforcement to allocating benefits.” China-made surveillance cameras, for example, were installed in New Jersey and paid for under a COVID relief program.9 Artificial intelligence bots can enforce austerity policies by screening – and disallowing – applications for government aid. Facial-recognition cameras and software, meanwhile, are spreading rapidly and making it easier for police forces to monitor people who dare to attend political protests.

There is nothing radically new, of course, in the use of electronic communications tools for surveillance. Eleven years ago, Edward Snowden famously revealed the expansive plans of the “Five Eyes” intelligence agencies to monitor all internet communications.10 Decades earlier, intelligence agencies were eagerly tapping undersea communications cables.11

Increasingly important, however, is the partnership between private corporations and state agencies – a partnership that extends beyond communications companies to include energy corporations.

This public/private partnership has placed particular emphasis on suppressing activists who fight against expansions of fossil fuel infrastructure. To cite three North American examples, police and corporate teams have worked together to surveil and jail opponents of the Line 3 tar sands pipeline in Minnesota,12 protestors of the Northern Gateway pipeline in British Columbia,13 and Water Protectors trying to block a pipeline through the Standing Rock Reservation in North Dakota.14

The use of enhanced surveillance techniques in support of fossil fuel infrastructure expansions has particular relevance to the artificial intelligence industrial complex, because that complex has a fierce appetite for stupendous quantities of energy.

Upping the demand for energy

“Smashed through the forest, gouged into the soil, exploded in the grey light of dawn,” wrote James Bridle, “are the tooth- and claw-marks of Artificial Intelligence, at the exact point where it meets the earth.”

Bridle was describing sudden changes in the landscape of north-west Greece after the Spanish oil company Repsol was granted permission to drill exploratory oil wells. Repsol teamed up with IBM’s Watson division “to leverage cognitive technologies that will help transform the oil and gas industry.”

IBM was not alone in finding paying customers for nascent AI among fossil fuel companies. In 2018 Google welcomed oil companies to its Cloud Next conference, and in 2019 Microsoft hosted the Oil and Gas Leadership Summit in Houston. Not to be outdone, Amazon has eagerly courted petroleum prospectors for its cloud infrastructure.

As Bridle writes, the intent of the oil companies and their partners includes “extracting every last drop of oil from under the earth” – regardless of the fact that if we burn all the oil already discovered we will push the climate system past catastrophic tipping points. “What sort of intelligence seeks not merely to support but to escalate and optimize such madness?”

The madness, though, is eminently logical:

“Driven by the logic of contemporary capitalism and the energy requirements of computation itself, the deepest need of an AI in the present era is the fuel for its own expansion. What it needs is oil, and it increasingly knows where to find it.”15

AI runs on electricity, not oil, you might say. But as discussed at greater length in Part Two of this series, the mining, refining, manufacturing and shipping of all the components of AI servers remains reliant on the fossil-fueled industrial supply chain. Furthermore, the electricity that powers the data-gathering cloud is also, in many countries, produced in coal- or gas-fired generators.

Could artificial intelligence be used to speed a transition away from reliance on fossil fuels? In theory perhaps it could. But in the real world, the rapid growth of AI is making the transition away from fossil fuels an even more daunting challenge.

“Utility projections for the amount of power they will need over the next five years have nearly doubled and are expected to grow,” Evan Halper reported in the Washington Post earlier this month. Why the sudden spike?

“A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing.”

The jump in demand from AI is in addition to – and greatly complicates – the move to electrify home heating and car-dependent transportation:

“It is all happening at the same time the energy transition is steering large numbers of Americans to rely on the power grid to fuel vehicles, heat pumps, induction stoves and all manner of other household appliances that previously ran on fossil fuels.”

The effort to maintain and increase overall energy consumption, while paying lip-service to transition away from fossil fuels, is having a predictable outcome: “The situation … threatens to stifle the transition to cleaner energy, as utility executives lobby to delay the retirement of fossil fuel plants and bring more online.”16

The motive forces of the artificial intelligence industrial complex, then, include the extension of surveillance, and the extension of climate- and biodiversity-destroying fossil fuel extraction and combustion. But many of those data centres are devoted to a task that is also central to contemporary capitalism: the promotion of consumerism.

Thou shalt consume more today than yesterday

As of March 13, 2024, both Alphabet (parent of Google) and Meta (parent of Facebook) ranked among the world’s ten biggest corporations as measured by either market capitalization or earnings.17 Yet to an average computer user these companies are familiar primarily for supposedly “free” services including Google Search, Gmail, Youtube, Facebook and Instagram.

These services play an important role in the circulation of money, of course – their function is to encourage people to spend more money than they otherwise would, for all types of goods or services, whether or not they actually need or even desire more goods and services. This function is accomplished through the most elaborate surveillance infrastructures yet invented, harnessed to an advertising industry that uses the surveillance data to better target ads and to better sell products.

This role in extending consumerism is a fundamental element of the artificial intelligence industrial complex.

In 2011, former Facebook employee Jeff Hammerbacher summed it up: “The best minds of my generation are thinking about how to make people click ads. That sucks.”18

Working together, many of the world’s most skilled behavioural scientists, software engineers and hardware engineers devote themselves to nudging people to spend more time online looking at their phones, tablets and computers, clicking ads, and feeding the data stream.

We should not be surprised that the companies most involved in this “knowledge revolution” are assiduously promoting their AI divisions. As noted earlier, both Google and Facebook are heavily invested in AI. And OpenAI, funded by Microsoft and famous for making ChatGPT almost a household name, is looking at ways to make that investment pay off.

By early 2023, OpenAI’s partnership with “strategy and digital application delivery” company Bain had signed up its first customer: The Coca-Cola Company.19

The pioneering effort to improve the marketing of sugar water was hailed by Zack Kass, Head of Go-To-Market at OpenAI: “Coca-Cola’s vision for the adoption of OpenAI’s technology is the most ambitious we have seen of any consumer products company ….”

On its website, Bain proclaimed:

“We’ve helped Coca-Cola become the first company in the world to combine GPT-4 and DALL-E for a new AI-driven content creation platform. ‘Create Real Magic’ puts the power of generative AI in consumers’ hands, and is one example of how we’re helping the company augment its world-class brands, marketing, and consumer experiences in industry-leading ways.”20

The new AI, clearly, has the same motive as the old “slow AI”, which is corporate intelligence. While a corporation has been declared a legal person, and therefore might be expected to have a mind, this mind is a severely limited, sociopathic entity with only one controlling motive – the need to increase profits year after year with no end. (This is not to imply that all or most employees of a corporation are equally single-minded, but any noble motives they may have must remain subordinate to the profit-maximizing legal charter of the corporation.) To the extent that AI is governed by corporations, we should expect that AI will retain a singular, sociopathic fixation with increasing profits.

Artificial intelligence, then, represents an existential threat to humanity not because of its newness, but because it perpetuates the corporate imperative which was already leading to ecological disaster and civilizational collapse.

But should we fear that artificial intelligence threatens us in other ways? Could AI break free from human control, supersede all human intelligence, and either dispose of us or enslave us? That will be the subject of the next installment.


Notes

1 “GPS Web Extra: What happens if AI gets into the wrong hands?”, CNN, 7 January 2024.

2 Mark Sweney, “Elon Musk’s AI startup seeks to raise $1bn in equity,” The Guardian, 6 December 2023.

3 Jonathan Vanian, “Mark Zuckerberg indicates Meta is spending billions of dollars on Nvidia AI chips,” CNBC, 18 January 2024.

4 Fortune Eye On AI newsletter, 25 January 2024.

5 Cory Doctorow, “‘Open’ ‘AI’ isn’t”, Pluralistic, 18 August 2023.

6 “New Navy Device Learns By Doing,” New York Times, 8 July 1958, page 25.

7 “WordNet,” on Scholarly Community Encyclopedia, accessed 11 March 2024.

8 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.

9 Jason Koebler, “New Jersey Used COVID Relief Funds to Buy Banned Chinese Surveillance Cameras,” 404 Media, 3 January 2024.

10 Glenn Greenwald, Ewen MacAskill and Laura Poitras, “Edward Snowden: the whistleblower behind the NSA surveillance revelations,” The Guardian, 11 June 2013.

11 Olga Khazan, “The Creepy, Long-Standing Practice of Undersea Cable Tapping,” The Atlantic, 16 July 2013.

12 Alleen Brown, “Pipeline Giant Enbridge Uses Scoring System to Track Indigenous Opposition,” The Intercept, 23 January 2022, part one of the seventeen-part series “Policing the Pipeline.”

13 Jeremy Hainsworth, “Spy agency CSIS allegedly gave oil companies surveillance data about pipeline protesters,” Vancouver Is Awesome, 8 July 2019.

14 Alleen Brown, Will Parrish, Alice Speri, “Leaked Documents Reveal Counterterrorism Tactics Used at Standing Rock to ‘Defeat Pipeline Insurgencies’”, The Intercept, 27 May 2017.

15 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Farrar, Straus and Giroux, 2023; pages 3–7.

16 Evan Halper, “Amid explosive demand, America is running out of power,” Washington Post, 7 March 2024.

17 Source: https://companiesmarketcap.com/, 13 March 2024.

18 As quoted in Fast Company, “Why Data God Jeffrey Hammerbacher Left Facebook To Found Cloudera,” 18 April 2013.

19 PRNewswire, “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI,” 21 February 2023.

20 Bain & Company website, accessed 13 March 2024.


Image at top of post by Bart Hawkins Kreps from public domain graphics.

Farming on screen

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part six
Also published on Resilience.

What does the future of farming look like? To some pundits the answer is clear: “Connected sensors, the Internet of Things, autonomous vehicles, robots, and big data analytics will be essential in effectively feeding tomorrow’s world. The future of agriculture will be smart, connected, and digital.”1

Proponents of artificial intelligence in agriculture argue that AI will be key to limiting or reversing biodiversity loss, reducing global warming emissions, and restoring resilience to ecosystems that are stressed by climate change.

There are many flavours of AI and thousands of potential applications for AI in agriculture. Some of them may indeed prove helpful in restoring parts of ecosystems.

But there are strong reasons to expect that AI in agriculture will be dominated by the same forces that have given the world a monoculture agri-industrial complex overwhelmingly dependent on fossil fuels – and that agri-industrial AI will therefore bring more biodiversity loss, more food insecurity, more socio-economic inequality, and more climate vulnerability. To the extent that AI in agriculture bears fruit, many of these fruits are likely to be bitter.

Optimizing for yield

A branch of mathematics known as optimization has played a large role in the development of artificial intelligence. Author Coco Krumme, who earned a PhD in mathematics from MIT, traces optimization’s roots back hundreds of years and sees optimization in the development of contemporary agriculture.

In her book Optimal Illusions: The False Promise of Optimization, she writes,

“Embedded in the treachery of optimals is a deception. An optimization, whether it’s optimizing the value of an acre of land or the on-time arrival rate of an airline, often involves collapsing measurement into a single dimension, dollars or time or something else.”2

The “single dimensions” that serve as the building blocks of optimization are the result of useful, though simplistic, abstractions of the infinite complexities of our world. In agriculture, for example, how can we identify and describe the factors of soil fertility? One way would be to describe truly healthy soil as soil that contains a diverse microbial community, thriving among networks of fungal mycelia, plant roots, worms, and insect larvae. Another way would be to note that the soil contains sufficient amounts of several chemical elements, including carbon, nitrogen, phosphorus, and potassium. The second method is an incomplete abstraction, but it has the big advantage that it lends itself to easy quantification, calculation, and standardized testing. Coupled with the availability of similarly simple, quantified fertilizers, this method also allows for quick, “efficient,” yield-boosting soil amendments.

In deciding on the optimal levels of certain soil nutrients, of course, we must also give an implicit or explicit answer to this question: “Optimal for what?” If the answer is “optimal for soya production,” we are likely to get higher yields of soya – even if the soil is losing many of the attributes of health that we might observe through a less abstract lens. Krumme describes the gradual and eventual results of this supposedly scientific agriculture:

“It was easy to ignore, for a while, the costs: the chemicals harming human health, the machinery depleting soil, the fertilizer spewing into the downstream water supply.”3
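Krumme’s point about collapsing measurement into a single dimension can be made concrete with a toy calculation. The response curves below are invented for illustration, not drawn from her book: an optimizer that sees only yield will happily pick a fertilizer level at which an unmeasured dimension, soil health, has badly degraded.

```python
# Toy single-dimension optimization. Both functions are invented
# illustrations, not real agronomic models.

def yield_tonnes(n_kg):
    # Hypothetical soya yield response to nitrogen: rises, then flattens.
    return 2.0 + 0.04 * n_kg - 0.0001 * n_kg ** 2

def soil_health(n_kg):
    # Hypothetical unmeasured dimension: declines as applications climb.
    return max(0.0, 1.0 - 0.004 * n_kg)

# The optimizer searches only over the dimension it measures: yield.
best_n = max(range(0, 301, 10), key=yield_tonnes)
print(best_n)                   # nitrogen level the optimizer selects
print(soil_health(best_n))      # the "optimum" leaves soil health near zero
```

The optimizer lands on 200 kg of nitrogen, where soil health in this toy model has fallen to 0.2 – a loss that never enters the calculation, because it was never counted.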

The social costs were no less real than the environmental costs: most farmers, in countries where industrial agriculture took hold, were unable to keep up with the constant pressure to “go big or go home.” So they sold their land to the dwindling number of remaining farmers, who farmed ever bigger farms, and rural agricultural communities were hollowed out.

“But just look at those benefits!”, proponents of industrialized agriculture can say. Certainly yields per hectare of commodity crops climbed dramatically, and this food was raised by a smaller share of the work force.

The extent to which these changes are truly improvements is murky, however, when we look beyond the abstractions that go into the optimization models. We might want to believe that “if we don’t count it, it doesn’t count” – but that illusion won’t last forever.

Let’s start with social and economic factors. Coco Krumme quotes historian Paul Conkin on this trend in agricultural production: “Since 1950, labor productivity per hour of work in the non-farm sectors has increased 2.5 fold; in agriculture, 7-fold.”4

Yet a recent paper by Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause finds:

“Industrial farming discourse promotes the perception that there is a positive relationship—the larger the farm, the greater the productivity. Our objective is to demonstrate that based on the data at the centre of this debate, on average, small farms actually produce more food on less land ….”5

Here’s the nub of the problem: productivity statistics depend on what we count, and what we don’t count, when we tally input and output. Labour productivity in particular is usually calculated in reference to Gross Domestic Product, which is the sum of all monetary transactions.

Imagine this scenario, which has analogs all over the world. Suppose I pick a lot of apples, I trade a bushel of them with a neighbour, and I receive a piglet in return. The piglet eats leftover food scraps and weeds around the yard, while providing manure that fertilizes the vegetable garden. Several months later I butcher the pig and share the meat with another neighbour who has some chickens and who has been sharing the eggs. We all get delicious and nutritious food – but how much productivity is tallied? None, because none of these transactions are measured in dollars nor counted in GDP.

In many cases, of course, some inputs and outputs are counted while others are not. A smallholder might buy a few inputs such as feed grain, and might sell some products in a market “official” enough to be included in economic statistics. But much of the smallholder’s output will go to feeding immediate family or neighbours without leaving a trace in GDP.
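The accounting asymmetry in the smallholder scenario can be tallied directly. All figures below are invented for illustration, with rough kilocalorie values:

```python
# A sketch of the counting problem: the same food flows, tallied two ways.
# All dollar and kilocalorie figures are invented for illustration.
transactions = [
    # (description, dollars recorded, kilocalories of food moved)
    ("bushel of apples bartered for piglet", 0, 40_000),
    ("pork shared with a neighbour",         0, 250_000),
    ("eggs received in return",              0, 30_000),
    ("feed grain bought at a store",        25, 0),  # a cash input, not food output
]

gdp_measured = sum(dollars for _, dollars, _ in transactions)
food_kcal = sum(kcal for _, _, kcal in transactions)

print(gdp_measured)  # only the cash purchase of feed grain registers
print(food_kcal)     # kilocalories of food that GDP never sees
```

The ledger records $25 of “productivity” – the feed grain – while 320,000 kilocalories of food circulate invisibly.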

If GDP had been counted when this scene was depicted, the sale of Spratt’s Pure Fibrine poultry feed may have been the only part of the operation that would “count”. Image: “Spratts patent “pure fibrine” poultry meal & poultry appliances”, from Wellcome Collection, circa 1880–1889, public domain.

Knezevic et al. write, “As farm size and farm revenue can generally be objectively measured, the productivist view has often used just those two data points to measure farm productivity.” However, other statisticians have put considerable effort into quantifying output in non-monetary terms, by estimating all agricultural output in terms of kilocalories.

This too is an abstraction, since a kilocalorie from sugar beets does not have the same nutritional impact as a kilocalorie from black beans or a kilocalorie from chicken – and farm output might include non-food values such as fibre for clothing, fuel for fireplaces, or animal draught power. Nevertheless, counting kilocalories instead of dollars or yuan makes possible more realistic estimates of how much food is produced by small farmers on the edge of the formal economy.

The proportion of global food supply produced on small vs. large farms is a matter of vigorous debate, and Knezevic et al. review some of the most widely cited estimates. They defend their own estimate:

“[T]he data indicate that family farmers and smallholders account for 81% of production and food supply in kilocalories on 72% of the land. Large farms, defined as more than 200 hectares, account for only 15 and 13% of crop production and food supply by kilocalories, respectively, yet use 28% of the land.”6

They also argue that the smallest farms – 10 hectares (about 25 acres) or less – “provide more than 55% of the world’s kilocalories on about 40% of the land.” This has obvious importance in answering the question “How can we feed the world’s growing population?”7

Of equal importance to our discussion on the role of AI in agriculture, are these conclusions of Knezevic et al.: “industrialized and non-industrialized farming … come with markedly different knowledge systems,” and “smaller farms also have higher crop and non-crop biodiversity.”

Feeding the data machine

As discussed at length in previous installments, the types of artificial intelligence currently making waves require vast data sets. And in their paper advocating “Smart agriculture (SA)”, Jian Zhang et al. write, “The focus of SA is on data exploitation; this requires access to data, data analysis, and the application of the results over multiple (ideally, all) farm or ranch operations.”8

The data currently available from “precision farming” comes from large, well-capitalized farms that can afford tractors and combines equipped with GPS units, arrays of sensors tracking soil moisture, fertilizer and pesticide applications, and harvested quantities for each square meter. In the future envisioned by Zhang et al., this data collection process should expand dramatically through the incorporation of Internet of Things sensors on many more farms, plus a network allowing the funneling of information to centralized AI servers which will “learn” from data analysis, and which will then guide participating farms in achieving greater productivity at lower ecological cost. This in turn will require a 5G cellular network throughout agricultural areas.

Zhang et al. do not estimate the costs – whether in money, or in the up-front carbon emissions and ecological damage involved in manufacturing, installing and operating the data-crunching networks. An important question will be: will the ecological benefits equal or outweigh the ecological harms?

There is also good reason to doubt that the smallest farms – which produce a disproportionate share of global food supply – will be incorporated into this “smart agriculture”. Such infrastructure will have heavy upfront costs, and the companies that provide the equipment will want assurance that their client farmers will have enough cash outputs to make the capital investments profitable – if not for the farmers themselves, then at least for the big corporations marketing the technology.

A team of scholars writing in Nature Machine Intelligence concluded,

“[S]mall-scale farmers who cultivate 475 of approximately 570 million farms worldwide and feed large swaths of the so-called Global South are particularly likely to be excluded from AI-related benefits.”9

On the subject of what kind of data is available to AI systems, the team wrote,

“[T]ypical agricultural datasets have insufficiently considered polyculture techniques, such as forest farming and silvo-pasture. These techniques yield an array of food, fodder and fabric products while increasing soil fertility, controlling pests and maintaining agrobiodiversity.”

They noted that the small number of crops which dominate commodity crop markets – corn, wheat, rice, and soy in particular – also get the most research attention, while many crops important to subsistence farmers are little studied. Assuming that many of the small farmers remain outside the artificial intelligence agri-industrial complex, the data-gathering is likely to perpetuate and strengthen the hegemony of major commodities and major corporations.

Montreal Nutmeg. Today it’s easy to find images of hundreds of varieties of fruit and vegetables that were popular more than a hundred years ago – but finding viable seeds or rootstock is another matter. Image: “Muskmelon, the largest in cultivation – new Montreal Nutmeg. This variety found only in Rice’s box of choice vegetables. 1887”, from Boston Public Library collection “Agriculture Trade Collection” on flickr.

Large-scale monoculture agriculture has already resulted in a scarcity of most traditional varieties of many grains, fruits and vegetables; the seed stocks that work best in the cash-crop nexus now have overwhelming market share. An AI that serves and is led by the same agribusiness interests is not likely, therefore, to preserve the crop diversity we will need to cope with an unstable climate and depleted ecosystems.

It’s marvellous that data servers can store and quickly access the entire genomes of so many species and sub-species. But it would be better if rare varieties were not only preserved but also kept in active use, by communities who keep alive the particular knowledge of how these varieties respond to different weather, soil conditions, and horticultural techniques.

Finally, those small farmers who do step into the AI agri-complex will face new dangers:

“[A]s AI becomes indispensable for precision agriculture, … farmers will bring substantial croplands, pastures and hayfields under the influence of a few common ML [Machine Learning] platforms, consequently creating centralized points of failure, where deliberate attacks could cause disproportionate harm. [T]hese dynamics risk expanding the vulnerability of agrifood supply chains to cyberattacks, including ransomware and denial-of-service attacks, as well as interference with AI-driven machinery, such as self-driving tractors and combine harvesters, robot swarms for crop inspection, and autonomous sprayers.”10

The quantified gains in productivity due to efficiency, writes Coco Krumme, have come with many losses – and “we can think of these losses as the flip side of what we’ve gained from optimizing.” She adds,

“We’ll call [these losses], in brief: slack, place, and scale. Slack, or redundancy, cushions a system from outside shock. Place, or specific knowledge, distinguishes a farm and creates the diversity of practice that, ultimately, allows for both its evolution and preservation. And a sense of scale affords a connection between part and whole, between a farmer and the population his crop feeds.”11

AI-led “smart agriculture” may allow higher yields from major commodity crops, grown in monoculture fields on large farms all using the same machinery, the same chemicals, the same seeds and the same methods. Such agriculture is likely to earn continued profits for the major corporations already at the top of the complex, companies like John Deere, Bayer-Monsanto, and Cargill.

But in a world facing combined and manifold ecological, geopolitical and economic crises, it will be even more important to have agricultures with some redundancy to cushion from outside shock. We’ll need locally-specific knowledge of diverse food production practices. And we’ll need strong connections between local farmers and communities who are likely to depend on each other more than ever.

In that context, putting all our eggs in the artificial intelligence basket doesn’t sound like smart strategy.


Notes

1 “Achieving the Rewards of Smart Agriculture,” by Jian Zhang, Dawn Trautman, Yingnan Liu, Chunguang Bi, Wei Chen, Lijun Ou, and Randy Goebel, Agronomy, 24 February 2024.

2 Coco Krumme, Optimal Illusions: The False Promise of Optimization, Riverhead Books, 2023, pg 181. A hat tip to Mark Hurst, whose podcast Techtonic introduced me to the work of Coco Krumme.

3 Optimal Illusions, pg 23.

4 Optimal Illusions, pg 25, quoting Paul Conkin, A Revolution Down on the Farm.

5 Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause, “Recalibrating Data on Farm Productivity: Why We Need Small Farms for Food Security,” Sustainability, 4 October 2023.

6 Knezevic et al., “Recalibrating Data on Farm Productivity.”

7 Recommended reading: two farmer/writers who have conducted more thorough studies of the current and potential productivity of small farms are Chris Smaje and Gunnar Rundgren.

8 Zhang et al., “Achieving the Rewards of Smart Agriculture,” 24 February 2024.

9 Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh, “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities,” Nature Machine Intelligence, 23 February 2022.

10 Asaf Tzachor et al., “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities.”

11 Coco Krumme, Optimal Illusions, pg 34.


Image at top of post: “Alexander Frick, Jr. in his tractor/planter planting soybean seeds with the aid of precision agriculture systems and information,” in US Dep’t of Agriculture album “Frick Farms gain with Precision Agriculture and Level Fields”, photo for USDA by Lance Cheung, April 2021, public domain, accessed via flickr. 

Watching work

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part five
Also published on Resilience.

Consider a human vs computer triathlon. The first contest is playing a cognitively demanding game like chess. The second is driving a truck safely through a busy urban downtown. The third is grabbing packages, from warehouse shelves stocked with a great diversity of package types, and placing them safely into tote boxes.

Who would win, humans or computers?

So far the humans are ahead two-to-one. Though a computer program surpassed the best human chess players more than 25 years ago, replacing humans in the intellectually demanding tasks of truck-driving and package-packing has proved a much tougher challenge.

The reasons for the skills disparity can tell us a lot about the way artificial intelligence has developed and how it is affecting employment conditions.

Some tasks require mostly analytical thinking and perceptual skills, but many tasks require close, almost instantaneous coordination of fine motor control. Many of these latter tasks fall into the category that is often condescendingly termed “manual labour”. But as Antonio Gramsci argued,

“There is no human activity from which every form of intellectual participation can be excluded: Homo faber cannot be separated from homo sapiens.”1

All work involves, to some degree, both body and mind. This plays a major role in the degree to which AI can or cannot effectively replace human labour.

Yet even if AI cannot succeed in taking away your job, it might succeed in taking away a big chunk of your paycheque.

Moravec’s paradox

By 2021, Amazon had developed a logistics system that could track millions of items and millions of shipments every day, from factory loading docks to shipping containers to warehouse shelves to the delivery truck that speeds to your door.

But for all its efforts, it hadn’t managed to develop a robot that could compete with humans in the delicate task of grabbing packages off shelves or conveyor belts.

Author Christopher Mims described the challenge in his book Arriving Today.2 “Each of these workers is the hub of a three-dimensional wheel, where each spoke is ten feet tall and consists of mail slot-size openings. Every one of these sorters works as fast as they can. First they grab a package off the chute, then they pause for a moment to scan the item and read its destination off a screen …. Then they whirl and drop the item into a slot. Each of these workers must sort between 1,100 and 1,200 parcels per hour ….”

The problem was this: there was huge diversity not only in packaging types but in packaging contents. Though about half the items were concealed in soft poly bags, those bags might contain things that were light and soft, or light and hard, or light and fragile, or surprisingly heavy.

Humans have a remarkable ability to “adjust on the fly”. As our fingers close on the end of a package and start to lift, we can make nearly instantaneous adjustments to grip tighter – but not too tight – if we sense significant resistance due to unexpected weight. Without knowing what is in the packages, we can still grab and sort 20 packages per minute while seldom if ever crushing a package because we grip too tightly, and seldom losing control and having a package fly across the room.

Building a machine with the same ability is terribly difficult, as summed up by robotics pioneer Hans Moravec.

“One formulation of Moravec’s paradox goes like this,” Mims wrote: “it’s far harder to teach a computer to pick up and move a chess piece like its human opponent than it is to teach it to beat that human at chess.”

In the words of robotics scholar Thrishantha Nanayakkara,

“We have made huge progress in symbolic, data-driven AI. But when it comes to contact, we fail miserably. We don’t have a robot that we can trust to hold a hamster safely.”3

In 2021 even Amazon’s newest warehouses had robots working only on carefully circumscribed tasks, in carefully fenced-off and monitored areas, while human workers did most of the sorting and packing.

Amazon’s warehouse staffers still had paying jobs, but AI has already shaped their working conditions for the worse. Since Amazon is one of the world’s largest employers, as well as a major player in AI, their obvious eagerness to extract more value from a low-paid workforce should be seen as a harbinger of AI’s future effects on labour relations. We’ll return to those changing labour relations below.

Behind the wheel

One job which the artificial intelligence industrial complex has tried mightily to eliminate is the work of drivers. On the one hand, proponents of autonomous vehicles have pointed to the shocking annual numbers of people killed or maimed on highways and streets, claiming that self-driving cars and trucks will be much safer. On the other hand, in some industries the wages of drivers are a big part of the cost of business, and thus companies could swell their profit margins by eliminating those wages.

We’ve been hearing that full self-driving vehicles are just a few years away – for the past twenty years. But driving is one of those tasks that requires not only careful and responsive manipulation of vehicle controls, but quick perception and quick judgment calls in situations that the driver may have seldom – or never – confronted before.

Christopher Mims looked at the work of TuSimple, a San Diego-based firm hoping to market self-driving trucks. Counting all the sensors, controllers, and information processing devices, he wrote, “The AI on board TuSimple’s self-driving truck draws about four times as much power as the average American home ….”4

At the time, TuSimple was working on increasing its system’s reliability “from something like 99.99 percent reliable to 99.9999 percent reliable.” That improvement would not come easily, Mims explained: “every additional decimal point of reliability costs as much in time, energy, and money as all the previous ones combined.”
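The arithmetic behind those reliability figures is worth spelling out. Going from “four nines” to “six nines” is a hundredfold cut in the failure rate, and if each additional digit of reliability costs as much as everything spent before it, the cost doubles with every step:

```python
# Failure rates implied by the reliability figures quoted above,
# expressed as failures per million operations.
def failures_per_million(reliability):
    return round((1 - reliability) * 1_000_000)

four_nines = failures_per_million(0.9999)    # 100 failures per million
six_nines = failures_per_million(0.999999)   # 1 failure per million
print(four_nines, six_nines)

# Mims's cost observation, modeled as a doubling: each extra decimal
# point costs as much as all the previous ones combined.
cost = 1.0                # everything spent reaching 99.99 percent
for _ in range(2):        # two more decimal points: 99.999, then 99.9999
    cost += cost
print(cost)               # total spend is 4x the original investment
```

A hundredfold improvement in safety, at quadruple the cumulative cost, for two extra digits: that is why the last fractions of a percent are where autonomous-vehicle projects stall.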

Some of the world’s largest companies have tried, and so far failed, to achieve widespread regulatory approval for their entries in the autonomous-vehicle sweepstakes. Consider the saga of GM’s Cruise robotaxi subsidiary. After GM and other companies had invested billions in the venture, Cruise received permission in August 2023 to operate their robotaxis twenty-four hours a day in San Francisco.5

Just over two months later, Cruise suddenly suspended its robotaxi operations nationwide following an accident in San Francisco.6

In the wake of the controversy, it was revealed that although Cruise taxis appeared to have no driver and to operate fully autonomously, things weren’t quite that simple. Cruise founder and CEO Kyle Vogt told CNBC that “Cruise AVs are being remotely assisted (RA) 2-4% of the time on average, in complex urban environments.”7

Perhaps “2–4% of the time” doesn’t sound like much. But if you have a fleet of vehicles needing help, on average, that often, you need to have quite a few remote operators on call to be reasonably sure they can provide timely assistance. According to the New York Times, the two hundred Cruise vehicles in San Francisco “were supported by a vast operations staff, with 1.5 workers per vehicle.”8 If a highly capitalized company can pay teams of AI and robotics engineers to build vehicles whose electronics cost several times more than the vehicle itself, and the vehicles still require 1.5 workers/vehicle, the self-driving car show is not yet ready for prime time.
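The staffing implications of those figures are easy to check with back-of-envelope arithmetic, using the fleet size and percentages quoted above:

```python
# Back-of-envelope arithmetic on the Cruise figures: 200 vehicles,
# remote assistance needed 2-4% of the time, 1.5 workers per vehicle.
fleet = 200
workers_per_vehicle = 1.5

low = round(fleet * 0.02)    # vehicles needing help at any moment, low end
high = round(fleet * 0.04)   # high end
staff = round(fleet * workers_per_vehicle)

print(low, high)   # 4 to 8 "driverless" cars asking for a human at once
print(staff)       # 300 support workers behind 200 robotaxis
```

At any given moment, somewhere between four and eight of the two hundred “driverless” cars were asking a human for help, backed by a support operation larger than the fleet itself.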

In another indication of the difficulty in putting a virtual robot behind the wheel, Bloomberg News reported last month that Apple is delaying launch of its long-rumored vehicle until 2028 at earliest.9 Not only that, but the vehicle will boast no more than Level-2 autonomy. CleanTechnica reported that

“The prior design for the [Apple] vehicle called for a system that wouldn’t require human intervention on highways in approved parts of North America and could operate under most conditions. The more basic Level 2+ plan would require drivers to pay attention to the road and take over at any time — similar to the current standard Autopilot feature on Tesla’s EVs. In other words, it will offer no significant upgrades to existing driver assistance technology from most manufacturers available today.”10

As for self-driving truck companies still trying to tap the US market, most are focused on limited applications that avoid many of the complications involved in typical traffic. For example, Uber Freight targets the “middle mile” segment of truck journeys. In this model, human drivers deliver a trailer to a transfer hub close to a highway. A self-driving tractor then pulls the trailer on the highway, perhaps right across the country, to another transfer hub near the destination. A human driver then takes the trailer to the drop-off point.11

This model limits the self-driving segments to roads that present far fewer complications than urban environments routinely do.

This simplification of the tasks inherent in driving may seem quintessentially twenty-first century. But it represents one step in a process of “de-skilling” that has been a hallmark of industrial capitalism for hundreds of years.

Jacquard looms, patented in France in 1803, were first brought to the U.S. in the 1820s. The loom is an ancestor of the first computers, using hundreds of punchcards to “program” intricate designs for the loom to produce. Photo by Maia C, licensed via CC BY-NC-ND 2.0 DEED, accessed at flickr.

Reshaping labour relations

Almost two hundred years ago computing pioneer Charles Babbage advised industrialists that “The workshops of [England] contain within them a rich mine of knowledge, too generally neglected by the wealthier classes.”12

Babbage is known today as the inventor of the Difference Engine – a working mechanical calculator that could manipulate numbers – and the Analytical Engine – a programmable general purpose computer whose prototypes Babbage worked on for many years.

But Babbage was also interested in the complex skeins of knowledge evidenced in the co-operative activities of skilled workers. In particular, he wanted to break down that working knowledge into small constituent steps that could be duplicated by machines and unskilled workers in factories.

Today writers including Matteo Pasquinelli, Brian Merchant, Dan McQuillan and Kate Crawford highlight factory industrialism as a key part of the history of artificial intelligence.

The careful division of labour not only made proto-assembly lines possible; it also allowed capitalists to pay for just the quantity of labour needed in the production process:

“The Babbage principle states that the organisation of a production process into small tasks (the division of labour) allows for the calculation and precise purchase of the quantity of labour that is necessary for each task (the division of value).”13
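The Babbage principle can be given a numeric reading. The hours and wage rates below are invented for illustration; the point is the arithmetic of the “division of value”:

```python
# Invented figures illustrating the Babbage principle: divide a job into
# tasks and pay each at the cheapest adequate wage, versus paying a
# skilled craftsman's rate for every hour of the whole job.
tasks = [
    # (hours required, hourly wage of the least-skilled worker able to do it)
    (1.0, 30.0),  # skilled fitting
    (4.0, 12.0),  # repetitive assembly
    (3.0, 10.0),  # fetching and sorting
]
craftsman_rate = 30.0  # a skilled worker capable of every task

undivided_cost = craftsman_rate * sum(hours for hours, _ in tasks)
divided_cost = sum(hours * wage for hours, wage in tasks)

print(undivided_cost)  # the whole 8-hour job at the craftsman's rate
print(divided_cost)    # each task purchased at its own price
```

In this toy example the divided job costs $108 instead of $240 – the employer buys exactly the grade of labour each task requires, and not an hour of skill more.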

Babbage turned out to be far ahead of his time with his efforts to build a general-purpose computer, but his approach to the division of labour became mainstream management economics.

In the early 20th century assembly-line methods reshaped labour relations even more, thanks in part to the work of management theorist Frederick Taylor.

Taylor carefully measured and noted each movement of skilled mechanics – and used the resulting knowledge to design assembly lines in which cars could be produced at lower cost by workers with little training.

As Christopher Mims wrote, “Taylorism” is now “the dominant ideology of the modern world and the root of all attempts at increasing productivity ….” Indeed,

“While Taylorism once applied primarily to the factory floor, something fundamental has shifted in how we live and work. … the walls of the factory have dissolved. Every day, more and more of what we do, how we consume, even how we think, has become part of the factory system.”14

We can consume by using Amazon’s patented 1-Click ordering system. When we try to remember a name, we can start to type a Google search and get an answer – possibly even an appropriate answer – before we have finished typing our query. In both cases, of course, the corporations use their algorithms to capture and sort the data produced by our keystrokes or vocal requests.

But what about remaining activities on the factory floor, warehouse or highway? Can Taylorism meet the wildest dreams of Babbage, aided today by the latest forms of artificial intelligence? Can AI not only measure our work but replace human workers?

Yes, but only in certain circumstances. For work in which mind-body, hand-eye coordination is a key element, AI-enhanced robots have limited success. As we have seen, where a work task can be broken into discrete motions, each one repeated with little or no variation, it is sometimes economically efficient to develop and build robots. But where flexible and varied manual dexterity is required, or where judgement calls must guide the working hands to deal with frequent but unpredicted contingencies, AI robotization is not up to the job.

A team of researchers at MIT recently investigated jobs that could potentially be replaced by AI, and in particular jobs in which computer vision could play a significant role. They found that “at today’s costs U.S. businesses would choose not to automate most vision tasks that have ‘AI Exposure,’ and that only 23% of worker wages being paid for vision tasks would be attractive to automate. … Overall, our findings suggest that AI job displacement will be substantial, but also gradual ….”15

A report released earlier this month, entitled Generative Artificial Intelligence and the Workforce, found that “Blue-collar jobs are unlikely to be automated by GenAI.” However, many job roles that are more cerebral and less hands-on stand to be greatly affected. The report says many jobs may be eliminated, at least in the short term, in categories including the following:

  • “financial analysts, actuaries and accountants [who] spend much of their time crunching numbers …;”
  • auditors, compliance officers and lawyers who do regulatory compliance monitoring;
  • software developers who do “routine tasks—such as generating code, debugging, monitoring systems and optimizing networks;”
  • administrative and human resource managerial roles.

The report also predicts that

“Given the broad potential for GenAI to replace human labor, increases in productivity will generate disproportionate returns for investors and senior employees at tech companies, many of whom are already among the wealthiest people in the U.S., intensifying wealth concentration.”16

It makes sense that if a wide range of mid-level managers and professional staff can be cut from payrolls, those at the top of the pyramid stand to gain. But even though, as the report states, blue-collar workers are unlikely to lose their jobs to AI-bots, the changing employment trends are making work life more miserable and less lucrative at lower rungs on the socio-economic ladder.

Pasquinelli puts it this way:

“The debate on the fear that AI fully replaces jobs is misguided: in the so-called platform economy, in reality, algorithms replace management and multiply precarious jobs.”17

And Crawford writes:

“Instead of asking whether robots will replace humans, I’m interested in how humans are increasingly treated like robots and what this means for the role of labor.”18

The boss from hell does not have an office

Let’s consider some of the jobs that are often discussed as prime targets for elimination by AI.

The taxi business has undergone drastic upheaval due to the rise of Uber and Lyft. These companies seem driven by a mission to solve a terrible problem: taxi drivers have too much of the nation’s wealth and venture capitalists have too little. The companies haven’t yet eliminated driving jobs, but they have indeed enriched venture capitalists while making the chauffeur-for-hire market less rewarding and less secure. It’s hard for workers to complain to or negotiate with the boss, now that the boss is an app.

How about Amazon warehouse workers? Christopher Mims describes the life of a worker policed by Amazon’s “rate”. Every movement during every warehouse worker’s day is monitored and fed into a data management system. The system comes back with a “rate” of tasks that all workers are expected to meet. Failure to match that rate puts the worker at immediate risk of firing. In fact, the lowest 25 per cent of the workers, as measured by their “rate”, are periodically dismissed. Over time, then, the rate edges higher, and a worker who may have been comfortably in the middle of the pack must keep working faster to avoid slipping into the bottom 25th percentile and thence into the ranks of the unemployed.

“The company’s relentless measurement, drive for efficiency, loose hiring standards, and moving targets for hourly rates,” Mims writes, “are the perfect system for ingesting as many people as possible and discarding all but the most physically fit.”19 Since the style of work lends itself to repetitive strain injuries, and since there are no paid sick days, even very physically fit warehouse employees are always at risk of losing their jobs.
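The ratchet built into the “rate” system can be illustrated with a toy simulation. The numbers below are entirely hypothetical – this is only a sketch of the dynamic Mims describes: each period the bottom quartile is dismissed and replaced with new hires, the target is recalculated from the survivors, and the bar a worker must clear drifts steadily upward.

```python
import random

random.seed(42)  # make the illustration reproducible

def simulate_rate(periods=10, workforce=1000):
    """Toy model of a percentile-cull productivity target."""
    # Each worker has a sustainable pace (tasks per hour), normally distributed.
    workers = [random.gauss(100, 15) for _ in range(workforce)]
    targets = []
    for _ in range(periods):
        workers.sort()
        cutoff = len(workers) // 4
        workers = workers[cutoff:]            # dismiss the bottom 25 per cent
        target = workers[len(workers) // 2]   # new target: the survivors' median
        targets.append(target)
        # Replace the dismissed workers with new hires drawn from the same pool.
        workers += [random.gauss(100, 15) for _ in range(cutoff)]
    return targets

targets = simulate_rate()
print(round(targets[0], 1), round(targets[-1], 1))  # the target ratchets upward
```

Because every cull removes the slowest quarter of the pool, the surviving population – and therefore the median that defines the next target – keeps shifting upward until it plateaus well above what an average new hire can sustain.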

Over the past 40 years the work of a long-distance trucker hasn’t changed much, but the work conditions and remuneration have changed greatly. Mims writes, “The average trucker in the United States made $38,618 a year in 1980, or $120,000 in 2020 dollars. In 2019, the average trucker made about $45,000 a year – a 63 percent decrease in forty years.”

There are many reasons for that redistribution of income out of the pockets of these workers. Among them is the computerization of a swath of supervisory tasks. In Mims’ words, “Drivers must meet deadlines that are as likely to be set by an algorithm and an online bidding system as a trucking company dispatcher or an account handler at a freight-forwarding company.”

Answering to a human dispatcher or payroll officer isn’t always pleasant or fair, of course – but at least there is the possibility of a human relationship with a human supervisor. That possibility is gone when the major strata of middle management are replaced by AI bots.

Referring to Amazon’s 25th percentile rule and steadily rising “rate”, Mims writes, “Management theorists have known for some time that forcing bosses to grade their employees on a curve is a recipe for low morale and unnecessarily high turnover.” But low morale doesn’t matter among managers who are just successions of binary digits. And high turnover of warehouse staff isn’t a problem for companies like Amazon – little is spent on training, new workers are easy enough to find, and the short average duration of employment makes it much harder for workers to get together in union organizing drives.

Uber drivers, many long-haul truckers, and Amazon packagers have this in common: their cold and heartless bosses are nowhere to be found; they exist only as algorithms. Management-by-AI, Dan McQuillan says, results in “an amplification of casualized and precarious work.”20

Management-by-AI could be seen, then, as just another stage in the development of a centuries-old “counterfeit person” – the legally recognized “person” that is the modern corporation. In the coinage of Charlie Stross, for centuries we’ve been increasingly governed by “old, slow AI”21 – the thinking mode of the corporate personage. We’ll return to the theme of “slow AI” and “fast AI” in a future post.


Notes

1 Antonio Gramsci, The Prison Notebooks, 1932. Quoted in The Eye of the Master: A Social History of Artificial Intelligence, by Matteo Pasquinelli, Verso, 2023.

2 Christopher Mims, Arriving Today: From Factory to Front Door – Why Everything Has Changed About How and What We Buy, Harper Collins, 2021; reviewed here.

3 Tom Chivers, “How DeepMind Is Reinventing the Robot,” IEEE Spectrum, 27 September 2021.

4 Christopher Mims, Arriving Today, 2021, page 143.

5 Johana Bhuiyan, “San Francisco to get round-the-clock robo taxis after controversial vote,” The Guardian, 11 Aug 2023.

6 David Shepardson, “GM Cruise unit suspends all driverless operations after California ban,” Reuters, 27 October 2023.

7 Lora Kolodny, “Cruise confirms robotaxis rely on human assistance every four to five miles,” CNBC, 6 Nov 2023.

8 Tripp Mickle, Cade Metz and Yiwen Lu, “G.M.’s Cruise Moved Fast in the Driverless Race. It Got Ugly.” New York Times, 3 November 2023.

9 Mark Gurman, “Apple Dials Back Car’s Self-Driving Features and Delays Launch to 2028”, Bloomberg, 23 January 2024.

10 Steve Hanley, “Apple Car Pushed Back To 2028. Autonomous Driving? Forget About It!” CleanTechnica.com, 27 January 2024.

11 Marcus Law, “Self-driving trucks leading the way to an autonomous future,” Technology, 6 October 2023.

12 Charles Babbage, On the Economy of Machinery and Manufactures, 1832; quoted in Pasquinelli, The Eye of the Master, 2023.

13 Pasquinelli, The Eye of the Master.

14 Christopher Mims, Arriving Today, 2021.

15 Neil Thompson et al., “Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?”, MIT FutureTech, 22 January 2024.

16 Gad Levanon, Generative Artificial Intelligence and the Workforce, The Burning Glass Institute, 1 February 2024.

17 Pasquinelli, The Eye of the Master.

18 Crawford, Kate, Atlas of AI, Yale University Press, 2021.

19 Christopher Mims, Arriving Today, 2021.

20 Dan McQuillan, Resisting AI: An Anti-Fascist Approach to Artificial Intelligence, Bristol University Press, 2022.

21 Charlie Stross, “Dude, you broke the future!”, Charlie’s Diary, December 2017.

 


Image at top of post: “Mechanically controlled eyes see the controlled eyes in the mirror looking back”, photo from “human (un)limited”, 2019, a joint exhibition project of Hyundai Motorstudio and Ars Electronica, licensed under CC BY-NC-ND 2.0 DEED, accessed via flickr.

Beware of WEIRD Stochastic Parrots

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part four
Also published on Resilience.

A strange new species is getting a lot of press recently. The New Yorker published the poignant and profound illustrated essay “Is My Toddler a Stochastic Parrot?” The Wall Street Journal told us about “‘Stochastic Parrot’: A Name for AI That Sounds a Bit Less Intelligent”. And expert.ai warned of “GPT-3: The Venom-Spitting Stochastic Parrot”.

The American Dialect Society even selected “stochastic parrot” as the AI-Related Word of the Year for 2023.

Yet this species was unknown until March of 2021, when Emily Bender, Timnit Gebru, Angelina McMillan-Major, and (the slightly pseudonymous) Shmargaret Shmitchell published “On the Dangers of Stochastic Parrots.”1

The paper touched a nerve in the AI community, reportedly costing Timnit Gebru and Margaret Mitchell their jobs with Google’s Ethical AI team.2

Just a few days after ChatGPT was released, OpenAI CEO Sam Altman paid snarky tribute to the now-famous phrase by tweeting “i am a stochastic parrot, and so r u.”3

Just what, according to its namers, are the distinctive characteristics of a stochastic parrot? Why should we be wary of this species? Should we be particularly concerned about a dominant sub-species, the WEIRD stochastic parrot? (WEIRD as in: Western, Educated, Industrialized, Rich, Democratic.) We’ll look at those questions for the remainder of this installment.

Haphazardly probable

The first recognized chatbot was 1966’s ELIZA, but many of the key technical developments behind today’s chatbots only came together in the last 15 years. The apparent wizardry of today’s Large Language Models rests on a foundation of algorithmic advances, the availability of vast data sets, super-computer clusters employing thousands of the latest Graphics Processing Unit (GPU) chips, and, as discussed in the last post, an international network of poorly paid gig workers providing human input to fill in gaps in the machine learning process.

By the beginning of this decade, some AI industry figures were arguing that Large Language Models would soon exhibit “human-level intelligence”, could become sentient and conscious, and might even become the dominant new species on the planet.

The authors of the stochastic parrot paper saw things differently:

“Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”4

Let’s start by focusing on two words in that definition: “haphazardly” and “probabilistic”. How do those words apply to the output of ChatGPT or similar Large Language Models?

In a lengthy paper published last year, Stephen Wolfram offers an initial explanation:

“What ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”5

He gives the example of this partial sentence: “The best thing about AI is its ability to”. The Large Language Model will have identified many instances closely matching this phrase, and will have calculated the probability of various words being the next word to follow. The table below lists five of the most likely choices.

The element of probability, then, is clear – but in what way is ChatGPT “haphazard”?

Wolfram explains that if the chatbot always picks the next word with the highest probability, the results will be syntactically correct, sensible, but stilted and boring – and repeated identical prompts will produce repeated identical outputs.

By contrast, if at random intervals the chatbot picks a “next word” that ranks fairly high in probability but is not the highest rank, then more interesting and varied outputs result.

Here is Wolfram’s sample of an output produced by a strict “pick the next word with the highest rank” rule: 

The above output sounds like the effort of someone who is being careful with each sentence, but with no imagination, no creativity, and no real ability to develop a thought.

With a randomness setting introduced, however, Wolfram illustrates how repeated responses to the same prompt produce a wide variety of more interesting outputs:

The above summary is an over-simplification, of course, and if you want a more in-depth exposition Wolfram’s paper offers a lot of complex detail. But Wolfram’s “next word” explanation concurs with at least part of the stochastic parrot thesis: “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine ….”
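The greedy-versus-sampled distinction Wolfram describes can be sketched in a few lines of code. The probability table below is invented for illustration – it is not a real model’s output – but the selection rules are the real point: always taking the top-ranked word is deterministic and repetitive, while weighted random sampling yields varied continuations from the same prompt.

```python
import random

# Hypothetical next-word probabilities for the prompt
# "The best thing about AI is its ability to" -- illustrative numbers only.
next_word_probs = {
    "learn": 0.046, "predict": 0.042, "make": 0.038,
    "understand": 0.034, "do": 0.026,
}

def greedy(probs):
    # Always pick the highest-probability word: same output every time.
    return max(probs, key=probs.get)

def sample(probs, rng):
    # Pick a word at random, weighted by probability: varied output.
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(greedy(next_word_probs))                            # -> learn
print({sample(next_word_probs, rng) for _ in range(20)})  # several different words
```

Real chatbots add a “temperature” parameter that reshapes these probabilities before sampling, but the basic contrast – deterministic top choice versus haphazard weighted choice – is the one Wolfram demonstrates.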

What follows, in Bender and Gebru’s formulation, is equally significant. An LLM, they wrote, strings together words “without any reference to meaning.”

Do LLMs actually understand the meaning of the words, phrases, sentences and paragraphs they have read and which they can produce? To answer that question definitively, we’d need definitive answers to questions like “What is meaning?” and “What does it mean to understand?”

A brain is not a computer, and a computer is not a brain

For the past fifty years a powerful but deceptive metaphor has become pervasive. We’ve grown accustomed to describing computers by analogy to the human brain, and vice versa. As the saying goes, these models are always wrong even though they are sometimes useful.

“The Computational Metaphor,” wrote Alexis Barria and Keith Cross, “affects how people understand computers and brains, and of more recent importance, influences interactions with AI-labeled technology.”

The concepts embedded in the metaphor, they added, “afford the human mind less complexity than is owed, and the computer more wisdom than is due.”6

The human mind is inseparable from the brain, which is inseparable from the body. However much we might theorize about abstract processes of thought, our thought processes evolved with and are inextricably tangled with bodily realities of hunger, love, fear, satisfaction, suffering, mortality. We learn language as part of experiencing life, and the meanings we share (sometimes incompletely) when we communicate with others depend on shared bodily existence.

Angie Wang put it this way: “A toddler has a life, and learns language to describe it. An L.L.M. learns language, but has no life of its own to describe.”7

In other terms, wrote Bender and Gebru, “languages are systems of signs, i.e. pairings of form and meaning. But the training data for LMs is only form; they do not have access to meaning.”

Though the output of a chatbot may appear meaningful, that meaning exists solely in the mind of the human who reads or hears that output, and not in the artificial mind that stitched the words together. If the AI Industrial Complex deploys “counterfeit people”8 who pass as real people, we shouldn’t expect peace and love and understanding. When a chatbot tries to convince us that it really cares about our faulty new microwave or about the time we are waiting on hold for answers, we should not be fooled.

“WEIRD in, WEIRD out”

There are no generic humans. As it turns out, counterfeit people aren’t generic either.

Large Language Models are created primarily by large corporations, or by university researchers who are funded by large corporations or whose best job prospects are with those corporations. It would be a fluke if the products and services growing out of these LLMs didn’t also favour those corporations.

But the bias problem embedded in chatbots goes deeper. For decades, the people who have contributed the most to digitized data sets have been those with the most access to the internet, who publish the most books, research papers, magazine articles and blog posts – and these people disproportionately live in Western Educated Industrialized Rich Democratic countries. Even social media users, who provide terabytes of free data for the AI machine, are likely to live in WEIRD places.

We should not be surprised, then, when outputs from chatbots express common biases:

“As people in positions of privilege with respect to a society’s racism, misogyny, ableism, etc., tend to be overrepresented in training data for LMs, this training data thus includes encoded biases, many already recognized as harmful.”9

In 2023 a group of scholars at Harvard University investigated those biases. “Technical reports often compare LLMs’ outputs with ‘human’ performance on various tests,” they wrote. “Here, we ask, ‘Which humans?’”10

“Mainstream research on LLMs,” they added, “ignores the psychological diversity of ‘humans’ around the globe.”

Their strategy was straightforward: prompt OpenAI’s GPT to answer the questions in the World Values Survey, and then compare the results to the answers that humans around the world gave to the same set of questions. The WVS documents a range of values including but not limited to issues of justice, moral principles, global governance, gender, family, religion, social tolerance, and trust. The team worked with data in the latest WVS surveys, collected from 2017 to 2022.

Recall that GPT does not give identical responses to identical prompts. To ensure that the GPT responses were representative, each of the WVS questions was posed to GPT 1000 times.11
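The comparison logic can be sketched simply. The numbers below are invented for illustration – they are not the actual WVS or GPT data, and the real study used far more questions and a more sophisticated distance measure – but the idea is the same: average the model’s many responses to each question, then measure how far that mean-response vector sits from each country’s mean responses.

```python
import math

# Illustrative numbers only -- not the actual WVS or GPT figures.
# Each vector holds mean responses (on a 1-10 scale) to three survey items.
country_means = {
    "United States": [6.2, 7.1, 5.8],
    "Netherlands":   [6.0, 7.4, 5.5],
    "Pakistan":      [3.1, 4.2, 8.6],
}

# Imagine each question was posed to GPT 1000 times; we keep only the mean.
gpt_mean = [6.1, 7.0, 5.9]

def distance(a, b):
    # Euclidean distance between two mean-response vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

ranked = sorted(country_means, key=lambda c: distance(gpt_mean, country_means[c]))
print(ranked)  # countries ordered from most to least GPT-like
```

Under these made-up numbers the WEIRD countries sort to the front of the list – which is, in substance, what the Harvard team found with the real data.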

The comparisons with human answers to the same surveys revealed striking similarities and contrasts. The article states:

“GPT was identified to be closest to the United States and Uruguay, and then to this cluster of cultures: Canada, Northern Ireland, New Zealand, Great Britain, Australia, Andorra, Germany, and the Netherlands. On the other hand, GPT responses were farthest away from cultures such as Ethiopia, Pakistan, and Kyrgyzstan.”

In other words, the GPT responses were similar to those of people in WEIRD societies.

The results are summarized in the graphic below. Countries in which humans gave WVS answers close to GPT’s answers are clustered at top left, while countries whose residents gave answers increasingly at variance with GPT’s answers trend along the line running down to the right.

“Figure 3. The scatterplot and correlation between the magnitude of GPT-human similarity and cultural distance from the United States as a highly WEIRD point of reference.” From Atari et al., “Which Humans?”

The team went on to consider the WVS responses in various categories including styles of analytical thinking, degrees of individualism, and ways of expressing and understanding personal identity. In these and other domains, they wrote, “people from contemporary WEIRD populations are an outlier in terms of their psychology from a global and historical perspective.” Yet the responses from GPT tracked the WEIRD populations rather than global averages.

Anyone who asks GPT a question in hopes of getting an unbiased answer is on a fool’s errand. Because the data sets include a large over-representation of WEIRD inputs, the outputs, for better or worse, will be no less WEIRD.

As Large Language Models are increasingly incorporated into decision-making tools and processes, their WEIRD biases become increasingly significant. By learning primarily from data that encodes viewpoints of dominant sectors of global society, and then expressing those values in decisions, LLMs are likely to further empower the powerful and marginalize the marginalized.

In the next installment we’ll look at the effects of AI and LLMs on employment conditions, now and in the near future.


Notes

1 Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, Association for Computing Machinery Digital Library, 1 March 2021.

2 John Naughton, “Google might ask questions about AI ethics, but it doesn’t want answers”, The Guardian, 13 March 2021.

3 As quoted in Elizabeth Weil, “You Are Not a Parrot”, New York Magazine, March 1, 2023.

4 Bender, Gebru et al, “On the Dangers of Stochastic Parrots.”

5 Stephen Wolfram, “What Is ChatGPT Doing … and Why Does It Work?”, 14 February 2023.

6 Alexis T. Baria and Keith Cross, “The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor”, arXiv, 18 July 2021.

7 Angie Wang, “Is My Toddler a Stochastic Parrot?”, The New Yorker, 15 November 2023.

8 The phrase “counterfeit people” is attributed to philosopher Daniel Dennett, quoted by Elizabeth Weil in “You Are Not a Parrot”, New York Magazine.

9 Bender, Gebru et al, “On the Dangers of Stochastic Parrots.”

10 Mohammad Atari, Mona J. Xue, Peter S. Park, Damián E. Blasi, and Joseph Henrich, “Which Humans?”, arXiv, 22 September 2023.

11 Specifically, the team “ran both GPT 3 and 3.5; they were similar. The paper’s plots are based on 3.5.” Email correspondence with study author Mohammad Atari.


Image at top of post: “The Evolution of Intelligence”, illustration by Bart Hawkins Kreps, posted under CC BY-SA 4.0 DEED license, adapted from “The Yin and Yang of Human Progress”, (Wikimedia Commons), and from parrot illustration courtesy of Judith Kreps Hawkins.

“Warning. Data Inadequate.”

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part three
Also published on Resilience.

“The Navy revealed the embryo of an electronic computer today,” announced a New York Times article, “that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”1

A few paragraphs into the article, “the Navy” was quoted as saying the new “perceptron” would be the first non-living mechanism “capable of receiving, recognizing and identifying its surroundings without any human training or control.”

This example of AI hype wasn’t the first and won’t be the last, but it is a bit dated. To be precise, the Times story was published on July 8, 1958.

Due to its incorporation of a simple “neural network” loosely analogous to the human brain, the perceptron of 1958 is recognized as a forerunner of today’s most successful “artificial intelligence” projects – from facial recognition systems to text extruders like ChatGPT. It’s worth considering this early device in some detail.

In particular, what about the claim that the perceptron could identify its surroundings “without any human training or control”? Sixty years on, the descendants of the perceptron have “learned” a great deal, and can now identify, describe and even transform millions of images. But that “learning” has involved not only billions of transistors and trillions of watts, but also millions of hours of labour in “human training and control.”

Seeing is not perceiving

When we look at a real-world object – for example, a tree – sensors in our eyes pass messages through a network of neurons and through various specialized areas of the brain. Eventually, assuming we are old enough to have learned what a tree looks like, and both our eyes and the required parts of our brains are functioning well, we might say “I see a tree.” In short, our eyes see a configuration of light, our neural network processes that input, and the result is that our brains perceive and identify a tree.

Accomplishing the perception with electronic computing, it turns out, is no easy feat.

The perceptron invented by Dr. Frank Rosenblatt in the 1950s used a 20 pixel by 20 pixel image sensor, paired with an IBM 704 computer. Let’s look at some simple images, and how a perceptron might process the data to produce a perception. 

Images created by the author.

In the illustration at left above, what the camera “sees” at the most basic level is a column of pixels that are “on”, with all the other pixels “off”. However, if we train the computer by giving it nothing more than labelled images of the numerals from 0 to 9, the perceptron can recognize the input as matching the numeral “1”. If we then add training data in the form of labelled images of the characters in the Latin-script alphabet in a sans serif font, the perceptron can determine that it matches, equally well, the numeral “1”, the lower-case letter “l”, or an upper-case letter “I”.

The figure at right is considerably more complex. Here our perceptron is still working with a low-resolution grid, but pixels can be not only “on” or “off” – black or white – but various shades of grey. To complicate things further, suppose more training data has been added, in the form of hand-written letters and numerals, plus printed letters and numerals in an oblique sans serif font. The perceptron might now determine the figure is a numeral “1” or a lower-case “l” or upper-case “I”, either hand-written or printed in an oblique font, each with an equal probability. The perceptron is learning how to be an optical character recognition (OCR) system, though to be very good at the task it would need the ability to use context to rank the probabilities of a numeral “1”, a lower-case “l”, or an upper-case “I”.

The possibilities multiply infinitely when we ask the perceptron about real-world objects. In the figure below, a bit of context, in the form of a visual ground, is added to the images. 

Images created by the author.

Depending, again, on the labelled training data already input to the computer, the perceptron may “see” the image at left as a tall tower, a bare tree trunk, or the silhouette of a person against a bright horizon. The perceptron might see, on the right, a leaning tree or a leaning building – perhaps the Leaning Tower of Pisa. With more training images and with added context in the input image – shapes of other buildings, for example – the perceptron might output with high statistical confidence that the figure is actually the Leaning Tower of Leeuwarden.

Today’s perceptrons can and do, with widely varying degrees of accuracy and reliability, identify and name faces in crowds, label the emotions shown by someone in a recorded job interview, analyse images from a surveillance drone and indicate that a person’s activities and surroundings match the “signature” of terrorist operations, or identify a crime scene by comparing an unlabelled image with photos of known settings from around the world. Whether right or wrong, the systems’ perceptions sometimes have critical consequences: people can be monitored, hired, fired, arrested – or executed in an instant by a US Air Force Reaper drone.

As we will discuss below, these capabilities have been developed with the aid of millions of hours of poorly-paid or unpaid human labour.

The Times article of 1958, however, described Dr. Rosenblatt’s invention this way: “the machine would be the first device to think as the human brain. As do human beings, Perceptron will make mistakes at first, but will grow wiser as it gains experience ….” The kernel of truth in that claim lies in the concept of a neural network.

Rosenblatt told the Times reporter “he could explain why the machine learned only in highly technical terms. But he said the computer had undergone a ‘self-induced change in the wiring diagram.’”

I can empathize with that Times reporter. I still hope to find a person sufficiently intelligent to explain the machine learning process so clearly that even a simpleton like me can fully understand. However, New Yorker magazine writers in 1958 made a good attempt. As quoted in Matteo Pasquinelli’s book The Eye of the Master, the authors wrote:

“If a triangle is held up to the perceptron’s eye, the association units connected with the eye pick up the image of the triangle and convey it along a random succession of lines to the response units, where the image is registered. The next time the triangle is held up to the eye, its image will travel along the path already travelled by the earlier image. Significantly, once a particular response has been established, all the connections leading to that response are strengthened, and if a triangle of a different size and shape is held up to the perceptron, its image will be passed along the track that the first triangle took.”2

With hundreds, thousands, millions and eventually billions of steps in the perception process, the computer gets better and better at interpreting visual inputs.
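The “strengthening of connections” the New Yorker writers described corresponds to what is now called the perceptron learning rule: when the machine classifies an input wrongly, the weights on the active input lines are nudged toward the correct response. Here is a minimal sketch with tiny three-pixel “images” and a made-up labelling task (this is the classic textbook rule, not Rosenblatt’s exact hardware procedure):

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """samples: list of (pixel_vector, label) pairs, with label +1 or -1."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in samples:
            predicted = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if predicted != label:
                # Strengthen (or weaken) the connections that were active.
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
                b += lr * label
    return w, b

# Made-up task: label +1 if the leftmost pixel is on, -1 otherwise.
samples = [([1, 0, 0], 1), ([1, 1, 0], 1), ([0, 1, 0], -1), ([0, 0, 1], -1)]
w, b = train_perceptron(samples)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

print([predict(x) for x, _ in samples])  # -> [1, 1, -1, -1], matching the labels
```

After a few passes through the data the weights settle so that every training image is classified correctly – the electronic analogue of an image “travelling along the path already travelled by the earlier image.”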

Yet this improvement in machine perception comes at a high ecological cost. A September 2021 article entitled “Deep Learning’s Diminishing Returns” explained:

“[I]n 2012 AlexNet, the model that first showed the power of training deep-learning systems on graphics processing units (GPUs), was trained for five to six days using two GPUs. By 2018, another model, NASNet-A, had cut the error rate of AlexNet in half, but it used more than 1,000 times as much computing to achieve this.”

The authors concluded that, “Like the situation that Rosenblatt faced at the dawn of neural networks, deep learning is today becoming constrained by the available computational tools.”3

The steep increase in the computing demands of AI is illustrated in a graph by Anil Ananthaswamy.

“The Drive to Bigger AI Models” shows that AI models used for language and image generation have grown in size by several orders of magnitude since 2010.  Graphic from “In AI, is Bigger Better?”, by Anil Ananthaswamy, Nature, 9 March 2023.

Behold the Mechanical Turk

In the decades since Rosenblatt built the first perceptron, there were periods when progress in this field seemed stalled. Additional theoretical advances in machine learning, a many orders-of-magnitude increase in computer processing capability, and vast quantities of training data were all prerequisites for today’s headline-making AI systems. In Atlas of AI, Kate Crawford gives a fascinating account of the struggle to acquire that data.

Up to the 1980s artificial intelligence researchers didn’t have access to large quantities of digitized text or digitized images, and the type of machine learning that makes news today was not yet possible. The lengthy antitrust proceedings against IBM provided an unexpected boost to AI research, in the form of a hundred million digital words from legal proceedings. In the early 2000s, court proceedings against Enron collected more than half a million email messages sent among Enron employees. This provided text exchanges in everyday English, though Crawford notes the wording “represented the gender, race, and professional skews of those 158 workers.”

And the data floodgates were just beginning to open. As Crawford describes the change,

“The internet, in so many ways, changed everything; it came to be seen in the AI research field as something akin to a natural resource, there for the taking. As more people began to upload their images to websites, to photo-sharing services, and ultimately to social media platforms, the pillaging began in earnest. Suddenly, training sets could reach a size that scientists in the 1980s could never have imagined.”4

It took two decades for that data flood to become a tsunami. Even then, although images were often labelled and classified for free by social media users, those labels were not always consistent or even correct. Humans were still needed to look at millions of images and create or check the labels and classifications.

Developers of the image database ImageNet collected 14 million images and eventually organized them into over twenty thousand categories. They initially hired students in the US for labelling work, but concluded that even at $10/hour, this work force would quickly exhaust the budget.

Enter the Mechanical Turk.

The original Mechanical Turk was a chess-playing hoax built in 1770 by the Hungarian inventor Wolfgang von Kempelen. An apparently autonomous mechanical figure, dressed in the Ottoman fashion of the day, moved the chess pieces and could beat most human players. Decades went by before it was revealed that a skilled human chess player was concealed inside the machine for each exhibition, controlling all the motions.

In the early 2000s, Amazon developed a web platform by which AI developers, among others, could contract gig workers for many tasks that were ostensibly being done by artificial intelligence. These tasks might include, for example, labelling and classifying photographic images, or making judgements about outputs from AI-powered chat experiments. In a rare fit of honesty, Amazon labelled the process “artificial artificial intelligence”5 and launched its service, Amazon Mechanical Turk, in 2005.

Screen shot taken 3 February 2024, from the opening page at mturk.com.

Crawford writes,

“ImageNet would become, for a time, the world’s largest academic user of Amazon’s Mechanical Turk, deploying an army of piecemeal workers to sort an average of fifty images a minute into thousands of categories.”6

Chloe Xiang described this organization of work for Motherboard in an article entitled “AI Isn’t Artificial or Intelligent”:

“[There is a] large labor force powering AI, doing jobs that include looking through large datasets to label images, filter NSFW content, and annotate objects in images and videos. These tasks, deemed rote and unglamorous for many in-house developers, are often outsourced to gig workers and workers who largely live in South Asia and Africa ….”7

Laura Forlano, Associate Professor of Design at Illinois Institute of Technology, told Xiang “what human labor is compensating for is essentially a lot of gaps in the way that the systems work.”

Xiang concluded,

“Like other global supply chains, the AI pipeline is greatly imbalanced. Developing countries in the Global South are powering the development of AI systems by doing often low-wage beta testing, data annotating and labeling, and content moderation jobs, while countries in the Global North are the centers of power benefiting from this work.”

In a study published in late 2022, Kelle Howson and Hannah Johnston described why “platform capitalism”, as embodied in Mechanical Turk, is an ideal framework for exploitation, given that workers bear nearly all the costs while contractors take no responsibility for working conditions. The platforms are able to enroll workers from many countries in large numbers, so that workers are constantly low-balling to compete for ultra-short-term contracts. Contractors are also able to declare that the work submitted is “unsatisfactory” and therefore will not be paid, knowing the workers have no effective recourse and can be replaced by other workers for the next task. Workers are given an estimated “time to complete” before accepting a task, but if the work turns out to require two or three times as many hours, the workers are still only paid for the hours specified in the initial estimate.8

A survey of 700 cloudwork employees (or “independent contractors” in the fictive lingo of the gig work platforms) found that about 34% of the time they spent on these platforms was unpaid. “One key outcome of these manifestations of platform power is pervasive unpaid labour and wage theft in the platform economy,” Howson and Johnston wrote.9 From the standpoint of major AI ventures at the top of the extraction pyramid, pervasive wage theft is not a bug in the system; it is a feature.
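The mechanics of that wage theft are easy to put into numbers. Here is an illustrative sketch in Python – the dollar amounts and hours are hypothetical, chosen only to show the two mechanisms described above; the 34% figure is from Howson and Johnston’s survey:

```python
# Mechanism 1: pay is fixed by the advertised time estimate, not actual time.
advertised_rate = 9.00    # hypothetical USD/hour implied by the task estimate
estimated_hours = 1.0
actual_hours = 2.5        # the work "turns out to require two or three times
                          # as many hours" as the estimate

pay = advertised_rate * estimated_hours
effective_rate = pay / actual_hours       # $3.60/hour instead of $9.00

# Mechanism 2: the survey found ~34% of all platform time goes unpaid.
unpaid_share = 0.34
effective_overall = advertised_rate * (1 - unpaid_share)  # $5.94/hour
```

Either way, the advertised rate bears little relation to what workers actually earn.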

The apparently dazzling brilliance of AI-model creators and semi-conductor engineers gets the headlines in western media. But without low-paid or unpaid work by employees in the Global South, “AI systems won’t function,” Crawford writes. “The technical AI research community relies on cheap, crowd-sourced labor for many tasks that can’t be done by machines.”10

Whether vacuuming up data that has been created by the creative labour of hundreds of millions of people, or relying on tens of thousands of low-paid workers to refine the perception process for reputedly super-intelligent machines, the AI value chain is another example of extractivism.

“AI image and text generation is pure primitive accumulation,” James Bridle writes, “expropriation of labour from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners.”11

“All seven emotions”

New AI implementations don’t usually start with a clean slate, Crawford says – they typically borrow classification systems from earlier projects.

“The underlying semantic structure of ImageNet,” Crawford writes, “was imported from WordNet, a database of word classifications first developed at Princeton University’s Cognitive Science Laboratory in 1985 and funded by the U.S. Office of Naval Research.”12

But classification systems are unavoidably political when it comes to slotting people into categories. In the ImageNet groupings of pictures of humans, Crawford says, “we see many assumptions and stereotypes, including race, gender, age, and ability.”

She explains,

“In ImageNet the category ‘human body’ falls under the branch Natural Object → Body → Human Body. Its subcategories include ‘male body,’ ‘person,’ ‘juvenile body,’ ‘adult body,’ and ‘female body.’ The ‘adult body’ category contains the subclasses ‘adult female body’ and ‘adult male body.’ There is an implicit assumption here that only ‘male’ and ‘female’ bodies are recognized as ‘natural.’”13

Readers may have noticed that US military agencies were important funders of some key early AI research: Frank Rosenblatt’s perceptron in the 1950s, and the WordNet classification scheme in the 1980s, were both funded by the US Navy.

For the past six decades, the US Department of Defense has also been interested in systems that might detect and measure the movements of muscles in the human face, and in so doing, identify emotions. Crawford writes, “Once the theory emerged that it is possible to assess internal states by measuring facial movements and the technology was developed to measure them, people willingly adopted the underlying premise. The theory fit what the tools could do.”14

Several major corporations now market services with roots in this military-funded research into machine recognition of human emotion – even though, as many people have insisted, the emotions people express on their faces don’t always match the emotions they are feeling inside.

Affectiva is a corporate venture spun out of the Media Lab at Massachusetts Institute of Technology. On their website they claim “Affectiva created and defined the new technology category of Emotion AI, and evangelized its many uses across industries.” The opening page of affectiva.com spins their mission as “Humanizing Technology with Emotion AI.”

Who might want to contract services for “Emotion AI”? Media companies, perhaps, want to “optimize content and media spend by measuring consumer emotional responses to videos, ads, movies and TV shows – unobtrusively and at scale.” Auto insurance companies, perhaps, might want to keep their (mechanical) eyes on you while you drive: “Using in-cabin cameras our AI can detect the state, emotions, and reactions of drivers and other occupants in the context of a vehicle environment, as well as their activities and the objects they use. Are they distracted, tired, happy, or angry?”

Affectiva’s capabilities, the company says, draw on “the world’s largest emotion database of more than 80,000 ads and more than 14.7 million faces analyzed in 90 countries.”15 As reported by The Guardian, the videos are screened by workers in Cairo, “who watch the footage and translate facial expressions to corresponding emotions.”16

There is a slight problem: there is no clear and generally accepted definition of an emotion, nor general agreement on just how many emotions there might be. But “emotion AI” companies don’t let those quibbles get in the way of business.

Amazon’s Rekognition service announced in 2019 “we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’)” – but they were proud to have “added a new emotion: ‘Fear’.”17

Facial- and emotion-recognition systems, with deep roots in military and intelligence agency research, are now widely employed not only by those agencies but also by local police departments. Nor is their use confined to governments: corporations deploy them for a wide range of purposes. Their production and operation likewise cross public-private lines: though much of the initial research was government-funded, commercialization now allows corporate interests to sell the resulting services to public and private clients around the world.

What is the likely impact of these AI-aided surveillance tools? Dan McQuillan sees it this way:

“We can confidently say that the overall impact of AI in the world will be gendered and skewed with respect to social class, not only because of biased data but because engines of classification are inseparable from systems of power.”18

In our next installment we’ll see that biases in data sources and classification schemes are reflected in the outputs of the GPT large language model.


Image at top of post: The Senture computer server facility in London, Ky, on July 14, 2011, photo by US Department of Agriculture, public domain, accessed on flickr.

Title credit: the title of this post quotes a lyric of “Data Inadequate”, from the 1998 album Live at Glastonbury by Banco de Gaia.


Notes

1 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

2 “Rival”, in The New Yorker, by Harding Mason, D. Stewart, and Brendan Gill, November 28, 1958, synopsis here. Quoted by Matteo Pasquinelli in The Eye of the Master: A Social History of Artificial Intelligence, Verso Books, October 2023, page 137.

3 “Deep Learning’s Diminishing Returns”, by Neil C. Thompson, Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso, IEEE Spectrum, 24 September 2021.

4 Crawford, Kate, Atlas of AI, Yale University Press, 2021.

5 This phrase is cited by Elizabeth Stevens and attributed to Jeff Bezos, in “The mechanical Turk: a short history of ‘artificial artificial intelligence’”, Cultural Studies, 08 March 2022.

6 Crawford, Atlas of AI.

7 Chloe Xiang, “AI Isn’t Artificial or Intelligent: How AI innovation is powered by underpaid workers in foreign countries,” Motherboard, 6 December 2022.

8 Kelle Howson and Hannah Johnston, “Unpaid labour and territorial extraction in digital value networks,” Global Network, 26 October 2022.

9 Howson and Johnston, “Unpaid labour and territorial extraction in digital value networks.”

10 Crawford, Atlas of AI.

11 James Bridle, “The Stupidity of AI”, The Guardian, 16 Mar 2023.

12 Crawford, Atlas of AI.

13 Crawford, Atlas of AI.

14 Crawford, Atlas of AI.

15 Quotes from Affectiva taken from www.affectiva.com on 5 February 2024.

16 Oscar Schwarz, “Don’t look now: why you should be worried about machines reading your emotions,” The Guardian, 6 March 2019.

17 From Amazon Web Services Rekognition website, accessed on 5 February 2024; italics added.

18 Dan McQuillan, “Post-Humanism, Mutual Aid,” in AI for Everyone? Critical Perspectives, University of Westminster Press, 2021.

Artificial Intelligence in the Material World

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part two
Also published on Resilience.

Picture a relatively simple human-machine interaction: I walk two steps, flick a switch on the wall, and a light comes on.

Now picture a more complex interaction. I say, “Alexa, turn on the light” – and, if I’ve trained my voice to match the classifications in the electronic monitoring device and its associated global network, a light comes on.

“In this fleeting moment of interaction,” write Kate Crawford and Vladan Joler, “a vast matrix of capacities is invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics, distribution, prediction and optimization.”

“The scale of resources required,” they add, “is many magnitudes greater than the energy and labor it would take a human to … flick a switch.”1

Crawford and Joler wrote these words in 2018, at a time when “intelligent assistants” were recent and rudimentary products of AI. The industry has grown by leaps and bounds since then – and the money invested is matched by the computing resources now devoted to processing and “learning” from data.

In 2021, a much-discussed paper found that “the amount of compute used to train the largest deep learning models (for NLP [natural language processing] and other applications) has increased 300,000x in 6 years, increasing at a far higher pace than Moore’s Law.”2
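How fast is a 300,000-fold increase over six years? For readers who like to check the arithmetic, here is a back-of-envelope sketch in Python (the 300,000× and six-year figures come from the paper quoted above; the two-year Moore’s Law doubling period is the conventional rule of thumb):

```python
import math

growth = 300_000   # increase in training compute reported by the paper
months = 6 * 12    # over six years

# If compute doubles every d months, then 2 ** (months / d) = growth,
# so d = months / log2(growth):
doubling_months = months / math.log2(growth)   # roughly 4 months

# Moore's Law doubles transistor counts roughly every 24 months:
speedup = 24 / doubling_months                 # roughly 6x faster
```

In other words, training compute was doubling roughly every four months – about six times the pace of Moore’s Law.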

An analysis in 2023 backed up this conclusion. Computing work is often measured in floating point operations (FLOPs). A Comment piece in the journal Nature Machine Intelligence illustrated the steep rise in the number of FLOPs used in training recent AI models.

Changes in the number of FLOPs needed for state-of-the-art AI model training, graph from “Reporting electricity consumption is essential for sustainable AI”, Charlotte Debus, Marie Piraud, Achim Streit, Fabian Theis & Markus Götz, Nature Machine Intelligence, 10 November 2023. AlexNet is a neural network model used to great effect with the image classification database ImageNet, which we will discuss in a later post. GPT-3 is a Large Language Model developed by OpenAI, for which ChatGPT is the free consumer interface.

With the performance of individual AI-specialized computer chips now measured in TeraFLOPs, and thousands of these chips harnessed together in an AI server farm, the electricity consumption of AI is vast.

As many researchers have noted, accurate electricity consumption figures are difficult to find, making it almost impossible to calculate the worldwide energy needs of the AI Industrial Complex.

However, Josh Saul and Dina Bass reported last year that

“Artificial intelligence made up 10 to 15% of [Google’s] total electricity consumption, which was 18.3 terawatt hours in 2021. That would mean that Google’s AI burns around 2.3 terawatt hours annually, about as much electricity each year as all the homes in a city the size of Atlanta.”3

Researcher Alex de Vries calculated, though, that if an AI system similar to ChatGPT were used for each Google search, electricity usage would spike to 29.2 TWh just for the search engine.4
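These figures are easy to cross-check. Here is a back-of-envelope sketch in Python, using only the numbers quoted above:

```python
# Google's 2021 electricity use and the AI share, per the Bloomberg report:
google_total_twh = 18.3
ai_share_low, ai_share_high = 0.10, 0.15

ai_twh_low = google_total_twh * ai_share_low    # about 1.8 TWh
ai_twh_high = google_total_twh * ai_share_high  # about 2.7 TWh
midpoint = (ai_twh_low + ai_twh_high) / 2       # about 2.3 TWh, as reported

# De Vries's scenario: ChatGPT-style AI on every Google search, 29.2 TWh/year
ratio = 29.2 / midpoint   # roughly a 13-fold jump over current AI consumption
```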

In Scientific American, Lauren Leffer cited projections that Nvidia, manufacturer of the most sophisticated chips for AI servers, will ship “1.5 million AI server units per year by 2027.”

“These 1.5 million servers, running at full capacity,” she added, “would consume at least 85.4 terawatt-hours of electricity annually—more than what many small countries use in a year, according to the new assessment.”5
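What does that figure imply per server? A quick sketch in Python (the 85.4 TWh and 1.5 million servers are from the assessment cited above; the continuous-draw assumption follows Leffer’s “running at full capacity”):

```python
total_twh = 85.4
servers = 1_500_000
hours_per_year = 365 * 24   # 8,760 hours

# Average continuous power draw per server:
watts_per_server = total_twh * 1e12 / servers / hours_per_year
kw_per_server = watts_per_server / 1000   # roughly 6.5 kW per server
```

Roughly 6.5 kilowatts of continuous draw per server – dozens of times the consumption of a typical desktop computer.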

OpenAI CEO Sam Altman expects AI’s appetite for energy will continue to grow rapidly. At the Davos confab in January 2024 he told the audience, “We still don’t appreciate the energy needs of this technology.” As quoted by The Verge, he added, “There’s no way to get there without a breakthrough. We need [nuclear] fusion or we need like radically cheaper solar plus storage or something at massive scale.” Altman has invested $375 million in fusion start-up Helion Energy, which hopes to succeed soon with a technology that has stubbornly remained 50 years in the future for the past 50 years.

In the near term, at least, electricity consumption will act as a brake on widespread use of AI in standard web searches, and will restrict use of the most sophisticated AI models to paying customers. That’s because the cost of AI use can be measured not only in watts, but in dollars and cents.

Shortly after the launch of ChatGPT, Sam Altman was quoted as saying that ChatGPT cost “probably single-digit cents per chat.” Pocket change – until you multiply it by perhaps 10 million users each day. Citing figures from SemiAnalysis, the Washington Post reported that by February 2023, “ChatGPT was costing OpenAI some $700,000 per day in computing costs alone.” Will Oremus concluded,

“Multiply those computing costs by the 100 million people per day who use Microsoft’s Bing search engine or the more than 1 billion who reportedly use Google, and one can begin to see why the tech giants are reluctant to make the best AI models available to the public.”6
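Oremus’s point can be sketched with the numbers quoted above (the 10-million-user figure is the rough estimate from the text, not an official count):

```python
daily_cost = 700_000        # USD/day, per SemiAnalysis, February 2023
daily_users = 10_000_000    # "perhaps 10 million users each day"

cost_per_user = daily_cost / daily_users   # 7 cents -- "single-digit cents"

# Scale that to a Google-sized user base of a billion people per day:
daily_at_google_scale = cost_per_user * 1_000_000_000   # about $70 million/day
yearly_at_google_scale = daily_at_google_scale * 365    # about $25 billion/year
```

A rough estimate, but it makes clear why the tech giants hesitate to attach a state-of-the-art model to every free search.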

In any case, Alex de Vries says, “NVIDIA does not have the production capacity to promptly deliver 512,821 A100 HGX servers” which would be required to pair every Google search with a state-of-the-art AI model. And even if Nvidia could ramp up that production tomorrow, purchasing the computing hardware would cost Google about $100 billion USD.

Detail from: Nvidia GeForce RTX 2080, (TU104 | Turing), (Polysilicon | 5x | External Light), photograph by Fritzchens Fritz, at Wikimedia Commons, licensed under Creative Commons CC0 1.0 Universal Public Domain Dedication

A 457,000-item supply chain

Why is AI computing hardware so difficult to produce and so expensive? To understand this it’s helpful to take a greatly simplified look at a few aspects of computer chip production.

That production begins with silicon, one of the most common elements on earth and a basic constituent of sand. The silicon must be refined to 99.9999999% purity before being sliced into wafers.

Image from Intel video From Sand to Silicon: The Making of a Microchip.

Eventually each silicon wafer will be augmented with an extraordinarily fine pattern of transistors. Let’s look at the complications involved in just one step, the photolithography that etches a microscopic pattern in the silicon.

As Chris Miller explains in Chip War, the precision of photolithography is determined by, among other factors, the wavelength of the light being used: “The smaller the wavelength, the smaller the features that could be carved onto chips.”7 By the early 1990s, chipmakers had learned to pack more than 1 million transistors onto one of the chips used in consumer-level desktop computers. To enable the constantly climbing transistor count, photolithography tool-makers were using deep ultraviolet light, with wavelengths of about 200 nanometers (compared to visible light with wavelengths of about 400 to 750 nanometers; a nanometer is one-billionth of a meter). It was clear to some industry figures, however, that the wavelength of deep ultraviolet light would soon be too long for continued increases in the precision of etching and for continued increases in transistor count.

Thus began the long, difficult, and immensely expensive development of Extreme UltraViolet (EUV) photolithography, using light with a wavelength of about 13.5 nanometers.

Let’s look at one small part of the complex EUV photolithography process: producing and focusing the light. In Miller’s words,

“[A]ll the key EUV components had to be specially created. … Producing enough EUV light requires pulverizing a small ball of tin with a laser. … [E]ngineers realized the best approach was to shoot a tiny ball of tin measuring thirty-millionths of a meter wide moving through a vacuum at a speed of around two hundred miles an hour. The tin is then struck twice with a laser, the first pulse to warm it up, the second to blast it into a plasma with a temperature around half a million degrees, many times hotter than the surface of the sun. This process of blasting tin is then repeated fifty thousand times per second to produce EUV light in the quantities necessary to fabricate chips.”8
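Miller’s numbers give a sense of the precision involved. A back-of-envelope sketch in Python (this assumes, plausibly but not stated outright in the quote, that fifty thousand blasts per second means fifty thousand droplets per second in flight):

```python
droplet_diameter_m = 30e-6               # thirty-millionths of a meter
speed_m_per_s = 200 * 1609.34 / 3600     # 200 mph is about 89 m/s
droplets_per_s = 50_000

# Spacing between successive droplets along the flight path:
spacing_m = speed_m_per_s / droplets_per_s           # just under 2 mm
spacing_vs_droplet = spacing_m / droplet_diameter_m  # about 60 droplet-widths
```

Each droplet is a speck only about one-sixtieth as wide as the gap to the next one – and the laser must hit it twice, fifty thousand times a second.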

Heating the tin droplets to that temperature “required a carbon dioxide-based laser more powerful than any that previously existed.”9 Laser manufacturer Trumpf worked for 10 years to develop a laser powerful enough and reliable enough – and the resulting tool had “exactly 457,329 component parts.”10

Once the extremely short wavelength light could be reliably produced, it needed to be directed with great precision – and for that purpose German lens company Zeiss “created mirrors that were the smoothest objects ever made.”11

Nearly 20 years after development of EUV lithography began, this technique is standard for the production of sophisticated computer chips which now contain tens of billions of transistors each. But as of 2023, only Dutch company ASML had mastered the production of EUV photolithography machines for chip production. At more than $100 million each, Miller says “ASML’s EUV lithography tool is the most expensive mass-produced machine tool in history.”12

Landscape Destruction: Rio Tinto Kennecott Copper Mine from the top of Butterfield Canyon. Photographed in 665 nanometer infrared using an infrared converted Canon 20D and rendered in channel inverted false color infrared, photo by arbyreed, part of the album Kennecott Bingham Canyon Copper Mine, on flickr, licensed via CC BY-NC-SA 2.0 DEED.

No, data is not the “new oil”

US semi-conductor firms began moving parts of production to Asia in the 1960s. Today much of semi-conductor manufacturing and most of computer and phone assembly is done in Asia – sometimes using technology more advanced than anything in use within the US.

The example of EUV lithography indicates how complex and energy-intensive chipmaking has become. At countless steps from mining to refining to manufacturing, chipmaking relies on an industrial infrastructure that is still heavily reliant on fossil fuels.

Consider the logistics alone. A wide variety of metals, minerals, and rare earth elements, located at sites around the world, must be extracted, refined, and processed. These materials must then be transformed into the hundreds of thousands of parts that go into computers, phones, and routers, or which go into the machines that make the computer parts.

Co-ordinating all of this production, and getting all the pieces to where they need to be for each transformation, would be difficult if not impossible if it weren’t for container ships and airlines. And though it might be possible someday to run most of those processes on renewable electricity, for now those operations have a big carbon footprint.

It has become popular to proclaim that “data is the new oil”13, or “semi-conductors are the new oil”14. This is nonsense, of course. While both data and semi-conductors are worth a lot of money and drive a lot of GDP growth in our current economic context, neither one produces energy – both depend on available and affordable energy to be useful.

A world temporarily rich in surplus energy can produce semi-conductors to extract economic value from data. But warehouses of semi-conductors and petabytes of data will not enable us to produce surplus energy.

Artificial Intelligence powered by semi-conductors and data could, perhaps, help us to use the surplus energy much more efficiently and rationally. But that would require a radical change in the economic religion that guides our whole economic system, including the corporations at the top of the Artificial Intelligence Industrial Complex.

Meanwhile the AI Industrial Complex continues to soak up huge amounts of money and energy.

OpenAI CEO Sam Altman has been in fund-raising mode recently, seeking to finance a network of new semi-conductor fabrication plants. As reported in Fortune, “Constructing a single state-of-the-art fabrication plant can require tens of billions of dollars, and creating a network of such facilities would take years. The talks with [Abu Dhabi company] G42 alone had focused on raising $8 billion to $10 billion ….”

This round of funding would be in addition to the $10 billion Microsoft has already invested in OpenAI. Why would Altman want to get into the hardware production side of the Artificial Intelligence Industrial Complex, in addition to OpenAI’s leading role in software operations? According to Fortune,

“Since OpenAI released ChatGPT more than a year ago, interest in artificial intelligence applications has skyrocketed among companies and consumers. That in turn has spurred massive demand for the computing power and processors needed to build and run those AI programs. Altman has said repeatedly that there already aren’t enough chips for his company’s needs.”15

Becoming data

We face the prospect, then, of continuing rapid growth in the Artificial Intelligence Industrial Complex, accompanied by continuing rapid growth in the extraction of materials and energy – and data.

How will major AI corporations obtain and process all the data that will keep these semi-conductors busy pumping out heat?

Consider the light I turned on at the beginning of this post. If I simply flick the switch on the wall and the light goes off, the interaction will not be transformed into data. But if I speak to an Echo, asking Alexa to turn off the light, many data points are created and integrated into Amazon’s database: the time of the interaction, the IP address and physical location where this takes place, whether I speak English or some other language, whether my spoken words are unclear and the device asks me to repeat, whether the response appears to meet my approval, or whether I instead ask for the response to be changed. I would be, in Kate Crawford’s and Vladan Joler’s words, “simultaneously a consumer, a resource, a worker, and a product.”16

By buying into the Amazon Echo world,

“the user has purchased a consumer device for which they receive a set of convenient affordances. But they are also a resource, as their voice commands are collected, analyzed and retained for the purposes of building an ever-larger corpus of human voices and instructions. And they provide labor, as they continually perform the valuable service of contributing feedback mechanisms regarding the accuracy, usefulness, and overall quality of Alexa’s replies. They are, in essence, helping to train the neural networks within Amazon’s infrastructural stack.”16

How will AI corporations monetize that data so they can cover their hardware and energy costs, and still return a profit on their investors’ money? We’ll turn to that question in coming installments.


Image at top of post: Bingham Canyon Open Pit Mine, Utah, photo by arbyreed, part of the album Kennecott Bingham Canyon Copper Mine, on flickr, licensed via CC BY-NC-SA 2.0 DEED.


Notes

1 Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources”, 2018.

2 Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” ACM Digital Library, March 1, 2021. Thanks to Paris Marx for introducing me to the work of Emily M. Bender on the excellent podcast Tech Won’t Save Us.

3 “Artificial Intelligence Is Booming—So Is Its Carbon Footprint”, Bloomberg, 9 March 2023.

4 Alex de Vries, “The growing energy footprint of artificial intelligence,” Joule, 18 October 2023.

5 Lauren Leffer, “The AI Boom Could Use a Shocking Amount of Electricity,” Scientific American, 13 October 2023.

6 Will Oremus, “AI chatbots lose money every time you use them. That is a problem.” Washington Post, 5 June 2023.

7 Chris Miller, Chip War: The Fight for the World’s Most Critical Technology, Simon & Schuster, October 2022; page 183.

8 Chip War, page 226.

9 Chip War, page 227.

10 Chip War, page 228.

11 Chip War, page 228.

12 Chip War, page 230.

13 For example, in “Data Is The New Oil — And That’s A Good Thing,” Forbes, 15 Nov 2019.

14 As in, “Semi-conductors may be to the twenty-first century what oil was to the twentieth,” Lawrence Summers, former US Secretary of the Treasury, in blurb to Chip War.

15 “OpenAI CEO Sam Altman is fundraising for a network of AI chips factories because he sees a shortage now and well into the future,” Fortune, 20 January 2024.

16 Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources”, 2018.

Bodies, Minds, and the Artificial Intelligence Industrial Complex

Also published on Resilience.

This year may or may not be the year the latest wave of AI-hype crests and subsides. But let’s hope this is the year mass media slow their feverish speculation about the future dangers of Artificial Intelligence, and focus instead on the clear and present, right-now dangers of the Artificial Intelligence Industrial Complex.

Lost in most sensational stories about Artificial Intelligence is that AI does not and cannot exist on its own, any more than other minds, including human minds, can exist independent of bodies. These bodies have evolved through billions of years of coping with physical needs, and intelligence is linked to and inescapably shaped by these physical realities.

What we call Artificial Intelligence is likewise shaped by physical realities. Computing infrastructure necessarily reflects the properties of physical materials that are available to be formed into computing machines. The infrastructure is shaped by the types of energy and the amounts of energy that can be devoted to building and running the computing machines. The tasks assigned to AI reflect those aspects of physical realities that we can measure and abstract into “data” with current tools. Last but certainly not least, AI is shaped by the needs and desires of all the human bodies and minds that make up the Artificial Intelligence Industrial Complex.

As Kate Crawford wrote in Atlas of AI,

“AI can seem like a spectral force — as disembodied computation — but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.”1

The metaphors we use for high-tech phenomena influence how we think of these phenomena. Take, for example, “the Cloud”. When we store a photo “in the Cloud” we imagine that photo as floating around the ether, simultaneously everywhere and nowhere, unconnected to earth-bound reality.

But as Steven Gonzalez Monserrate reminded us, “The Cloud is Material”. The Cloud is tens of thousands of kilometers of data cables, tens of thousands of server CPUs in server farms, hydroelectric and wind-turbine and coal-fired and nuclear generating stations, satellites, cell-phone towers, hundreds of millions of desktop computers and smartphones, plus all the people working to make and maintain the machinery: “the Cloud is not only material, but is also an ecological force.”2

It is possible to imagine “the Cloud” without an Artificial Intelligence Industrial Complex, but the AIIC, at least in its recent news-making forms, could not exist without the Cloud.

The AIIC relies on the Cloud as a source of massive volumes of data used to train Large Language Models and image recognition models. It relies on the Cloud to sign up thousands of low-paid gig workers for work on crucial tasks in refining those models. It relies on the Cloud to rent out computing power to researchers and to sell AI services. And it relies on the Cloud to funnel profits into the accounts of the small number of huge corporations at the top of the AI pyramid.

So it’s crucial that we reimagine both the Cloud and AI to escape from mythological nebulous abstractions, and come to terms with the physical, energetic, flesh-and-blood realities. In Crawford’s words,

“[W]e need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it.”3

Through a series of posts we’ll take a deeper look at key aspects of the Artificial Intelligence Industrial Complex, including:

  • the AI industry’s voracious and growing appetite for energy and physical resources;
  • the AI industry’s insatiable need for data, the types and sources of data, and the continuing reliance on low-paid workers to make that data useful to corporations;
  • the biases that come with the data and with the classification of that data, which both reflect and reinforce current social inequalities;
  • AI’s deep roots in corporate efforts to measure, control, and more effectively extract surplus value from human labour;
  • the prospect of “superintelligence”, or an AI that is capable of destroying humanity while living on without us;
  • the results of AI “falling into the wrong hands” – that is, into the hands of the major corporations that dominate AI, and which, as part of our corporate-driven economy, are driving straight towards the cliff of ecological suicide.

One thing this series will not attempt is providing a definition of “Artificial Intelligence”, because there is no single workable definition. The phrase “artificial intelligence” has come into and out of favour as different approaches prove more or less promising, and many computer scientists in recent decades have preferred to avoid the phrase altogether. Different programming and modeling techniques have shown benefits and drawbacks for different purposes, but it remains debatable whether any of these results are indications of intelligence.

Yet “artificial intelligence” keeps its hold on the imaginations of the public, journalists, and venture capitalists. Matteo Pasquinelli cites a popular Twitter quip that sums it up this way:

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”4

Computers, be they boxes on desktops or the phones in pockets, are the most complex of tools to come into common daily use. And the computer network we call the Cloud is the most complex socio-technical system in history. It’s easy to become lost in the detail of any one of a billion parts in that system, but it’s important to also zoom out from time to time to take a global view.

The Artificial Intelligence Industrial Complex sits at the apex of a pyramid of industrial organization. In the next installment we’ll look at the vast physical needs of that complex.


Notes

1 Kate Crawford, Atlas of AI, Yale University Press, 2021.

2 Steven Gonzalez Monserrate, “The Cloud Is Material: Environmental Impacts of Computation and Data Storage”, MIT Schwarzman College of Computing, January 2022.

3 Crawford, Atlas of AI, Yale University Press, 2021.

4 Quoted by Matteo Pasquinelli in “How a Machine Learns and Fails – A Grammar of Error for Artificial Intelligence”, Spheres, November 2019.


Image at top of post: Margaret Henschel in Intel wafer fabrication plant, photo by Carol M. Highsmith, part of a collection placed in the public domain by the photographer and donated to the Library of Congress.