A fragile Frankenstein

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part eight
Also published on Resilience.

Is there an imminent danger that artificial intelligence will leap-frog human intelligence, go rogue, and either eliminate or enslave the human race?

You won’t find an answer to this question in an expert consensus, because there is none.

Consider the contrasting views of Geoffrey Hinton and Yann LeCun. When they and their colleague Yoshua Bengio were awarded the 2018 Turing Award, the three were widely praised as the “godfathers of AI.”

“The techniques the trio developed in the 1990s and 2000s,” James Vincent wrote, “enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies ….”1

Yet Hinton and LeCun don’t see eye to eye on some key issues.

Hinton made news in the spring of 2023 with his highly publicized resignation from Google. He stepped away from the company because he had become convinced that AI poses an existential threat to humanity, and he felt the need to speak out freely about this danger.

In Hinton’s view, artificial intelligence is racing ahead of human intelligence and that’s not good news: “There are very few examples of a more intelligent thing being controlled by a less intelligent thing.”2

LeCun now heads Meta’s AI division while also teaching at New York University. He voices a more skeptical perspective on the threat from AI. As reported last month,

“[LeCun] believes the widespread fear that powerful A.I. models are dangerous is largely imaginary, because current A.I. technology is nowhere near human-level intelligence—not even cat-level intelligence.”3

As we dive deeper into these diverging judgements, we’ll look at a deceptively simple question: What is intelligence good for?

But here’s a spoiler alert: after reading scores of articles and books on AI over the past year, I’ve found I share the viewpoint of computer scientist Jaron Lanier.

In a New Yorker article last spring, Lanier wrote, “The most pragmatic position is to think of A.I. as a tool, not a creature.”4 (emphasis mine) He repeated this formulation more recently:

“We usually prefer to treat A.I. systems as giant impenetrable continuities. Perhaps, to some degree, there’s a resistance to demystifying what we do because we want to approach it mystically. The usual terminology, starting with the phrase ‘artificial intelligence’ itself, is all about the idea that we are making new creatures instead of new tools.”5

This tool might be designed and operated badly or for nefarious purposes, Lanier says, perhaps even in ways that could cause our own and many other species’ extinction. Yet since AI is a tool made and used by humans, any such harm would best be attributed to the humans and not to the tool.

Common senses

How might we compare different manifestations of intelligence? For many years Hinton thought electronic neural networks were a poor imitation of the human brain. But he told Will Douglas Heaven last year that he now thinks artificial neural networks have turned out to be better than human brains in important respects. While the largest AI neural networks are still small compared to human brains, they make better use of their connections:

“Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”6

Compared to people, Hinton says, the new large language models learn new tasks extremely quickly.
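
To make the scale of Hinton’s comparison concrete, here is a back-of-envelope calculation using only the round numbers quoted above (and treating an LLM’s parameters as loosely analogous to a brain’s connections, as Hinton does):

```python
# Rough comparison of the connection counts quoted above.
# These are the round figures Hinton cites, not precise measurements.

human_brain_connections = 100e12  # "100 trillion connections"
llm_connections_max = 1e12        # "a trillion at most"

ratio = human_brain_connections / llm_connections_max
print(f"Human brain: ~{ratio:.0f}x more connections than the largest LLMs")
# -> Human brain: ~100x more connections than the largest LLMs
```

Hinton’s point is that despite this hundred-fold gap, GPT-4 “knows” far more than any one person – which suggests to him a more efficient learning algorithm.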

LeCun argues that in spite of a relatively small number of neurons and connections in its brain, a cat is far smarter than the leading AI systems:

“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs. That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans.”7

I’ve turned to a dear friend, who happens to be a cat, for further insight. When we go out for our walks together, each at one end of a leash, I notice how carefully Embers sniffs this bush, that plank, or a spot on the ground where another animal appears to have scratched. I notice how his ears turn and twitch in the wind, how he sniffs and listens before proceeding over a hill.

Embers knows hunger: he once disappeared for four months and came back emaciated and full of worms. He knows where mice might be found, and he knows it can be worth a long wait in tall grass, with ears carefully focused, until a determined pounce may yield a meal. He knows anger and fear: he has been ambushed by a larger cat, suffering injuries that took long painful weeks to heal. He knows that a strong wind, or the roar of crashing waves, makes it impossible for him to determine if danger lurks just behind that next bush, and so he turns away in nervous agitation and heads back to a place where he feels safe.

Embers’ ability to “understand the physical world, plan complex actions, do some level of reasoning,” it seems to me, is deeply rooted in his experience of hunger, satiety, cold, warmth, fear, anger, love, comfort. His curiosity, too, is rooted in this sensory knowledge, as is his will – his deep determination to get out and explore his surroundings every morning and every evening. Both his will and his knowledge are rooted in biology. And given that we Homo sapiens are no less biological, our own will and our own knowledge also have roots in biology.

For all AI systems’ abilities to manipulate and reassemble fragments of information, however, I’ve come across nothing to indicate that any of them will experience similar depths of sensory knowledge, and nothing to indicate they will develop wills or motivations of their own.

In other words, AI systems are not creatures, they are tools.

The elevation of abstraction

“Bodies matter to minds,” writes James Bridle. “The way we perceive and act in the world is shaped by the limbs, senses and contexts we possess and inhabit.”8

However, our human ability to conceive of things, not in their bodily connectedness but in their imagined separateness, has been the facet of intelligence at the center of much recent technological progress. Bridle writes:

“Historically, scientific progress has been measured by its ability to construct reductive frameworks for the classification of the natural world …. This perceived advancement of knowledge has involved a long process of abstraction and isolation, of cleaving one thing from another in a constant search for the atomic basis of everything ….”9

The ability to abstract, to separate into classifications, to simplify, to measure the effects of specific causes in isolation from other causes, has led to sweeping civilizational changes.

When electronic computing pioneers began to dream of “artificial intelligence”, Bridle says, they were thinking of intelligence primarily as “what humans do.” Even more narrowly, they were thinking of intelligence as something separated from and abstracted from bodies, as an imagined pure process of thought.

More narrowly still, the AI tools that have received most of the funding have been those most useful to corporate intelligence – tools that can be monetized, that can be made profitable, that can extract economic value for the benefit of corporations.

The resulting tools can be used in impressively useful ways – and as discussed in previous posts in this series, in dangerous and harmful ways. To the point of this post, however, we ask instead: Could artificially intelligent tools ever become creatures in their own right? And if they did, could they survive, thrive, take over the entire world, and conquer or eliminate biology-based creatures?

Last June, economist Blair Fix published a succinct takedown of the potential threat of a rogue artificial intelligence. 

“Humans love to talk about ‘intelligence’,” Fix wrote, “because we’re convinced we possess more of it than any other species. And that may be true. But in evolutionary terms, it’s also irrelevant. You see, evolution does not care about ‘intelligence’. It cares about competence — the ability to survive and reproduce.”

Living creatures, he argued, must know how to acquire and digest food. From nematodes to Homo sapiens, we have the ability, quite beyond our conscious intelligence, to digest the food we need. But AI machines, for all their data-manipulating capacity, lack the most basic ability to care for themselves. In Fix’s words,

“Today’s machines may be ‘intelligent’, but they have none of the core competencies that make life robust. We design their metabolism (which is brittle) and we spoon feed them energy. Without our constant care and attention, these machines will do what all non-living matter does — wither against the forces of entropy.”10

Our “thinking machines”, like us, have their own bodily needs. Their needs, however, are vastly more complex and particular than ours are.

Humans, born as almost totally dependent creatures, can digest necessary nourishment from day one, and as we grow we rapidly develop the ability to draw nourishment from a wide range of foods.

AI machines, on the other hand, are born and remain totally dependent on a single pure form of energy that exists only as the product of a sophisticated industrial complex: electricity, of a reliably steady and specific voltage and power. Learning to understand, manage and provide that sort of energy supply took almost all of human history to date.

Could the human-created AI tools learn to take over every step of their own vast global supply chains, thereby providing their own necessities of “life”, autonomously manufacturing more of their own kind, and escaping any dependence on human industry? Fix doesn’t think so:

“The gap between a savant program like ChatGPT and a robust, self-replicating machine is monumental. Let ChatGPT ‘loose’ in the wild and one outcome is guaranteed: the machine will go extinct.”

Some people have argued that today’s AI bots, and especially tomorrow’s bots, can quickly learn all they need to know to care and provide for themselves. After all, they can inhale the entire contents of the internet and, some say, can quickly learn the combined lessons of every scientific specialty.

But, as my elders used to tell me long before I became one of them, “book learning will only get you so far.” In the hypothetical case of an AI-bot striving for autonomy, digesting all the information on the internet would not grant assurance of survival.

It’s important, first, to recall that the science of robotics is nowhere near as developed as the science of AI. (See the previous post, Watching work, for a discussion of this issue.) Even if the AI-bot could both manipulate and understand all the science and engineering information needed to keep the artificial intelligence industrial complex running, that complex also requires a huge labour force of people with long experience in a vast array of physical skills.

“As consumers, we’re used to thinking of services like electricity, cellular networks, and online platforms as fully automated,” Timothy B. Lee wrote in Slate last year. “But they’re not. They’re extremely complex and have a large staff of people constantly fixing things as they break. If everyone at Google, Amazon, AT&T, and Verizon died, the internet would quickly grind to a halt—and so would any superintelligent A.I. connected to it.”11

In order to rapidly dispense with the need for a human labour force, a rogue cohort of AI-bots would need a sudden quantum leap in robotics. The AI-bots would need to be able to manipulate not only every type of data but also every type of physical object. Lee summarizes the obstacles:

“Today there are far fewer industrial robots in the world than human workers, and the vast majority of them are special-purpose robots designed to do a specific job at a specific factory. There are few if any robots with the agility and manual dexterity to fix overhead power lines or underground fiber-optic cables, drive delivery trucks, replace failing servers, and so forth. Robots also need human beings to repair them when they break, so without people the robots would eventually stop functioning too.”

The information available on the internet, vast as it is, has a lot of holes. How many companies have thoroughly documented all of their institutional knowledge, such that an AI-bot could simply inhale all the knowledge essential to each company’s functions? To dispense with the human labour force, the AI-bot would need such documentation for every company that occupies every significant niche in the artificial intelligence industrial complex.

It seems clear, then, that a hypothetical AI overlord could not afford to get rid of a human work force, certainly not in a short time frame. And unless it could dispense with that labour force very soon, it would also need farmers, food distributors, caregivers, parents to raise and teachers to educate the next generation of workers – in short, it would need human society writ large.

But could it take full control of this global workforce and society by some combination of guile and force?

Lee doesn’t think so. “Human beings are social creatures,” he writes. “We trust longtime friends more than strangers, and we are more likely to trust people we perceive as similar to ourselves. In-person conversations tend to be more persuasive than phone calls or emails. A superintelligent A.I. would have no friends or family and would be incapable of having an in-person conversation with anybody.”

It’s easy to imagine a rogue AI tricking some people some of the time, just as AI-enhanced extortion scams can fool many people into handing over money or passwords. But a would-be AI overlord would need to manipulate and control all of the people involved in keeping the industrial supply chain operating smoothly, notwithstanding the myriad possibilities for sabotage.

Tools and their dangerous users

A frequently discussed scenario is that AI could speed up the development of new and more lethal chemical poisons, new and more lethal microbes, and new, more lethal, remotely targeted munitions. All of these scenarios are plausible. And all of these scenarios, to the extent that they come true, will represent further increments in our already advanced capacities to threaten all life and to risk human extinction.

At the beginning of the computer age, after all, humans invented and then constructed enough nuclear weapons to wipe out all human life. Decades ago, we started producing new lethal chemicals on a massive scale, and spreading them with abandon throughout the global ecosystem. We have only a sketchy understanding of how all these chemicals interact with existing life forms, or with new life forms we may spawn through genetic engineering.

There are already many examples of how effective AI can be as a tool for disinformation campaigns. In this respect AI is only the latest in a long progression of new tools quickly put to use for disinformation. From the dawn of writing, to the development of low-cost printed materials, to the early days of broadcast media, each technological extension of our intelligence has been used to fan genocidal flames of fear and hatred.

We are already living with, and possibly dying with, the results of a decades-long, devastatingly successful disinformation project, the well-funded campaign by fossil fuel corporations to confuse people about the climate impacts of their own lucrative products.

AI is likely to introduce new wrinkles to all these dangerous trends. But with or without AI, we have the proven capacity to ruin our own world.

And if we drive ourselves to extinction, the AI-bots we have created will also die, as soon as the power lines break and the batteries run down.


Notes

1 James Vincent, “‘Godfathers of AI’ honored with Turing Award, the Nobel Prize of computing,” The Verge, 27 March 2019.

2 As quoted by Timothy B. Lee in “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 2 May 2023.

3 Sissi Cao, “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” Observer, 15 February 2024.

4 Jaron Lanier, “There Is No A.I.,” New Yorker, 20 April 2023.

5 Jaron Lanier, “How to Picture A.I.,” New Yorker, 1 March 2024.

6 Quoted in “Geoffrey Hinton tells us why he’s now scared of the tech he helped build,” by Will Douglas Heaven, MIT Technology Review, 2 May 2023.

7 Quoted in “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” by Sissi Cao, Observer, 15 February 2024.

8 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Picador MacMillan, 2022; page 38.

9 Bridle, Ways of Being, page 100.

10 Blair Fix, “No, AI Does Not Pose an Existential Risk to Humanity,” Economics From the Top Down, 10 June 2023.

11 Timothy B. Lee, “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 2 May 2023.


Illustration at top of post: Fragile Frankenstein, by Bart Hawkins Kreps, from: “Artificial Neural Network with Chip,” by Liam Huang, Creative Commons license, accessed via flickr; “Native wild and dangerous animals,” print by Johann Theodor de Bry, 1602, public domain, accessed at Look and Learn; drawing of robot courtesy of Judith Kreps Hawkins.

The existential threat of artificial stupidity

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part seven
Also published on Resilience.

One headline about artificial intelligence gave me a rueful laugh the first few times I saw it.

With minor variations headline writers have posed the question, “What if AI falls into the wrong hands?”

But AI is already in the wrong hands. AI is in the hands of a small cadre of ultra-rich influencers affiliated with corporations and governments, organizations which collectively are driving us straight towards a cliff of ecological destruction.

This does not mean, of course, that every person working on the development of artificial intelligence is a menace, nor that every use of artificial intelligence will be destructive.

But we need to be clear about the socio-economic forces behind the AI boom. Otherwise we may buy the illusion that our linear, perpetual-growth-at-all-costs economic system has somehow given birth to a magically sustainable electronic saviour.

The artificial intelligence industrial complex is an astronomically expensive enterprise, pushing its primary proponents to rapidly implement monetized applications. As we will see, those monetized applications are either already in widespread use, or are being promised as just around the corner. First, though, we’ll look at why AI is likely to be substantially controlled by those with the deepest pockets.

“The same twenty-five billionaires”

CNN host Fareed Zakaria asked the question “What happens if AI gets into the wrong hands?” in a segment in January. Interviewing Mustafa Suleyman, Inflection AI founder and Google DeepMind co-founder, Zakaria framed the issue this way:

“You have kind of a cozy elite of a few of you guys. It’s remarkable how few of you there are, and you all know each other. You’re all funded by the same twenty-five billionaires. But once you have a real open source revolution, which is inevitable … then it’s out there, and everyone can do it.”1

Some of this is true. OpenAI was co-founded by Sam Altman and Elon Musk. Their partnership didn’t last long, and Musk has since founded a competitor, xAI. OpenAI has received $10 billion from Microsoft, while Amazon has invested $4 billion and Alphabet (Google) $300 million in AI startup Anthropic. Year-old company Inflection AI has received $1.3 billion from Microsoft and chip-maker Nvidia.2

Meanwhile Mark Zuckerberg says Meta’s biggest area of investment is now AI, and the company is expected to spend about $9 billion this year just to buy chips for its AI computer network.3 Companies including Apple, Amazon, and Alphabet are also investing heavily in AI divisions within their respective corporate structures.

Microsoft, Amazon and Alphabet all earn revenue from their web services divisions which crunch data for many other corporations. Nvidia sells the chips that power the most computation-intensive AI applications.

But whether an AI startup rents computer power in the “cloud”, or builds its own supercomputer complex, creating and training new AI models is expensive. As Fortune reported in January, 

“Creating an end-to-end model from scratch is massively resource intensive and requires deep expertise, whereas plugging into OpenAI or Anthropic’s API is as simple as it gets. This has prompted a massive shift from an AI landscape that was ‘model-forward’ to one that’s ‘product-forward,’ where companies are primarily tapping existing models and skipping right to the product roadmap.”4

The huge expense of building AI models also has implications for claims about “open source” code. As Cory Doctorow has explained,

“Not only is the material that ‘open AI’ companies publish insufficient for reproducing their products, even if those gaps were plugged, the resource burden required to do so is so intense that only the largest companies could do so.”5

Doctorow’s aim in the above-cited article was to debunk the claim that the AI complex is democratising access to its products and services. Yet this analysis also has implications for Fareed Zakaria’s fears of unaffiliated rogue actors doing terrible things with AI.

Individuals or small organizations may indeed use a major company’s AI engine to create deepfakes and spread disinformation, or perhaps even to design dangerously mutated organisms. Yet the owners of the AI models determine who has access to which models and under which terms. Thus unaffiliated actors can be barred from using particular models, or charged sufficiently high fees that using a given AI engine is not feasible.

So while the danger from unaffiliated rogue actors is real, I think the more serious danger is from the owners and funders of large AI enterprises. In other words, the biggest dangers come not from those into whose hands AI might fall, but from those whose hands are already all over AI.

Command and control

As discussed earlier in this series, the US military funded some of the earliest foundational projects in artificial intelligence, including the “perceptron” in the 1950s6 and the WordNet semantic database beginning in 1985.7

To this day military and intelligence agencies remain major revenue sources for AI companies. Kate Crawford writes that the intentions and methods of intelligence agencies continue to shape the AI industrial complex:

“The AI and algorithmic systems used by the state, from the military to the municipal level, reveal a covert philosophy of en masse infrastructural command and control via a combination of extractive data techniques, targeting logics, and surveillance.”8

As Crawford points out, the goals and methods of high-level intelligence agencies “have spread to many other state functions, from local law enforcement to allocating benefits.” China-made surveillance cameras, for example, were installed in New Jersey and paid for under a COVID relief program.9 Artificial intelligence bots can enforce austerity policies by screening – and disallowing – applications for government aid. Facial-recognition cameras and software, meanwhile, are spreading rapidly and making it easier for police forces to monitor people who dare to attend political protests.

There is nothing radically new, of course, in the use of electronic communications tools for surveillance. Eleven years ago, Edward Snowden famously revealed the expansive plans of the “Five Eyes” intelligence agencies to monitor all internet communications.10 Decades earlier, intelligence agencies were eagerly tapping undersea communications cables.11

Increasingly important, however, is the partnership between private corporations and state agencies – a partnership that extends beyond communications companies to include energy corporations.

This public/private partnership has placed particular emphasis on suppressing activists who fight against expansions of fossil fuel infrastructure. To cite three North American examples, police and corporate teams have worked together to surveil and jail opponents of the Line 3 tar sands pipeline in Minnesota,12 protestors of the Northern Gateway pipeline in British Columbia,13 and Water Protectors trying to block a pipeline through the Standing Rock Reservation in North Dakota.14

The use of enhanced surveillance techniques in support of fossil fuel infrastructure expansions has particular relevance to the artificial intelligence industrial complex, because that complex has a fierce appetite for stupendous quantities of energy.

Upping the demand for energy

“Smashed through the forest, gouged into the soil, exploded in the grey light of dawn,” wrote James Bridle, “are the tooth- and claw-marks of Artificial Intelligence, at the exact point where it meets the earth.”

Bridle was describing sudden changes in the landscape of north-west Greece after the Spanish oil company Repsol was granted permission to drill exploratory oil wells. Repsol teamed up with IBM’s Watson division “to leverage cognitive technologies that will help transform the oil and gas industry.”

IBM was not alone in finding paying customers for nascent AI among fossil fuel companies. In 2018 Google welcomed oil companies to its Cloud Next conference, and in 2019 Microsoft hosted the Oil and Gas Leadership Summit in Houston. Not to be outdone, Amazon has eagerly courted petroleum prospectors for its cloud infrastructure.

As Bridle writes, the intent of the oil companies and their partners includes “extracting every last drop of oil from under the earth” – regardless of the fact that if we burn all the oil already discovered we will push the climate system past catastrophic tipping points. “What sort of intelligence seeks not merely to support but to escalate and optimize such madness?”

The madness, though, is eminently logical:

“Driven by the logic of contemporary capitalism and the energy requirements of computation itself, the deepest need of an AI in the present era is the fuel for its own expansion. What it needs is oil, and it increasingly knows where to find it.”15

AI runs on electricity, not oil, you might say. But as discussed at greater length in Part Two of this series, the mining, refining, manufacturing and shipping of all the components of AI servers remains reliant on the fossil-fueled industrial supply chain. Furthermore, the electricity that powers the data-gathering cloud is also, in many countries, produced in coal- or gas-fired generators.

Could artificial intelligence be used to speed a transition away from reliance on fossil fuels? In theory perhaps it could. But in the real world, the rapid growth of AI is making the transition away from fossil fuels an even more daunting challenge.

“Utility projections for the amount of power they will need over the next five years have nearly doubled and are expected to grow,” Evan Halper reported in the Washington Post earlier this month. Why the sudden spike?

“A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing.”

The jump in demand from AI is in addition to – and greatly complicates – the move to electrify home heating and car-dependent transportation:

“It is all happening at the same time the energy transition is steering large numbers of Americans to rely on the power grid to fuel vehicles, heat pumps, induction stoves and all manner of other household appliances that previously ran on fossil fuels.”

The effort to maintain and increase overall energy consumption, while paying lip-service to transition away from fossil fuels, is having a predictable outcome: “The situation … threatens to stifle the transition to cleaner energy, as utility executives lobby to delay the retirement of fossil fuel plants and bring more online.”16

The motive forces of the artificial intelligence industrial complex, then, include the extension of surveillance, and the extension of climate- and biodiversity-destroying fossil fuel extraction and combustion. But many of those data centres are devoted to a task that is also central to contemporary capitalism: the promotion of consumerism.

Thou shalt consume more today than yesterday

As of March 13, 2024, both Alphabet (parent of Google) and Meta (parent of Facebook) ranked among the world’s ten biggest corporations as measured by either market capitalization or earnings.17 Yet to an average computer user these companies are familiar primarily for supposedly “free” services including Google Search, Gmail, Youtube, Facebook and Instagram.

These services play an important role in the circulation of money, of course – their function is to encourage people to spend more money than they otherwise would, on all types of goods and services, whether or not they actually need or even desire them. This function is accomplished through the most elaborate surveillance infrastructures yet invented, harnessed to an advertising industry that uses the surveillance data to better target ads and to better sell products.

This role in extending consumerism is a fundamental element of the artificial intelligence industrial complex.

In 2011, former Facebook employee Jeff Hammerbacher summed it up: “The best minds of my generation are thinking about how to make people click ads. That sucks.”18

Working together, many of the world’s most skilled behavioural scientists, software engineers and hardware engineers devote themselves to nudging people to spend more time online looking at their phones, tablets and computers, clicking ads, and feeding the data stream.

We should not be surprised that the companies most involved in this “knowledge revolution” are assiduously promoting their AI divisions. As noted earlier, both Google and Facebook are heavily invested in AI. And OpenAI, funded by Microsoft and famous for making ChatGPT almost a household name, is looking at ways to make that investment pay off.

By early 2023, OpenAI’s partnership with “strategy and digital application delivery” company Bain had signed up its first customer: The Coca-Cola Company.19

The pioneering effort to improve the marketing of sugar water was hailed by Zack Kass, Head of Go-To-Market at OpenAI: “Coca-Cola’s vision for the adoption of OpenAI’s technology is the most ambitious we have seen of any consumer products company ….”

On its website, Bain proclaimed:

“We’ve helped Coca-Cola become the first company in the world to combine GPT-4 and DALL-E for a new AI-driven content creation platform. ‘Create Real Magic’ puts the power of generative AI in consumers’ hands, and is one example of how we’re helping the company augment its world-class brands, marketing, and consumer experiences in industry-leading ways.”20

The new AI, clearly, has the same motive as the old “slow AI” of corporate intelligence. While a corporation has been declared a legal person, and therefore might be expected to have a mind, this mind is a severely limited, sociopathic entity with only one controlling motive – the need to increase profits year after year without end. (This is not to imply that all or most employees of a corporation are equally single-minded, but any noble motives they may have must remain subordinate to the profit-maximizing legal charter of the corporation.) To the extent that AI is governed by corporations, we should expect that AI will retain a singular, sociopathic fixation on increasing profits.

Artificial intelligence, then, represents an existential threat to humanity not because of its newness, but because it perpetuates the corporate imperative which was already leading to ecological disaster and civilizational collapse.

But should we fear that artificial intelligence threatens us in other ways? Could AI break free from human control, supersede all human intelligence, and either dispose of us or enslave us? That will be the subject of the next installment.


Notes

1 “GPS Web Extra: What happens if AI gets into the wrong hands?”, CNN, 7 January 2024.

2 Mark Sweney, “Elon Musk’s AI startup seeks to raise $1bn in equity,” The Guardian, 6 December 2023.

3 Jonathan Vanian, “Mark Zuckerberg indicates Meta is spending billions of dollars on Nvidia AI chips,” CNBC, 18 January 2024.

4 Fortune Eye On AI newsletter, 25 January 2024.

5 Cory Doctorow, “‘Open’ ‘AI’ isn’t”, Pluralistic, 18 August 2023.

6 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

7 “WordNet,” on Scholarly Community Encyclopedia, accessed 11 March 2024.

8 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.

9 Jason Koehler, “New Jersey Used COVID Relief Funds to Buy Banned Chinese Surveillance Cameras,” 404 Media, 3 January 2024.

10 Glenn Greenwald, Ewen MacAskill and Laura Poitras, “Edward Snowden: the whistleblower behind the NSA surveillance revelations,” The Guardian, 11 June 2013.

11 Olga Khazan, “The Creepy, Long-Standing Practice of Undersea Cable Tapping,” The Atlantic, 16 July 2013.

12 Alleen Brown, “Pipeline Giant Enbridge Uses Scoring System to Track Indigenous Opposition,” 23 January 2022, part one of the seventeen-part series “Policing the Pipeline” in The Intercept.

13 Jeremy Hainsworth, “Spy agency CSIS allegedly gave oil companies surveillance data about pipeline protesters,” Vancouver Is Awesome, 8 July 2019.

14 Alleen Brown, Will Parrish, Alice Speri, “Leaked Documents Reveal Counterterrorism Tactics Used at Standing Rock to ‘Defeat Pipeline Insurgencies’”, The Intercept, 27 May 2017.

15 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Farrar, Straus and Giroux, 2023; pages 3–7.

16 Evan Halper, “Amid explosive demand, America is running out of power,” Washington Post, 7 March 2024.

17 Source: https://companiesmarketcap.com/, 13 March 2024.

18 As quoted in Fast Company, “Why Data God Jeffrey Hammerbacher Left Facebook To Found Cloudera,” 18 April 2013.

19 PRNewswire, “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI,” 21 February 2023.

20 Bain & Company website, accessed 13 March 2024.


Image at top of post by Bart Hawkins Kreps from public domain graphics.

Farming on screen

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part six
Also published on Resilience.

What does the future of farming look like? To some pundits the answer is clear: “Connected sensors, the Internet of Things, autonomous vehicles, robots, and big data analytics will be essential in effectively feeding tomorrow’s world. The future of agriculture will be smart, connected, and digital.”1

Proponents of artificial intelligence in agriculture argue that AI will be key to limiting or reversing biodiversity loss, reducing global warming emissions, and restoring resilience to ecosystems that are stressed by climate change.

There are many flavours of AI and thousands of potential applications for AI in agriculture. Some of them may indeed prove helpful in restoring parts of ecosystems.

But there are strong reasons to expect that AI in agriculture will be dominated by the same forces that have given the world a monoculture agri-industrial complex overwhelmingly dependent on fossil fuels – and to expect, therefore, that agri-industrial AI will lead to more biodiversity loss, more food insecurity, more socio-economic inequality, and more climate vulnerability. To the extent that AI in agriculture bears fruit, many of these fruits are likely to be bitter.

Optimizing for yield

A branch of mathematics known as optimization has played a large role in the development of artificial intelligence. Author Coco Krumme, who earned a PhD in mathematics from MIT, traces optimization’s roots back hundreds of years and sees optimization in the development of contemporary agriculture.

In her book Optimal Illusions: The False Promise of Optimization, she writes,

“Embedded in the treachery of optimals is a deception. An optimization, whether it’s optimizing the value of an acre of land or the on-time arrival rate of an airline, often involves collapsing measurement into a single dimension, dollars or time or something else.”2

The “single dimensions” that serve as the building blocks of optimization are the result of useful, though simplistic, abstractions of the infinite complexities of our world. In agriculture, for example, how can we identify and describe the factors of soil fertility? One way would be to describe truly healthy soil as soil that contains a diverse microbial community, thriving among networks of fungal mycelia, plant roots, worms, and insect larvae. Another way would be to note that the soil contains sufficient amounts of at least several chemical elements including carbon, nitrogen, phosphorus, and potassium. The second method is an incomplete abstraction, but it has the big advantage that it lends itself to easy quantification, calculation, and standardized testing. Coupled with the availability of similarly simple quantified fertilizers, this method also allows for quick, “efficient,” yield-boosting soil amendments.
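
To see how much disappears in that abstraction, consider a deliberately toy sketch (my own illustration, not Krumme’s; the attribute names and coefficients are invented) of an optimizer that can only “see” the quantified nutrients:

```python
# A toy objective function that collapses soil health into one dimension.
# Attribute names and weights are invented for illustration only.

soil = {
    "nitrogen_ppm": 40,
    "phosphorus_ppm": 25,
    "potassium_ppm": 150,
    "microbial_diversity": 0.9,  # invisible to the objective below
    "mycelial_networks": True,   # invisible
    "earthworms_per_m2": 12,     # invisible
}

def soya_yield_score(s):
    """Predicted soya yield from N-P-K alone; everything else is ignored."""
    return (0.5 * s["nitrogen_ppm"]
            + 0.3 * s["phosphorus_ppm"]
            + 0.2 * s["potassium_ppm"])

# Any amendment that raises N, P or K "improves" this score, even if it
# destroys the microbial life the function never measures.
print(soya_yield_score(soil))  # 57.5
```

The optimizer is not wrong within its own terms; it simply cannot register what it was never asked to count.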

In deciding what are the optimal levels of certain soil nutrients, of course, we must also give an implicit or explicit answer to this question: “Optimal for what?” If the answer is, “optimal for soya production”, we are likely to get higher yields of soya – even if the soil is losing many of the attributes of health that we might observe through a less abstract lens. Krumme describes the gradual and eventual results of this supposedly scientific agriculture:

“It was easy to ignore, for a while, the costs: the chemicals harming human health, the machinery depleting soil, the fertilizer spewing into the downstream water supply.”3

The social costs were no less real than the environmental costs: most farmers, in countries where industrial agriculture took hold, were unable to keep up with the constant pressure to “go big or go home”. So they sold their land to the fewer, larger farms that remained, and rural agricultural communities were hollowed out.

“But just look at those benefits!”, proponents of industrialized agriculture can say. Certainly yields per hectare of commodity crops climbed dramatically, and this food was raised by a smaller share of the work force.

The extent to which these changes are truly improvements is murky, however, when we look beyond the abstractions that go into the optimization models. We might want to believe that “if we don’t count it, it doesn’t count” – but that illusion won’t last forever.

Let’s start with social and economic factors. Coco Krumme quotes historian Paul Conkin on this trend in agricultural production: “Since 1950, labor productivity per hour of work in the non-farm sectors has increased 2.5 fold; in agriculture, 7-fold.”4

Yet a recent paper by Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause finds:

“Industrial farming discourse promotes the perception that there is a positive relationship—the larger the farm, the greater the productivity. Our objective is to demonstrate that based on the data at the centre of this debate, on average, small farms actually produce more food on less land ….”5

Here’s the nub of the problem: productivity statistics depend on what we count, and what we don’t count, when we tally input and output. Labour productivity in particular is usually calculated in reference to Gross Domestic Product, which is the sum of all monetary transactions.

Imagine this scenario, which has analogs all over the world. Suppose I pick a lot of apples, I trade a bushel of them with a neighbour, and I receive a piglet in return. The piglet eats leftover food scraps and weeds around the yard, while providing manure that fertilizes the vegetable garden. Several months later I butcher the pig and share the meat with another neighbour who has some chickens and who has been sharing the eggs. We all get delicious and nutritious food – but how much productivity is tallied? None, because none of these transactions are measured in dollars nor counted in GDP.
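
A few lines of code make the accounting plain (a toy illustration of the scenario above, not a real GDP methodology):

```python
# GDP-style accounting sees only monetary transactions, so this
# productive barter economy registers as zero output.

transactions = [
    {"what": "bushel of apples traded for piglet", "dollars": 0},
    {"what": "food scraps and weeds fed to pig",   "dollars": 0},
    {"what": "pork shared in exchange for eggs",   "dollars": 0},
]

gdp_contribution = sum(t["dollars"] for t in transactions)
print(gdp_contribution)  # 0 -- yet three households ate well
```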

In many cases, of course, some inputs and outputs are counted while others are not. A smallholder might buy a few inputs such as feed grain, and might sell some products in a market “official” enough to be included in economic statistics. But much of the smallholder’s output will go to feeding immediate family or neighbours without leaving a trace in GDP.

If GDP had been tallied when this scene was depicted, the sale of Spratt’s Pure Fibrine poultry feed might have been the only part of the operation that would “count”. Image: “Spratts patent “pure fibrine” poultry meal & poultry appliances”, from Wellcome Collection, circa 1880–1889, public domain.

Knezevic et al. write, “As farm size and farm revenue can generally be objectively measured, the productivist view has often used just those two data points to measure farm productivity.” However, other statisticians have put considerable effort into quantifying output in non-monetary terms, by estimating all agricultural output in terms of kilocalories.

This too is an abstraction, since a kilocalorie from sugar beets does not have the same nutritional impact as a kilocalorie from black beans or a kilocalorie from chicken – and farm output might include non-food values such as fibre for clothing, fuel for fireplaces, or animal draught power. Nevertheless, counting kilocalories instead of dollars or yuan makes possible more realistic estimates of how much food is produced by small farmers on the edge of the formal economy.

The proportion of global food supply produced on small vs. large farms is a matter of vigorous debate, and Knezevic et al. discuss some of the most widely circulated estimates. They defend their own estimate:

“[T]he data indicate that family farmers and smallholders account for 81% of production and food supply in kilocalories on 72% of the land. Large farms, defined as more than 200 hectares, account for only 15 and 13% of crop production and food supply by kilocalories, respectively, yet use 28% of the land.”6

They also argue that the smallest farms – 10 hectares (about 25 acres) or less – “provide more than 55% of the world’s kilocalories on about 40% of the land.” This has obvious importance in answering the question “How can we feed the world’s growing population?”7
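
A quick back-of-envelope check shows why these shares matter. Using only the percentages quoted above, we can compare output per unit of land:

```python
# Kilocalories produced per unit of land, relative to the global
# average, computed from the shares quoted from Knezevic et al.

small_kcal, small_land = 0.55, 0.40  # farms of 10 hectares or less
large_kcal, large_land = 0.15, 0.28  # farms of more than 200 hectares

small_density = small_kcal / small_land  # ~1.38x the average
large_density = large_kcal / large_land  # ~0.54x the average

print(f"smallest farms: {small_density:.2f}x average kcal per hectare")
print(f"largest farms:  {large_density:.2f}x average kcal per hectare")
print(f"ratio: {small_density / large_density:.1f}x")  # ~2.6x
```

On these figures, the smallest farms produce roughly two and a half times as many kilocalories per hectare as the largest ones.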

Of equal importance to our discussion on the role of AI in agriculture, are these conclusions of Knezevic et al.: “industrialized and non-industrialized farming … come with markedly different knowledge systems,” and “smaller farms also have higher crop and non-crop biodiversity.”

Feeding the data machine

As discussed at length in previous installments, the types of artificial intelligence currently making waves require vast data sets. And in their paper advocating “Smart agriculture (SA)”, Jian Zhang et al. write, “The focus of SA is on data exploitation; this requires access to data, data analysis, and the application of the results over multiple (ideally, all) farm or ranch operations.”8

The data currently available from “precision farming” comes from large, well-capitalized farms that can afford tractors and combines equipped with GPS units, arrays of sensors tracking soil moisture, fertilizer and pesticide applications, and harvested quantities for each square meter. In the future envisioned by Zhang et al., this data collection process should expand dramatically through the incorporation of Internet of Things sensors on many more farms, plus a network allowing the funneling of information to centralized AI servers which will “learn” from data analysis, and which will then guide participating farms in achieving greater productivity at lower ecological cost. This in turn will require a 5G cellular network throughout agricultural areas.
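
To give a sense of what this data funnel involves, here is a hypothetical sketch of a single per-plot record such a system might transmit. Zhang et al. do not specify a schema; every field name here is invented for illustration:

```python
# Hypothetical per-plot sensor record for a "smart agriculture" pipeline.
# Structure and field names are invented; no real system is described.

from dataclasses import dataclass, asdict
import json

@dataclass
class PlotReading:
    farm_id: str
    grid_cell: str              # e.g. one square metre of one field
    timestamp: str
    soil_moisture_pct: float
    fertilizer_kg_per_ha: float
    pesticide_l_per_ha: float
    harvested_kg: float

reading = PlotReading(
    farm_id="farm-0042",
    grid_cell="E512N201",
    timestamp="2024-04-15T06:00:00Z",
    soil_moisture_pct=23.5,
    fertilizer_kg_per_ha=0.0,
    pesticide_l_per_ha=0.0,
    harvested_kg=0.0,
)

# Serialized and sent (over rural 5G, in Zhang et al.'s vision) to the
# centralized servers that train and run the models.
print(json.dumps(asdict(reading)))
```

Multiply one such record by every square metre of every participating field, sampled around the clock, and the scale of the required network and server infrastructure comes into focus.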

Zhang et al. do not estimate the costs – in monetary terms, or in up-front carbon emissions and ecological damage during the manufacture, installation and operation of the data-crunching networks. An important question will be: will ecological benefits be equal to or greater than the ecological harms?

There is also good reason to doubt that the smallest farms – which produce a disproportionate share of global food supply – will be incorporated into this “smart agriculture”. Such infrastructure will have heavy upfront costs, and the companies that provide the equipment will want assurance that their client farmers will have enough cash outputs to make the capital investments profitable – if not for the farmers themselves, then at least for the big corporations marketing the technology.

A team of scholars writing in Nature Machine Intelligence concluded,

“[S]mall-scale farmers who cultivate 475 of approximately 570 million farms worldwide and feed large swaths of the so-called Global South are particularly likely to be excluded from AI-related benefits.”9

On the subject of what kind of data is available to AI systems, the team wrote,

“[T]ypical agricultural datasets have insufficiently considered polyculture techniques, such as forest farming and silvo-pasture. These techniques yield an array of food, fodder and fabric products while increasing soil fertility, controlling pests and maintaining agrobiodiversity.”

They noted that the small number of crops which dominate commodity crop markets – corn, wheat, rice, and soy in particular – also get the most research attention, while many crops important to subsistence farmers are little studied. Assuming that many of the small farmers remain outside the artificial intelligence agri-industrial complex, the data-gathering is likely to perpetuate and strengthen the hegemony of major commodities and major corporations.

Montreal Nutmeg. Today it’s easy to find images of hundreds of varieties of fruit and vegetables that were popular more than a hundred years ago – but finding viable seeds or rootstock is another matter. Image: “Muskmelon, the largest in cultivation – new Montreal Nutmeg. This variety found only in Rice’s box of choice vegetables. 1887”, from Boston Public Library collection “Agriculture Trade Collection” on flickr.

Large-scale monoculture agriculture has already resulted in a scarcity of most traditional varieties of many grains, fruits and vegetables; the seed stocks that work best in the cash-crop nexus now have overwhelming market share. An AI that serves and is led by the same agribusiness interests is not likely, therefore, to preserve the crop diversity we will need to cope with an unstable climate and depleted ecosystems.

It’s marvellous that data servers can store and quickly access the entire genomes of so many species and sub-species. But it would be better if rare varieties were not only preserved but kept in active use, by communities who keep alive the particular knowledge of how these varieties respond to different weather, soil conditions, and horticultural techniques.

Finally, those small farmers who do step into the AI agri-complex will face new dangers:

“[A]s AI becomes indispensable for precision agriculture, … farmers will bring substantial croplands, pastures and hayfields under the influence of a few common ML [Machine Learning] platforms, consequently creating centralized points of failure, where deliberate attacks could cause disproportionate harm. [T]hese dynamics risk expanding the vulnerability of agrifood supply chains to cyberattacks, including ransomware and denial-of-service attacks, as well as interference with AI-driven machinery, such as self-driving tractors and combine harvesters, robot swarms for crop inspection, and autonomous sprayers.”10

The quantified gains in productivity due to efficiency, writes Coco Krumme, have come with many losses – and “we can think of these losses as the flip side of what we’ve gained from optimizing.” She adds,

“We’ll call [these losses], in brief: slack, place, and scale. Slack, or redundancy, cushions a system from outside shock. Place, or specific knowledge, distinguishes a farm and creates the diversity of practice that, ultimately, allows for both its evolution and preservation. And a sense of scale affords a connection between part and whole, between a farmer and the population his crop feeds.”11

AI-led “smart agriculture” may allow higher yields from major commodity crops, grown in monoculture fields on large farms all using the same machinery, the same chemicals, the same seeds and the same methods. Such agriculture is likely to earn continued profits for the major corporations already at the top of the complex, companies like John Deere, Bayer-Monsanto, and Cargill.

But in a world facing combined and manifold ecological, geopolitical and economic crises, it will be even more important to have agricultures with some redundancy to cushion from outside shock. We’ll need locally-specific knowledge of diverse food production practices. And we’ll need strong connections between local farmers and communities who are likely to depend on each other more than ever.

In that context, putting all our eggs in the artificial intelligence basket doesn’t sound like smart strategy.


Notes

1 “Achieving the Rewards of Smart Agriculture,” by Jian Zhang, Dawn Trautman, Yingnan Liu, Chunguang Bi, Wei Chen, Lijun Ou, and Randy Goebel, Agronomy, 24 February 2024.

2 Coco Krumme, Optimal Illusions: The False Promise of Optimization, Riverhead Books, 2023, pg 181. A hat tip to Mark Hurst, whose podcast Techtonic introduced me to the work of Coco Krumme.

3 Optimal Illusions, pg 23.

4 Optimal Illusions, pg 25, quoting Paul Conkin, A Revolution Down on the Farm.

5 Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause, “Recalibrating Data on Farm Productivity: Why We Need Small Farms for Food Security,” Sustainability, 4 October 2023.

6 Knezevic et al., “Recalibrating Data on Farm Productivity.”

7 Recommended reading: two farmer/writers who have conducted more thorough studies of the current and potential productivity of small farms are Chris Smaje and Gunnar Rundgren.

8 Zhang et al., “Achieving the Rewards of Smart Agriculture,” 24 February 2024.

9 Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh, “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities,” Nature Machine Intelligence, 23 February 2022.

10 Asaf Tzachor et al., “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities.”

11 Coco Krumme, Optimal Illusions, pg 34.


Image at top of post: “Alexander Frick, Jr. in his tractor/planter planting soybean seeds with the aid of precision agriculture systems and information,” in US Dep’t of Agriculture album “Frick Farms gain with Precision Agriculture and Level Fields”, photo for USDA by Lance Cheung, April 2021, public domain, accessed via flickr.