A fragile Frankenstein

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part eight
Also published on Resilience.

Is there an imminent danger that artificial intelligence will leap-frog human intelligence, go rogue, and either eliminate or enslave the human race?

You won’t find an answer to this question in an expert consensus, because there is none.

Consider the contrasting views of Geoffrey Hinton and Yann LeCun. When they and their colleague Yoshua Bengio were awarded the 2018 Turing Award, the three were widely praised as the “godfathers of AI.”

“The techniques the trio developed in the 1990s and 2000s,” James Vincent wrote, “enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies ….”1

Yet Hinton and LeCun don’t see eye to eye on some key issues.

Hinton made news in the spring of 2023 with his highly publicized resignation from Google. He stepped away from the company because he had come to believe that AI poses an existential threat to humanity, and he wanted to be free to speak out about that danger.

In Hinton’s view, artificial intelligence is racing ahead of human intelligence and that’s not good news: “There are very few examples of a more intelligent thing being controlled by a less intelligent thing.”2

LeCun now heads Meta’s AI division while also teaching at New York University. He voices a more skeptical perspective on the threat from AI. As reported last month,

“[LeCun] believes the widespread fear that powerful A.I. models are dangerous is largely imaginary, because current A.I. technology is nowhere near human-level intelligence—not even cat-level intelligence.”3

As we dive deeper into these diverging judgements, we’ll look at a deceptively simple question: What is intelligence good for?

But here’s a spoiler alert: after reading scores of articles and books on AI over the past year, I’ve found I share the viewpoint of computer scientist Jaron Lanier.

In a New Yorker article last May, Lanier wrote, “The most pragmatic position is to think of A.I. as a tool, not a creature.”4 (emphasis mine) He repeated this formulation more recently:

“We usually prefer to treat A.I. systems as giant impenetrable continuities. Perhaps, to some degree, there’s a resistance to demystifying what we do because we want to approach it mystically. The usual terminology, starting with the phrase ‘artificial intelligence’ itself, is all about the idea that we are making new creatures instead of new tools.”5

This tool might be designed and operated badly or for nefarious purposes, Lanier says, perhaps even in ways that could cause our own and many other species’ extinction. Yet because AI is a tool made and used by humans, any such harm would best be attributed to humans and not to the tool.

Common senses

How might we compare different manifestations of intelligence? For many years Hinton thought electronic neural networks were a poor imitation of the human brain. But he told Will Douglas Heaven last year that he now thinks artificial neural networks have turned out to be better than human brains in important respects. While the largest AI neural networks are still small compared to human brains, they make better use of their connections:

“Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”6

Compared to people, Hinton says, the new Large Language Models learn new tasks extremely quickly.

LeCun argues that in spite of a relatively small number of neurons and connections in its brain, a cat is far smarter than the leading AI systems:

“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs. That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans.”7

I’ve turned to a dear friend, who happens to be a cat, for further insight. When we go out for our walks together, each at one end of a leash, I notice how carefully Embers sniffs this bush, that plank, or a spot on the ground where another animal appears to have scratched. I notice how his ears turn and twitch in the wind, how he sniffs and listens before proceeding over a hill.

Embers knows hunger: he once disappeared for four months and came back emaciated and full of worms. He knows where mice might be found, and he knows it can be worth a long wait in tall grass, with ears carefully focused, until a determined pounce may yield a meal. He knows anger and fear: he has been ambushed by a larger cat, suffering injuries that took long painful weeks to heal. He knows that a strong wind, or the roar of crashing waves, makes it impossible for him to determine if danger lurks just behind that next bush, and so he turns away in nervous agitation and heads back to a place where he feels safe.

Embers’ ability to “understand the physical world, plan complex actions, do some level of reasoning,” it seems to me, is deeply rooted in his experience of hunger, satiety, cold, warmth, fear, anger, love, comfort. His curiosity, too, is rooted in this sensory knowledge, as is his will – his deep determination to get out and explore his surroundings every morning and every evening. Both his will and his knowledge are rooted in biology. And given that we homo sapiens are no less biological, our own will and our own knowledge also have roots in biology.

For all their ability to manipulate and reassemble fragments of information, however, I’ve come across nothing to indicate that any AI system will experience similar depths of sensory knowledge, and nothing to indicate that any will develop wills or motivations of their own.

In other words, AI systems are not creatures, they are tools.

The elevation of abstraction

“Bodies matter to minds,” writes James Bridle. “The way we perceive and act in the world is shaped by the limbs, senses and contexts we possess and inhabit.”8

However, our human ability to conceive of things, not in their bodily connectedness but in their imagined separateness, has been the facet of intelligence at the center of much recent technological progress. Bridle writes:

“Historically, scientific progress has been measured by its ability to construct reductive frameworks for the classification of the natural world …. This perceived advancement of knowledge has involved a long process of abstraction and isolation, of cleaving one thing from another in a constant search for the atomic basis of everything ….”9

The ability to abstract, to separate into classifications, to simplify, to measure the effects of specific causes in isolation from other causes, has led to sweeping civilizational changes.

When electronic computing pioneers began to dream of “artificial intelligence”, Bridle says, they were thinking of intelligence primarily as “what humans do.” Even more narrowly, they were thinking of intelligence as something separated from and abstracted from bodies, as an imagined pure process of thought.

More narrowly still, the AI tools that have received most of the funding have been tools that are useful to corporate intelligence – the kinds that can be monetized, that can be made profitable, that can extract economic value for the benefit of corporations.

The resulting tools can be used in impressively useful ways – and as discussed in previous posts in this series, in dangerous and harmful ways. To the point of this post, however, we ask instead: Could artificially intelligent tools ever become creatures in their own right? And if they did, could they survive, thrive, take over the entire world, and conquer or eliminate biology-based creatures?

Last June, economist Blair Fix published a succinct takedown of the idea that a rogue artificial intelligence poses an existential threat.

“Humans love to talk about ‘intelligence’,” Fix wrote, “because we’re convinced we possess more of it than any other species. And that may be true. But in evolutionary terms, it’s also irrelevant. You see, evolution does not care about ‘intelligence’. It cares about competence — the ability to survive and reproduce.”

Living creatures, he argued, must know how to acquire and digest food. From nematodes to homo sapiens we have the ability, quite beyond our conscious intelligence, to digest the food we need. But AI machines, for all their data-manipulating capacity, lack the most basic ability to care for themselves. In Fix’s words,

“Today’s machines may be ‘intelligent’, but they have none of the core competencies that make life robust. We design their metabolism (which is brittle) and we spoon feed them energy. Without our constant care and attention, these machines will do what all non-living matter does — wither against the forces of entropy.”10

Our “thinking machines”, like us, have their own bodily needs. Their needs, however, are vastly more complex and particular than ours are.

Humans, born as almost totally dependent creatures, can digest necessary nourishment from day one, and as we grow we rapidly develop the abilities to draw nourishment from a wide range of foods.

AI machines, on the other hand, are born and remain totally dependent on a single pure form of energy that exists only as the product of a sophisticated industrial complex: electricity, of a reliably steady and specific voltage and power. Learning to understand, manage and provide that sort of energy supply took almost all of human history to date.

Could the human-created AI tools learn to take over every step of their own vast global supply chains, thereby providing their own necessities of “life”, autonomously manufacturing more of their own kind, and escaping any dependence on human industry? Fix doesn’t think so:

“The gap between a savant program like ChatGPT and a robust, self-replicating machine is monumental. Let ChatGPT ‘loose’ in the wild and one outcome is guaranteed: the machine will go extinct.”

Some people have argued that today’s AI bots, or especially tomorrow’s bots, can quickly learn all they need to know to care and provide for themselves. After all, they can inhale the entire contents of the internet and, some say, can quickly learn the combined lessons of every scientific specialty.

But, as my elders used to tell me long before I became one of them, “book learning will only get you so far.” In the hypothetical case of an AI-bot striving for autonomy, digesting all the information on the internet would not grant assurance of survival.

It’s important, first, to recall that the science of robotics is nowhere near as developed as the science of AI. (See the previous post, Watching work, for a discussion of this issue.) Even if the AI-bot could both manipulate and understand all the science and engineering information needed to keep the artificial intelligence industrial complex running, that complex also requires a huge labour force of people with long experience in a vast array of physical skills.

“As consumers, we’re used to thinking of services like electricity, cellular networks, and online platforms as fully automated,” Timothy B. Lee wrote in Slate last year. “But they’re not. They’re extremely complex and have a large staff of people constantly fixing things as they break. If everyone at Google, Amazon, AT&T, and Verizon died, the internet would quickly grind to a halt—and so would any superintelligent A.I. connected to it.”11

In order to rapidly dispense with the need for a human labour force, a rogue cohort of AI-bots would need a sudden quantum leap in robotics. The AI-bots would need to be able to manipulate not only every type of data, but also every type of physical object. Lee summarizes the obstacles:

“Today there are far fewer industrial robots in the world than human workers, and the vast majority of them are special-purpose robots designed to do a specific job at a specific factory. There are few if any robots with the agility and manual dexterity to fix overhead power lines or underground fiber-optic cables, drive delivery trucks, replace failing servers, and so forth. Robots also need human beings to repair them when they break, so without people the robots would eventually stop functioning too.”

The information available on the internet, vast as it is, has a lot of holes. How many companies have thoroughly documented all of their institutional knowledge, such that an AI-bot could simply inhale all the knowledge essential to each company’s functions? To dispense with the human labour force, the AI-bot would need such documentation for every company that occupies every significant niche in the artificial intelligence industrial complex.

It seems clear, then, that a hypothetical AI overlord could not afford to get rid of a human work force, certainly not in a short time frame. And unless it could dispense with that labour force very soon, it would also need farmers, food distributors, caregivers, parents to raise and teachers to educate the next generation of workers – in short, it would need human society writ large.

But could it take full control of this global workforce and society through some combination of guile and force?

Lee doesn’t think so. “Human beings are social creatures,” he writes. “We trust longtime friends more than strangers, and we are more likely to trust people we perceive as similar to ourselves. In-person conversations tend to be more persuasive than phone calls or emails. A superintelligent A.I. would have no friends or family and would be incapable of having an in-person conversation with anybody.”

It’s easy to imagine a rogue AI tricking some people some of the time, just as AI-enhanced extortion scams can fool many people into handing over money or passwords. But a would-be AI overlord would need to manipulate and control all of the people involved in keeping the industrial supply chain operating smoothly, regardless of the myriad possibilities for sabotage.

Tools and their dangerous users

A frequently discussed scenario is that AI could speed up the development of new and more lethal chemical poisons, new and more lethal microbes, and new, more lethal, and remotely-targeted munitions. All of these scenarios are plausible. And all of these scenarios, to the extent that they come true, will represent further increments in our already advanced capacities to threaten all life and to risk human extinction.

At the beginning of the computer age, after all, humans invented and then constructed enough nuclear weapons to wipe out all human life. Decades ago, we started producing new lethal chemicals on a massive scale, and spreading them with abandon throughout the global ecosystem. We have only a sketchy understanding of how all these chemicals interact with existing life forms, or with new life forms we may spawn through genetic engineering.

There are already many examples of how effective AI can be as a tool for disinformation campaigns. But AI is only the latest in a long line of new tools quickly put to use for disinformation. From the dawn of writing, to the development of low-cost printed materials, to the early days of broadcast media, each technological extension of our intelligence has been used to fan genocidal flames of fear and hatred.

We are already living with, and possibly dying with, the results of a decades-long, devastatingly successful disinformation project, the well-funded campaign by fossil fuel corporations to confuse people about the climate impacts of their own lucrative products.

AI is likely to introduce new wrinkles to all these dangerous trends. But with or without AI, we have the proven capacity to ruin our own world.

And if we drive ourselves to extinction, the AI-bots we have created will also die, as soon as the power lines break and the batteries run down.


Notes

1 James Vincent, “‘Godfathers of AI’ honored with Turing Award, the Nobel Prize of computing,” The Verge, 27 March 2019.

2 As quoted by Timothy B. Lee in “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 9 May 2023.

3 Sissi Cao, “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” Observer, 15 February 2024.

4 Jaron Lanier, “There is No A.I.,” New Yorker, 20 April 2023.

5 Jaron Lanier, “How to Picture A.I.,” New Yorker, 1 March 2024.

6 Quoted in “Geoffrey Hinton tells us why he’s now scared of the tech he helped build,” by Will Douglas Heaven, MIT Technology Review, 2 May 2023.

7 Quoted in “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” by Sissi Cao, Observer, 15 February 2024.

8 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Picador MacMillan, 2022; page 38.

9 Bridle, Ways of Being, page 100.

10 Blair Fix, “No, AI Does Not Pose an Existential Risk to Humanity,” Economics From the Top Down, 10 June 2023.

11 Timothy B. Lee, “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 2 May 2023.


Illustration at top of post: Fragile Frankenstein, by Bart Hawkins Kreps, from: “Artificial Neural Network with Chip,” by Liam Huang, Creative Commons license, accessed via flickr; “Native wild and dangerous animals,” print by Johann Theodor de Bry, 1602, public domain, accessed at Look and Learn; drawing of robot courtesy of Judith Kreps Hawkins.

The existential threat of artificial stupidity

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part seven
Also published on Resilience.

One headline about artificial intelligence gave me a rueful laugh the first few times I saw it.

With minor variations headline writers have posed the question, “What if AI falls into the wrong hands?”

But AI is already in the wrong hands. AI is in the hands of a small cadre of ultra-rich influencers affiliated with corporations and governments, organizations which collectively are driving us straight towards a cliff of ecological destruction.

This does not mean, of course, that every person working on the development of artificial intelligence is a menace, nor that every use of artificial intelligence will be destructive.

But we need to be clear about the socio-economic forces behind the AI boom. Otherwise we may buy the illusion that our linear, perpetual-growth-at-all-costs economic system has somehow given birth to a magically sustainable electronic saviour.

The artificial intelligence industrial complex is an astronomically expensive enterprise, pushing its primary proponents to rapidly implement monetized applications. As we will see, those monetized applications are either already in widespread use, or are being promised as just around the corner. First, though, we’ll look at why AI is likely to be substantially controlled by those with the deepest pockets.

“The same twenty-five billionaires”

CNN host Fareed Zakaria asked the question “What happens if AI gets into the wrong hands?” in a segment in January. Interviewing Mustafa Suleyman, Inflection AI founder and Google DeepMind co-founder, Zakaria framed the issue this way:

“You have kind of a cozy elite of a few of you guys. It’s remarkable how few of you there are, and you all know each other. You’re all funded by the same twenty-five billionaires. But once you have a real open source revolution, which is inevitable … then it’s out there, and everyone can do it.”1

Some of this is true. OpenAI was co-founded by Sam Altman and Elon Musk. Their partnership didn’t last long and Musk has founded a competitor, x.AI. OpenAI has received $10 billion from Microsoft, while Amazon has invested $4 billion and Alphabet (Google) has invested $300 million in AI startup Anthropic. Year-old company Inflection AI has received $1.3 billion from Microsoft and chip-maker Nvidia.2

Meanwhile Mark Zuckerberg says Meta’s biggest area of investment is now AI, and the company is expected to spend about $9 billion this year just to buy chips for its AI computer network.3 Companies including Apple, Amazon, and Alphabet are also investing heavily in AI divisions within their respective corporate structures.

Microsoft, Amazon and Alphabet all earn revenue from their web services divisions which crunch data for many other corporations. Nvidia sells the chips that power the most computation-intensive AI applications.

But whether an AI startup rents computer power in the “cloud”, or builds its own supercomputer complex, creating and training new AI models is expensive. As Fortune reported in January, 

“Creating an end-to-end model from scratch is massively resource intensive and requires deep expertise, whereas plugging into OpenAI or Anthropic’s API is as simple as it gets. This has prompted a massive shift from an AI landscape that was ‘model-forward’ to one that’s ‘product-forward,’ where companies are primarily tapping existing models and skipping right to the product roadmap.”4

The huge expense of building AI models also has implications for claims about “open source” code. As Cory Doctorow has explained,

“Not only is the material that ‘open AI’ companies publish insufficient for reproducing their products, even if those gaps were plugged, the resource burden required to do so is so intense that only the largest companies could do so.”5

Doctorow’s aim in the above-cited article was to debunk the claim that the AI complex is democratising access to its products and services. Yet this analysis also has implications for Fareed Zakaria’s fears of unaffiliated rogue actors doing terrible things with AI.

Individuals or small organizations may indeed use a major company’s AI engine to create deepfakes and spread disinformation, or perhaps even to design dangerously mutated organisms. Yet the owners of the AI models determine who has access to which models and under which terms. Thus unaffiliated actors can be barred from using particular models, or charged sufficiently high fees that using a given AI engine is not feasible.

So while the danger from unaffiliated rogue actors is real, I think the more serious danger is from the owners and funders of large AI enterprises. In other words, the biggest dangers come not from those into whose hands AI might fall, but from those whose hands are already all over AI.

Command and control

As discussed earlier in this series, the US military funded some of the earliest foundational projects in artificial intelligence, including the “perceptron” in 1956,6 and the WordNet semantic database beginning in 1985.7

To this day military and intelligence agencies remain major revenue sources for AI companies. Kate Crawford writes that the intentions and methods of intelligence agencies continue to shape the AI industrial complex:

“The AI and algorithmic systems used by the state, from the military to the municipal level, reveal a covert philosophy of en masse infrastructural command and control via a combination of extractive data techniques, targeting logics, and surveillance.”8

As Crawford points out, the goals and methods of high-level intelligence agencies “have spread to many other state functions, from local law enforcement to allocating benefits.” China-made surveillance cameras, for example, were installed in New Jersey and paid for under a COVID relief program.9 Artificial intelligence bots can enforce austerity policies by screening – and disallowing – applications for government aid. Facial-recognition cameras and software, meanwhile, are spreading rapidly and making it easier for police forces to monitor people who dare to attend political protests.

There is nothing radically new, of course, in the use of electronic communications tools for surveillance. Eleven years ago, Edward Snowden famously revealed the expansive plans of the “Five Eyes” intelligence agencies to monitor all internet communications.10 Decades earlier, intelligence agencies were eagerly tapping undersea communications cables.11

Increasingly important, however, is the partnership between private corporations and state agencies – a partnership that extends beyond communications companies to include energy corporations.

This public/private partnership has placed particular emphasis on suppressing activists who fight against expansions of fossil fuel infrastructure. To cite three North American examples, police and corporate teams have worked together to surveil and jail opponents of the Line 3 tar sands pipeline in Minnesota,12 protestors of the Northern Gateway pipeline in British Columbia,13 and Water Protectors trying to block a pipeline through the Standing Rock Reservation in North Dakota.14

The use of enhanced surveillance techniques in support of fossil fuel infrastructure expansions has particular relevance to the artificial intelligence industrial complex, because that complex has a fierce appetite for stupendous quantities of energy.

Upping the demand for energy

“Smashed through the forest, gouged into the soil, exploded in the grey light of dawn,” wrote James Bridle, “are the tooth- and claw-marks of Artificial Intelligence, at the exact point where it meets the earth.”

Bridle was describing sudden changes in the landscape of north-west Greece after the Spanish oil company Repsol was granted permission to drill exploratory oil wells. Repsol teamed up with IBM’s Watson division “to leverage cognitive technologies that will help transform the oil and gas industry.”

IBM was not alone in finding paying customers for nascent AI among fossil fuel companies. In 2018 Google welcomed oil companies to its Cloud Next conference, and in 2019 Microsoft hosted the Oil and Gas Leadership Summit in Houston. Not to be outdone, Amazon has eagerly courted petroleum prospectors for its cloud infrastructure.

As Bridle writes, the intent of the oil companies and their partners includes “extracting every last drop of oil from under the earth” – regardless of the fact that if we burn all the oil already discovered we will push the climate system past catastrophic tipping points. “What sort of intelligence seeks not merely to support but to escalate and optimize such madness?”

The madness, though, is eminently logical:

“Driven by the logic of contemporary capitalism and the energy requirements of computation itself, the deepest need of an AI in the present era is the fuel for its own expansion. What it needs is oil, and it increasingly knows where to find it.”15

AI runs on electricity, not oil, you might say. But as discussed at greater length in Part Two of this series, the mining, refining, manufacturing and shipping of all the components of AI servers remains reliant on the fossil-fueled industrial supply chain. Furthermore, the electricity that powers the data-gathering cloud is also, in many countries, produced in coal- or gas-fired generators.

Could artificial intelligence be used to speed a transition away from reliance on fossil fuels? In theory perhaps it could. But in the real world, the rapid growth of AI is making the transition away from fossil fuels an even more daunting challenge.

“Utility projections for the amount of power they will need over the next five years have nearly doubled and are expected to grow,” Evan Halper reported in the Washington Post earlier this month. Why the sudden spike?

“A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing.”

The jump in demand from AI is in addition to – and greatly complicates – the move to electrify home heating and car-dependent transportation:

“It is all happening at the same time the energy transition is steering large numbers of Americans to rely on the power grid to fuel vehicles, heat pumps, induction stoves and all manner of other household appliances that previously ran on fossil fuels.”

The effort to maintain and increase overall energy consumption, while paying lip-service to transition away from fossil fuels, is having a predictable outcome: “The situation … threatens to stifle the transition to cleaner energy, as utility executives lobby to delay the retirement of fossil fuel plants and bring more online.”16

The motive forces of the artificial intelligence industrial complex, then, include the extension of surveillance, and the extension of climate- and biodiversity-destroying fossil fuel extraction and combustion. But many of those data centres are devoted to a task that is also central to contemporary capitalism: the promotion of consumerism.

Thou shalt consume more today than yesterday

As of March 13, 2024, both Alphabet (parent of Google) and Meta (parent of Facebook) ranked among the world’s ten biggest corporations as measured by either market capitalization or earnings.17 Yet to an average computer user these companies are familiar primarily for supposedly “free” services including Google Search, Gmail, Youtube, Facebook and Instagram.

These services play an important role in the circulation of money, of course – their function is to encourage people to spend more money than they otherwise would, for all types of goods or services, whether or not they actually need or even desire more goods and services. This function is accomplished through the most elaborate surveillance infrastructures yet invented, harnessed to an advertising industry that uses the surveillance data to better target ads and to better sell products.

This role in extending consumerism is a fundamental element of the artificial intelligence industrial complex.

In 2011, former Facebook employee Jeff Hammerbacher summed it up: “The best minds of my generation are thinking about how to make people click ads. That sucks.”18

Working together, many of the world’s most skilled behavioural scientists, software engineers and hardware engineers devote themselves to nudging people to spend more time online looking at their phones, tablets and computers, clicking ads, and feeding the data stream.

We should not be surprised that the companies most involved in this “knowledge revolution” are assiduously promoting their AI divisions. As noted earlier, both Google and Facebook are heavily invested in AI. And OpenAI, funded by Microsoft and famous for making ChatGPT almost a household name, is looking at ways to make that investment pay off.

By early 2023, OpenAI’s partnership with “strategy and digital application delivery” company Bain had signed up its first customer: The Coca-Cola Company.19

The pioneering effort to improve the marketing of sugar water was hailed by Zack Kass, Head of Go-To-Market at OpenAI: “Coca-Cola’s vision for the adoption of OpenAI’s technology is the most ambitious we have seen of any consumer products company ….”

On its website, Bain proclaimed:

“We’ve helped Coca-Cola become the first company in the world to combine GPT-4 and DALL-E for a new AI-driven content creation platform. ‘Create Real Magic’ puts the power of generative AI in consumers’ hands, and is one example of how we’re helping the company augment its world-class brands, marketing, and consumer experiences in industry-leading ways.”20

The new AI, clearly, has the same motive as the old “slow AI”, which is corporate intelligence. While a corporation has been declared a legal person, and therefore might be expected to have a mind, this mind is a severely limited, sociopathic entity with only one controlling motive – the need to increase profits year after year with no end. (This is not to imply that all or most employees of a corporation are equally single-minded, but any noble motives they may have must remain subordinate to the profit-maximizing legal charter of the corporation.) To the extent that AI is governed by corporations, we should expect that AI will retain a singular, sociopathic fixation on increasing profits.

Artificial intelligence, then, represents an existential threat to humanity not because of its newness, but because it perpetuates the corporate imperative which was already leading to ecological disaster and civilizational collapse.

But should we fear that artificial intelligence threatens us in other ways? Could AI break free from human control, supersede all human intelligence, and either dispose of us or enslave us? That will be the subject of the next installment.


Notes

1 “GPS Web Extra: What happens if AI gets into the wrong hands?”, CNN, 7 January 2024.

2 Mark Sweney, “Elon Musk’s AI startup seeks to raise $1bn in equity,” The Guardian, 6 December 2023.

3 Jonathan Vanian, “Mark Zuckerberg indicates Meta is spending billions of dollars on Nvidia AI chips,” CNBC, 18 January 2024.

4 Fortune Eye On AI newsletter, 25 January 2024.

5 Cory Doctorow, “‘Open’ ‘AI’ isn’t”, Pluralistic, 18 August 2023.

6 “New Navy Device Learns By Doing,” New York Times, 8 July 1958, page 25.

7 “WordNet,” on Scholarly Community Encyclopedia, accessed 11 March 2024.

8 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.

9 Jason Koehler, “New Jersey Used COVID Relief Funds to Buy Banned Chinese Surveillance Cameras,” 404 Media, 3 January 2024.

10 Glenn Greenwald, Ewen MacAskill and Laura Poitras, “Edward Snowden: the whistleblower behind the NSA surveillance revelations,” The Guardian, 11 June 2013.

11 Olga Khazan, “The Creepy, Long-Standing Practice of Undersea Cable Tapping,” The Atlantic, 16 July 2013.

12 Alleen Brown, “Pipeline Giant Enbridge Uses Scoring System to Track Indigenous Opposition,” 23 January 2022, part one of the seventeen-part series “Policing the Pipeline” in The Intercept.

13 Jeremy Hainsworth, “Spy agency CSIS allegedly gave oil companies surveillance data about pipeline protesters,” Vancouver Is Awesome, 8 July 2019.

14 Alleen Brown, Will Parrish, Alice Speri, “Leaked Documents Reveal Counterterrorism Tactics Used at Standing Rock to ‘Defeat Pipeline Insurgencies’”, The Intercept, 27 May 2017.

15 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Farrar, Straus and Giroux, 2023; pages 3–7.

16 Evan Halper, “Amid explosive demand, America is running out of power,” Washington Post, 7 March 2024.

17 Source: https://companiesmarketcap.com/, 13 March 2024.

18 As quoted in Fast Company, “Why Data God Jeffrey Hammerbacher Left Facebook To Found Cloudera,” 18 April 2013.

19 PRNewswire, “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI,” 21 February 2023.

20 Bain & Company website, accessed 13 March 2024.


Image at top of post by Bart Hawkins Kreps from public domain graphics.

Farming on screen

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part six
Also published on Resilience.

What does the future of farming look like? To some pundits the answer is clear: “Connected sensors, the Internet of Things, autonomous vehicles, robots, and big data analytics will be essential in effectively feeding tomorrow’s world. The future of agriculture will be smart, connected, and digital.”1

Proponents of artificial intelligence in agriculture argue that AI will be key to limiting or reversing biodiversity loss, reducing global warming emissions, and restoring resilience to ecosystems that are stressed by climate change.

There are many flavours of AI and thousands of potential applications for AI in agriculture. Some of them may indeed prove helpful in restoring parts of ecosystems.

But there are strong reasons to expect that AI in agriculture will be dominated by the same forces that have given the world a monoculture agri-industrial complex overwhelmingly dependent on fossil fuels – and, therefore, to expect that agri-industrial AI will lead to more biodiversity loss, more food insecurity, more socio-economic inequality, and more climate vulnerability. To the extent that AI in agriculture bears fruit, many of these fruits are likely to be bitter.

Optimizing for yield

A branch of mathematics known as optimization has played a large role in the development of artificial intelligence. Author Coco Krumme, who earned a PhD in mathematics from MIT, traces optimization’s roots back hundreds of years and sees optimization in the development of contemporary agriculture.

In her book Optimal Illusions: The False Promise of Optimization, she writes,

“Embedded in the treachery of optimals is a deception. An optimization, whether it’s optimizing the value of an acre of land or the on-time arrival rate of an airline, often involves collapsing measurement into a single dimension, dollars or time or something else.”2

The “single dimensions” that serve as the building blocks of optimization are the result of useful, though simplistic, abstractions of the infinite complexities of our world. In agriculture, for example, how can we identify and describe the factors of soil fertility? One way would be to describe truly healthy soil as soil that contains a diverse microbial community, thriving among networks of fungal mycelia, plant roots, worms, and insect larvae. Another way would be to note that the soil contains sufficient amounts of at least several chemical elements including carbon, nitrogen, phosphorus, potassium. The second method is an incomplete abstraction, but it has the big advantage that it lends itself to easy quantification, calculation, and standardized testing. Coupled with the availability of similar simple quantified fertilizers, this method also allows for quick, “efficient,” yield-boosting soil amendments.
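To make that collapse into a single dimension concrete, here is a minimal sketch in Python. Every number, the yield function and the prices are invented for illustration; the point is only that once soil has been reduced to a short list of nutrient figures, the optimizer can see nothing else.

```python
# A toy illustration (invented numbers): once "soil fertility" is reduced to
# three numbers, optimization can only see those three numbers.

soil = {"N": 40.0, "P": 15.0, "K": 120.0}   # kg/ha, hypothetical starting levels

def predicted_soy_yield(n, p, k):
    """A made-up yield model: diminishing returns on each nutrient, nothing else."""
    return 1.5 * n**0.5 + 0.8 * p**0.5 + 0.3 * k**0.5   # tonnes/ha, illustrative only

best = None
for extra_n in range(0, 201, 10):              # candidate nitrogen additions, kg/ha
    estimate = predicted_soy_yield(soil["N"] + extra_n, soil["P"], soil["K"])
    profit = 300 * estimate - 1.2 * extra_n    # invented crop price and fertilizer cost
    if best is None or profit > best[1]:
        best = (extra_n, profit)

print(f"'Optimal' nitrogen addition: {best[0]} kg/ha, predicted profit ${best[1]:.0f}/ha")
# The objective function contains no term for microbial life, soil structure,
# downstream water quality, or anything else the abstraction left out.
```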

In deciding what are the optimal levels of certain soil nutrients, of course, we must also give an implicit or explicit answer to this question: “Optimal for what?” If the answer is, “optimal for soya production”, we are likely to get higher yields of soya – even if the soil is losing many of the attributes of health that we might observe through a less abstract lens. Krumme describes the gradual and eventual results of this supposedly scientific agriculture:

“It was easy to ignore, for a while, the costs: the chemicals harming human health, the machinery depleting soil, the fertilizer spewing into the downstream water supply.”3

The social costs were no less real than the environmental costs: most farmers, in countries where industrial agriculture took hold, were unable to keep up with the constant pressure to “go big or go home”. So they sold their land to the fewer, larger farms that remained, and rural agricultural communities were hollowed out.

“But just look at those benefits!”, proponents of industrialized agriculture can say. Certainly yields per hectare of commodity crops climbed dramatically, and this food was raised by a smaller share of the work force.

The extent to which these changes are truly improvements is murky, however, when we look beyond the abstractions that go into the optimization models. We might want to believe that “if we don’t count it, it doesn’t count” – but that illusion won’t last forever.

Let’s start with social and economic factors. Coco Krumme quotes historian Paul Conkin on this trend in agricultural production: “Since 1950, labor productivity per hour of work in the non-farm sectors has increased 2.5 fold; in agriculture, 7-fold.”4

Yet a recent paper by Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause finds:

“Industrial farming discourse promotes the perception that there is a positive relationship—the larger the farm, the greater the productivity. Our objective is to demonstrate that based on the data at the centre of this debate, on average, small farms actually produce more food on less land ….”5

Here’s the nub of the problem: productivity statistics depend on what we count, and what we don’t count, when we tally input and output. Labour productivity in particular is usually calculated in reference to Gross Domestic Product, which is the sum of all monetary transactions.

Imagine this scenario, which has analogs all over the world. Suppose I pick a lot of apples, I trade a bushel of them with a neighbour, and I receive a piglet in return. The piglet eats leftover food scraps and weeds around the yard, while providing manure that fertilizes the vegetable garden. Several months later I butcher the pig and share the meat with another neighbour who has some chickens and who has been sharing the eggs. We all get delicious and nutritious food – but how much productivity is tallied? None, because none of these transactions are measured in dollars nor counted in GDP.

In many cases, of course, some inputs and outputs are counted while others are not. A smallholder might buy a few inputs such as feed grain, and might sell some products in a market “official” enough to be included in economic statistics. But much of the smallholder’s output will go to feeding immediate family or neighbours without leaving a trace in GDP.

If GDP had been counted when this scene was depicted, the sale of Spratt’s Pure Fibrine poultry feed might have been the only part of the operation that would “count”. Image: “Spratts patent “pure fibrine” poultry meal & poultry appliances”, from Wellcome Collection, circa 1880–1889, public domain.

Knezevic et al. write, “As farm size and farm revenue can generally be objectively measured, the productivist view has often used just those two data points to measure farm productivity.” However, other statisticians have put considerable effort into quantifying output in non-monetary terms, by estimating all agricultural output in terms of kilocalories.

This too is an abstraction, since a kilocalorie from sugar beets does not have the same nutritional impact as a kilocalorie from black beans or a kilocalorie from chicken – and farm output might include non-food values such as fibre for clothing, fuel for fireplaces, or animal draught power. Nevertheless, counting kilocalories instead of dollars or yuan makes possible more realistic estimates of how much food is produced by small farmers on the edge of the formal economy.
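A toy comparison shows how differently the two yardsticks can treat the same smallholding. The harvest quantities, sold fractions, prices and kilocalorie factors below are rough, invented figures for illustration only, not data from Knezevic et al.

```python
# Illustrative only: rough kcal-per-kg factors and an invented smallholder harvest.
KCAL_PER_KG = {"maize": 3650, "beans": 3400, "potatoes": 770, "eggs": 1400}

harvest_kg = {"maize": 800, "beans": 150, "potatoes": 1200, "eggs": 60}
sold_fraction = {"maize": 0.3, "beans": 0.0, "potatoes": 0.1, "eggs": 0.0}  # rest eaten or bartered
price_per_kg = {"maize": 0.25, "beans": 1.10, "potatoes": 0.40, "eggs": 2.50}  # USD, invented

market_output = sum(harvest_kg[c] * sold_fraction[c] * price_per_kg[c] for c in harvest_kg)
kcal_output = sum(harvest_kg[c] * KCAL_PER_KG[c] for c in harvest_kg)

print(f"Output visible to GDP-style accounting: ${market_output:.0f}")
print(f"Output counted in kilocalories: {kcal_output:,} kcal "
      f"(~{kcal_output / 2500:.0f} person-days of food at 2,500 kcal/day)")
```

Most of this imagined farm’s output never touches a market, so the dollar tally stays tiny while the kilocalorie tally records a substantial contribution to the food supply.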

The proportion of the global food supply produced on small vs. large farms is a matter of vigorous debate, and Knezevic et al. discuss some of the most widely cited estimates. They defend their own estimate:

“[T]he data indicate that family farmers and smallholders account for 81% of production and food supply in kilocalories on 72% of the land. Large farms, defined as more than 200 hectares, account for only 15 and 13% of crop production and food supply by kilocalories, respectively, yet use 28% of the land.”6

They also argue that the smallest farms – 10 hectares (about 25 acres) or less – “provide more than 55% of the world’s kilocalories on about 40% of the land.” This has obvious importance in answering the question “How can we feed the world’s growing population?”7

Of equal importance to our discussion on the role of AI in agriculture, are these conclusions of Knezevic et al.: “industrialized and non-industrialized farming … come with markedly different knowledge systems,” and “smaller farms also have higher crop and non-crop biodiversity.”

Feeding the data machine

As discussed at length in previous installments, the types of artificial intelligence currently making waves require vast data sets. And in their paper advocating “Smart agriculture (SA)”, Jian Zhang et al. write, “The focus of SA is on data exploitation; this requires access to data, data analysis, and the application of the results over multiple (ideally, all) farm or ranch operations.”8

The data currently available from “precision farming” comes from large, well-capitalized farms that can afford tractors and combines equipped with GPS units, arrays of sensors tracking soil moisture, fertilizer and pesticide applications, and harvested quantities for each square meter. In the future envisioned by Zhang et al., this data collection process should expand dramatically through the incorporation of Internet of Things sensors on many more farms, plus a network allowing the funneling of information to centralized AI servers which will “learn” from data analysis, and which will then guide participating farms in achieving greater productivity at lower ecological cost. This in turn will require a 5G cellular network throughout agricultural areas.
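For a sense of what is being proposed, here is a hypothetical sketch of a single “precision farming” record of the kind such a network would stream to centralized servers. The field names and values are invented for illustration; they do not come from Zhang et al. or from any vendor’s actual schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical shape of one per-square-metre field record; names are invented.
reading = {
    "farm_id": "example-farm-042",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "gps": {"lat": 52.1332, "lon": -106.6700},
    "soil_moisture_pct": 23.4,
    "nitrogen_applied_kg_ha": 12.0,
    "pesticide_applied_l_ha": 0.8,
    "yield_kg": 0.61,
}

print(json.dumps(reading))
# In the architecture Zhang et al. describe, records like this would stream over a
# rural 5G network to centralized servers, where models are trained across many farms.
```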

Zhang et al. do not estimate the costs – in monetary terms, or in up-front carbon emissions and ecological damage during the manufacture, installation and operation of the data-crunching networks. An important question will be: will ecological benefits be equal to or greater than the ecological harms?

There is also good reason to doubt that the smallest farms – which produce a disproportionate share of global food supply – will be incorporated into this “smart agriculture”. Such infrastructure will have heavy upfront costs, and the companies that provide the equipment will want assurance that their client farmers will have enough cash outputs to make the capital investments profitable – if not for the farmers themselves, then at least for the big corporations marketing the technology.

A team of scholars writing in Nature Machine Intelligence concluded,

“[S]mall-scale farmers who cultivate 475 of approximately 570 million farms worldwide and feed large swaths of the so-called Global South are particularly likely to be excluded from AI-related benefits.”9

On the subject of what kind of data is available to AI systems, the team wrote,

“[T]ypical agricultural datasets have insufficiently considered polyculture techniques, such as forest farming and silvo-pasture. These techniques yield an array of food, fodder and fabric products while increasing soil fertility, controlling pests and maintaining agrobiodiversity.”

They noted that the small number of crops which dominate commodity crop markets – corn, wheat, rice, and soy in particular – also get the most research attention, while many crops important to subsistence farmers are little studied. Assuming that many of the small farmers remain outside the artificial intelligence agri-industrial complex, the data-gathering is likely to perpetuate and strengthen the hegemony of major commodities and major corporations.

Montreal Nutmeg. Today it’s easy to find images of hundreds of varieties of fruit and vegetables that were popular more than a hundred years ago – but finding viable seeds or rootstock is another matter. Image: “Muskmelon, the largest in cultivation – new Montreal Nutmeg. This variety found only in Rice’s box of choice vegetables. 1887”, from Boston Public Library collection “Agriculture Trade Collection” on flickr.

Large-scale monoculture agriculture has already resulted in a scarcity of most traditional varieties of many grains, fruits and vegetables; the seed stocks that work best in the cash-crop nexus now have overwhelming market share. An AI that serves and is led by the same agribusiness interests is not likely, therefore, to preserve the crop diversity we will need to cope with an unstable climate and depleted ecosystems.

It’s marvellous that data servers can store and quickly access the entire genomes of so many species and sub-species. But it would be better if rare varieties are not only preserved but in active use, by communities who keep alive the particular knowledge of how these varieties respond to different weather, soil conditions, and horticultural techniques.

Finally, those small farmers who do step into the AI agri-complex will face new dangers:

“[A]s AI becomes indispensable for precision agriculture, … farmers will bring substantial croplands, pastures and hayfields under the influence of a few common ML [Machine Learning] platforms, consequently creating centralized points of failure, where deliberate attacks could cause disproportionate harm. [T]hese dynamics risk expanding the vulnerability of agrifood supply chains to cyberattacks, including ransomware and denial-of-service attacks, as well as interference with AI-driven machinery, such as self-driving tractors and combine harvesters, robot swarms for crop inspection, and autonomous sprayers.”10

The quantified gains in productivity due to efficiency, writes Coco Krumme, have come with many losses – and “we can think of these losses as the flip side of what we’ve gained from optimizing.” She adds,

“We’ll call [these losses], in brief: slack, place, and scale. Slack, or redundancy, cushions a system from outside shock. Place, or specific knowledge, distinguishes a farm and creates the diversity of practice that, ultimately, allows for both its evolution and preservation. And a sense of scale affords a connection between part and whole, between a farmer and the population his crop feeds.”11

AI-led “smart agriculture” may allow higher yields from major commodity crops, grown in monoculture fields on large farms all using the same machinery, the same chemicals, the same seeds and the same methods. Such agriculture is likely to earn continued profits for the major corporations already at the top of the complex, companies like John Deere, Bayer-Monsanto, and Cargill.

But in a world facing combined and manifold ecological, geopolitical and economic crises, it will be even more important to have agricultures with some redundancy to cushion from outside shock. We’ll need locally-specific knowledge of diverse food production practices. And we’ll need strong connections between local farmers and communities who are likely to depend on each other more than ever.

In that context, putting all our eggs in the artificial intelligence basket doesn’t sound like smart strategy.


Notes

1 “Achieving the Rewards of Smart Agriculture,” by Jian Zhang, Dawn Trautman, Yingnan Liu, Chunguang Bi, Wei Chen, Lijun Ou, and Randy Goebel, Agronomy, 24 February 2024.

2 Coco Krumme, Optimal Illusions: The False Promise of Optimization, Riverhead Books, 2023, pg 181. A hat tip to Mark Hurst, whose podcast Techtonic introduced me to the work of Coco Krumme.

3 Optimal Illusions, pg 23.

4 Optimal Illusions, pg 25, quoting Paul Conkin, A Revolution Down on the Farm.

5 Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause, “Recalibrating Data on Farm Productivity: Why We Need Small Farms for Food Security,” Sustainability, 4 October 2023.

6 Knezevic et al., “Recalibrating the Data on Farm Productivity.”

7 Recommended reading: two farmer/writers who have conducted more thorough studies of the current and potential productivity of small farms are Chris Smaje and Gunnar Rundgren.

8 Zhang et al., “Achieving the Rewards of Smart Agriculture,” 24 February 2024.

9 Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh, “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities,” Nature Machine Intelligence, 23 February 2022.

10 Asaf Tzachor et al., “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities.”

11 Coco Krumme, Optimal Illusions, pg 34.


Image at top of post: “Alexander Frick, Jr. in his tractor/planter planting soybean seeds with the aid of precision agriculture systems and information,” in US Dep’t of Agriculture album “Frick Farms gain with Precision Agriculture and Level Fields”, photo for USDA by Lance Cheung, April 2021, public domain, accessed via flickr. 

Beware of WEIRD Stochastic Parrots

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part four
Also published on Resilience.

A strange new species is getting a lot of press recently. The New Yorker published the poignant and profound illustrated essay “Is My Toddler a Stochastic Parrot?” The Wall Street Journal told us about “‘Stochastic Parrot’: A Name for AI That Sounds a Bit Less Intelligent”. And expert.ai warned of “GPT-3: The Venom-Spitting Stochastic Parrot”.

The American Dialect Society even selected “stochastic parrot” as the AI-Related Word of the Year for 2023.

Yet this species was unknown until March of 2021, when Emily Bender, Timnit Gebru, Angelina McMillan-Major, and (the slightly pseudonymous) Shmargaret Shmitchell published “On the Dangers of Stochastic Parrots.”1

The paper touched a nerve in the AI community, reportedly costing Timnit Gebru and Margaret Mitchell their jobs with Google’s Ethical AI team.2

Just a few days after ChatGPT was released, OpenAI CEO Sam Altman paid snarky tribute to the now-famous phrase by tweeting “i am a stochastic parrot, and so r u.”3

Just what, according to its namers, are the distinctive characteristics of a stochastic parrot? Why should we be wary of this species? Should we be particularly concerned about a dominant sub-species, the WEIRD stochastic parrot? (WEIRD as in: Western, Educated, Industrialized, Rich, Democratic.) We’ll look at those questions for the remainder of this installment.

Haphazardly probable

The first recognized chatbot was 1966’s Eliza, but many of the key technical developments behind today’s chatbots only came together in the last 15 years. The apparent wizardry of today’s Large Language Models rests on a foundation of algorithmic advances, the availability of vast data sets, super-computer clusters employing thousands of the latest Graphics Processing Unit (GPU) chips, and, as discussed in the last post, an international network of poorly paid gig workers providing human input to fill in gaps in the machine learning process.

By the beginning of this decade, some AI industry figures were arguing that Large Language Models would soon exhibit “human-level intelligence”, could become sentient and conscious, and might even become the dominant new species on the planet.

The authors of the stochastic parrot paper saw things differently:

“Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”4

Let’s start by focusing on two words in that definition: “haphazardly” and “probabilistic”. How do those words apply to the output of ChatGPT or similar Large Language Models?

In a lengthy paper published last year, Stephen Wolfram offers an initial explanation:

“What ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”5

He gives the example of this partial sentence: “The best thing about AI is its ability to”. The Large Language Model will have identified many instances closely matching this phrase, and will have calculated the probability of various words being the next word to follow. Wolfram lists five of the most likely choices, each paired with its calculated probability.

The element of probability, then, is clear – but in what way is ChatGPT “haphazard”?

Wolfram explains that if the chatbot always picks the next word with the highest probability, the results will be syntactically correct, sensible, but stilted and boring – and repeated identical prompts will produce repeated identical outputs.

By contrast, if at random intervals the chatbot picks a “next word” that ranks fairly high in probability but is not the highest rank, then more interesting and varied outputs result.
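A minimal sketch in Python can make the contrast concrete. The candidate words and their probabilities below are invented for illustration (a real model computes such numbers over tens of thousands of tokens), and the “temperature” weighting shown here is a simplified stand-in for what production chatbots do.

```python
import random

# Invented candidate next words and probabilities for the prompt
# "The best thing about AI is its ability to" -- illustrative numbers only.
candidates = {"learn": 0.05, "understand": 0.04, "help": 0.03, "create": 0.03, "do": 0.02}

def greedy_choice(probs):
    """Always take the single highest-probability word: grammatical but repetitive."""
    return max(probs, key=probs.get)

def sampled_choice(probs, temperature=0.8):
    """Pick haphazardly among the candidates, still favouring the more probable words.
    Lower temperature sharpens the distribution; higher temperature flattens it."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

print(greedy_choice(candidates))                       # the same word every time
print([sampled_choice(candidates) for _ in range(5)])  # varies from run to run
```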

Wolfram gives a sample of output produced by a strict “pick the next word with the highest rank” rule. It reads like the effort of someone who is being careful with each sentence, but with no imagination, no creativity, and no real ability to develop a thought.

With a randomness setting introduced, however, Wolfram shows that repeated responses to the same prompt produce a wide variety of more interesting outputs.

The above summary is an over-simplification, of course, and if you want a more in-depth exposition Wolfram’s paper offers a lot of complex detail. But Wolfram’s “next word” explanation concurs with at least part of the stochastic parrot thesis: “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine ….”

What follows, in Bender and Gebru’s formulation, is equally significant. An LLM, they wrote, strings together words “without any reference to meaning.”

Do LLM’s actually understand the meaning of the words, phrases, sentences and paragraphs they have read and which they can produce? To answer that question definitively, we’d need definitive answers to questions like “What is meaning?” and “What does it mean to understand?”

A brain is not a computer, and a computer is not a brain

Over the past fifty years a powerful but deceptive metaphor has become pervasive. We’ve grown accustomed to describing computers by analogy to the human brain, and vice versa. As the saying goes, these models are always wrong even though they are sometimes useful.

“The Computational Metaphor,” wrote Alexis Barria and Keith Cross, “affects how people understand computers and brains, and of more recent importance, influences interactions with AI-labeled technology.”

The concepts embedded in the metaphor, they added, “afford the human mind less complexity than is owed, and the computer more wisdom than is due.”6

The human mind is inseparable from the brain, which is inseparable from the body. However much we might theorize about abstract processes of thought, our thought processes evolved with and are inextricably tangled with bodily realities of hunger, love, fear, satisfaction, suffering, mortality. We learn language as part of experiencing life, and the meanings we share (sometimes incompletely) when we communicate with others depend on shared bodily existence.

Angie Wang put it this way: “A toddler has a life, and learns language to describe it. An L.L.M. learns language, but has no life of its own to describe.”7

In other terms, wrote Bender and Gebru, “languages are systems of signs, i.e. pairings of form and meaning. But the training data for LMs is only form; they do not have access to meaning.”

Though the output of a chatbot may appear meaningful, that meaning exists solely in the mind of the human who reads or hears that output, and not in the artificial mind that stitched the words together. If the AI Industrial Complex deploys “counterfeit people”8 who pass as real people, we shouldn’t expect peace and love and understanding. When a chatbot tries to convince us that it really cares about our faulty new microwave or about the time we are waiting on hold for answers, we should not be fooled.

“WEIRD in, WEIRD out”

There are no generic humans. As it turns out, counterfeit people aren’t generic either.

Large Language Models are created primarily by large corporations, or by university researchers who are funded by large corporations or whose best job prospects are with those corporations. It would be a fluke if the products and services growing out of these LLMs didn’t also favour those corporations.

But the bias problem embedded in chatbots goes deeper. For decades, the people who have contributed the most to digitized data sets have been those with the most access to the internet, who publish the most books, research papers, magazine articles and blog posts – and these people disproportionately live in Western, Educated, Industrialized, Rich, Democratic (WEIRD) countries. Even social media users, who provide terabytes of free data for the AI machine, are likely to live in WEIRD places.

We should not be surprised, then, when outputs from chatbots express common biases:

“As people in positions of privilege with respect to a society’s racism, misogyny, ableism, etc., tend to be overrepresented in training data for LMs, this training data thus includes encoded biases, many already recognized as harmful.”9

In 2023 a group of scholars at Harvard University investigated those biases. “Technical reports often compare LLMs’ outputs with ‘human’ performance on various tests,” they wrote. “Here, we ask, ‘Which humans?’”10

“Mainstream research on LLMs,” they added, “ignores the psychological diversity of ‘humans’ around the globe.”

Their strategy was straightforward: prompt OpenAI’s GPT to answer the questions in the World Values Survey, and then compare the results to the answers that humans around the world gave to the same set of questions. The WVS documents a range of values including but not limited to issues of justice, moral principles, global governance, gender, family, religion, social tolerance, and trust. The team worked with data from the latest WVS surveys, collected from 2017 to 2022.

Recall that GPT does not give identical responses to identical prompts. To ensure that the GPT responses were representative, each of the WVS questions was posed to GPT 1000 times.11
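In outline, that repeated-prompting step might look something like the sketch below. The `ask_gpt` function is only a stand-in for whatever API calls the researchers actually made, and the answer options are placeholders rather than the WVS response scales.

```python
import random
from collections import Counter

def ask_gpt(question: str) -> str:
    """Placeholder for a real API call to GPT-3 / GPT-3.5 -- here it simply
    returns a canned answer at random so the sketch runs end to end."""
    return random.choice(["Agree", "Neither agree nor disagree", "Disagree"])

def response_distribution(question: str, n: int = 1000) -> Counter:
    """Pose the same question n times and tally the answers, since the model
    does not return identical responses to identical prompts."""
    return Counter(ask_gpt(question) for _ in range(n))

# The tally for each question can then be compared with the distribution of
# human answers in each country's World Values Survey sample.
print(response_distribution("Generally speaking, would you say that most people can be trusted?"))
```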

The comparisons with human answers to the same surveys revealed striking similarities and contrasts. The article states:

“GPT was identified to be closest to the United States and Uruguay, and then to this cluster of cultures: Canada, Northern Ireland, New Zealand, Great Britain, Australia, Andorra, Germany, and the Netherlands. On the other hand, GPT responses were farthest away from cultures such as Ethiopia, Pakistan, and Kyrgyzstan.”

In other words, the GPT responses were similar to those of people in WEIRD societies.

The results are summarized in the graphic below. Countries in which humans gave WVS answers close to GPT’s answers are clustered at top left, while countries whose residents gave answers increasingly at variance with GPT’s answers trend along the line running down to the right.

“Figure 3. The scatterplot and correlation between the magnitude of GPT-human similarity and cultural distance from the United States as a highly WEIRD point of reference.” From Atari et al., “Which Humans?”

The team went on to consider the WVS responses in various categories including styles of analytical thinking, degrees of individualism, and ways of expressing and understanding personal identity. In these and other domains, they wrote, “people from contemporary WEIRD populations are an outlier in terms of their psychology from a global and historical perspective.” Yet the responses from GPT tracked the WEIRD populations rather than global averages.

Anyone who asks GPT a question hoping for an unbiased answer is on a fool’s errand. Because the data sets include a large over-representation of WEIRD inputs, the outputs, for better or worse, will be no less WEIRD.

As Large Language Models are increasingly incorporated into decision-making tools and processes, their WEIRD biases become increasingly significant. By learning primarily from data that encodes viewpoints of dominant sectors of global society, and then expressing those values in decisions, LLMs are likely to further empower the powerful and marginalize the marginalized.

In the next installment we’ll look at the effects of AI and LLMs on employment conditions, now and in the near future.


Notes

1 Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, Association for Computing Machinery Digital Library, 1 March 2021.

2 John Naughton, “Google might ask questions about AI ethics, but it doesn’t want answers”, The Guardian, 13 March 2021.

3 As quoted in Elizabeth Weil, “You Are Not a Parrot”, New York Magazine, March 1, 2023.

4 Bender, Gebru et al, “On the Dangers of Stochastic Parrots.”

5 Stephen Wolfram, “What Is ChatGPT Doing … and Why Does It Work?”, 14 February 2023.

6 Alexis T. Baria and Keith Cross, “The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor”, arXiv, 18 July 2021.

7 Angie Wang, “Is My Toddler a Stochastic Parrot?”, The New Yorker, 15 November 2023.

8 The phrase “counterfeit people” is attributed to philosopher Daniel Dennett, quoted by Elizabeth Weil in “You Are Not a Parrot”, New York Magazine.

9 Bender, Gebru et al, “On the Dangers of Stochastic Parrots.”

10 Mohammed Atari, Mona J. Xue, Peter S. Park, Damián E. Blasi, and Joseph Henrich, “Which Humans?”, arXiv, 22 September 2023.

11 Specifically, the team “ran both GPT 3 and 3.5; they were similar. The paper’s plots are based on 3.5.” Email correspondence with study author Mohammed Atari.


Image at top of post: “The Evolution of Intelligence”, illustration by Bart Hawkins Kreps, posted under CC BY-SA 4.0 DEED license, adapted from “The Yin and Yang of Human Progress”, (Wikimedia Commons), and from parrot illustration courtesy of Judith Kreps Hawkins.

Artificial Intelligence in the Material World

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part two
Also published on Resilience.

Picture a relatively simple human-machine interaction: I walk two steps, flick a switch on the wall, and a light comes on.

Now picture a more complex interaction. I say, “Alexa, turn on the light” – and, if I’ve trained my voice to match the classifications in the electronic monitoring device and its associated global network, a light comes on.

“In this fleeting moment of interaction,” write Kate Crawford and Vladan Joler, “a vast matrix of capacities is invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics, distribution, prediction and optimization.”

“The scale of resources required,” they add, “is many magnitudes greater than the energy and labor it would take a human to … flick a switch.”1

Crawford and Joler wrote these words in 2018, at a time when “intelligent assistants” were recent and rudimentary products of AI. The industry has grown by leaps and bounds since then – and the money invested is matched by the computing resources now devoted to processing and “learning” from data.

In 2021, a much-discussed paper found that “the amount of compute used to train the largest deep learning models (for NLP [natural language processing] and other applications) has increased 300,000x in 6 years, increasing at a far higher pace than Moore’s Law.”2

An analysis in 2023 backed up this conclusion. Computing workloads are often measured in FLoating point OPerations (FLOPs). A Comment piece in the journal Nature Machine Intelligence illustrated the steep rise in the number of FLOPs used in training recent AI models.

Changes in the number of FLOPs needed for state-of-the-art AI model training, graph from “Reporting electricity consumption is essential for sustainable AI”, Charlotte Debus, Marie Piraud, Achim Streit, Fabian Theis & Markus Götz, Nature Machine Intelligence, 10 November 2023. AlexNet is a neural network model used to great effect with the image classification database ImageNet, which we will discuss in a later post. GPT-3 is a Large Language Model developed by OpenAI, for which Chat-GPT is the free consumer interface.
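To get a feel for the scale of these numbers, here is a back-of-envelope estimate using a commonly cited rule of thumb of roughly six floating point operations per model parameter per training token. The GPT-3 parameter and token counts are the widely reported ones; the per-chip throughput is an assumed round number, not the specification of any particular GPU.

```python
# Back-of-envelope training compute, using the rough rule of thumb
# train_flops ~= 6 * parameters * training tokens.
params = 175e9    # GPT-3's widely reported parameter count
tokens = 300e9    # GPT-3's widely reported training token count
train_flops = 6 * params * tokens          # ~3.2e23 FLOPs

chip_flops_per_s = 100e12                  # assume a chip sustaining 100 teraFLOPs per second
chip_seconds = train_flops / chip_flops_per_s
chip_years = chip_seconds / (3600 * 24 * 365)

print(f"{train_flops:.1e} FLOPs, or about {chip_years:.0f} years on a single such chip")
```

Spreading that workload across thousands of chips running in parallel brings a century of single-chip work down to a matter of weeks – which is exactly what AI server farms are built to do.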

With the performance of individual AI-specialized computer chips now measured in TeraFLOPs, and thousands of these chips harnessed together in an AI server farm, the electricity consumption of AI is vast.

As many researchers have noted, accurate electricity consumption figures are difficult to find, making it almost impossible to calculate the worldwide energy needs of the AI Industrial Complex.

However, Josh Saul and Dina Bass reported last year that

“Artificial intelligence made up 10 to 15% of [Google’s] total electricity consumption, which was 18.3 terawatt hours in 2021. That would mean that Google’s AI burns around 2.3 terawatt hours annually, about as much electricity each year as all the homes in a city the size of Atlanta.”3

Researcher Alex de Vries, meanwhile, calculated that if an AI system similar to ChatGPT were used for every Google search, electricity usage would spike to 29.2 TWh per year for the search engine alone.4

In Scientific American, Lauren Leffer cited projections that Nvidia, manufacturer of the most sophisticated chips for AI servers, will ship “1.5 million AI server units per year by 2027.”

“These 1.5 million servers, running at full capacity,” she added, “would consume at least 85.4 terawatt-hours of electricity annually—more than what many small countries use in a year, according to the new assessment.”5
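Both headline figures can be roughly reconstructed with simple arithmetic. In the sketch below, the per-server power draw of 6.5 kilowatts is my assumption for illustration – roughly the rated maximum of an A100-class AI server – while the server counts come from the sources quoted above.

```python
HOURS_PER_YEAR = 8760
server_power_kw = 6.5        # assumed draw per AI server at full load (A100-class box)

# de Vries: servers needed to pair every Google search with a ChatGPT-like model
servers_google = 512_821
twh_google = servers_google * server_power_kw * HOURS_PER_YEAR / 1e9
print(f"{twh_google:.1f} TWh per year")    # ~29.2 TWh

# Leffer / Scientific American: 1.5 million Nvidia AI servers per year by 2027
servers_2027 = 1_500_000
twh_2027 = servers_2027 * server_power_kw * HOURS_PER_YEAR / 1e9
print(f"{twh_2027:.1f} TWh per year")      # ~85.4 TWh
```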

OpenAI CEO Sam Altman expects AI’s appetite for energy will continue to grow rapidly. At the Davos confab in January 2024 he told the audience, “We still don’t appreciate the energy needs of this technology.” As quoted by The Verge, he added, “There’s no way to get there without a breakthrough. We need [nuclear] fusion or we need like radically cheaper solar plus storage or something at massive scale.” Altman has invested $375 million in fusion start-up Helion Energy, which hopes to succeed soon with a technology that has stubbornly remained 50 years in the future for the past 50 years.

In the near term, at least, electricity consumption will act as a brake on widespread use of AI in standard web searches, and will restrict use of the most sophisticated AI models to paying customers. That’s because the cost of AI use can be measured not only in watts, but in dollars and cents.

Shortly after the launch of ChatGPT, Sam Altman was quoted as saying that ChatGPT cost “probably single-digit cents per chat.” Pocket change – until you multiply it by perhaps 10 million users each day. Citing figures from SemiAnalysis, the Washington Post reported that by February 2023, “ChatGPT was costing OpenAI some $700,000 per day in computing costs alone.” Will Oremus concluded,

“Multiply those computing costs by the 100 million people per day who use Microsoft’s Bing search engine or the more than 1 billion who reportedly use Google, and one can begin to see why the tech giants are reluctant to make the best AI models available to the public.”6

In any case, Alex de Vries says, “NVIDIA does not have the production capacity to promptly deliver 512,821 A100 HGX servers”, which would be required to pair every Google search with a state-of-the-art AI model. And even if Nvidia could ramp up that production tomorrow, purchasing the computing hardware would cost Google about US$100 billion.
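The dollar figures work the same way. In the rough arithmetic below, the per-chat cost and per-server price are round-number assumptions consistent with the figures quoted above, not precise costs.

```python
# Running costs: "single-digit cents per chat" times a large user base
cost_per_chat = 0.05          # assumed mid-range of "single-digit cents"
chats_per_day = 10_000_000    # "perhaps 10 million users each day"
print(f"${cost_per_chat * chats_per_day:,.0f} per day")   # ~$500,000/day, same order as the reported $700,000

# Capital costs: hardware to pair every Google search with a state-of-the-art model
servers_needed = 512_821
price_per_server = 200_000    # assumed round figure per A100 HGX server
print(f"${servers_needed * price_per_server / 1e9:.0f} billion")   # roughly $100 billion
```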

Detail from: Nvidia GeForce RTX 2080, (TU104 | Turing), (Polysilicon | 5x | External Light), photograph by Fritzchens Fritz, at Wikimedia Commons, licensed under Creative Commons CC0 1.0 Universal Public Domain Dedication

A 457,000-item supply chain

Why is AI computing hardware so difficult to produce and so expensive? To understand this it’s helpful to take a greatly simplified look at a few aspects of computer chip production.

That production begins with silicon, one of the most common elements on earth and a basic constituent of sand. The silicon must be refined to 99.9999999% purity before being sliced into wafers.

Image from Intel video From Sand to Silicon: The Making of a Microchip.

Eventually each silicon wafer will be augmented with an extraordinarily fine pattern of transistors. Let’s look at the complications involved in just one step, the photolithography that etches a microscopic pattern in the silicon.

As Chris Miller explains in Chip War, the precision of photolithography is determined by, among other factors, the wavelength of the light being used: “The smaller the wavelength, the smaller the features that could be carved onto chips.”7 By the early 1990s, chipmakers had learned to pack more than 1 million transistors onto one of the chips used in consumer-level desktop computers. To enable the constantly climbing transistor count, photolithography tool-makers were using deep ultraviolet light, with wavelengths of about 200 nanometers (compared to visible light with wavelengths of about 400 to 750 nanometers; a nanometer is one-billionth of a meter). It was clear to some industry figures, however, that the wavelength of deep ultraviolet light would soon be too long for continued increases in the precision of etching and for continued increases in transistor count.

Thus began the long, difficult, and immensely expensive development of Extreme UltraViolet (EUV) photolithography, using light with a wavelength of about 13.5 nanometers.

Let’s look at one small part of the complex EUV photolithography process: producing and focusing the light. In Miller’s words,

“[A]ll the key EUV components had to be specially created. … Producing enough EUV light requires pulverizing a small ball of tin with a laser. … [E]ngineers realized the best approach was to shoot a tiny ball of tin measuring thirty-millionths of a meter wide moving through a vacuum at a speed of around two hundred miles an hour. The tin is then struck twice with a laser, the first pulse to warm it up, the second to blast it into a plasma with a temperature around half a million degrees, many times hotter than the surface of the sun. This process of blasting tin is then repeated fifty thousand times per second to produce EUV light in the quantities necessary to fabricate chips.”8

Heating the tin droplets to that temperature “required a carbon dioxide-based laser more powerful than any that previously existed.”9 Laser manufacturer Trumpf worked for 10 years to develop a laser powerful enough and reliable enough – and the resulting tool had “exactly 457,329 component parts.”10

Once the extremely short wavelength light could be reliably produced, it needed to be directed with great precision – and for that purpose German lens company Zeiss “created mirrors that were the smoothest objects ever made.”11

Nearly 20 years after development of EUV lithography began, this technique is standard for the production of sophisticated computer chips which now contain tens of billions of transistors each. But as of 2023, only Dutch company ASML had mastered the production of EUV photolithography machines for chip production. At more than $100 million each, Miller says “ASML’s EUV lithography tool is the most expensive mass-produced machine tool in history.”12

Landscape Destruction: Rio Tinto Kennecott Copper Mine from the top of Butterfield Canyon. Photographed in 665 nanometer infrared using an infrared converted Canon 20D and rendered in channel inverted false color infrared, photo by arbyreed, part of the album Kennecott Bingham Canyon Copper Mine, on flickr, licensed via CC BY-NC-SA 2.0 DEED.

No, data is not the “new oil”

US semi-conductor firms began moving parts of production to Asia in the 1960s. Today much of semi-conductor manufacturing and most of computer and phone assembly is done in Asia – sometimes using technology more advanced than anything in use within the US.

The example of EUV lithography indicates how complex and energy-intensive chipmaking has become. At countless steps from mining to refining to manufacturing, chipmaking relies on an industrial infrastructure that is still heavily reliant on fossil fuels.

Consider the logistics alone. A wide variety of metals, minerals, and rare earth elements, located at sites around the world, must be extracted, refined, and processed. These materials must then be transformed into the hundreds of thousands of parts that go into computers, phones, and routers, or which go into the machines that make the computer parts.

Co-ordinating all of this production, and getting all the pieces to where they need to be for each transformation, would be difficult if not impossible if it weren’t for container ships and airlines. And though it might be possible someday to run most of those processes on renewable electricity, for now those operations have a big carbon footprint.

It has become popular to proclaim that “data is the new oil”13, or “semi-conductors are the new oil”14. This is nonsense, of course. While both data and semi-conductors are worth a lot of money and a lot of GDP growth in our current economic context, neither one produces energy – they depend on available and affordable energy to be useful.

A world temporarily rich in surplus energy can produce semi-conductors to extract economic value from data. But warehouses of semi-conductors and petabytes of data will not enable us to produce surplus energy.

Artificial Intelligence powered by semi-conductors and data could, perhaps, help us to use the surplus energy much more efficiently and rationally. But that would require a radical change in the economic religion that guides our whole economic system, including the corporations at the top of the Artificial Intelligence Industrial Complex.

Meanwhile the AI Industrial Complex continues to soak up huge amounts of money and energy.

OpenAI CEO Sam Altman has been in fund-raising mode recently, seeking to finance a network of new semi-conductor fabrication plants. As reported in Fortune, “Constructing a single state-of-the-art fabrication plant can require tens of billions of dollars, and creating a network of such facilities would take years. The talks with [Abu Dhabi company] G42 alone had focused on raising $8 billion to $10 billion ….”

This round of funding would be in addition to the $10 billion Microsoft has already invested in OpenAI. Why would Altman want to get into the hardware production side of the Artificial Intelligence Industrial Complex, in addition to OpenAI’s leading role in software operations? According to Fortune,

“Since OpenAI released ChatGPT more than a year ago, interest in artificial intelligence applications has skyrocketed among companies and consumers. That in turn has spurred massive demand for the computing power and processors needed to build and run those AI programs. Altman has said repeatedly that there already aren’t enough chips for his company’s needs.”15

Becoming data

We face the prospect, then, of continuing rapid growth in the Artificial Intelligence Industrial Complex, accompanied by continuing rapid growth in the extraction of materials and energy – and data.

How will major AI corporations obtain and process all the data that will keep these semi-conductors busy pumping out heat?

Consider the light I turned on at the beginning of this post. If I simply flick the switch on the wall and the light goes off, the interaction will not be transformed into data. But if I speak to an Echo, asking Alexa to turn off the light, many data points are created and integrated into Amazon’s database: the time of the interaction, the IP address and physical location where this takes place, whether I speak English or some other language, whether my spoken words are unclear and the device asks me to repeat, whether the response appears to meet my approval, or whether I instead ask for the response to be changed. I would be, in Kate Crawford’s and Vladan Joler’s words, “simultaneously a consumer, a resource, a worker, and a product.”16
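What might one such interaction look like once it has been reduced to data? Here is a purely hypothetical sketch – the field names and values are invented for illustration and bear no relation to Amazon’s actual schema.

```python
# Invented field names and values -- illustrative only, not Amazon's real data model.
voice_command_event = {
    "timestamp": "2024-02-21T22:14:03Z",
    "device_id": "echo-kitchen-01",
    "ip_address": "203.0.113.42",        # an address from the documentation range
    "inferred_location": "Toronto, CA",
    "detected_language": "en-CA",
    "transcript": "turn off the light",
    "asr_confidence": 0.94,              # was the speech clear, or did the device ask again?
    "action_taken": "smart_light_off",
    "user_corrected_response": False,    # did the user repeat or change the request?
}
```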

By buying into the Amazon Echo world,

“the user has purchased a consumer device for which they receive a set of convenient affordances. But they are also a resource, as their voice commands are collected, analyzed and retained for the purposes of building an ever-larger corpus of human voices and instructions. And they provide labor, as they continually perform the valuable service of contributing feedback mechanisms regarding the accuracy, usefulness, and overall quality of Alexa’s replies. They are, in essence, helping to train the neural networks within Amazon’s infrastructural stack.”16

How will AI corporations monetize that data so they can cover their hardware and energy costs, and still return a profit on their investors’ money? We’ll turn to that question in coming installments.


Image at top of post: Bingham Canyon Open Pit Mine, Utah, photo by arbyreed, part of the album Kennecott Bingham Canyon Copper Mine, on flickr, licensed via CC BY-NC-SA 2.0 DEED.


Notes

1 Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources”, 2018.

2 Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, ACM Digital Library, 1 March 2021. Thanks to Paris Marx for introducing me to the work of Emily M. Bender on the excellent podcast Tech Won’t Save Us.

3 Josh Saul and Dina Bass, “Artificial Intelligence Is Booming—So Is Its Carbon Footprint”, Bloomberg, 9 March 2023.

4 Alex de Vries, “The growing energy footprint of artificial intelligence,” Joule, 18 October 2023.

5 Lauren Leffer, “The AI Boom Could Use a Shocking Amount of Electricity,” Scientific American, 13 October 2023.

6 Will Oremus, “AI chatbots lose money every time you use them. That is a problem,” Washington Post, 5 June 2023.

7 Chris Miller, Chip War: The Fight for the World’s Most Critical Technology, Simon & Schuster, October 2022; page 183

8 Chip War, page 226.

9 Chip War, page 227.

10 Chip War, page 228.

11 Chip War, page 228.

12 Chip War, page 230.

13 For example, in “Data Is The New Oil — And That’s A Good Thing,” Forbes, 15 Nov 2019.

14 As in, “Semi-conductors may be to the twenty-first century what oil was to the twentieth,” Lawrence Summers, former US Secretary of the Treasury, in blurb to Chip War.

15 “OpenAI CEO Sam Altman is fundraising for a network of AI chips factories because he sees a shortage now and well into the future,” Fortune, 20 January 2024.

16 Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources”, 2018.

Bodies, Minds, and the Artificial Intelligence Industrial Complex

Also published on Resilience.

This year may or may not be the year the latest wave of AI-hype crests and subsides. But let’s hope this is the year mass media slow their feverish speculation about the future dangers of Artificial Intelligence, and focus instead on the clear and present, right-now dangers of the Artificial Intelligence Industrial Complex.

Lost in most sensational stories about Artificial Intelligence is that AI does not and cannot exist on its own, any more than other minds, including human minds, can exist independent of bodies. These bodies have evolved through billions of years of coping with physical needs, and intelligence is linked to and inescapably shaped by these physical realities.

What we call Artificial Intelligence is likewise shaped by physical realities. Computing infrastructure necessarily reflects the properties of physical materials that are available to be formed into computing machines. The infrastructure is shaped by the types of energy and the amounts of energy that can be devoted to building and running the computing machines. The tasks assigned to AI reflect those aspects of physical realities that we can measure and abstract into “data” with current tools. Last but certainly not least, AI is shaped by the needs and desires of all the human bodies and minds that make up the Artificial Intelligence Industrial Complex.

As Kate Crawford wrote in Atlas of AI,

“AI can seem like a spectral force — as disembodied computation — but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.”1

The metaphors we use for high-tech phenomena influence how we think of these phenomena. Take, for example, “the Cloud”. When we store a photo “in the Cloud” we imagine that photo as floating around the ether, simultaneously everywhere and nowhere, unconnected to earth-bound reality.

But as Steven Gonzalez Monserrate reminded us, “The Cloud is Material”. The Cloud is tens of thousands of kilometers of data cables, tens of thousands of server CPUs in server farms, hydroelectric and wind-turbine and coal-fired and nuclear generating stations, satellites, cell-phone towers, hundreds of millions of desktop computers and smartphones, plus all the people working to make and maintain the machinery: “the Cloud is not only material, but is also an ecological force.”2

It is possible to imagine “the Cloud” without an Artificial Intelligence Industrial Complex, but the AIIC, at least in its recent news-making forms, could not exist without the Cloud.

The AIIC relies on the Cloud as a source of massive volumes of data used to train Large Language Models and image recognition models. It relies on the Cloud to sign up thousands of low-paid gig workers for work on crucial tasks in refining those models. It relies on the Cloud to rent out computing power to researchers and to sell AI services. And it relies on the Cloud to funnel profits into the accounts of the small number of huge corporations at the top of the AI pyramid.

So it’s crucial that we reimagine both the Cloud and AI to escape from mythological nebulous abstractions, and come to terms with the physical, energetic, flesh-and-blood realities. In Crawford’s words,

“[W]e need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it.”3

Through a series of posts we’ll take a deeper look at key aspects of the Artificial Intelligence Industrial Complex, including:

  • the AI industry’s voracious and growing appetite for energy and physical resources;
  • the AI industry’s insatiable need for data, the types and sources of data, and the continuing reliance on low-paid workers to make that data useful to corporations;
  • the biases that come with the data and with the classification of that data, which both reflect and reinforce current social inequalities;
  • AI’s deep roots in corporate efforts to measure, control, and more effectively extract surplus value from human labour;
  • the prospect of “superintelligence”, or an AI that is capable of destroying humanity while living on without us;
  • the results of AI “falling into the wrong hands” – that is, into the hands of the major corporations that dominate AI, and which, as part of our corporate-driven economy, are driving straight towards the cliff of ecological suicide.

One thing this series will not attempt is providing a definition of “Artificial Intelligence”, because there is no workable single definition. The phrase “artificial intelligence” has come into and out of favour as different approaches prove more or less promising, and many computer scientists in recent decades have preferred to avoid the phrase altogether. Different programming and modeling techniques have shown useful benefits and drawbacks for different purposes, but it remains debatable whether any of these results are indications of intelligence.

Yet “artificial intelligence” keeps its hold on the imaginations of the public, journalists, and venture capitalists. Matteo Pasquinelli cites a popular Twitter quip that sums it up this way:

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”4

Computers, be they boxes on desktops or the phones in pockets, are the most complex of tools to come into common daily use. And the computer network we call the Cloud is the most complex socio-technical system in history. It’s easy to become lost in the detail of any one of a billion parts in that system, but it’s important to also zoom out from time to time to take a global view.

The Artificial Intelligence Industrial Complex sits at the apex of a pyramid of industrial organization. In the next installment we’ll look at the vast physical needs of that complex.


Notes

1 Kate Crawford, Atlas of AI, Yale University Press, 2021.

2 Steven Gonzalez Monserrate, “The Cloud is Material: Environmental Impacts of Computation and Data Storage”, MIT Schwarzman College of Computing, January 2022.

3 Crawford, Atlas of AI, Yale University Press, 2021.

4 Quoted by Matteo Pasquinelli in “How A Machine Learns And Fails – A Grammar Of Error For Artificial Intelligence”, Spheres, November 2019.


Image at top of post: Margaret Henschel in Intel wafer fabrication plant, photo by Carol M. Highsmith, part of a collection placed in the public domain by the photographer and donated to the Library of Congress.

Beyond computational thinking – a ‘cloud of unknowing’ for the 21st century

Also published at Resilience.org

New Dark Age: Technology and the End of the Future, by James Bridle, Verso Books, 2018

If people are to make wise decisions in our heavily technological world, is it essential that they learn how to code?

For author and artist James Bridle, that is analogous to asking whether it is essential that people be taught plumbing skills.

Of course we want and need people who know how to connect water taps, how to find and fix leaks. But,

learning to plumb a sink is not enough to understand the complex interactions between water tables, political geography, ageing infrastructure, and social policy that define, shape and produce actual life support systems in society.” (Except where otherwise noted, all quotes in this article are from New Dark Age by James Bridle, Verso Books 2018)

Likewise, we need people who can view our technological society as a system – a complex, adaptive and emergent system – which remains heavily influenced by certain motives and interests while also spawning new developments that are beyond any one group’s control.

Bridle’s 2018 book New Dark Age takes deep dives into seemingly divergent subjects including the origins of contemporary weather forecasting, mass surveillance, airline reservation systems, and Youtube autoplay lists for toddlers.  Each of these excursions is so engrossing that it is sometimes difficult to hold his central thesis in mind, and yet he weaves all the threads into a cohesive tapestry.

Bridle wants us to be aware of the strengths of what he terms “computational thinking” – but also its critical limitations. And he wants us to look at the implications of  the internet as a system, not only of power lines and routers and servers and cables, but also of people, from the spies who tap into network nodes to monitor our communications, to the business analysts who devise ways to “monetize” our clicks, to the Facebook groups who share videos backing up their favoured theories.

Wiring of the SEAC computer, which was built in 1950 for the U.S. National Bureau of Standards. It was used until 1964, for purposes including meteorology, city traffic simulations, and the wave function of  the helium atom. Image from Wikimedia Commons.

From today’s weather, predict tomorrow’s

Decades before a practical electronic computer existed, pioneering meteorologist Lewis Fry Richardson1 thought up what would become a “killer app” for computers.

Given current weather data – temperature, barometric pressure, wind speed – for a wide but evenly spaced matrix of locations, Richardson reasoned that it should be possible to calculate how each cell’s conditions would interact with the conditions in adjacent cells, describe new weather patterns that would arise, and therefore predict tomorrow’s weather for each and all of those locations.

That method became the foundation of contemporary weather forecasting, which has improved by leaps and bounds in our lifetimes. But in 1916, when Richardson first tried to test his ideas, they were practically useless. The method involved so many calculations that Richardson laboured for weeks, then months, then years to produce a ‘prediction’ from a single day’s weather data.
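Richardson’s scheme is, at heart, exactly the kind of repetitive arithmetic that computers excel at: compute each cell’s next state from its own current values and those of its neighbours, then repeat. The toy sketch below uses a simple smoothing rule as a stand-in for Richardson’s far more elaborate equations of atmospheric motion.

```python
def step(grid):
    """Advance a toy 'weather' grid by one time step: each interior cell
    moves toward the average of its four neighbours. A stand-in for
    Richardson's far more elaborate physical equations."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neighbours = (grid[i-1][j] + grid[i+1][j] +
                          grid[i][j-1] + grid[i][j+1]) / 4
            new[i][j] = 0.5 * grid[i][j] + 0.5 * neighbours
    return new

# A small field of 'pressure' values: repeated stepping spreads the anomaly outward.
field = [[0, 0, 0, 0],
         [0, 10, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
for _ in range(3):
    field = step(field)
print(field)
```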

But by the end of World War II, the US military had developed early electronic computers which could begin to make Richardson’s theory a useful one. To military strategists, of course, the ability to predict weather could provide a great advantage in war. Knowing when a particular attack would be helped or hindered by the weather would be a great boon to generals. Even more tantalizingly, if it were possible to clearly understand and predict the weather, it might then also be possible to control the weather, inflicting a deluge or a sandstorm, for example, on vulnerable enemy forces.

John von Neumann, a mathematician, Manhattan Project physicist and a major figure in the development of computers, summed it up.

In what could be taken as the founding statement of computational thought, [von Neumann] wrote: ‘All stable processes we shall predict. All unstable processes we shall control.’”

Computational thinking, then, relied on the input of data about present conditions, and further data on how such conditions have been correlated in the past, in order to predict future conditions.

But because many aspects of our world are connected in one system – an adaptive and emergent system – this system spawns new trends which behave in new ways, not predictable simply from the patterns of the past. In other words, in the anthropocene age our system is not wholly computable. We need to understand, Bridle writes, that

technology’s increasing inability to predict the future – whether that’s the fluctuating markets of digital stock exchanges, the outcomes and applications of scientific research, or the accelerating instability of the global climate – stems directly from these misapprehensions about the neutrality and comprehensibility of computation.”

Take the case of climate studies and meteorology. The technological apparatus to collect all the data, crunch the numbers, and run the models is part of a huge industrial infrastructure that is itself changing the climate (with the internet itself contributing an ever-more significant share of greenhouse gas emissions). As a result the world’s weather is ever more turbulent, producing so-called ‘100 year storms’ every few years. We can make highly educated guesses about critical climatic tipping points, but we are unable to say for sure when these events will occur or how they will interact.

Age-old traditional knowledge of ways to deal with this week’s or this year’s weather is becoming less reliable. Scientists, too, should acknowledge the limits of computational thinking for their work:

In a 2016 editorial for the New York Times, computational meteorologist and past president of the American Meteorological Society William B. Gail cited a number of patterns that humanity has studied for centuries, but that are disrupted by climate change: long-term weather trends, fish spawning and migration, plant pollination, monsoon and tide cycles, the occurrence of ‘extreme’ weather events. For most of recorded history, these cycles have been broadly predictable, and we have built up vast reserves of knowledge that we can tap into in order to better sustain our ever more entangled civilisation.”

The implications are stark: “Gail foresees a time in which our grandchildren might conceivably know less about the world in which they live than we do today, with correspondingly catastrophic events for complex societies.”

World map of submarine communication cables, 2015. Cable data by Greg Mahlknecht, world map by Openstreetmap contributors. Accessed through Wikimedia Commons.

Lines of power

In many ways, Bridle says, we can be misled by the current view of the internet as a “cloud”. Contrary to our metaphor, he writes, “The cloud is not weightless; it is not amorphous, or even invisible, if you know where to look for it.” To be clear,

It is a physical infrastructure consisting of phone lines, fibre optics, satellites, cables on the ocean floor, and vast warehouses filled with computers, which consume huge amounts of water and energy and reside within national and legal jurisdictions. The cloud is a new kind of industry, and a hungry one.”

We have already referred to the rapidly growing electricity requirements of the internet, with its inevitable impact on the world’s climate. When we hear about “cloud computing”, Bridle also wants us to bear in mind the ways in which this “cloud” both reflects and reinforces military, political and economic power relationships:

The cloud shapes itself to geographies of power and influence, and it serves to reinforce them. The cloud is a power relationship, and most people are not on top of it.”

It is no accident, he says, that maps of internet traffic trace pathways of colonial power that are hundreds of years old. And we shouldn’t be surprised that the US military-intelligence complex, which gave birth to internet protocols, have also installed wiretapping equipment and personnel at junctions where trans-oceanic cables come ashore in the US, allowing them to scoop up far more communications data than they can effectively monitor.2

These power relationships come into play in determining not only what is visible in our web applications, but what is hidden. Bridle is a keen plane-spotter, and he marvels at flight-tracking websites which show, in real time, the movements of thousands of commercial aircraft around the world. “The view of these flight trackers, like that of Google Earth and other satellite image services, is deeply seductive,” he says, but wait:

This God’s-eye view is illusory, as it also serves to block out and erase other private and state activities, from the private jets of oligarchs and politicians to covert surveillance flights and military manoeuvres. For everything that is shown, something is hidden.”

Aviation comes up frequently in the book, as its military and commercial importance is reflected in the outsize role aviation has played in the development of computing and communications infrastructure. Aviation provides compelling examples of the unintended, emergent consequences of this technology.

High anthropoclouds in the sky of Barcelona, 2010, accessed through Wikimedia Commons. The clouds created by aircraft have an outsize impact on climate change. And climate change, Bridle writes, contributes to the increasingly vexing problem of “clear air turbulence” which threatens aircraft but cannot be reliably predicted.

On the last day of October, just a few months after New Dark Age was published, I found myself at Gatwick International Airport near London. I wanted to walk to the nearby town of Crawley to pick up a cardboard packing box. Though the information clerks in the airport terminal told me there was no walking route to Crawley, I had already learned that there was in fact a multi-use cycling lane, and so I hunted around the delivery ramps and parking garage exits until I found my route.

It was a beautiful but noisy stroll, with a brook on one side, a high fence on the other, and the ear-splitting roar of jet engines rising over me every few minutes. Little did I know that in just over a month this strange setting would be a major crime scene, as the full force of the aeronautical/intelligence industry pulled out all stops to find the operators of unauthorized drones, while hundreds of thousands of passengers were stranded in the pre-Christmas rush.

Another month has passed and no perpetrators have been identified, leading some to wonder if the multiple drone sightings were all mistakes. But in any case, aviation experts have long agreed that it’s just a matter of time before “non-state actors” manage to use unmanned aerial vehicles to deadly effect. Wireless communications, robotics, and three-dimensional location systems are now so widely available and inexpensive, it is unrealistic to think that drones will always be controlled by or even tracked by military or police authorities.

The exponential advance of artificial stupidity

Bridle’s discussion of trends in artificial intelligence is at once one of the most intriguing and, to this layperson at least, one of the less satisfying sections of the book. Many of us have heard about a new programming approach, following which a computer program taught itself to play the game Go, and soon was able to beat the world’s best human players of this ancient and complex game.

Those of us who have had to deal with automated telephone-tree answering systems, as much as we may hate the experience, can recognize that voice-recognition and language processing systems have also gotten better. And Google Translate has improved by leaps and bounds in just a few years’ time.

Bridle’s discussion of the relevant programming approaches presupposes a basic familiarity with the concept of neural networks. Since he writes so clearly about so many other facets of computational thinking, I wish he had chosen to spell out the major approaches to artificial intelligence a bit more for those of us who do not have degrees in computer science.

When he discusses the facility of Youtube in promoting mindless videos, and the efficiency of social media in spreading conspiracy theories of every sort, his message is lucid and provocative.

Here the two-step dance between algorithms and human users of the web produces results that might be laughable if they weren’t chilling. Likewise, strange trends develop out of interplay between Google’s official “mission” – “to organize the world’s information” – and the business model by which it boosts its share price – selling ads.

The Children’s Youtube division of Google has been one of Bridle’s research interests, and those of us fortunate enough not to be acquainted with this realm of culture are likely to be shocked by what he finds.

You might ask what kind of idiot would name a video “Surprise Play Doh Eggs Peppa Pig Stamper Cars Pocoyo Minecraft Surfs Kinder Play Doh Sparkle Brilho”. A clever idiot, that’s who, an idiot who may or may not be human, but who knows how to make money. Bridle explains the motive:

This unintelligible assemblage of brand names, characters and keywords points to the real audience for the descriptions: not the viewer, but the algorithms that decide who sees which videos.”

These videos are created to be seen by children too young to be reading titles. Youtube accommodates them – and parents happy to have their toddlers transfixed by a screen – by automatically assembling long reels of videos for autoplay. The videos simply need to earn their place in the playlists with titles that contain enough algorithm-matching words or phrases, and hold the toddler’s attention long enough for ads to be seen and the next video to begin.

The content factories that churn out videos by the millions, then, must keep pace with current trends while spending less on production than will be earned by the accompanying ads, which are typically sold on a “per thousand views” basis.

Is this a bit of a stretch from “organizing the world’s information”? Yes, but what’s more important, a corporation’s lofty mission statement, or its commercial raison d’être? (That is, to sell ads.)

When it comes to content aimed at adults the trends are just as troubling, as Bridle’s discussion of conspiracy theories makes clear.

According to the Diagnostic and Statistical Manual of Mental Disorders, he explains, “a belief is not a delusion when it is held by a person’s ‘culture or subculture’.”

But with today’s social media, it is easy to find people who share any particular belief, no matter how outlandish or ridiculous that belief might seem to others:

Those that the psychiatric establishment would have classified as delusional can ‘cure’ themselves of their delusions by seeking out and joining an online community of like minds. Any opposition to this worldview can be dismissed as a cover-up of the truth of their experience ….”

This pattern, as it happens, reflects the profit-motive basis of social media corporations – people give a media website their attention for much longer when it spools videos or returns search results that confirm their biases and beliefs, and that means there are more ads viewed, more ad revenue earned.

If Google and other social media giants do a splendid job of “organizing the world’s information”, then, they are equally adept at organizing the world’s misinformation:

The abundance of information and the plurality of worldviews now accessible to us through the internet are not producing a coherent consensus reality, but one riven by fundamentalist insistence on simplistic narratives, conspiracy theories, and post-factual politics. It is on this contradiction that the idea of a new dark age turns: an age in which the value we have placed upon knowledge is destroyed by the abundance of that profitable commodity, and in which we look about ourselves in search of new ways to understand the world.”

Our unknowable future

After reading to the last page of a book in which the author covers a dazzling array of topics so well and weaves them together so skillfully, it would be churlish to wish he had included more. I would hope, however, that Bridle or someone with an equal gift for systemic analysis will delve into two questions that naturally arise from this work.

Bridle notes that the energy demands of our computational network are growing rapidly, to the point that this network is a significant driver of climate change. But what might happen to the network if our energy supply becomes effectively scarce due to rapidly rising energy costs?3

Major sectors of the so-called Web 2.0 are founded in a particular business model: services are provided to the mass of users “free”, while advertisers and other data-buyers pay for our attention in order to sell us more products. What might happen to this dominant model of “free services”, if an economic crash means we can’t sustain consumption on anything close to the current scale?

I suspect Bridle would say that the answers to these questions, like so many others, do not compute. Though computation can be a great tool, it will not answer many of the most important questions.

In the morass of information/misinformation in which our network engulfs us, we might find many reasons for pessimism. But Bridle urges us to accept and even welcome the deep uncertainty which has always been a condition of our existence.

As misleading as the “cloud” may be as a picture of our computer network, Bridle suggests we can find value if we take a nod from the 14th-century Christian mystic classic  “The Cloud of Unknowing.” Its anonymous author wrote, “On account of pride, knowledge may often deceive you …. Knowledge tends to breed conceit, but love builds.”

Or in Bridle’s 21st century phrasing,

It is this cloud that we have sought to conquer with computation, but that is continually undone by the reality of what we are attempting. Cloudy thinking, the embrace of unknowing, might allow us to revert from computational thinking, and it is what the network itself urges upon us.”


Photo at top: anthropogenic clouds over paper mill UPM-Kymmene, Schongau, 2013. Accessed at Wikimedia Commons.


NOTES

1 For an excellent account of the centuries-long development of contemporary meteorology, including the important role of Lewis Fry Richardson, see Bill Streever’s 2016 book And Soon I Heard a Roaring Wind: A Natural History of Moving Air.
2 More precisely, though intelligence agents can often zero in on suspicious conversations after a crime has been committed or an insurgency launched, the trillions of bits of data are unreliable sources of prediction before the fact.
3 Kris de Decker has posed some intriguing possibilities in Low-Tech Magazine. See, for example, his 2015 article “How to Build a Low-tech Internet”.

A measured response to surveillance capitalism

Also published at Resilience.org.

A flood of recent analysis discusses the abuse of personal information by internet giants such as Facebook and Google. Some of these articles zero in on the basic business models of Facebook, and occasionally Google, as inherently deceptive and unethical.

But I have yet to see a proposal for any type of regulation that seems proportional to the social problem created by these new enterprises.

So here’s my modest proposal for a legislative response to surveillance capitalism1:

No company which operates an internet social network, or an internet search engine, shall be allowed to sell advertising, nor allowed to sell data collected about the service’s users.

We should also consider an additional regulation:

No company which operates an internet social network, or an internet search engine, shall be allowed to provide this service free of charge to its users.

It may not be easy to craft an appropriate legal definition of “social network” or “search engine”, and I’m not suggesting that this proposal would address all of the surveillance issues inherent in our digitally networked societies. But regulation of companies like Facebook and Google will remain ineffectual unless their current business models are prohibited.

Core competency

The myth of “free services” is widespread in our society, of course, and most people have been willing to play along with the fantasy. Yet we can now see that when it comes to search engines and social networks, this game of pretend has dangerous consequences.

In a piece from September 2017 entitled “Why there’s nothing to like about Facebook’s ethically-challenged, troublesome business model,” Financial Post columnist Diane Francis clearly described the trick at the root of Facebook’s success:

“Facebook’s underlying business model itself is troublesome: offer free services, collect user’s private information, then monetize that information by selling it to advertisers or other entities.”

Writing in The Guardian a few days ago, John Naughton concisely summarized the corporate histories of both Facebook and Google:

“In the beginning, Facebook didn’t really have a business model. But because providing free services costs money, it urgently needed one. This necessity became the mother of invention: although in the beginning Zuckerberg (like the two Google co-founders, incidentally) despised advertising, in the end – like them – he faced up to sordid reality and Facebook became an advertising company.”

So while Facebook has grandly phrased its mission as “to make the world more open and connected”, and Google long proclaimed its mission “to organize the world’s information”, those goals had to take a back seat to the real business: helping other companies sell us more stuff.

In Facebook’s case, it has been obvious for years that providing a valuable social networking service was a secondary focus. Over and over, Facebook introduced major changes in how the service worked, to widespread complaints from users. But as long as these changes didn’t drive too many users away, and as long as the changes made Facebook a more effective partner to advertisers, the company earned more profit and its stock price soared.

Likewise, Google found a “sweet spot” with the number of ads that could appear above and beside search results without overly annoying users – while also packaging the search data for use by advertisers across the web.

A bad combination

The sale of advertising, of course, has subsidized news and entertainment media for more than a century. In recent decades, even before online publishing became dominant, some media switched to wholly-advertising-supported “free” distribution. While that fiction had many negative consequences, I believe, the danger to society was taken to another level with search engines and social networks.

A “free” print magazine or newspaper, after all, collects no data while being read.2 No computer records if and when you turn the page, how long you linger over an article, or even whether you clip an ad and stick it to your refrigerator.

Today’s “free” online services are different. Search engines collate every search by every user, so they know what people are curious about – the closest version of mass mind-reading we have yet seen. Social media not only register every click and every “Like”, but all our digital interactions with all of our “friends”.

This surveillance-for-profit is wonderfully useful for the purpose of selling us more stuff – or, more recently, for manipulating our opinions and our votes. But we should not be surprised when these companies abuse our confidence, since their business model drives them to betray our trust as efficiently as possible.

Effective regulation

In the flood of commentary about Facebook following the Cambridge Analytica revelations, two themes predominate. First, there is a frequently-stated wish that Facebook “respect our privacy”. Second, there are somewhat more specific calls for regulation of Facebook’s privacy settings, terms of sale of data, or policing of “bot” accounts.

Both themes strike me as naïve. Facebook may allow users a measure of privacy in that they can be permitted to hide some posts from some other users. But it is the very essence of Facebook’s business model that no user can have any privacy from Facebook itself, and Facebook can and will use everything it learns about us to help manipulate our desires in the interests of paying customers. Likewise, it is naïve to imagine that what we post on Facebook remains “our data”, since we have given it to Facebook in exchange for a service for which we pay no monetary fee.

But regulating the terms under which Facebook acquires our consent to monetize our information? This strikes me as an endlessly complicated game of whack-a-mole. The features of computerized social networks have changed and will continue to change as fast as a coder can come up with a clever new bit of software. Regulating these internal methods and operations would be a bureaucratic boondoggle.

Much simpler and more effective, I think, would be to abolish the fiction of “free” services that forms the façade of Facebook and Google. When these companies as well as new competitors3 charge an honest fee to users of social networks and search engines, because they can no longer earn money by selling ads or our data, much of the impetus to surveillance capitalism will be gone.

It costs real money to provide a platform for billions of people to share our cat videos, pictures of grandchildren, and photos of avocado toast. It also costs real money to build a data-mining machine – to sift and sort that data to reveal useful patterns for advertisers who want to manipulate our desires and opinions.

If social networks and search engines make their money honestly through user fees, they will obviously collect data that helps them improve their service and retain or gain users. But they will have no incentive to throw financial resources at data mining for other purposes.

Under such a regulation, would we still have good social network and search engine services? I have little doubt that we would.

People willingly pay for services they truly value – look back at how quickly people adopted the costly use of cell phones. But when someone pretends to offer us a valued service “free”, we endure a host of consequences as we eagerly participate in the con.
Photos at top: Sergey Brin, co-founder of Google (left) and Mark Zuckerberg, Facebook CEO. Left photo, “A surprise guest at TED 2010, Sergey spoke openly about Google’s new posture with China,” by Steve Jurvetson, via Wikimedia Commons. Right photo, “Mark Zuckerberg, Founder and Chief Executive Officer, Facebook, USA, captured during the session ‘The Next Digital Experience’ at the Annual Meeting 2009 of the World Economic Forum in Davos, Switzerland, January 30, 2009”, by World Economic Forum, via Wikimedia Commons.



NOTES

1 The term “surveillance capitalism” was introduced by John Bellamy Foster and Robert W. McChesney in a perceptive article in Monthly Review, July 2014.

2 Thanks to Toronto photographer and writer Diane Boyer for this insight.

3 There would be a downside to stipulating that social networks or search engines do not provide their services to users free of charge, in that it would be difficult for a new service to break into the market. One option might be a size-based exemption, allowing, for example, a company to offer such services free until it reaches 10 million users.