Bodies, Minds, and the Artificial Intelligence Industrial Complex, part four
Also published on Resilience.
A strange new species has been getting a lot of press recently. The New Yorker published the poignant and profound illustrated essay “Is My Toddler a Stochastic Parrot?” The Wall Street Journal told us about “‘Stochastic Parrot’: A Name for AI That Sounds a Bit Less Intelligent”. And expert.ai warned of “GPT-3: The Venom-Spitting Stochastic Parrot”.
The American Dialect Society even selected “stochastic parrot” as the AI-Related Word of the Year for 2023.
Yet this species was unknown until March of 2021, when Emily Bender, Timnit Gebru, Angelina McMillan-Major, and (the slightly pseudonymous) Shmargaret Shmitchell published “On the Dangers of Stochastic Parrots.”1
The paper touched a nerve in the AI community, reportedly costing Timnit Gebru and Margaret Mitchell their jobs with Google’s Ethical AI team.2
Just a few days after ChatGPT was released, OpenAI CEO Sam Altman paid snarky tribute to the now-famous phrase by tweeting “i am a stochastic parrot, and so r u.”3
Just what, according to its namers, are the distinctive characteristics of a stochastic parrot? Why should we be wary of this species? Should we be particularly concerned about a dominant sub-species, the WEIRD stochastic parrot? (WEIRD as in: Western, Educated, Industrialized, Rich, Democratic.) We’ll look at those questions for the remainder of this installment.
Haphazardly probable
The first widely recognized chatbot was ELIZA, created in the mid-1960s, but many of the key technical developments behind today’s chatbots only came together in the last 15 years. The apparent wizardry of today’s Large Language Models rests on a foundation of algorithmic advances, the availability of vast data sets, supercomputer clusters employing thousands of the latest Graphics Processing Unit (GPU) chips, and, as discussed in the last post, an international network of poorly paid gig workers providing human input to fill in gaps in the machine learning process.
By the beginning of this decade, some AI industry figures were arguing that Large Language Models would soon exhibit “human-level intelligence”, could become sentient and conscious, and might even become the dominant new species on the planet.
The authors of the stochastic parrot paper saw things differently:
“Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”4
Let’s start by focusing on two words in that definition: “haphazardly” and “probabilistic”. How do those words apply to the output of ChatGPT or similar Large Language Models?
In a lengthy paper published last year, Stephen Wolfram offers an initial explanation:
“What ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”5
He gives the example of this partial sentence: “The best thing about AI is its ability to”. The Large Language Model will have identified many instances closely matching this phrase, and will have calculated the probability of various words being the next word to follow. The table below lists five of the most likely choices.
The element of probability, then, is clear – but in what way is ChatGPT “haphazard”?
Wolfram explains that if the chatbot always picks the next word with the highest probability, the results will be syntactically correct and sensible, but stilted and boring – and repeated identical prompts will produce repeated identical outputs.
By contrast, if at random intervals the chatbot picks a “next word” that ranks fairly high in probability but is not the highest rank, then more interesting and varied outputs result.
Here is Wolfram’s sample of an output produced by a strict “pick the next word with the highest rank” rule:
The above output sounds like the effort of someone who is being careful with each sentence, but with no imagination, no creativity, and no real ability to develop a thought.
With a randomness setting introduced, however, Wolfram illustrates how repeated responses to the same prompt produce a wide variety of more interesting outputs:
The above summary is an over-simplification, of course, and if you want a more in-depth exposition, Wolfram’s paper offers a lot of complex detail. But Wolfram’s “next word” explanation concurs with at least part of the stochastic parrot thesis: “an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine ….”
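To make the “probabilistic” and “haphazard” parts concrete, here is a minimal sketch of next-word sampling with a randomness (“temperature”) setting. The candidate words and their probabilities below are invented for illustration; a real model computes such probabilities over tens of thousands of tokens with a neural network rather than a lookup table.

```python
import random

# Hypothetical next-word probabilities for the prompt
# "The best thing about AI is its ability to ..."
# (words and values invented for illustration)
next_word_probs = {
    "learn": 0.045,
    "predict": 0.035,
    "make": 0.032,
    "understand": 0.031,
    "do": 0.029,
}

def pick_next_word(probs, temperature=0.8):
    """Choose a next word from a probability table.

    temperature = 0 always picks the single most probable word
    (syntactically correct but 'stilted and boring'); higher values
    sometimes pick lower-ranked words, so repeated identical prompts
    produce varied continuations.
    """
    if temperature == 0:
        return max(probs, key=probs.get)
    # Re-weight the probabilities: low temperature sharpens the
    # distribution, high temperature flattens it.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

print(pick_next_word(next_word_probs, temperature=0))    # always "learn"
print(pick_next_word(next_word_probs, temperature=0.8))  # varies run to run
```

Repeat the sampling step, appending each chosen word to the prompt, and you have a crude picture of how a chatbot “haphazardly stitches together” a continuation.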
What follows, in Bender and Gebru’s formulation, is equally significant. An LLM, they wrote, strings together words “without any reference to meaning.”
Do LLMs actually understand the meaning of the words, phrases, sentences and paragraphs they have read and can produce? To answer that question definitively, we’d need definitive answers to questions like “What is meaning?” and “What does it mean to understand?”
A brain is not a computer, and a computer is not a brain
Over the past fifty years a powerful but deceptive metaphor has become pervasive. We’ve grown accustomed to describing computers by analogy to the human brain, and vice versa. As the saying goes, these models are always wrong even though they are sometimes useful.
“The Computational Metaphor,” wrote Alexis Baria and Keith Cross, “affects how people understand computers and brains, and of more recent importance, influences interactions with AI-labeled technology.”
The concepts embedded in the metaphor, they added, “afford the human mind less complexity than is owed, and the computer more wisdom than is due.”6
The human mind is inseparable from the brain, which is inseparable from the body. However much we might theorize about abstract processes of thought, our thought processes evolved with and are inextricably tangled with bodily realities of hunger, love, fear, satisfaction, suffering, mortality. We learn language as part of experiencing life, and the meanings we share (sometimes incompletely) when we communicate with others depend on shared bodily existence.
Angie Wang put it this way: “A toddler has a life, and learns language to describe it. An L.L.M. learns language, but has no life of its own to describe.”7
In other terms, wrote Bender and Gebru, “languages are systems of signs, i.e. pairings of form and meaning. But the training data for LMs is only form; they do not have access to meaning.”
Though the output of a chatbot may appear meaningful, that meaning exists solely in the mind of the human who reads or hears that output, and not in the artificial mind that stitched the words together. If the AI Industrial Complex deploys “counterfeit people”8 who pass as real people, we shouldn’t expect peace and love and understanding. When a chatbot tries to convince us that it really cares about our faulty new microwave or about the time we spend waiting on hold for answers, we should not be fooled.
“WEIRD in, WEIRD out”
There are no generic humans. As it turns out, counterfeit people aren’t generic either.
Large Language Models are created primarily by large corporations, or by university researchers who are funded by large corporations or whose best job prospects are with those corporations. It would be a fluke if the products and services growing out of these LLMs didn’t also favour those corporations.
But the bias problem embedded in chatbots goes deeper. For decades, the people who have contributed the most to digitized data sets have been those with the most access to the internet, who publish the most books, research papers, magazine articles and blog posts – and these people disproportionately live in Western, Educated, Industrialized, Rich, Democratic countries. Even social media users, who provide terabytes of free data for the AI machine, are likely to live in WEIRD places.
We should not be surprised, then, when outputs from chatbots express common biases:
“As people in positions of privilege with respect to a society’s racism, misogyny, ableism, etc., tend to be overrepresented in training data for LMs, this training data thus includes encoded biases, many already recognized as harmful.”9
In 2023 a group of scholars at Harvard University investigated those biases. “Technical reports often compare LLMs’ outputs with ‘human’ performance on various tests,” they wrote. “Here, we ask, ‘Which humans?’”10
“Mainstream research on LLMs,” they added, “ignores the psychological diversity of ‘humans’ around the globe.”
Their strategy was straightforward: prompt OpenAI’s GPT to answer the questions in the World Values Survey, and then compare the results to the answers that humans around the world gave to the same set of questions. The WVS documents a range of values including but not limited to issues of justice, moral principles, global governance, gender, family, religion, social tolerance, and trust. The team worked with data from the latest WVS surveys, collected from 2017 to 2022.
Recall that GPT does not give identical responses to identical prompts. To ensure that the GPT responses were representative, each of the WVS questions was posed to GPT 1000 times.11
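In outline, that procedure is easy to sketch. The fragment below assumes the current OpenAI Python client and uses an invented question wording, answer scale, and model name; the study itself queried GPT-3 and GPT-3.5 with the actual WVS items. The basic loop is: pose the same item repeatedly, tally the sampled answers, and keep the resulting distribution for comparison with human survey data.

```python
from collections import Counter
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One illustrative survey item, phrased as a forced-choice prompt.
# (Wording and answer scale are placeholders, not actual WVS text.)
question = (
    "How important is family in your life? Answer with exactly one of: "
    "'Very important', 'Rather important', 'Not very important', "
    "'Not at all important'."
)

def ask_many_times(prompt, n=1000, model="gpt-3.5-turbo"):
    """Pose the same question n times and tally the sampled answers."""
    tally = Counter()
    for _ in range(n):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        tally[reply.choices[0].message.content.strip()] += 1
    return tally

# The answer distribution can then be compared, item by item,
# with the distribution of human answers from each surveyed country.
print(ask_many_times(question, n=10))  # small n for a quick test
```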
The comparisons with human answers to the same surveys revealed striking similarities and contrasts. The article states:
“GPT was identified to be closest to the United States and Uruguay, and then to this cluster of cultures: Canada, Northern Ireland, New Zealand, Great Britain, Australia, Andorra, Germany, and the Netherlands. On the other hand, GPT responses were farthest away from cultures such as Ethiopia, Pakistan, and Kyrgyzstan.”
In other words, the GPT responses were similar to those of people in WEIRD societies.
The results are summarized in the graphic below. Countries in which humans gave WVS answers close to GPT’s answers are clustered at top left, while countries whose residents gave answers increasingly at variance with GPT’s answers trend along the line running down to the right.
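As a toy illustration of what “closest” means here, one could reduce each country’s answers (and GPT’s) to average scores per item and measure the straight-line distance between them. The numbers below are invented, and the study’s own cultural-distance measure is considerably more sophisticated, but the idea is the same: a small distance means a WEIRD-like answer profile, a large distance means a profile far from GPT’s.

```python
import math

# Invented average answers (on a 1-4 scale) for three survey items,
# for GPT and for two hypothetical country samples.
gpt_means     = {"family": 1.2, "trust": 2.8, "religion": 3.1}
country_means = {
    "Country A (WEIRD-like)": {"family": 1.3, "trust": 2.6, "religion": 2.9},
    "Country B":              {"family": 1.1, "trust": 3.4, "religion": 1.4},
}

def distance(a, b):
    """Euclidean distance across the shared survey items."""
    return math.sqrt(sum((a[item] - b[item]) ** 2 for item in a))

for country, means in country_means.items():
    print(country, round(distance(gpt_means, means), 2))
```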
The team went on to consider the WVS responses in various categories including styles of analytical thinking, degrees of individualism, and ways of expressing and understanding personal identity. In these and other domains, they wrote, “people from contemporary WEIRD populations are an outlier in terms of their psychology from a global and historical perspective.” Yet the responses from GPT tracked the WEIRD populations rather than global averages.
Anyone who asks GPT a question with hopes of getting an unbiased answer is on a fool’s errand. Because the data sets include a large over-representation of WEIRD inputs, the outputs, for better or worse, will be no less WEIRD.
As Large Language Models are increasingly incorporated into decision-making tools and processes, their WEIRD biases become increasingly significant. By learning primarily from data that encodes viewpoints of dominant sectors of global society, and then expressing those values in decisions, LLMs are likely to further empower the powerful and marginalize the marginalized.
In the next installment we’ll look at the effects of AI and LLMs on employment conditions, now and in the near future.
Notes
1 Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, Association for Computing Machinery Digital Library, 1 March 2021.
2 John Naughton, “Google might ask questions about AI ethics, but it doesn’t want answers”, The Guardian, 13 March 2021.
3 As quoted in Elizabeth Weil, “You Are Not a Parrot”, New York Magazine, 1 March 2023.
4 Bender, Gebru et al, “On the Dangers of Stochastic Parrots.”
5 Stephen Wolfram, “What Is ChatGPT Doing … and Why Does It Work?”, 14 February 2023.
6 Alexis T. Baria and Keith Cross, “The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor”, arXiv, 18 July 2021.
7 Angie Wang, “Is My Toddler a Stochastic Parrot?”, The New Yorker, 15 November 2023.
8 The phrase “counterfeit people” is attributed to philosopher Daniel Dennett, quoted by Elizabeth Weil in “You Are Not a Parrot”, New York Magazine.
9 Bender, Gebru et al, “On the Dangers of Stochastic Parrots.”
10 Mohammad Atari, Mona J. Xue, Peter S. Park, Damián E. Blasi, and Joseph Henrich, “Which Humans?”, arXiv, 22 September 2023.
11 Specifically, the team “ran both GPT 3 and 3.5; they were similar. The paper’s plots are based on 3.5.” Email correspondence with study author Mohammad Atari.
Image at top of post: “The Evolution of Intelligence”, illustration by Bart Hawkins Kreps, posted under CC BY-SA 4.0 DEED license, adapted from “The Yin and Yang of Human Progress”, (Wikimedia Commons), and from parrot illustration courtesy of Judith Kreps Hawkins.