The existential threat of artificial stupidity

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part seven
Also published on Resilience.

One headline about artificial intelligence gave me a rueful laugh the first few times I saw it.

With minor variations, headline writers have posed the question, “What if AI falls into the wrong hands?”

But AI is already in the wrong hands. AI is in the hands of a small cadre of ultra-rich influencers affiliated with corporations and governments, organizations which collectively are driving us straight towards a cliff of ecological destruction.

This does not mean, of course, that every person working on the development of artificial intelligence is a menace, nor that every use of artificial intelligence will be destructive.

But we need to be clear about the socio-economic forces behind the AI boom. Otherwise we may buy the illusion that our linear, perpetual-growth-at-all-costs economic system has somehow given birth to a magically sustainable electronic saviour.

The artificial intelligence industrial complex is an astronomically expensive enterprise, pushing its primary proponents to rapidly implement monetized applications. As we will see, those monetized applications are either already in widespread use, or are being promised as just around the corner. First, though, we’ll look at why AI is likely to be substantially controlled by those with the deepest pockets.

“The same twenty-five billionaires”

CNN host Fareed Zakaria asked the question “What happens if AI gets into the wrong hands?” in a segment in January. Interviewing Mustafa Suleyman, Inflection AI founder and Google DeepMind co-founder, Zakaria framed the issue this way:

“You have kind of a cozy elite of a few of you guys. It’s remarkable how few of you there are, and you all know each other. You’re all funded by the same twenty-five billionaires. But once you have a real open source revolution, which is inevitable … then it’s out there, and everyone can do it.”1

Some of this is true. OpenAI was co-founded by Sam Altman and Elon Musk. Their partnership didn’t last long and Musk has founded a competitor, x.AI. OpenAI has received $10 billion from Microsoft, while Amazon has invested $4 billion and Alphabet (Google) has invested $300 million in AI startup Anthropic. Year-old company Inflection AI has received $1.3 billion from Microsoft and chip-maker Nvidia.2

Meanwhile Mark Zuckerberg says Meta’s biggest area of investment is now AI, and the company is expected to spend about $9 billion this year just to buy chips for its AI computer network.3 Companies including Apple, Amazon, and Alphabet are also investing heavily in AI divisions within their respective corporate structures.

Microsoft, Amazon and Alphabet all earn revenue from their web services divisions which crunch data for many other corporations. Nvidia sells the chips that power the most computation-intensive AI applications.

But whether an AI startup rents computer power in the “cloud”, or builds its own supercomputer complex, creating and training new AI models is expensive. As Fortune reported in January, 

“Creating an end-to-end model from scratch is massively resource intensive and requires deep expertise, whereas plugging into OpenAI or Anthropic’s API is as simple as it gets. This has prompted a massive shift from an AI landscape that was ‘model-forward’ to one that’s ‘product-forward,’ where companies are primarily tapping existing models and skipping right to the product roadmap.”4
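To make that contrast concrete, here is a minimal sketch of what “plugging into an API” amounts to in practice, using OpenAI’s Python client. The model name and prompt are illustrative, and a paid API key is assumed; this is an illustration of the product-forward shift, not anything from the Fortune piece itself.

```python
# pip install openai  (the official OpenAI Python client, v1.x)
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# One paid network call stands in for the whole "model-forward" pipeline
# of data collection, training and fine-tuning described above.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative; whichever hosted model the vendor licenses to you
    messages=[{"role": "user",
               "content": "Draft a product description for a reusable water bottle."}],
)
print(response.choices[0].message.content)
```

A handful of lines like these, billed per request, replace the data centres, training runs and research staff that a “model-forward” company would need, which is precisely why the deepest pockets end up owning the models that everyone else rents.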

The huge expense of building AI models also has implications for claims about “open source” code. As Cory Doctorow has explained,

“Not only is the material that ‘open AI’ companies publish insufficient for reproducing their products, even if those gaps were plugged, the resource burden required to do so is so intense that only the largest companies could do so.”5

Doctorow’s aim in the above-cited article was to debunk the claim that the AI complex is democratising access to its products and services. Yet this analysis also has implications for Fareed Zakaria’s fears of unaffiliated rogue actors doing terrible things with AI.

Individuals or small organizations may indeed use a major company’s AI engine to create deepfakes and spread disinformation, or perhaps even to design dangerously mutated organisms. Yet the owners of the AI models determine who has access to which models and under which terms. Thus unaffiliated actors can be barred from using particular models, or charged sufficiently high fees that using a given AI engine is not feasible.

So while the danger from unaffiliated rogue actors is real, I think the more serious danger is from the owners and funders of large AI enterprises. In other words, the biggest dangers come not from those into whose hands AI might fall, but from those whose hands are already all over AI.

Command and control

As discussed earlier in this series, the US military funded some of the earliest foundational projects in artificial intelligence, including the “perceptron” in 19566 and the WordNet semantic database beginning in 1985.7

To this day military and intelligence agencies remain major revenue sources for AI companies. Kate Crawford writes that the intentions and methods of intelligence agencies continue to shape the AI industrial complex:

“The AI and algorithmic systems used by the state, from the military to the municipal level, reveal a covert philosophy of en masse infrastructural command and control via a combination of extractive data techniques, targeting logics, and surveillance.”8

As Crawford points out, the goals and methods of high-level intelligence agencies “have spread to many other state functions, from local law enforcement to allocating benefits.” China-made surveillance cameras, for example, were installed in New Jersey and paid for under a COVID relief program.9 Artificial intelligence bots can enforce austerity policies by screening – and disallowing – applications for government aid. Facial-recognition cameras and software, meanwhile, are spreading rapidly and making it easier for police forces to monitor people who dare to attend political protests.

There is nothing radically new, of course, in the use of electronic communications tools for surveillance. Eleven years ago, Edward Snowden famously revealed the expansive plans of the “Five Eyes” intelligence agencies to monitor all internet communications.10 Decades earlier, intelligence agencies were eagerly tapping undersea communications cables.11

Increasingly important, however, is the partnership between private corporations and state agencies – a partnership that extends beyond communications companies to include energy corporations.

This public/private partnership has placed particular emphasis on suppressing activists who fight against expansions of fossil fuel infrastructure. To cite three North American examples, police and corporate teams have worked together to surveil and jail opponents of the Line 3 tar sands pipeline in Minnesota,12 protestors of the Northern Gateway pipeline in British Columbia,13 and Water Protectors trying to block a pipeline through the Standing Rock Reservation in North Dakota.14

The use of enhanced surveillance techniques in support of fossil fuel infrastructure expansions has particular relevance to the artificial intelligence industrial complex, because that complex has a fierce appetite for stupendous quantities of energy.

Upping the demand for energy

“Smashed through the forest, gouged into the soil, exploded in the grey light of dawn,” wrote James Bridle, “are the tooth- and claw-marks of Artificial Intelligence, at the exact point where it meets the earth.”

Bridle was describing sudden changes in the landscape of north-west Greece after the Spanish oil company Repsol was granted permission to drill exploratory oil wells. Repsol teamed up with IBM’s Watson division “to leverage cognitive technologies that will help transform the oil and gas industry.”

IBM was not alone in finding paying customers for nascent AI among fossil fuel companies. In 2018 Google welcomed oil companies to its Cloud Next conference, and in 2019 Microsoft hosted the Oil and Gas Leadership Summit in Houston. Not to be outdone, Amazon has eagerly courted petroleum prospectors for its cloud infrastructure.

As Bridle writes, the intent of the oil companies and their partners includes “extracting every last drop of oil from under the earth” – regardless of the fact that if we burn all the oil already discovered we will push the climate system past catastrophic tipping points. “What sort of intelligence seeks not merely to support but to escalate and optimize such madness?”

The madness, though, is eminently logical:

“Driven by the logic of contemporary capitalism and the energy requirements of computation itself, the deepest need of an AI in the present era is the fuel for its own expansion. What it needs is oil, and it increasingly knows where to find it.”15

AI runs on electricity, not oil, you might say. But as discussed at greater length in Part Two of this series, the mining, refining, manufacturing and shipping of all the components of AI servers remains reliant on the fossil-fueled industrial supply chain. Furthermore, the electricity that powers the data-gathering cloud is also, in many countries, produced in coal- or gas-fired generators.

Could artificial intelligence be used to speed a transition away from reliance on fossil fuels? In theory perhaps it could. But in the real world, the rapid growth of AI is making the transition away from fossil fuels an even more daunting challenge.

“Utility projections for the amount of power they will need over the next five years have nearly doubled and are expected to grow,” Evan Halper reported in the Washington Post earlier this month. Why the sudden spike?

“A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing.”

The jump in demand from AI is in addition to – and greatly complicates – the move to electrify home heating and car-dependent transportation:

“It is all happening at the same time the energy transition is steering large numbers of Americans to rely on the power grid to fuel vehicles, heat pumps, induction stoves and all manner of other household appliances that previously ran on fossil fuels.”

The effort to maintain and increase overall energy consumption, while paying lip-service to transition away from fossil fuels, is having a predictable outcome: “The situation … threatens to stifle the transition to cleaner energy, as utility executives lobby to delay the retirement of fossil fuel plants and bring more online.”16

The motive forces of the artificial intelligence industrial complex, then, include the extension of surveillance, and the extension of climate- and biodiversity-destroying fossil fuel extraction and combustion. But many of those data centres are devoted to a task that is also central to contemporary capitalism: the promotion of consumerism.

Thou shalt consume more today than yesterday

As of March 13, 2024, both Alphabet (parent of Google) and Meta (parent of Facebook) ranked among the world’s ten biggest corporations as measured by either market capitalization or earnings.17 Yet to an average computer user these companies are familiar primarily for supposedly “free” services including Google Search, Gmail, Youtube, Facebook and Instagram.

These services play an important role in the circulation of money, of course – their function is to encourage people to spend more money than they otherwise would, on all types of goods and services, whether or not they actually need or even desire them. This function is accomplished through the most elaborate surveillance infrastructures yet invented, harnessed to an advertising industry that uses the surveillance data to better target ads and better sell products.

This role in extending consumerism is a fundamental element of the artificial intelligence industrial complex.

In 2011, former Facebook employee Jeff Hammerbacher summed it up: “The best minds of my generation are thinking about how to make people click ads. That sucks.”18

Working together, many of the world’s most skilled behavioural scientists, software engineers and hardware engineers devote themselves to nudging people to spend more time online looking at their phones, tablets and computers, clicking ads, and feeding the data stream.

We should not be surprised that the companies most involved in this “knowledge revolution” are assiduously promoting their AI divisions. As noted earlier, both Google and Facebook are heavily invested in AI. And OpenAI, funded by Microsoft and famous for making ChatGPT almost a household name, is looking at ways to make that investment pay off.

By early 2023, OpenAI’s partnership with “strategy and digital application delivery” company Bain had signed up its first customer: The Coca-Cola Company.19

The pioneering effort to improve the marketing of sugar water was hailed by Zack Kass, Head of Go-To-Market at OpenAI: “Coca-Cola’s vision for the adoption of OpenAI’s technology is the most ambitious we have seen of any consumer products company ….”

On its website, Bain proclaimed:

“We’ve helped Coca-Cola become the first company in the world to combine GPT-4 and DALL-E for a new AI-driven content creation platform. ‘Create Real Magic’ puts the power of generative AI in consumers’ hands, and is one example of how we’re helping the company augment its world-class brands, marketing, and consumer experiences in industry-leading ways.”20

The new AI, clearly, has the same motive as the old “slow AI” which is corporate intelligence. While a corporation has been declared a legal person, and therefore might be expected to have a mind, this mind is a severely limited, sociopathic entity with only one controlling motive – the need to increase profits year after year with no end. (This is not to imply that all or most employees of a corporation are equally single-minded, but any noble motives they may have must remain subordinate to the profit-maximizing legal charter of the corporation.) To the extent that AI is governed by corporations, we should expect that AI will retain a singular, sociopathic fixation with increasing profits.

Artificial intelligence, then, represents an existential threat to humanity not because of its newness, but because it perpetuates the corporate imperative which was already leading to ecological disaster and civilizational collapse.

But should we fear that artificial intelligence threatens us in other ways? Could AI break free from human control, supersede all human intelligence, and either dispose of us or enslave us? That will be the subject of the next installment.


Notes

1 “GPS Web Extra: What happens if AI gets into the wrong hands?”, CNN, 7 January 2024.

2 Mark Sweney, “Elon Musk’s AI startup seeks to raise $1bn in equity,” The Guardian, 6 December 2023.

3 Jonathan Vanian, “Mark Zuckerberg indicates Meta is spending billions of dollars on Nvidia AI chips,” CNBC, 18 January 2024.

4 Fortune Eye On AI newsletter, 25 January 2024.

5 Cory Doctorow, “‘Open’ ‘AI’ isn’t”, Pluralistic, 18 August 2023.

6 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

7 “WordNet,” on Scholarly Community Encyclopedia, accessed 11 March 2024.

8 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.

9 Jason Koebler, “New Jersey Used COVID Relief Funds to Buy Banned Chinese Surveillance Cameras,” 404 Media, 3 January 2024.

10 Glenn Greenwald, Ewen MacAskill and Laura Poitras, “Edward Snowden: the whistleblower behind the NSA surveillance revelations,” The Guardian, 11 June 2013.

11 Olga Khazan, “The Creepy, Long-Standing Practice of Undersea Cable Tapping,” The Atlantic, 16 July 2013.

12 Alleen Brown, “Pipeline Giant Enbridge Uses Scoring System to Track Indigenous Opposition,” 23 January 2022, part one of the seventeen-part series “Policing the Pipeline” in The Intercept.

13 Jeremy Hainsworth, “Spy agency CSIS allegedly gave oil companies surveillance data about pipeline protesters,” Vancouver Is Awesome, 8 July 2019.

14 Alleen Brown, Will Parrish, Alice Speri, “Leaked Documents Reveal Counterterrorism Tactics Used at Standing Rock to ‘Defeat Pipeline Insurgencies’”, The Intercept, 27 May 2017.

15 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Farrar, Straus and Giroux, 2023; pages 3–7.

16 Evan Halper, “Amid explosive demand, America is running out of power,” Washington Post, 7 March 2024.

17 Source: https://companiesmarketcap.com/, 13 March 2024.

18 As quoted in Fast Company, “Why Data God Jeffrey Hammerbacher Left Facebook To Found Cloudera,” 18 April 2013.

19 PRNewswire, “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI,” 21 February 2023.

20 Bain & Company website, accessed 13 March 2024.


Image at top of post by Bart Hawkins Kreps from public domain graphics.

Beyond computational thinking – a ‘cloud of unknowing’ for the 21st century

Also published at Resilience.org

New Dark Age: Technology and the End of the Future, by James Bridle, Verso Books, 2018

If people are to make wise decisions in our heavily technological world, is it essential that they learn how to code?

For author and artist James Bridle, that is analogous to asking whether it is essential that people be taught plumbing skills.

Of course we want and need people who know how to connect water taps, how to find and fix leaks. But,

“learning to plumb a sink is not enough to understand the complex interactions between water tables, political geography, ageing infrastructure, and social policy that define, shape and produce actual life support systems in society.” (Except where otherwise noted, all quotes in this article are from New Dark Age by James Bridle, Verso Books, 2018.)

Likewise, we need people who can view our technological society as a system – a complex, adaptive and emergent system – which remains heavily influenced by certain motives and interests while also spawning new developments that are beyond any one group’s control.

Bridle’s 2018 book New Dark Age takes deep dives into seemingly divergent subjects including the origins of contemporary weather forecasting, mass surveillance, airline reservation systems, and Youtube autoplay lists for toddlers. Each of these excursions is so engrossing that it is sometimes difficult to hold his central thesis in mind, and yet he weaves all the threads into a cohesive tapestry.

Bridle wants us to be aware of the strengths of what he terms “computational thinking” – but also its critical limitations. And he wants us to look at the implications of the internet as a system, not only of power lines and routers and servers and cables, but also of people, from the spies who tap into network nodes to monitor our communications, to the business analysts who devise ways to “monetize” our clicks, to the Facebook groups who share videos backing up their favoured theories.

Wiring of the SEAC computer, which was built in 1950 for the U.S. National Bureau of Standards. It was used until 1964, for purposes including meteorology, city traffic simulations, and the wave function of the helium atom. Image from Wikimedia Commons.

From today’s weather, predict tomorrow’s

Decades before a practical electronic computer existed, pioneering meteorologist Lewis Fry Richardson1 thought up what would become a “killer app” for computers.

Given current weather data – temperature, barometric pressure, wind speed – for a wide but evenly spaced matrix of locations, Richardson reasoned that it should be possible to calculate how each cell’s conditions would interact with the conditions in adjacent cells, describe new weather patterns that would arise, and therefore predict tomorrow’s weather for each and all of those locations.

That method became the foundation of contemporary weather forecasting, which has improved by leaps and bounds in our lifetimes. But in 1916, when Richardson first tried to test his ideas, they were practically useless. The method involved so many calculations that Richardson worked for weeks, then months, then years to work out a ‘prediction’ from a single day’s weather data.
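To see why, consider a drastically simplified sketch of the grid idea: a single quantity on a ring of cells, stepped forward in time by nudging each cell toward its neighbours. This is a toy diffusion step, not Richardson’s actual fluid-dynamics equations; a real forecast multiplies this by thousands of cells, several variables per cell, and far more complex coupled equations, which is what made the method hopeless by hand.

```python
# Toy version of grid-based forecasting in the spirit of Richardson's method:
# every cell is repeatedly updated from its neighbours' current values.
# This is a simple diffusion step, NOT Richardson's actual equations.

def step(cells, k=0.1):
    """Advance the grid one time step; each cell relaxes toward its neighbours."""
    n = len(cells)
    return [
        cells[i] + k * (cells[(i - 1) % n] + cells[(i + 1) % n] - 2 * cells[i])
        for i in range(n)
    ]

# "Today's weather": temperatures at eight stations arranged in a ring.
temps = [15.0, 17.0, 21.0, 19.0, 14.0, 12.0, 13.0, 14.0]
for _ in range(24):              # 24 hourly steps, roughly "tomorrow"
    temps = step(temps)
print([round(t, 1) for t in temps])
```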

But by the end of World War II, the US military had developed early electronic computers which could begin to make Richardson’s theory a useful one. To military strategists, of course, the ability to predict weather could provide a great advantage in war. Knowing when a particular attack would be helped or hindered by the weather would be a great boon to generals. Even more tantalizingly, if it were possible to clearly understand and predict the weather, it might then also be possible to control the weather, inflicting a deluge or a sandstorm, for example, on vulnerable enemy forces.

John von Neumann, a mathematician, Manhattan Project physicist and a major figure in the development of computers, summed it up:

“In what could be taken as the founding statement of computational thought, [von Neumann] wrote: ‘All stable processes we shall predict. All unstable processes we shall control.’”

Computational thinking, then, relied on the input of data about present conditions, and further data on how such conditions have been correlated in the past, in order to predict future conditions.

But because many aspects of our world are connected in one system – an adaptive and emergent system – it spawns new trends which behave in new ways, not predictable simply from the patterns of the past. In other words, in the Anthropocene age our system is not wholly computable. We need to understand, Bridle writes, that

“technology’s increasing inability to predict the future – whether that’s the fluctuating markets of digital stock exchanges, the outcomes and applications of scientific research, or the accelerating instability of the global climate – stems directly from these misapprehensions about the neutrality and comprehensibility of computation.”

Take the case of climate studies and meteorology. The technological apparatus to collect all the data, crunch the numbers, and run the models is part of a huge industrial infrastructure that is itself changing the climate (with the internet itself contributing an ever-more significant share of greenhouse gas emissions). As a result the world’s weather is ever more turbulent, producing so-called ‘100 year storms’ every few years. We can make highly educated guesses about critical climatic tipping points, but we are unable to say for sure when these events will occur or how they will interact.
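A stock textbook example, not taken from Bridle’s book, makes the limit vivid: in a chaotic system such as the logistic map, two starting states that differ by one part in a billion soon diverge completely, so even perfect equations plus near-perfect data cannot buy long-range prediction.

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
# Two trajectories that differ by one part in a billion soon disagree entirely.

r = 3.9                      # a parameter value in the chaotic regime
x, y = 0.500000000, 0.500000001

for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)

print(abs(x - y))            # of order 0.1 to 1 by now, not 0.000000001
```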

Age-old traditional knowledge of ways to deal with this week’s or this year’s weather is becoming less reliable. Scientists, too, should acknowledge the limits of computational thinking for their work:

“In a 2016 editorial for the New York Times, computational meteorologist and past president of the American Meteorological Society William B. Gail cited a number of patterns that humanity has studied for centuries, but that are disrupted by climate change: long-term weather trends, fish spawning and migration, plant pollination, monsoon and tide cycles, the occurrence of ‘extreme’ weather events. For most of recorded history, these cycles have been broadly predictable, and we have built up vast reserves of knowledge that we can tap into in order to better sustain our ever more entangled civilisation.”

The implications are stark: “Gail foresees a time in which our grandchildren might conceivably know less about the world in which they live than we do today, with correspondingly catastrophic events for complex societies.”

World map of submarine communication cables, 2015. Cable data by Greg Mahlknecht, world map by Openstreetmap contributors. Accessed through Wikimedia Commons.

Lines of power

In many ways, Bridle says, we can be misled by the current view of the internet as a “cloud”. Contrary to our metaphor, he writes, “The cloud is not weightless; it is not amorphous, or even invisible, if you know where to look for it.” To be clear,

“It is a physical infrastructure consisting of phone lines, fibre optics, satellites, cables on the ocean floor, and vast warehouses filled with computers, which consume huge amounts of water and energy and reside within national and legal jurisdictions. The cloud is a new kind of industry, and a hungry one.”

We have already referred to the rapidly growing electricity requirements of the internet, with its inevitable impact on the world’s climate. When we hear about “cloud computing”, Bridle also wants us to bear in mind the ways in which this “cloud” both reflects and reinforces military, political and economic power relationships:

“The cloud shapes itself to geographies of power and influence, and it serves to reinforce them. The cloud is a power relationship, and most people are not on top of it.”

It is no accident, he says, that maps of internet traffic trace pathways of colonial power that are hundreds of years old. And we shouldn’t be surprised that the US military-intelligence complex, which gave birth to internet protocols, has also installed wiretapping equipment and personnel at junctions where trans-oceanic cables come ashore in the US, allowing it to scoop up far more communications data than it can effectively monitor.2

These power relationships come into play in determining not only what is visible in our web applications, but what is hidden. Bridle is a keen plane-spotter, and he marvels at flight-tracking websites which show, in real time, the movements of thousands of commercial aircraft around the world. “The view of these flight trackers, like that of Google Earth and other satellite image services, is deeply seductive,” he says, but wait:

“This God’s-eye view is illusory, as it also serves to block out and erase other private and state activities, from the private jets of oligarchs and politicians to covert surveillance flights and military manoeuvres. For everything that is shown, something is hidden.”

Aviation comes up frequently in the book, as its military and commercial importance is reflected in the outsize role aviation has played in the development of computing and communications infrastructure. Aviation provides compelling examples of the unintended, emergent consequences of this technology.

High anthropoclouds in the sky of Barcelona, 2010, accessed through Wikimedia Commons. The clouds created by aircraft have an outsize impact on climate change. And climate change, Bridle writes, contributes to the increasingly vexing problem of “clear air turbulence” which threatens aircraft but cannot be reliably predicted.

On the last day of October, just a few months after New Dark Age was published, I found myself at Gatwick International Airport near London. I wanted to walk to the nearby town of Crawley to pick up a cardboard packing box. Though the information clerks in the airport terminal told me there was no walking route to Crawley, I had already learned that there was in fact a multi-use cycling lane, and so I hunted around the delivery ramps and parking garage exits until I found my route.

It was a beautiful but noisy stroll, with a brook on one side, a high fence on the other, and the ear-splitting roar of jet engines rising over me every few minutes. Little did I know that in just over a month this strange setting would be a major crime scene, as the full force of the aeronautical/intelligence industry pulled out all the stops to find the operators of unauthorized drones, while hundreds of thousands of passengers were stranded in the pre-Christmas rush.

Another month has passed and no perpetrators have been identified, leading some to wonder if the multiple drone sightings were all mistakes. But in any case, aviation experts have long agreed that it’s just a matter of time before “non-state actors” manage to use unmanned aerial vehicles to deadly effect. Wireless communications, robotics, and three-dimensional location systems are now so widely available and inexpensive, it is unrealistic to think that drones will always be controlled by or even tracked by military or police authorities.

The exponential advance of artificial stupidity

Bridle’s discussion of trends in artificial intelligence is at once one of the most intriguing and, to this layperson at least, one of the less satisfying sections of the book. Many of us have heard about a new programming approach through which a computer program taught itself to play the game Go, and soon was able to beat the world’s best human players of this ancient and complex game.

Those of us who have had to deal with automated telephone-tree answering systems, as much as we may hate the experience, can recognize that voice-recognition and language processing systems have also gotten better. And Google Translate has improved by leaps and bounds in just a few years’ time.

Bridle’s discussion of the relevant programming approaches presupposes a basic familiarity with the concept of neural networks. Since he writes so clearly about so many other facets of computational thinking, I wish he had chosen to spell out the major approaches to artificial intelligence a bit more for those of us who do not have degrees in computer science.

When he discusses the facility of Youtube in promoting mindless videos, and the efficiency of social media in spreading conspiracy theories of every sort, his message is lucid and provocative.

Here the two-step dance between algorithms and human users of the web produces results that might be laughable if they weren’t chilling. Likewise, strange trends develop out of interplay between Google’s official “mission” – “to organize the world’s information” – and the business model by which it boosts its share price – selling ads.

The Children’s Youtube division of Google has been one of Bridle’s research interests, and those of us fortunate enough not to be acquainted with this realm of culture are likely to be shocked by what he finds.

You might ask what kind of idiot would name a video “Surprise Play Doh Eggs Peppa Pig Stamper Cars Pocoyo Minecraft Surfs Kinder Play Doh Sparkle Brilho”. A clever idiot, that’s who, an idiot who may or may not be human, but who knows how to make money. Bridle explains the motive:

“This unintelligible assemblage of brand names, characters and keywords points to the real audience for the descriptions: not the viewer, but the algorithms that decide who sees which videos.”

These videos are created to be seen by children too young to be reading titles. Youtube accommodates them – and parents happy to have their toddlers transfixed by a screen – by automatically assembling long reels of videos for autoplay. The videos simply need to earn their place in the playlists with titles that contain enough algorithm-matching words or phrases, and hold the toddler’s attention long enough for ads to be seen and the next video to begin.

The content factories that churn out videos by the millions, then, must keep pace with current trends while spending less on production than will be earned by the accompanying ads, which are typically sold on a “per thousand views” basis.
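The actual ranking system is proprietary, so the following is only a caricature of the incentive structure Bridle describes: if, purely for illustration, autoplay candidates are scored by shared title keywords and ads pay a flat rate per thousand views, the keyword-stuffed title wins on both counts.

```python
# A deliberately naive caricature of keyword-driven autoplay ranking.
# The real recommendation system is proprietary and far more complex;
# this only shows why stuffing a title with trending tokens can pay off.

def keyword_overlap(a, b):
    """Score two titles by the number of words they share."""
    return len(set(a.lower().split()) & set(b.lower().split()))

just_watched = "Peppa Pig Play Doh Surprise Eggs"
candidates = [
    "Surprise Play Doh Eggs Peppa Pig Stamper Cars Pocoyo Minecraft",
    "A short film about hedgehogs",
]
best = max(candidates, key=lambda title: keyword_overlap(just_watched, title))
print(best)   # the keyword-stuffed title gets queued for autoplay

# Ads sold "per thousand views": revenue only requires cheap, clicky volume.
views, rate_per_thousand = 2_000_000, 1.50    # illustrative figures
print(f"ad revenue: ${views / 1000 * rate_per_thousand:,.0f}")
```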

Is this a bit of a stretch from “organizing the world’s information”? Yes, but what’s more important, a corporation’s lofty mission statement, or its commercial raison d’être? (That is, to sell ads.)

When it comes to content aimed at adults the trends are just as troubling, as Bridle’s discussion of conspiracy theories makes clear.

According to the Diagnostic and Statistical Manual of Mental Disorders, he explains, “a belief is not a delusion when it is held by a person’s ‘culture or subculture’.”

But with today’s social media, it is easy to find people who share any particular belief, no matter how outlandish or ridiculous that belief might seem to others:

“Those that the psychiatric establishment would have classified as delusional can ‘cure’ themselves of their delusions by seeking out and joining an online community of like minds. Any opposition to this worldview can be dismissed as a cover-up of the truth of their experience ….”

This pattern, as it happens, reflects the profit-motive basis of social media corporations – people give a media website their attention for much longer when it spools videos or returns search results that confirm their biases and beliefs, and that means there are more ads viewed, more ad revenue earned.

If Google and other social media giants do a splendid job of “organizing the world’s information”, then, they are equally adept at organizing the world’s misinformation:

“The abundance of information and the plurality of worldviews now accessible to us through the internet are not producing a coherent consensus reality, but one riven by fundamentalist insistence on simplistic narratives, conspiracy theories, and post-factual politics. It is on this contradiction that the idea of a new dark age turns: an age in which the value we have placed upon knowledge is destroyed by the abundance of that profitable commodity, and in which we look about ourselves in search of new ways to understand the world.”

Our unknowable future

After reading to the last page of a book in which the author covers a dazzling array of topics so well and weaves them together so skillfully, it would be churlish to wish he had included more. I would hope, however, that Bridle or someone with an equal gift for systemic analysis will delve into two questions that naturally arise from this work.

Bridle notes that the energy demands of our computational network are growing rapidly, to the point that this network is a significant driver of climate change. But what might happen to the network if our energy supply becomes effectively scarce due to rapidly rising energy costs?3

Major sectors of the so-called Web 2.0 are founded in a particular business model: services are provided to the mass of users “free”, while advertisers and other data-buyers pay for our attention in order to sell us more products. What might happen to this dominant model of “free services”, if an economic crash means we can’t sustain consumption on anything close to the current scale?

I suspect Bridle would say that the answers to these questions, like so many others, do not compute. Though computation can be a great tool, it will not answer many of the most important questions.

In the morass of information/misinformation in which our network engulfs us, we might find many reasons for pessimism. But Bridle urges us to accept and even welcome the deep uncertainty which has always been a condition of our existence.

As misleading as the “cloud” may be as a picture of our computer network, Bridle suggests we can find value if we take a cue from the 14th-century Christian mystic classic “The Cloud of Unknowing.” Its anonymous author wrote, “On account of pride, knowledge may often deceive you …. Knowledge tends to breed conceit, but love builds.”

Or in Bridle’s 21st century phrasing,

“It is this cloud that we have sought to conquer with computation, but that is continually undone by the reality of what we are attempting. Cloudy thinking, the embrace of unknowing, might allow us to revert from computational thinking, and it is what the network itself urges upon us.”


Photo at top: anthropogenic clouds over paper mill UPM-Kymmene, Schongau, 2013. Accessed at Wikimedia Commons.


NOTES

1 For an excellent account of the centuries-long development of contemporary meteorology, including the important role of Lewis Fry Richardson, see Bill Streever’s 2016 book And Soon I Heard a Roaring Wind: A Natural History of Moving Air.
2 More precisely, though intelligence agents can often zero in on suspicious conversations after a crime has been committed or an insurgency launched, the trillions of bits of data are unreliable sources of prediction before the fact.
3 Kris de Decker has posed some intriguing possibilities in Low-Tech Magazine. See, for example, his 2015 article “How to Build a Low-tech Internet”.

A measured response to surveillance capitalism

Also published at Resilience.org.

A flood of recent analysis discusses the abuse of personal information by internet giants such as Facebook and Google. Some of these articles zero in on the basic business models of Facebook, and occasionally Google, as inherently deceptive and unethical.

But I have yet to see a proposal for any type of regulation that seems proportional to the social problem created by these new enterprises.

So here’s my modest proposal for a legislative response to surveillance capitalism1:

No company which operates an internet social network, or an internet search engine, shall be allowed to sell advertising, nor allowed to sell data collected about the service’s users.

We should also consider an additional regulation:

No company which operates an internet social network, or an internet search engine, shall be allowed to provide this service free of charge to its users.

It may not be easy to craft an appropriate legal definition of “social network” or “search engine”, and I’m not suggesting that this proposal would address all of the surveillance issues inherent in our digitally networked societies. But regulation of companies like Facebook and Google will remain ineffectual unless their current business models are prohibited.

Core competency

The myth of “free services” is widespread in our society, of course, and most people have been willing to play along with the fantasy. Yet we can now see that when it comes to search engines and social networks, this game of pretend has dangerous consequences.

In a piece from September 2017 entitled “Why there’s nothing to like about Facebook’s ethically-challenged, troublesome business model,” Financial Post columnist Diane Francis clearly described the trick at the root of Facebook’s success:

“Facebook’s underlying business model itself is troublesome: offer free services, collect user’s private information, then monetize that information by selling it to advertisers or other entities.”

Writing in The Guardian a few days ago, John Naughton concisely summarized the corporate histories of both Facebook and Google:

“In the beginning, Facebook didn’t really have a business model. But because providing free services costs money, it urgently needed one. This necessity became the mother of invention: although in the beginning Zuckerberg (like the two Google co-founders, incidentally) despised advertising, in the end – like them – he faced up to sordid reality and Facebook became an advertising company.”

So while Facebook has grandly phrased its mission as “to make the world more open and connected”, and Google long proclaimed its mission “to organize the world’s information”, those goals had to take a back seat to the real business: helping other companies sell us more stuff.

In Facebook’s case, it has been obvious for years that providing a valuable social networking service was a secondary focus. Over and over, Facebook introduced major changes in how the service worked, to widespread complaints from users. But as long as these changes didn’t drive too many users away, and as long as the changes made Facebook a more effective partner to advertisers, the company earned more profit and its stock price soared.

Likewise, Google found a “sweet spot” with the number of ads that could appear above and beside search results without overly annoying users – while also packaging the search data for use by advertisers across the web.

A bad combination

The sale of advertising, of course, has subsidized news and entertainment media for more than a century. In recent decades, even before online publishing became dominant, some media switched to wholly-advertising-supported “free” distribution. While that fiction had many negative consequences, I believe the danger to society was taken to another level with search engines and social networks.

A “free” print magazine or newspaper, after all, collects no data while being read.2 No computer records if and when you turn the page, how long you linger over an article, or even whether you clip an ad and stick it to your refrigerator.

Today’s “free” online services are different. Search engines collate every search by every user, so they know what people are curious about – the closest version of mass mind-reading we have yet seen. Social media not only register every click and every “Like”, but also all our digital interactions with all of our “friends”.
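What “registering every click” means in practice can be sketched with a hypothetical event record. The field names below are invented for illustration and do not reflect any company’s actual schema; the point is the asymmetry with print.

```python
# Hypothetical clickstream event; the field names are invented for illustration.
# A print magazine collects nothing like this while being read.
import json
import time

event = {
    "user_id": "u-83c1f2",        # pseudonymous, but stable across sessions
    "timestamp": time.time(),
    "action": "like",             # or: click, share, search, scroll, pause
    "target": "post/4421",
    "dwell_ms": 8700,             # how long the item held this user's attention
    "referrer": "feed",
    "device": "mobile",
}
print(json.dumps(event, indent=2))
```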

This surveillance-for-profit is wonderfully useful for the purpose of selling us more stuff – or, more recently, for manipulating our opinions and our votes. But we should not be surprised when these companies abuse our confidence, since their business model drives them to betray our trust as efficiently as possible.

Effective regulation

In the flood of commentary about Facebook following the Cambridge Analytica revelations, two themes predominate. First, there is a frequently-stated wish that Facebook “respect our privacy”. Second, there are somewhat more specific calls for regulation of Facebook’s privacy settings, terms of sale of data, or policing of “bot” accounts.

Both themes strike me as naïve. Facebook may allow users a measure of privacy in that they can be permitted to hide some posts from some other users. But it is the very essence of Facebook’s business model that no user can have any privacy from Facebook itself, and Facebook can and will use everything it learns about us to help manipulate our desires in the interests of paying customers. Likewise, it is naïve to imagine that what we post on Facebook remains “our data”, since we have given it to Facebook in exchange for a service for which we pay no monetary fee.

But regulating the terms under which Facebook acquires our consent to monetize our information? This strikes me as an endlessly complicated game of whack-a-mole. The features of computerized social networks have changed and will continue to change as fast as a coder can come up with a clever new bit of software. Regulating these internal methods and operations would be a bureaucratic boondoggle.

Much simpler and more effective, I think, would be to abolish the fiction of “free” services that forms the façade of Facebook and Google. When these companies as well as new competitors3 charge an honest fee to users of social networks and search engines, because they can no longer earn money by selling ads or our data, much of the impetus to surveillance capitalism will be gone.

It costs real money to provide a platform for billions of people to share our cat videos, pictures of grandchildren, and photos of avocado toast. It also costs real money to build a data-mining machine – to sift and sort that data to reveal useful patterns for advertisers who want to manipulate our desires and opinions.

If social networks and search engines make their money honestly through user fees, they will obviously collect data that helps them improve their service and retain or gain users. But they will have no incentive to throw financial resources at data mining for other purposes.

Under such a regulation, would we still have good social network and search engine services? I have little doubt that we would.

People willingly pay for services they truly value – look back at how quickly people adopted the costly use of cell phones. But when someone pretends to offer us a valued service “free”, we endure a host of consequences as we eagerly participate in the con.
Photos at top: Sergey Brin, co-founder of Google (left) and Mark Zuckerberg, Facebook CEO. Left photo, “A surprise guest at TED 2010, Sergey spoke openly about Google’s new posture with China,” by Steve Jurvetson, via Wikimedia Commons. Right photo, “Mark Zuckerberg, Founder and Chief Executive Officer, Facebook, USA, captured during the session ‘The Next Digital Experience’ at the Annual Meeting 2009 of the World Economic Forum in Davos, Switzerland, January 30, 2009”, by World Economic Forum, via Wikimedia Commons.



NOTES

1 The term “surveillance capitalism” was introduced by John Bellamy Foster and Robert W. McChesney in a perceptive article in Monthly Review, July 2014.

2 Thanks to Toronto photographer and writer Diane Boyer for this insight.

3 There would be a downside to stipulating that social networks or search engines do not provide their services to users free of charge, in that it would be difficult for a new service to break into the market. One option might be a size-based exemption, allowing, for example, a company to offer such services free until it reaches 10 million users.