The urgent necessity of asset stranding

A review of Overshoot: How the World Surrendered to Climate Breakdown

In 2023 delegates from around the world gathered for the 28th session of the Conference of the Parties (COP), this time held in the United Arab Emirates. The president of the mega-meeting, nominally devoted to mitigating the climate crisis caused by fossil fuel emissions, was none other than Sultan Al Jaber, CEO of the Abu Dhabi National Oil Company (ADNOC).

At the time, ADNOC was “in the midst of a thrust of expansion, planning to pour more than 1 billion dollars into oil and gas projects per month until 2030.” (Overshoot, page 253)

Overshoot, by Andreas Malm and Wim Carton, published by Verso, October 2024.

Sultan Al Jaber’s appointment was praised by climate envoy John Kerry of the United States, a country that was itself undertaking a historic expansion of fossil fuel extraction.

The significance of COP being presided over by a CEO working hard to increase carbon emissions was not lost on Andreas Malm and Wim Carton. In that moment, they write,

“[A]ctive capital protection had been insinuated into the highest echelons of climate governance, the irreal (sic) turn coming full circle, the theatre now a tragedy and farce wrapped into one, overshoot ideology the official decor.” (Overshoot, p 254; emphasis mine)

What do Malm and Carton mean by “capital protection” and “overshoot”? “Capital protection” is the opposite of “asset stranding”, which would occur if trillions of dollars’ worth of fossil fuel reserves were “left in the ground,” unburned, unexploited. Yet as we shall see, the potential threat to capital goes far beyond even the trillions of dollars of foregone profits if the fossil fuel industry were rapidly wound down.

In Malm and Carton’s usage, “overshoot” has a different meaning than it does in some ecological theory. In this book “overshoot” refers specifically to carbon emissions rising past the levels consistent with limiting global warming to 1.5°C, 2°C, or another specified threshold. To apologists for overshoot, it is fine to blow through these warming targets temporarily, as long as our descendants later in the century draw down much of the carbon through yet-to-be-commercialized technologies such as Bio-Energy with Carbon Capture and Storage (BECCS).

Overshoot, Malm and Carton say, is a dangerous gamble that will certainly kill many people in the coming decades, and that will collapse civilization and much of the biosphere in the longer term if our descendants are not able to adequately clean up the mess we are bequeathing them. Yet overshoot is firmly integrated into the Integrated Assessment Models widely used to model the course of climate change, precisely because it offers capital protection against asset stranding.

Scientific models, “drenched in ideology”

If the global climate were merely a complex physical system it would be easier to model. But of course it is also a biological, ecological, social and economic system. Once it was understood that the climate was strongly influenced by human activity, early researchers understood the need for models that incorporated human choices into climate projections.

“But how could an economy of distinctly human making be captured in the same model as something like glaciers?,” Malm and Carton ask. “In the Integrated Assessment Models (IAMs), the trick was to render the economy lawlike on the assumptions of neoclassical theory ….” (p 56)

These assumptions include the idea that humans are rational, making their choices to maximize utility, in free markets that collectively operate with perfect information. While most people other than orthodox economists can recognize these assumptions as crude caricatures of human behaviour, this set of assumptions is hegemonic within affluent policy-making circles. And so it was the neoclassical economy whose supposed workings were integrated into the IAMs. 

While “every human artifact has a dimension of ideology,” Malm and Carton write, 

“IAMs were positively drenched in non-innocent ideological positions, of which we can quickly list a few: rationalism (human agents behave rationally), economism (mitigation is a matter of cost), presentism (current generations should be spared the onus), conservatism (incumbent capital must be saved from losses), gradualism (any changes will have to be incremental), and optimism (we live in the best of all possible economies). Together, they made ambitious climate goals – the ones later identified as in line with 1.5°C or 2°C – seem all but unimaginable.” (p 60; emphasis mine)

In literally hundreds of IAMs, they write, there was a conspicuous absence of scenarios involving degrowth, the Green New Deal, the nationalisation of oil companies, half-earth socialism, or any other proposal to achieve climate mitigation through radical changes to “business as usual.”

In place of any such challenges to the current economic order stood another formidable acronym: BECCS, “Bio-Energy with Carbon Capture and Storage.” No costly shakeups to the current economy were needed, because in the IAMs, the not-yet-commercialized BECCS was projected to become so widely implemented in the second half of the century that it would draw down all the excess carbon we are currently rushing to emit.

As the 21st century progressed and warming thresholds such as 1.5°C or even 2°C grew dangerously close, overshoot, excused by the imagined future roll-out of BECCS, became an ever more attractive – and dangerous – concept. Due to the magic of IAMs incorporating overshoot, countries like Canada, the US, and other petrostates could declare climate emergencies, pledge their support to a 1.5°C ceiling – and simultaneously step up their fossil extraction efforts.

“Construction Work on Trans Mountain Pipeline outside Valemount, BC, Canada, Sept 16, 2020.” (Photo by Adam Jones, licensed via Creative Commons CC By 2.0, accessed via flickr.) On June 17, 2019, the Canadian Parliament approved a motion declaring the country to be in a climate emergency. On June 18, 2019, the Government of Canada announced its approval of the Trans Mountain Pipeline Expansion, for the purpose of bringing more tar sands crude to the BC coast for export.

At COP15 in Copenhagen in 2009, and most famously at the Paris Accord in 2015, countries could piously pledge their allegiance to stringent warming limits, while ensuring no binding commitments remained in the texts to limit the fossil fuel industry. Overshoot was the enabling concept: “Through this sleight of hand, any given target could be both missed and met and any missing be rationalised as part of the journey to meeting it ….” (p 87)

“The common capital of the class”

There is a good deal of Marxist rhetoric in Overshoot, and Malm and Carton are able guides to this often tangled body of political-economic theory. On some subjects they employ these ideas to clarifying effect.

Given the overwhelming consensus of climatologists, plus the evidence in plain sight all around us, that the climate emergency is rapidly growing more severe, why is there still such widespread resistance to radical economic change?

The opposition to radical change comes not only from fossil fuel company owners and shareholders. Rather, the fierce determination to carry on with business as usual comes from many sectors of industry, the financial sector, nearly all policy-makers, and most of the media elite.

As Malm and Carton explain, if firm policies were put in place to “leave fossil fuels in the ground”, stranding the assets of fossil fuel companies, there would be “layer upon layer” of value destruction. The first layer would be the value of the no-longer-usable fossil reserves. The next layer would be the vast network of wells, pipelines, refineries, even gas stations that distribute fossil fuels. A third would be the machinery now in place to burn fossil fuels in almost every other sector of industrial production. The economic valuations of these layers would crash the moment “leaving fossil fuels in the ground” became a binding policy.

Finally, the above layers of infrastructure require financing. “Increased fixed capital formation,” Malm and Carton write, “necessitates increased integration into equity as well as credit markets – or, to use a pregnant Marxian phrase, into ‘the common capital of the class.’” (p 133)

The upshot is that “any limitations on fossil fuel infrastructure would endanger the common capital of the class by which it has been financed.” (p 133-134) And “the class by which it has been financed,” of course, is the ruling elite, the small percentage of people who own most of corporate equity, and whose lobbyists enjoy regular access to lawmakers and regulators. 

The elite class which owns, finances and profits from fossil production also happens to be responsible for a wildly disproportionate amount of fossil fuel consumption. Overshoot cites widely publicized statistics showing that the richest ten per cent of humanity is responsible for about half of emissions, while the poorest fifty per cent emits only about a tenth. They add,

“It was not the masses of the global South that, suicidally, tipped the world into 1.5°C. In fact, not even the working classes of the North were party to the process: between 1990 and 2019, per capita emissions of the poorest half of the populations of the US and Europe dropped by nearly one third, due to ‘compressed wages and consumption.’ The overshoot conjuncture was the creation of the rich, with which they capped their victory in the class struggle.” (p 225-226)

Stock, flow and the labour theory of value

Malm and Carton go on to explain the economic difference between fossil fuel energy and solar-and-wind energy through the simple lens of Marx’s labour theory of value. In my opinion this is the least successful section of Overshoot.

First, the authors describe fossil fuel reserves as “stocks” and the sunshine and wind as “flows”. That’s a valid distinction, of significance in explaining some of the fundamental differences in these energy sources.

But why has fossil fuel extraction recently been significantly more profitable than renewable energy harvesting?

The key fact, Malm and Carton argue, is that “the flow [solar and wind energy] appears without labour. … [T]he fuel is ripe for picking prior to and in proud disregard of any process of production. ‘Value is labour,’ Marx spells out …. It follows that the flow cannot have value.”

They emphasize the point with another quote from Marx: “‘Where there is no value, there is eo ipso nothing to be expressed in money.’”

“And where there is nothing to be expressed in money,” they conclude, “there can be no profit.” (p 208-209) That is why the renewable energy business will never supply the profits that have been earned in fossil extraction.

This simple explanation ignores the fact that oil companies aren’t always profitable; for a period of years in the last decade, the US oil industry had negative returns on equity.1 Clearly, one factor in the profitability of extraction is the cost of extraction, while another is the price customers are both willing and able to pay. When the former is as high as or higher than the latter, there are no profits even for exploitation of stocks.

As for business opportunities derived from the flow, Malm and Carton concede that profits might be earned through the manufacture and installation of solar panels and wind turbines, or the provision of batteries and transmission lines. But in their view these profits will never come close to fossil fuel profits, and furthermore, any potential profits will drop rapidly as renewable sources come to dominate the electric grid. Why? Again, their explanation rests on Marx’s labour theory of value:

“The more developed the productive forces of the flow, the more proficient their capture of a kind of energy in which no labour can be objectified, the closer the price and the value and the profit all come to zero.” (page 211)

Does this sound fantastically utopian to you? Imagine the whole enterprise – mining, refining, smelting, transporting, manufacturing and installation of PV panels and wind turbines, extensions of grids, and integration of adequate amounts of both short- and long-term storage – becoming so “proficient [in] their capture of energy” that the costs are insignificant compared to the nearly limitless flow of clean electricity. Imagine that all these costs become so trivial that the price of the resulting electricity approaches zero.

As a corrective to this vision of ‘renewable electricity too cheap to meter,’ I recommend Vince Beiser’s Power Metal, reviewed here last week.

Malm and Carton, however, are convinced that renewably generated electricity can only get cheaper, and furthermore can easily substitute for almost all the current uses of fossil fuels, without requiring reductions in other types of consumption, and all within a few short years. In defense of this idea they approvingly cite the work of Mark Jacobson; rather than critique that work here, I’ll simply refer interested readers to my review of Jacobson’s 2023 publication No Miracles Needed.

Energy transition and stranded assets

Energy transition is not yet a reality. Malm and Carton note that although renewable energy supply has grown rapidly over the past 20 years, fossil energy use has not dropped. What we have so far is an energy addition, not an energy transition.

Not coincidentally, asset stranding likewise remains “a hypothetical event, not yet even attempted.” (p 192)

The spectre of fossil fuel reserves and infrastructure becoming stranded assets has been discussed in the pages of financial media, ever since climate science made it obvious that climate mitigation strategies would indeed require leaving most known fossil reserves in the ground, i.e., stranding these assets. (One of the pundits sounding a warning was Mark Carney, formerly a central banker and now touted as a contender to replace Justin Trudeau as leader of the Liberal Party of Canada; he makes an appearance in Overshoot.)

Yet there is no evidence the capitalist class collectively is losing sleep over stranded assets, any more than over the plight of poor farmers being driven from their lands by severe floods or droughts.

As new fossil fuel projects get more expensive, the financial establishment has stepped up its investment in such projects. In the years immediately following the Paris Agreement – whose 1.5°C warming target would have required stranding more than 80 per cent of fossil fuel reserves – a frenzy of investment added to both the reserves and the fixed capital devoted to extracting those reserves:

“Between 2016 and 2021, the world’s sixty largest banks poured nearly 5 trillion dollars into fossil fuel projects, the sums bigger at the end of this half-decade than at its beginning.” (p 20) 

The implications are twofold: first, big oil and big finance remain unconcerned that any major governments will enact strong and effective climate mitigation policies – policies that would put an immediate cap on fossil fuel exploitation plus a binding schedule for rapid reductions in fossil fuel use over the coming years. They are unconcerned about such policy possibilities because they have ensured there are no binding commitments to climate mitigation protocols.

Second, there are far more assets which could potentially be stranded today than there were even in 2015. We can expect, then, that fossil fuel interests will fight even harder against strong climate mitigation policies in the next ten years than they did in the last ten years. And since, as we have seen, the layers of stranded assets would go far beyond the fossil corporations themselves into ‘the common capital of the class’, the resistance to asset stranding will also be widespread.

Malm and Carton sum it up this way: “We have no reliable friends in the capitalist classes. … any path to survival runs through their defeat.” (p 236)

The governments of the rich countries, while pledging their support for stringent global warming limits, have through their deeds sent us along the path to imminent overshoot. But suppose a major coal- or oil-producing jurisdiction passed a law enacting steep cutbacks in extraction, thereby stranding substantial fossil capital assets.

“Any measure significant enough to suggest that the fears harboured for so long are about to come true could pop the bubble,” Malm and Carton write. “[T]he stampede would be frenzied and unstoppable, due to the extent of the financial connections ….” (p 242)

Such a “total breakdown of capital” would come with drastic social risks, to be sure – but the choice is between a breakdown of capital and a breakdown of climate (which would, of course, also cause a breakdown of capital). Could such a total breakdown of capital still be initiated before it’s too late to avoid climate breakdown? In a book filled with thoughtful analysis and probing questions, the authors close by proposing this focus for further work:

“Neither the Green New Deal nor degrowth or any other programme in circulation has a plan for how to strand the assets that must be stranded. … [This] is the point where strategic thinking and practise should be urgently concentrated in the years ahead.” (p 244)

 


1 See “2018 was likely the most profitable year for U.S. oil producers since 2013,” US Energy Information Administration, May 10, 2019. The article shows that publicly traded oil producers had greater losses in the period 2015-2017 than they had gains in 2013, 2014, and 2018.

Image at top of page: “The end of the Closing Plenary at the UN Climate Change Conference COP28 at Expo City Dubai on December 13, 2023, in Dubai, United Arab Emirates,” photo by COP28/Mahmoud Khaled, licensed for non-commercial use via Creative Commons CC BY-NC-SA 2.0, accessed on flickr.

Counting the here-and-now costs of climate change

A review of Slow Burn: The Hidden Costs of a Warming World

Also published on Resilience.

R. Jisung Park invites us to consider a thought experiment. Suppose we shift attention away from the prospect of coming climate catastrophes – out-of-control wildfires, big rises in sea levels, stalling of ocean circulation currents – and focus instead on the ways that rising temperatures are already having daily impacts on people’s lives around the world.

Might these less dramatic and less obvious global-heating costs also provide ample rationale for concerted emissions reductions?

Slow Burn by R. Jisung Park is published by Princeton University Press, April 2024.

Park is an environmental and labour economist at the University of Pennsylvania. In Slow Burn, he takes a careful look at a wide variety of recent research efforts, some of which he participated in. He reports results in several major areas: the effect of hotter days on education and learning; the effect of hotter days on human morbidity and mortality; the increase in workplace accidents during hotter weather; and the increase in conflict and violence as hot days become more frequent.

In each of these areas, he says, the harms are measurable and substantial. And in another theme that winds through each chapter, he notes that the harms of global heating fall disproportionately on the poorest people both internationally and within nations. Unless adaptation measures reflect climate justice concerns, he says, global heating will exacerbate already deadly inequalities.

Even where the effect seems obvious – many people die during heat waves – it’s not a simple matter to quantify the increased mortality. For one thing, Park notes, very cold days as well as very hot days lead to increases in mortality. In some countries (including Canada) a reduction in very cold days will result in a decrease in mortality, which may offset the rise in deaths during heat waves.

We also learn about forward mortality displacement, “where the number of deaths immediately caused by a period of high temperatures is at least partially offset by a reduction in the number of deaths in the period immediately following the hot day or days.” (Slow Burn, p 85) 

After accounting for such complicating factors, a consortium of researchers has estimated the heat-mortality relationship through the end of this century, for 40 countries representing 55 percent of global population. Park summarizes their results:

“The Climate Impact Lab researchers estimate that, without any adaptation (so, simply extrapolating current dose-response relationships into a warmer future), climate change is likely to increase mortality rates by 221 per 100,000 people. … But adaptation is projected to reduce this figure by almost two-thirds: from 221 per 100,000 to seventy-three per 100,000. The bulk of this – 78 percent of the difference – comes from higher incomes.” (pp 198-199)

Let’s look at these estimates from several angles. First, to put the lower estimate of 73 additional deaths per 100,000 people in perspective, Park notes that an increase in mortality of this magnitude would be six times larger than the US annual death toll from automobile crashes, and roughly two-thirds the US death toll from COVID-19 in 2020. An increase in mortality of 73 per 100,000 is a big number.
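A rough back-of-the-envelope check makes these comparisons easier to see. The US population, annual road-death total, and 2020 COVID-19 death count below are approximate figures I have supplied for illustration, not numbers taken from Slow Burn.

```python
# Rough sanity check of the mortality comparisons above.
# Assumed approximate figures (not from Slow Burn): US population ~330 million,
# ~40,000 annual road deaths, ~375,000 US COVID-19 deaths in 2020.
us_population = 330_000_000
road_deaths_per_year = 40_000
covid_deaths_2020 = 375_000

heat_rate = 73  # projected additional deaths per 100,000 (Climate Impact Lab, with adaptation)

road_rate = road_deaths_per_year / us_population * 100_000   # ~12 per 100,000
covid_rate = covid_deaths_2020 / us_population * 100_000     # ~114 per 100,000

print(f"Road deaths: {road_rate:.0f} per 100,000 -> heat toll is {heat_rate / road_rate:.1f}x larger")
print(f"COVID-19 (2020): {covid_rate:.0f} per 100,000 -> heat toll is {heat_rate / covid_rate:.2f} of that")
```

On those assumptions the ratios come out at roughly six times the road toll and roughly two-thirds of the 2020 COVID-19 toll, matching Park’s comparisons.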

Second, it seems logical that people will try to adapt to more and more severe heat waves. If they have the means, they will install or augment their air-conditioning systems, or perhaps they’ll buy homes in cooler areas. But why should anyone have confidence that most people will have higher incomes by 2100, and therefore be in a better position to adapt to heat? Isn’t it just as plausible that most people will have less income and less ability to spend money on adaptation?

Third, Park notes that inequality is already evident in heat-mortality relationships. A single day with average temperature of 90°F (32.2°C) or higher increases the annual mortality in South Asian countries by 1 percent – ten times the heat-mortality increase that the United States experiences. Yet within the United States, there is also a large difference in heat-mortality rates between rich and poor neighbourhoods.

Even in homes that have air-conditioning (globally, only about 30% of homes do), low-income people often can’t afford to run the air-conditioners enough to counteract severe heat. “Everyone uses more energy on very hot and very cold days,” Park writes. “But the poor, who have less slack in their budgets, respond more sparingly.” (p 191)

A study in California found a marked increase in utility disconnections due to delinquent payments following heat waves. A cash-strapped household, then, faces an awful choice: don’t turn up the air-conditioner even when it’s baking hot inside, and suffer the ill effects; or turn it up, get through one heat wave, but risk disconnection unless it’s possible to cut back on other important expenses in order to pay the high electric bill.

(As if to underline the point, a headline I spotted as I finished this review reported surges in predatory payday loans following extreme weather.)

The drastic adaptation measure of relocation also depends on socio-economic status. Climate refugees crossing borders get a lot of news coverage, and there’s good reason to expect this issue will grow in prominence. Yet Park finds that “the numerical majority of climate-induced refugees are likely to be those who do not have the wherewithal to make it to an international border.” (p 141) As time goes on and the financial inequities of global heating increase, it may be true that even fewer refugees have the means to get to another country: “recent studies find that gradually rising temperatures may actually reduce the rate of migration in many poorer countries.” (p 141)

Slow Burn is weak on the issue of multiple compounding factors as they will interact over several decades. It’s one thing to measure current heat-mortality rates, but quite another to project that these rates will rise linearly with temperatures 30 or 60 years from now. Suppose, as seems plausible, that a steep rise in 30°C or hotter days is accompanied by reduced food supplies due to lower yields, higher basic food prices, increased severe storms that destroy or damage many homes, and less reliable electricity grids due to storms and periods of high demand. Wouldn’t we expect, then, that the 73-per-100,000-people annual heat-related deaths estimated by the Climate Impact Lab would be a serious underestimate?

Park also writes that due to rising incomes, “most places will be significantly better able to deal with climate change in the future.” (p 229) As for efforts at reducing emissions, in Park’s opinion “it seems reasonable to suppose that thanks in part to pledged and actual emissions cuts achieved in the past few decades, the likelihood of truly disastrous warming may have declined nontrivially.” (p 218) If you don’t share his faith in economic growth, and if you lack confidence that pledged emissions cuts will be made actual, some paragraphs in Slow Burn will come across as wishful thinking.

Yet on the book’s two primary themes – that climate change is already causing major and documentable harms to populations around the world, and that climate justice concerns must be at the forefront of adaptation efforts – Park marshals strong evidence to present a compelling case.

The existential threat of artificial stupidity

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part seven
Also published on Resilience.

One headline about artificial intelligence gave me a rueful laugh the first few times I saw it.

With minor variations headline writers have posed the question, “What if AI falls into the wrong hands?”

But AI is already in the wrong hands. AI is in the hands of a small cadre of ultra-rich influencers affiliated with corporations and governments, organizations which collectively are driving us straight towards a cliff of ecological destruction.

This does not mean, of course, that every person working on the development of artificial intelligence is a menace, nor that every use of artificial intelligence will be destructive.

But we need to be clear about the socio-economic forces behind the AI boom. Otherwise we may buy the illusion that our linear, perpetual-growth-at-all-costs economic system has somehow given birth to a magically sustainable electronic saviour.

The artificial intelligence industrial complex is an astronomically expensive enterprise, pushing its primary proponents to rapidly implement monetized applications. As we will see, those monetized applications are either already in widespread use, or are being promised as just around the corner. First, though, we’ll look at why AI is likely to be substantially controlled by those with the deepest pockets.

“The same twenty-five billionaires”

CNN host Fareed Zakaria asked the question “What happens if AI gets into the wrong hands?” in a segment in January. Interviewing Mustafa Suleyman, Inflection AI founder and Google DeepMind co-founder, Zakaria framed the issue this way:

“You have kind of a cozy elite of a few of you guys. It’s remarkable how few of you there are, and you all know each other. You’re all funded by the same twenty-five billionaires. But once you have a real open source revolution, which is inevitable … then it’s out there, and everyone can do it.”1

Some of this is true. OpenAI was co-founded by Sam Altman and Elon Musk. Their partnership didn’t last long, and Musk has since founded a competitor, xAI. OpenAI has received $10 billion from Microsoft, while Amazon has invested $4 billion and Alphabet (Google) has invested $300 million in AI startup Anthropic. Year-old company Inflection AI has received $1.3 billion from Microsoft and chip-maker Nvidia.2

Meanwhile Mark Zuckerberg says Meta’s biggest area of investment is now AI, and the company is expected to spend about $9 billion this year just to buy chips for its AI computer network.3 Companies including Apple, Amazon, and Alphabet are also investing heavily in AI divisions within their respective corporate structures.

Microsoft, Amazon and Alphabet all earn revenue from their web services divisions which crunch data for many other corporations. Nvidia sells the chips that power the most computation-intensive AI applications.

But whether an AI startup rents computer power in the “cloud”, or builds its own supercomputer complex, creating and training new AI models is expensive. As Fortune reported in January, 

“Creating an end-to-end model from scratch is massively resource intensive and requires deep expertise, whereas plugging into OpenAI or Anthropic’s API is as simple as it gets. This has prompted a massive shift from an AI landscape that was ‘model-forward’ to one that’s ‘product-forward,’ where companies are primarily tapping existing models and skipping right to the product roadmap.”4

The huge expense of building AI models also has implications for claims about “open source” code. As Cory Doctorow has explained,

“Not only is the material that ‘open AI’ companies publish insufficient for reproducing their products, even if those gaps were plugged, the resource burden required to do so is so intense that only the largest companies could do so.”5

Doctorow’s aim in the above-cited article was to debunk the claim that the AI complex is democratising access to its products and services. Yet this analysis also has implications for Fareed Zakaria’s fears of unaffiliated rogue actors doing terrible things with AI.

Individuals or small organizations may indeed use a major company’s AI engine to create deepfakes and spread disinformation, or perhaps even to design dangerously mutated organisms. Yet the owners of the AI models determine who has access to which models and under which terms. Thus unaffiliated actors can be barred from using particular models, or charged sufficiently high fees that using a given AI engine is not feasible.

So while the danger from unaffiliated rogue actors is real, I think the more serious danger is from the owners and funders of large AI enterprises. In other words, the biggest dangers come not from those into whose hands AI might fall, but from those whose hands are already all over AI.

Command and control

As discussed earlier in this series, the US military funded some of the earliest foundational projects in artificial intelligence, including the “perceptron” (1956)6 and the WordNet semantic database (begun in 1985).7

To this day military and intelligence agencies remain major revenue sources for AI companies. Kate Crawford writes that the intentions and methods of intelligence agencies continue to shape the AI industrial complex:

“The AI and algorithmic systems used by the state, from the military to the municipal level, reveal a covert philosophy of en masse infrastructural command and control via a combination of extractive data techniques, targeting logics, and surveillance.”8

As Crawford points out, the goals and methods of high-level intelligence agencies “have spread to many other state functions, from local law enforcement to allocating benefits.” China-made surveillance cameras, for example, were installed in New Jersey and paid for under a COVID relief program.9 Artificial intelligence bots can enforce austerity policies by screening – and disallowing – applications for government aid. Facial-recognition cameras and software, meanwhile, are spreading rapidly and making it easier for police forces to monitor people who dare to attend political protests.

There is nothing radically new, of course, in the use of electronic communications tools for surveillance. Eleven years ago, Edward Snowden famously revealed the expansive plans of the “Five Eyes” intelligence agencies to monitor all internet communications.10 Decades earlier, intelligence agencies were eagerly tapping undersea communications cables.11

Increasingly important, however, is the partnership between private corporations and state agencies – a partnership that extends beyond communications companies to include energy corporations.

This public/private partnership has placed particular emphasis on suppressing activists who fight against expansions of fossil fuel infrastructure. To cite three North American examples, police and corporate teams have worked together to surveil and jail opponents of the Line 3 tar sands pipeline in Minnesota,12 protestors of the Northern Gateway pipeline in British Columbia,13 and Water Protectors trying to block a pipeline through the Standing Rock Reservation in North Dakota.14

The use of enhanced surveillance techniques in support of fossil fuel infrastructure expansions has particular relevance to the artificial intelligence industrial complex, because that complex has a fierce appetite for stupendous quantities of energy.

Upping the demand for energy

“Smashed through the forest, gouged into the soil, exploded in the grey light of dawn,” wrote James Bridle, “are the tooth- and claw-marks of Artificial Intelligence, at the exact point where it meets the earth.”

Bridle was describing sudden changes in the landscape of north-west Greece after the Spanish oil company Repsol was granted permission to drill exploratory oil wells. Repsol teamed up with IBM’s Watson division “to leverage cognitive technologies that will help transform the oil and gas industry.”

IBM was not alone in finding paying customers for nascent AI among fossil fuel companies. In 2018 Google welcomed oil companies to its Cloud Next conference, and in 2019 Microsoft hosted the Oil and Gas Leadership Summit in Houston. Not to be outdone, Amazon has eagerly courted petroleum prospectors for its cloud infrastructure.

As Bridle writes, the intent of the oil companies and their partners includes “extracting every last drop of oil from under the earth” – regardless of the fact that if we burn all the oil already discovered we will push the climate system past catastrophic tipping points. “What sort of intelligence seeks not merely to support but to escalate and optimize such madness?”

The madness, though, is eminently logical:

“Driven by the logic of contemporary capitalism and the energy requirements of computation itself, the deepest need of an AI in the present era is the fuel for its own expansion. What it needs is oil, and it increasingly knows where to find it.”15

AI runs on electricity, not oil, you might say. But as discussed at greater length in Part Two of this series, the mining, refining, manufacturing and shipping of all the components of AI servers remains reliant on the fossil-fueled industrial supply chain. Furthermore, the electricity that powers the data-gathering cloud is also, in many countries, produced in coal- or gas-fired generators.

Could artificial intelligence be used to speed a transition away from reliance on fossil fuels? In theory perhaps it could. But in the real world, the rapid growth of AI is making the transition away from fossil fuels an even more daunting challenge.

“Utility projections for the amount of power they will need over the next five years have nearly doubled and are expected to grow,” Evan Halper reported in the Washington Post earlier this month. Why the sudden spike?

“A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing.”

The jump in demand from AI is in addition to – and greatly complicates – the move to electrify home heating and car-dependent transportation:

“It is all happening at the same time the energy transition is steering large numbers of Americans to rely on the power grid to fuel vehicles, heat pumps, induction stoves and all manner of other household appliances that previously ran on fossil fuels.”

The effort to maintain and increase overall energy consumption, while paying lip-service to transition away from fossil fuels, is having a predictable outcome: “The situation … threatens to stifle the transition to cleaner energy, as utility executives lobby to delay the retirement of fossil fuel plants and bring more online.”16

The motive forces of the artificial intelligence industrial complex, then, include the extension of surveillance, and the extension of climate- and biodiversity-destroying fossil fuel extraction and combustion. But many of those data centres are devoted to a task that is also central to contemporary capitalism: the promotion of consumerism.

Thou shalt consume more today than yesterday

As of March 13, 2024, both Alphabet (parent of Google) and Meta (parent of Facebook) ranked among the world’s ten biggest corporations as measured by either market capitalization or earnings.17 Yet to an average computer user these companies are familiar primarily for supposedly “free” services including Google Search, Gmail, Youtube, Facebook and Instagram.

These services play an important role in the circulation of money, of course – their function is to encourage people to spend more money than they otherwise would, for all types of goods or services, whether or not they actually need or even desire more goods and services. This function is accomplished through the most elaborate surveillance infrastructures yet invented, harnessed to an advertising industry that uses the surveillance data to better target ads and to better sell products.

This role in extending consumerism is a fundamental element of the artificial intelligence industrial complex.

In 2011, former Facebook employee Jeff Hammerbacher summed it up: “The best minds of my generation are thinking about how to make people click ads. That sucks.”18

Working together, many of the world’s most skilled behavioural scientists, software engineers and hardware engineers devote themselves to nudging people to spend more time online looking at their phones, tablets and computers, clicking ads, and feeding the data stream.

We should not be surprised that the companies most involved in this “knowledge revolution” are assiduously promoting their AI divisions. As noted earlier, both Google and Facebook are heavily invested in AI. And OpenAI, funded by Microsoft and famous for making ChatGPT almost a household name, is looking for ways to make that investment pay off.

By early 2023, OpenAI’s partnership with “strategy and digital application delivery” company Bain had signed up its first customer: The Coca-Cola Company.19

The pioneering effort to improve the marketing of sugar water was hailed by Zack Kass, Head of Go-To-Market at OpenAI: “Coca-Cola’s vision for the adoption of OpenAI’s technology is the most ambitious we have seen of any consumer products company ….”

On its website, Bain proclaimed:

“We’ve helped Coca-Cola become the first company in the world to combine GPT-4 and DALL-E for a new AI-driven content creation platform. ‘Create Real Magic’ puts the power of generative AI in consumers’ hands, and is one example of how we’re helping the company augment its world-class brands, marketing, and consumer experiences in industry-leading ways.”20

The new AI, clearly, has the same motive as the old “slow AI” that is corporate intelligence. While a corporation has been declared a legal person, and therefore might be expected to have a mind, this mind is a severely limited, sociopathic entity with only one controlling motive – the need to increase profits year after year without end. (This is not to imply that all or most employees of a corporation are equally single-minded, but any noble motives they may have must remain subordinate to the profit-maximizing legal charter of the corporation.) To the extent that AI is governed by corporations, we should expect that AI will retain a singular, sociopathic fixation on increasing profits.

Artificial intelligence, then, represents an existential threat to humanity not because of its newness, but because it perpetuates the corporate imperative which was already leading to ecological disaster and civilizational collapse.

But should we fear that artificial intelligence threatens us in other ways? Could AI break free from human control, supersede all human intelligence, and either dispose of us or enslave us? That will be the subject of the next installment.


Notes

1 “GPS Web Extra: What happens if AI gets into the wrong hands?”, CNN, 7 January 2024.

2 Mark Sweney, “Elon Musk’s AI startup seeks to raise $1bn in equity,” The Guardian, 6 December 2023.

3 Jonathan Vanian, “Mark Zuckerberg indicates Meta is spending billions of dollars on Nvidia AI chips,” CNBC, 18 January 2024.

4 Fortune Eye On AI newsletter, 25 January 2024.

5 Cory Doctorow, “‘Open’ ‘AI’ isn’t”, Pluralistic, 18 August 2023.

6 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

7 “WordNet,” on Scholarly Community Encyclopedia, accessed 11 March 2024.

8 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.

9 Jason Koebler, “New Jersey Used COVID Relief Funds to Buy Banned Chinese Surveillance Cameras,” 404 Media, 3 January 2024.

10 Glenn Greenwald, Ewen MacAskill and Laura Poitras, “Edward Snowden: the whistleblower behind the NSA surveillance revelations,” The Guardian, 11 June 2013.

11 Olga Khazan, “The Creepy, Long-Standing Practice of Undersea Cable Tapping,” The Atlantic, 16 July 2013.

12 Alleen Brown, “Pipeline Giant Enbridge Uses Scoring System to Track Indigenous Opposition,” 23 January, 2022, part one of the seventeen-part series “Policing the Pipeline” in The Intercept.

13 Jeremy Hainsworth, “Spy agency CSIS allegedly gave oil companies surveillance data about pipeline protesters,” Vancouver Is Awesome, 8 July 2019.

14 Alleen Brown, Will Parrish, Alice Speri, “Leaked Documents Reveal Counterterrorism Tactics Used at Standing Rock to ‘Defeat Pipeline Insurgencies’”, The Intercept, 27 May 2017.

15 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Farrar, Straus and Giroux, 2023; pages 3–7.

16 Evan Halper, “Amid explosive demand, America is running out of power,” Washington Post, 7 March 2024.

17 Source: https://companiesmarketcap.com/, 13 March 2024.

18 As quoted in Fast Company, “Why Data God Jeffrey Hammerbacher Left Facebook To Found Cloudera,” 18 April 2013.

19 PRNewswire, “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI,” 21 February 2023.

20 Bain & Company website, accessed 13 March 2024.


Image at top of post by Bart Hawkins Kreps from public domain graphics.

Farming on screen

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part six
Also published on Resilience.

What does the future of farming look like? To some pundits the answer is clear: “Connected sensors, the Internet of Things, autonomous vehicles, robots, and big data analytics will be essential in effectively feeding tomorrow’s world. The future of agriculture will be smart, connected, and digital.”1

Proponents of artificial intelligence in agriculture argue that AI will be key to limiting or reversing biodiversity loss, reducing global warming emissions, and restoring resilience to ecosystems that are stressed by climate change.

There are many flavours of AI and thousands of potential applications for AI in agriculture. Some of them may indeed prove helpful in restoring parts of ecosystems.

But there are strong reasons to expect that AI in agriculture will be dominated by the same forces that have given the world a monoculture agri-industrial complex overwhelmingly dependent on fossil fuels. There are many reasons why we might expect that agri-industrial AI will lead to more biodiversity loss, more food insecurity, more socio-economic inequality, more climate vulnerability. To the extent that AI in agriculture bears fruit, many of these fruits are likely to be bitter.

Optimizing for yield

A branch of mathematics known as optimization has played a large role in the development of artificial intelligence. Author Coco Krumme, who earned a PhD in mathematics from MIT, traces optimization’s roots back hundreds of years and sees optimization in the development of contemporary agriculture.

In her book Optimal Illusions: The False Promise of Optimization, she writes,

“Embedded in the treachery of optimals is a deception. An optimization, whether it’s optimizing the value of an acre of land or the on-time arrival rate of an airline, often involves collapsing measurement into a single dimension, dollars or time or something else.”2

The “single dimensions” that serve as the building blocks of optimization are the result of useful, though simplistic, abstractions of the infinite complexities of our world. In agriculture, for example, how can we identify and describe the factors of soil fertility? One way would be to describe truly healthy soil as soil that contains a diverse microbial community, thriving among networks of fungal mycelia, plant roots, worms, and insect larvae. Another way would be to note that the soil contains sufficient amounts of at least several chemical elements, including carbon, nitrogen, phosphorus, and potassium. The second method is an incomplete abstraction, but it has the big advantage that it lends itself to easy quantification, calculation, and standardized testing. Coupled with the availability of similarly simple, quantified fertilizers, this method also allows for quick, “efficient,” yield-boosting soil amendments.
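To make that point concrete, here is a minimal, purely illustrative sketch of the kind of single-objective optimization Krumme describes: a made-up yield function is maximized over N, P and K application rates, while an invented “soil health” index is simply left out of the objective. The functions and coefficients are my own assumptions, not drawn from Optimal Illusions or from any agronomic model.

```python
import itertools

def soya_yield(n, p, k):
    """Hypothetical yield response (t/ha) to N, P, K rates (kg/ha).
    Diminishing returns on each nutrient; coefficients are invented."""
    return (2.0 + 0.015 * n - 0.00004 * n**2
                + 0.008 * p - 0.00003 * p**2
                + 0.005 * k - 0.00002 * k**2)

def soil_health(n, p, k):
    """Invented proxy for everything the single-dimension view ignores:
    in this toy model, heavier synthetic inputs degrade the index."""
    return max(0.0, 1.0 - 0.002 * (n + p + k))

# Grid search: the optimizer sees only yield, the one collapsed dimension.
grid = range(0, 301, 10)
best = max(itertools.product(grid, grid, grid), key=lambda npk: soya_yield(*npk))

print("N, P, K rates optimal for yield alone:", best)
print("Yield at optimum: %.2f t/ha" % soya_yield(*best))
print("Ignored soil-health index at optimum: %.2f" % soil_health(*best))
```

The optimizer dutifully finds the fertilizer rates that maximize the one dimension it can see, while the index it was never asked about quietly collapses.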

In deciding what are the optimal levels of certain soil nutrients, of course, we must also give an implicit or explicit answer to this question: “Optimal for what?” If the answer is, “optimal for soya production”, we are likely to get higher yields of soya – even if the soil is losing many of the attributes of health that we might observe through a less abstract lens. Krumme describes the gradual and eventual results of this supposedly scientific agriculture:

“It was easy to ignore, for a while, the costs: the chemicals harming human health, the machinery depleting soil, the fertilizer spewing into the downstream water supply.”3

The social costs were no less real than the environmental costs: most farmers, in countries where industrial agriculture took hold, were unable to keep up with the constant pressure to “go big or go home”. So they sold their land to the fewer remaining farmers who farmed bigger farms, and rural agricultural communities were hollowed out.

“But just look at those benefits!”, proponents of industrialized agriculture can say. Certainly yields per hectare of commodity crops climbed dramatically, and this food was raised by a smaller share of the work force.

The extent to which these changes are truly improvements is murky, however, when we look beyond the abstractions that go into the optimization models. We might want to believe that “if we don’t count it, it doesn’t count” – but that illusion won’t last forever.

Let’s start with social and economic factors. Coco Krumme quotes historian Paul Conkin on this trend in agricultural production: “Since 1950, labor productivity per hour of work in the non-farm sectors has increased 2.5 fold; in agriculture, 7-fold.”4

Yet a recent paper by Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause finds:

“Industrial farming discourse promotes the perception that there is a positive relationship—the larger the farm, the greater the productivity. Our objective is to demonstrate that based on the data at the centre of this debate, on average, small farms actually produce more food on less land ….”5

Here’s the nub of the problem: productivity statistics depend on what we count, and what we don’t count, when we tally input and output. Labour productivity in particular is usually calculated in reference to Gross Domestic Product, which is the sum of all monetary transactions.

Imagine this scenario, which has analogs all over the world. Suppose I pick a lot of apples, I trade a bushel of them with a neighbour, and I receive a piglet in return. The piglet eats leftover food scraps and weeds around the yard, while providing manure that fertilizes the vegetable garden. Several months later I butcher the pig and share the meat with another neighbour who has some chickens and who has been sharing the eggs. We all get delicious and nutritious food – but how much productivity is tallied? None, because none of these transactions are measured in dollars nor counted in GDP.

In many cases, of course, some inputs and outputs are counted while others are not. A smallholder might buy a few inputs such as feed grain, and might sell some products in a market “official” enough to be included in economic statistics. But much of the smallholder’s output will go to feeding immediate family or neighbours without leaving a trace in GDP.

If GDP had been counted when this scene was depicted, the sale of Spratt’s Pure Fibrine poultry feed may have been the only part of the operation that would “count”. Image: “Spratts patent “pure fibrine” poultry meal & poultry appliances”, from Wellcome Collection, circa 1880–1889, public domain.

Knezevic et al. write, “As farm size and farm revenue can generally be objectively measured, the productivist view has often used just those two data points to measure farm productivity.” However, other statisticians have put considerable effort into quantifying output in non-monetary terms, by estimating all agricultural output in terms of kilocalories.

This too is an abstraction, since a kilocalorie from sugar beets does not have the same nutritional impact as a kilocalorie from black beans or a kilocalorie from chicken – and farm output might include non-food values such as fibre for clothing, fuel for fireplaces, or animal draught power. Nevertheless, counting kilocalories instead of dollars or yuan makes possible more realistic estimates of how much food is produced by small farmers on the edge of the formal economy.
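As a simple illustration of this kind of accounting, here is a short sketch that tallies a hypothetical smallholding’s annual output in kilocalories rather than in dollars. The crops, quantities and approximate energy densities are my own illustrative assumptions, not data from Knezevic et al.

```python
# Tally a hypothetical smallholding's annual output in kilocalories.
# Quantities and approximate kcal densities are illustrative assumptions.
kcal_per_kg = {
    "potatoes": 770,        # fresh weight
    "dry beans": 3_400,
    "maize (grain)": 3_650,
    "eggs": 1_430,
    "pork": 2_400,
}

output_kg = {
    "potatoes": 1_500,
    "dry beans": 200,
    "maize (grain)": 400,
    "eggs": 120,
    "pork": 90,
}

total_kcal = sum(output_kg[item] * kcal_per_kg[item] for item in output_kg)
print(f"Total output: {total_kcal:,} kcal")
print(f"Enough to supply {total_kcal / (2500 * 365):.1f} people at 2,500 kcal/day for a year")
```

None of this output needs to pass through a market to be counted; that is precisely why kilocalorie accounting captures farms that GDP-based productivity measures miss.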

The proportion of global food supply produced on small vs. large farms is a matter of vigorous debate, and Knezevic et al. discuss some of the most widely cited estimates. They defend their own estimate:

“[T]he data indicate that family farmers and smallholders account for 81% of production and food supply in kilocalories on 72% of the land. Large farms, defined as more than 200 hectares, account for only 15 and 13% of crop production and food supply by kilocalories, respectively, yet use 28% of the land.”6

They also argue that the smallest farms – 10 hectares (about 25 acres) or less – “provide more than 55% of the world’s kilocalories on about 40% of the land.” This has obvious importance in answering the question “How can we feed the world’s growing population?”7

Of equal importance to our discussion on the role of AI in agriculture, are these conclusions of Knezevic et al.: “industrialized and non-industrialized farming … come with markedly different knowledge systems,” and “smaller farms also have higher crop and non-crop biodiversity.”

Feeding the data machine

As discussed at length in previous installments, the types of artificial intelligence currently making waves require vast data sets. And in their paper advocating “Smart agriculture (SA)”, Jian Zhang et al. write, “The focus of SA is on data exploitation; this requires access to data, data analysis, and the application of the results over multiple (ideally, all) farm or ranch operations.”8

The data currently available from “precision farming” comes from large, well-capitalized farms that can afford tractors and combines equipped with GPS units, arrays of sensors tracking soil moisture, fertilizer and pesticide applications, and harvested quantities for each square meter. In the future envisioned by Zhang et al., this data collection process should expand dramatically through the incorporation of Internet of Things sensors on many more farms, plus a network allowing the funneling of information to centralized AI servers which will “learn” from data analysis, and which will then guide participating farms in achieving greater productivity at lower ecological cost. This in turn will require a 5G cellular network throughout agricultural areas.
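To give a sense of what this “data exploitation” might look like in practice, here is a minimal, hypothetical sketch of the kind of per-plot record a sensor-equipped machine could stream to a centralized server. The field names and values are my own invention, not drawn from Zhang et al. or from any vendor’s actual telemetry format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PlotReading:
    """One hypothetical per-square-metre observation from a sensor-equipped machine."""
    field_id: str
    lat: float
    lon: float
    timestamp: str
    soil_moisture_pct: float
    nitrogen_applied_kg_ha: float
    pesticide_applied_l_ha: float
    yield_kg: float

reading = PlotReading(
    field_id="NW-quarter-12",
    lat=52.1334, lon=-106.6700,
    timestamp=datetime.now(timezone.utc).isoformat(),
    soil_moisture_pct=23.4,
    nitrogen_applied_kg_ha=110.0,
    pesticide_applied_l_ha=0.8,
    yield_kg=0.42,
)

# Serialized payload of the sort that would be funnelled to a central model.
print(json.dumps(asdict(reading), indent=2))
```

Multiply one such record by every square metre of a large farm, every pass of the machinery, and every participating farm, and the scale of the centralized data appetite becomes clear.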

Zhang et al. do not estimate the costs – in monetary terms, or in up-front carbon emissions and ecological damage during the manufacture, installation and operation of the data-crunching networks. An important question will be: will ecological benefits be equal to or greater than the ecological harms?

There is also good reason to doubt that the smallest farms – which produce a disproportionate share of global food supply – will be incorporated into this “smart agriculture”. Such infrastructure will have heavy upfront costs, and the companies that provide the equipment will want assurance that their client farmers will have enough cash outputs to make the capital investments profitable – if not for the farmers themselves, then at least for the big corporations marketing the technology.

A team of scholars writing in Nature Machine Intelligence concluded,

“[S]mall-scale farmers who cultivate 475 of approximately 570 million farms worldwide and feed large swaths of the so-called Global South are particularly likely to be excluded from AI-related benefits.”9

On the subject of what kind of data is available to AI systems, the team wrote,

“[T]ypical agricultural datasets have insufficiently considered polyculture techniques, such as forest farming and silvo-pasture. These techniques yield an array of food, fodder and fabric products while increasing soil fertility, controlling pests and maintaining agrobiodiversity.”

They noted that the small number of crops which dominate commodity crop markets – corn, wheat, rice, and soy in particular – also get the most research attention, while many crops important to subsistence farmers are little studied. Assuming that many of the small farmers remain outside the artificial intelligence agri-industrial complex, the data-gathering is likely to perpetuate and strengthen the hegemony of major commodities and major corporations.

Montreal Nutmeg. Today it’s easy to find images of hundreds of varieties of fruit and vegetables that were popular more than a hundred years ago – but finding viable seeds or rootstock is another matter. Image: “Muskmelon, the largest in cultivation – new Montreal Nutmeg. This variety found only in Rice’s box of choice vegetables. 1887”, from Boston Public Library collection “Agriculture Trade Collection” on flickr.

Large-scale monoculture agriculture has already resulted in a scarcity of most traditional varieties of many grains, fruits and vegetables; the seed stocks that work best in the cash-crop nexus now have overwhelming market share. An AI that serves and is led by the same agribusiness interests is not likely, therefore, to preserve the crop diversity we will need to cope with an unstable climate and depleted ecosystems.

It’s marvellous that data servers can store and quickly access the entire genomes of so many species and sub-species. But it would be better if rare varieties are not only preserved but in active use, by communities who keep alive the particular knowledge of how these varieties respond to different weather, soil conditions, and horticultural techniques.

Finally, those small farmers who do step into the AI agri-complex will face new dangers:

“[A]s AI becomes indispensable for precision agriculture, … farmers will bring substantial croplands, pastures and hayfields under the influence of a few common ML [Machine Learning] platforms, consequently creating centralized points of failure, where deliberate attacks could cause disproportionate harm. [T]hese dynamics risk expanding the vulnerability of agrifood supply chains to cyberattacks, including ransomware and denial-of-service attacks, as well as interference with AI-driven machinery, such as self-driving tractors and combine harvesters, robot swarms for crop inspection, and autonomous sprayers.”10

The quantified gains in productivity due to efficiency, writes Coco Krumme, have come with many losses – and “we can think of these losses as the flip side of what we’ve gained from optimizing.” She adds,

“We’ll call [these losses], in brief: slack, place, and scale. Slack, or redundancy, cushions a system from outside shock. Place, or specific knowledge, distinguishes a farm and creates the diversity of practice that, ultimately, allows for both its evolution and preservation. And a sense of scale affords a connection between part and whole, between a farmer and the population his crop feeds.”11

AI-led “smart agriculture” may allow higher yields from major commodity crops, grown in monoculture fields on large farms all using the same machinery, the same chemicals, the same seeds and the same methods. Such agriculture is likely to earn continued profits for the major corporations already at the top of the complex, companies like John Deere, Bayer-Monsanto, and Cargill.

But in a world facing combined and manifold ecological, geopolitical and economic crises, it will be even more important to have agricultures with some redundancy to cushion from outside shock. We’ll need locally-specific knowledge of diverse food production practices. And we’ll need strong connections between local farmers and communities who are likely to depend on each other more than ever.

In that context, putting all our eggs in the artificial intelligence basket doesn’t sound like smart strategy.


Notes

1 “Achieving the Rewards of Smart Agriculture,” by Jian Zhang, Dawn Trautman, Yingnan Liu, Chunguang Bi, Wei Chen, Lijun Ou, and Randy Goebel, Agronomy, 24 February 2024.

2 Coco Krumme, Optimal Illusions: The False Promise of Optimization, Riverhead Books, 2023, pg 181. A hat tip to Mark Hurst, whose podcast Techtonic introduced me to the work of Coco Krumme.

3 Optimal Illusions, pg 23.

4 Optimal Illusions, pg 25, quoting Paul Conkin, A Revolution Down on the Farm.

5 Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause, “Recalibrating Data on Farm Productivity: Why We Need Small Farms for Food Security,” Sustainability, 4 October 2023.

6 Knezevic et al., “Recalibrating Data on Farm Productivity.”

7 Recommended reading: two farmer/writers who have conducted more thorough studies of the current and potential productivity of small farms are Chris Smaje and Gunnar Rundgren.

8 Zhang et al., “Achieving the Rewards of Smart Agriculture,” 24 February 2024.

9 Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh, “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities,” Nature Machine Intelligence, 23 February 2022.

10 Asaf Tzachor et al., “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities.”

11 Coco Krumme, Optimal Illusions, pg 34.


Image at top of post: “Alexander Frick, Jr. in his tractor/planter planting soybean seeds with the aid of precision agriculture systems and information,” in US Dep’t of Agriculture album “Frick Farms gain with Precision Agriculture and Level Fields”, photo for USDA by Lance Cheung, April 2021, public domain, accessed via flickr. 

Watching work

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part five
Also published on Resilience.

Consider a human vs computer triathlon. The first contest is playing a cognitively demanding game like chess. The second is driving a truck safely through a busy urban downtown. The third is grabbing packages, from warehouse shelves stocked with a great diversity of package types, and placing them safely into tote boxes.

Who would win, humans or computers?

So far the humans are ahead two-to-one. Though a computer program surpassed the best human chess players more than 25 years ago, replacing humans in the intellectually demanding tasks of truck-driving and package-packing has proved a much tougher challenge.

The reasons for the skills disparity can tell us a lot about the way artificial intelligence has developed and how it is affecting employment conditions.

Some tasks require mostly analytical thinking and perceptual skills, but many tasks require close, almost instantaneous coordination of fine motor control. Many of these latter tasks fall into the category that is often condescendingly termed “manual labour”. But as Antonio Gramsci argued,

“There is no human activity from which every form of intellectual participation can be excluded: Homo faber cannot be separated from homo sapiens.”1

All work involves, to some degree, both body and mind. This plays a major role in the degree to which AI can or cannot effectively replace human labour.

Yet even if AI can not succeed in taking away your job, it might succeed in taking away a big chunk of your paycheque.

Moravec’s paradox

By 2021, Amazon had developed a logistics system that could track millions of items and millions of shipments every day, from factory loading docks to shipping containers to warehouse shelves to the delivery truck that speeds to your door.

But for all its efforts, it hadn’t managed to develop a robot that could compete with humans in the delicate task of grabbing packages off shelves or conveyor belts.

Author Christopher Mims described the challenge in his book Arriving Today2. “Each of these workers is the hub of a three-dimensional wheel, where each spoke is ten feet tall and consists of mail slot-size openings. Every one of these sorters works as fast as they can. First they grab a package off the chute, then they pause for a moment to scan the item and read its destination off a screen …. Then they whirl and drop the item into a slot. Each of these workers must sort between 1,100 and 1,200 parcels per hour ….”

The problem was this: there was huge diversity not only in packaging types but in packaging contents. Though about half the items were concealed in soft poly bags, those bags might contain things that were light and soft, or light and hard, or light and fragile, or surprisingly heavy.

Humans have a remarkable ability to “adjust on the fly”. As our fingers close on the end of a package and start to lift, we can make nearly instantaneous adjustments to grip tighter – but not too tight – if we sense significant resistance due to unexpected weight. Without knowing what is in the packages, we can still grab and sort 20 packages per minute while seldom if ever crushing a package because we grip too tightly, and seldom losing control and having a package fly across the room.

Building a machine with the same ability is terribly difficult, as summed up by robotics pioneer Hans Moravec.

“One formulation of Moravec’s paradox goes like this,” Mims wrote: “it’s far harder to teach a computer to pick up and move a chess piece like its human opponent than it is to teach it to beat that human at chess.”

In the words of robotics scholar Thrishantha Nanayakkara,

“We have made huge progress in symbolic, data-driven AI. But when it comes to contact, we fail miserably. We don’t have a robot that we can trust to hold a hamster safely.”3

In 2021 even Amazon’s newest warehouses had robots working only on carefully circumscribed tasks, in carefully fenced-off and monitored areas, while human workers did most of the sorting and packing.

Amazon’s warehouse staffers still have paying jobs, but AI has already shaped their working conditions for the worse. Since Amazon is one of the world’s largest employers, as well as a major player in AI, its obvious eagerness to extract more value from a low-paid workforce should be seen as a harbinger of AI’s future effects on labour relations. We’ll return to those changing labour relations below.

Behind the wheel

One job which the artificial intelligence industrial complex has tried mightily to eliminate is the work of drivers. On the one hand, proponents of autonomous vehicles have pointed to the shocking annual numbers of people killed or maimed on highways and streets, claiming that self-driving cars and trucks will be much safer. On the other hand, in some industries the wages of drivers are a big part of the cost of business, and thus companies could swell their profit margins by eliminating those wages.

We’ve been hearing that full self-driving vehicles are just a few years away – for the past twenty years. But driving is one of those tasks that requires not only careful and responsive manipulation of vehicle controls, but quick perception and quick judgment calls in situations that the driver may have seldom – or never – confronted before.

Christopher Mims looked at the work of TuSimple, a San Diego-based firm hoping to market self-driving trucks. Counting all the sensors, controllers, and information processing devices, he wrote, “The AI on board TuSimple’s self-driving truck draws about four times as much power as the average American home ….”4

At the time, TuSimple was working on increasing their system’s reliability “from something like 99.99 percent reliable to 99.9999 percent reliable.” That improvement would not come easily, Mims explained: “every additional decimal point of reliability costs as much in time, energy, and money as all the previous ones combined.”
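To make those reliability figures concrete, here is a back-of-envelope sketch – my own illustrative arithmetic, not TuSimple’s or Mims’s numbers – of what each added decimal point means in expected failures:

```python
# Back-of-envelope: what each added "decimal point of reliability" means.
# Illustrative arithmetic only -- not figures from TuSimple or from Mims.

def expected_failures(events: int, reliability: float) -> float:
    """Expected failures across `events` driving decisions at a given reliability."""
    return events * (1.0 - reliability)

million = 1_000_000
for r in (0.9999, 0.99999, 0.999999):
    print(f"{r:.6f} reliable -> ~{expected_failures(million, r):,.0f} failure(s) per million decisions")

# 0.999900 reliable -> ~100 failure(s) per million decisions
# 0.999990 reliable -> ~10 failure(s) per million decisions
# 0.999999 reliable -> ~1 failure(s) per million decisions
```

Each added decimal point cuts the expected failure count by another factor of ten – gains that, in Mims’s account, cost as much as everything that came before.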

Some of the world’s largest companies have tried, and so far failed, to achieve widespread regulatory approval for their entries in the autonomous-vehicle sweepstakes. Consider the saga of GM’s Cruise robotaxi subsidiary. After GM and other companies had invested billions in the venture, Cruise received permission in August 2023 to operate their robotaxis twenty-four hours a day in San Francisco.5

Just over two months later, Cruise suddenly suspended its robotaxi operations nationwide following an accident in San Francisco.6

In the wake of the controversy, it was revealed that although Cruise taxis appeared to have no driver and to operate fully autonomously, things weren’t quite that simple. Cruise founder and CEO Kyle Vogt told CNBC that “Cruise AVs are being remotely assisted (RA) 2-4% of the time on average, in complex urban environments.”7

Perhaps “2–4% of the time” doesn’t sound like much. But if you have a fleet of vehicles needing help, on average, that often, you need to have quite a few remote operators on call to be reasonably sure they can provide timely assistance. According to the New York Times, the two hundred Cruise vehicles in San Francisco “were supported by a vast operations staff, with 1.5 workers per vehicle.”8 If a highly capitalized company can pay teams of AI and robotics engineers to build vehicles whose electronics cost several times more than the vehicle itself, and the vehicles still require 1.5 workers/vehicle, the self-driving car show is not yet ready for prime time.
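Some rough staffing arithmetic – my own illustration, using only the figures quoted above – shows why a 2–4 per cent assistance rate is not a trivial overhead:

```python
# Rough staffing arithmetic for remote assistance, using the figures quoted above.
# Illustrative only: it ignores shifts, breaks, peak demand and non-driving support roles.
fleet_size = 200                      # Cruise vehicles in San Francisco, per the NYT
for assist_fraction in (0.02, 0.04):  # "2-4% of the time", per Kyle Vogt
    concurrently_assisted = fleet_size * assist_fraction
    print(f"At {assist_fraction:.0%} assistance, ~{concurrently_assisted:.0f} "
          f"vehicles need a remote operator at any given moment")

# At 2% assistance, ~4 vehicles need a remote operator at any given moment
# At 4% assistance, ~8 vehicles need a remote operator at any given moment
```

Cover that around the clock, then add supervisors, maintenance and mapping staff, and the reported ratio of 1.5 workers per vehicle starts to look unsurprising.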

In another indication of the difficulty of putting a virtual robot behind the wheel, Bloomberg News reported last month that Apple is delaying the launch of its long-rumored vehicle until 2028 at the earliest.9 Not only that, but the vehicle will boast no more than Level-2 autonomy. CleanTechnica reported that

“The prior design for the [Apple] vehicle called for a system that wouldn’t require human intervention on highways in approved parts of North America and could operate under most conditions. The more basic Level 2+ plan would require drivers to pay attention to the road and take over at any time — similar to the current standard Autopilot feature on Tesla’s EVs. In other words, it will offer no significant upgrades to existing driver assistance technology from most manufacturers available today.”10

As for self-driving truck companies still trying to tap the US market, most are focused on limited applications that avoid many of the complications involved in typical traffic. For example, Uber Freight targets the “middle mile” segment of truck journeys. In this model, human drivers deliver a trailer to a transfer hub close to a highway. A self-driving tractor then pulls the trailer on the highway, perhaps right across the country, to another transfer hub near the destination. A human driver then takes the trailer to the drop-off point.11

This model limits the self-driving segments to roads with far fewer complications than urban environments routinely present.

This simplification of the tasks inherent in driving may seem quintessentially twenty-first century. But it represents one step in a process of “de-skilling” that has been a hallmark of industrial capitalism for hundreds of years.

Jacquard looms, patented in France in 1803, were first brought to the U.S. in the 1820s. The loom is an ancestor of the first computers, using hundreds of punchcards to “program” intricate designs for the loom to produce. Photo by Maia C, licensed via CC BY-NC-ND 2.0 DEED, accessed at flickr.

Reshaping labour relations

Almost two hundred years ago computing pioneer Charles Babbage advised industrialists that “The workshops of [England] contain within them a rich mine of knowledge, too generally neglected by the wealthier classes.”12

Babbage is known today as the inventor of the Difference Engine – a working mechanical calculator that could manipulate numbers – and the Analytical Engine – a programmable general purpose computer whose prototypes Babbage worked on for many years.

But Babbage was also interested in the complex skeins of knowledge evidenced in the co-operative activities of skilled workers. In particular, he wanted to break down that working knowledge into small constituent steps that could be duplicated by machines and unskilled workers in factories.

Today writers including Matteo Pasquinelli, Brian Merchant, Dan McQuillan and Kate Crawford highlight factory industrialism as a key part of the history of artificial intelligence.

The careful division of labour not only made proto-assembly lines possible, but also allowed capitalists to pay for just the quantity of labour needed in the production process:

“The Babbage principle states that the organisation of a production process into small tasks (the division of labour) allows for the calculation and precise purchase of the quantity of labour that is necessary for each task (the division of value).”13
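A toy calculation – with my own entirely hypothetical wages and task times – shows how the principle pays off for the factory owner:

```python
# Toy illustration of the Babbage principle; all wages and hours are hypothetical.

hours_per_unit = {"design": 1.0, "assembly": 3.0, "packing": 1.0}

# Undivided craft production: one skilled worker paid the skilled rate for every hour.
skilled_wage = 30.0  # $/hour
craft_cost = skilled_wage * sum(hours_per_unit.values())

# Divided production: each task bought at the cheapest wage adequate to that task.
task_wage = {"design": 30.0, "assembly": 15.0, "packing": 10.0}
divided_cost = sum(task_wage[task] * hours for task, hours in hours_per_unit.items())

print(f"Labour cost per unit, undivided: ${craft_cost:.2f}")   # $150.00
print(f"Labour cost per unit, divided:   ${divided_cost:.2f}") # $85.00
```

The saving comes not from anyone working faster, but from paying the skilled rate only for the hours that strictly require skill.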

Babbage turned out to be far ahead of his time with his efforts to build a general-purpose computer, but his approach to the division of labour became mainstream management economics.

In the early 20th century assembly-line methods reshaped labour relations even more, thanks in part to the work of management theorist Frederick Taylor.

Taylor carefully measured and noted each movement of skilled mechanics – and used the resulting knowledge to design assembly lines in which cars could be produced at lower cost by workers with little training.

As Christopher Mims wrote, “Taylorism” is now “the dominant ideology of the modern world and the root of all attempts at increasing productivity ….” Indeed,

“While Taylorism once applied primarily to the factory floor, something fundamental has shifted in how we live and work. … the walls of the factory have dissolved. Every day, more and more of what we do, how we consume, even how we think, has become part of the factory system.”14

We can consume by using Amazon’s patented 1-Click ordering system. When we try to remember a name, we can start to type a Google search and get an answer – possibly even an appropriate answer – before we have finished typing our query. In both cases, of course, the corporations use their algorithms to capture and sort the data produced by our keystrokes or vocal requests.

But what about the remaining activities on the factory floor, in the warehouse, or on the highway? Can Taylorism meet the wildest dreams of Babbage, aided today by the latest forms of artificial intelligence? Can AI not only measure our work but replace human workers?

Yes, but only in certain circumstances. For work in which mind-body, hand-eye coordination is a key element, AI-enhanced robots have limited success. As we have seen, where a work task can be broken into discrete motions, each one repeated with little or no variation, it is sometimes economically efficient to develop and build robots. But where flexible and varied manual dexterity is required, or where judgement calls must guide the working hands to deal with frequent but unpredicted contingencies, AI robotization is not up to the job.

A team of researchers at MIT recently investigated jobs that could potentially be replaced by AI, and in particular jobs in which computer vision could play a significant role. They found that “at today’s costs U.S. businesses would choose not to automate most vision tasks that have “AI Exposure,” and that only 23% of worker wages being paid for vision tasks would be attractive to automate. … Overall, our findings suggest that AI job displacement will be substantial, but also gradual ….”15
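The economic logic behind that finding can be sketched in a few lines. The numbers below are hypothetical, and the decision rule is a simplification of the study’s framing rather than its actual model:

```python
# Simplified cost test for automating a computer-vision task.
# Hypothetical numbers; a sketch of the reasoning, not the MIT study's model.

def worth_automating(annual_wages_for_task: float,
                     system_build_cost: float,
                     annual_operating_cost: float,
                     amortization_years: int = 5) -> bool:
    """Automate only if the yearly cost of the AI system undercuts the wages it replaces."""
    annual_ai_cost = system_build_cost / amortization_years + annual_operating_cost
    return annual_ai_cost < annual_wages_for_task

# A vision task that takes up a slice of a few workers' time at one firm:
print(worth_automating(40_000, 450_000, 25_000))    # False -- cheaper to keep paying wages

# The same task aggregated across a very large firm:
print(worth_automating(400_000, 450_000, 25_000))   # True -- the system pays for itself
```

At today’s system costs, most firms still land on the “keep paying wages” side of that inequality, which is why the authors expect displacement to be substantial but gradual.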

A report released earlier this month, entitled Generative Artificial Intelligence and the Workforce, found that “Blue-collar jobs are unlikely to be automated by GenAI.” However, many job roles that are more cerebral and less hands-on stand to be greatly affected. The report says many jobs may be eliminated, at least in the short term, in categories including the following:

  • “financial analysts, actuaries and accountants [who] spend much of their time crunching numbers …;”
  • auditors, compliance officers and lawyers who do regulatory compliance monitoring;
  • software developers who do “routine tasks—such as generating code, debugging, monitoring systems and optimizing networks;”
  • administrative and human resource managerial roles.

The report also predicts that

“Given the broad potential for GenAI to replace human labor, increases in productivity will generate disproportionate returns for investors and senior employees at tech companies, many of whom are already among the wealthiest people in the U.S., intensifying wealth concentration.”16

It makes sense that if a wide range of mid-level managers and professional staff can be cut from payrolls, those at the top of the pyramid stand to gain. But even though, as the report states, blue-collar workers are unlikely to lose their jobs to AI-bots, the changing employment trends are making work life more miserable and less lucrative at lower rungs on the socio-economic ladder.

Pasquinelli puts it this way:

“The debate on the fear that AI fully replaces jobs is misguided: in the so-called platform economy, in reality, algorithms replace management and multiply precarious jobs.”17

And Crawford writes:

“Instead of asking whether robots will replace humans, I’m interested in how humans are increasingly treated like robots and what this means for the role of labor.”18

The boss from hell does not have an office

Let’s consider some of the jobs that are often discussed as prime targets for elimination by AI.

The taxi business has undergone drastic upheaval due to the rise of Uber and Lyft. These companies seem driven by a mission to solve a terrible problem: taxi drivers have too much of the nation’s wealth and venture capitalists have too little. The companies haven’t yet eliminated driving jobs, but they have indeed enriched venture capitalists while making the chauffeur-for-hire market less rewarding and less secure. It’s hard for workers to complain to or negotiate with the boss, now that the boss is an app.

How about Amazon warehouse workers? Christopher Mims describes the life of a worker policed by Amazon’s “rate”. Every movement during every warehouse worker’s day is monitored and fed into a data management system. The system comes back with a “rate” of tasks that all workers are expected to meet. Failure to match that rate puts the worker at immediate risk of firing. In fact, the lowest 25 per cent of the workers, as measured by their “rate”, are periodically dismissed. Over time, then, the rate edges higher, and a worker who may have been comfortably in the middle of the pack must keep working faster to avoid slipping into the bottom 25th percentile and thence into the ranks of the unemployed.
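The ratchet dynamic is easy to simulate. The following toy model – my own construction, with invented numbers, not Amazon’s actual system – simply dismisses the bottom quartile each quarter and re-measures the survivors plus their replacements:

```python
# Toy simulation of a "rate" that ratchets upward when the bottom 25% are always cut.
# My own invented model and numbers -- not Amazon's actual system.
import random

random.seed(1)
workers = [random.gauss(100, 10) for _ in range(1000)]   # packages/hour, 1000 workers

for quarter in range(1, 9):
    workers.sort()
    cutoff = workers[len(workers) // 4]        # the rate you must beat to keep your job
    survivors = workers[len(workers) // 4:]    # bottom quartile dismissed
    new_hires = [random.gauss(100, 10) for _ in range(len(workers) - len(survivors))]
    workers = survivors + new_hires
    print(f"Quarter {quarter}: required rate is roughly {cutoff:.0f} packages/hour")
```

Because each purge removes the slowest quartile while replacements are drawn from the same pool, the threshold drifts upward quarter after quarter even though no individual worker has become more productive – exactly the treadmill described above.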

“The company’s relentless measurement, drive for efficiency, loose hiring standards, and moving targets for hourly rates,” Mims writes, “are the perfect system for ingesting as many people as possible and discarding all but the most physically fit.”19 Since the style of work lends itself to repetitive strain injuries, and since there are no paid sick days, even very physically fit warehouse employees are always at risk of losing their jobs.

Over the past 40 years the work of a long-distance trucker hasn’t changed much, but the work conditions and remuneration have changed greatly. Mims writes, “The average trucker in the United States made $38,618 a year in 1980, or $120,000 in 2020 dollars. In 2019, the average trucker made about $45,000 a year – a 63 percent decrease in forty years.”

There are many reasons for that redistribution of income out of the pockets of these workers. Among them is the computerization of a swath of supervisory tasks. In Mims’ words, “Drivers must meet deadlines that are as likely to be set by an algorithm and an online bidding system as a trucking company dispatcher or an account handler at a freight-forwarding company.”

Answering to a human dispatcher or payroll officer isn’t always pleasant or fair, of course – but at least there is the possibility of a human relationship with a human supervisor. That possibility is gone when the major strata of middle management are replaced by AI bots.

Referring to Amazon’s 25th percentile rule and steadily rising “rate”, Mims writes, “Management theorists have known for some time that forcing bosses to grade their employees on a curve is a recipe for low morale and unnecessarily high turnover.” But low morale doesn’t matter among managers who are just successions of binary digits. And high turnover of warehouse staff isn’t a problem for companies like Amazon – little is spent on training, new workers are easy enough to find, and the short average duration of employment makes it much harder for workers to get together in union organizing drives.

Uber drivers, many long-haul truckers, and Amazon packagers have this in common: their cold and heartless bosses are nowhere to be found; they exist only as algorithms. Management-by-AI, Dan McQuillan says, results in “an amplification of casualized and precarious work.”20

Management-by-AI could be seen, then, as just another stage in the development of a centuries-old “counterfeit person” – the legally recognized “person” that is the modern corporation. In the coinage of Charlie Stross, for centuries we’ve been increasingly governed by “old, slow AI”21 – the thinking mode of the corporate personage. We’ll return to the theme of “slow AI” and “fast AI” in a future post.


Notes

1 Antonio Gramsci, The Prison Notebooks, 1932. Quoted in The Eye of the Master: A Social History of Artificial Intelligence, by Matteo Pasquinelli, Verso, 2023.

2 Christopher Mims, Arriving Today: From Factory to Front Door – Why Everything Has Changed About How and What We Buy, Harper Collins, 2021; reviewed here.

3 Tom Chivers, “How DeepMind Is Reinventing the Robot,” IEEE Spectrum, 27 September 2021.

4 Christopher Mims, Arriving Today, 2021, page 143.

5 Johana Bhuiyan, “San Francisco to get round-the-clock robo taxis after controversial vote,” The Guardian, 11 Aug 2023.

6 David Shepardson, “GM Cruise unit suspends all driverless operations after California ban,” Reuters, 27 October 2023.

7 Lora Kolodny, “Cruise confirms robotaxis rely on human assistance every four to five miles,” CNBC, 6 Nov 2023.

8 Tripp Mickle, Cade Metz and Yiwen Lu, “G.M.’s Cruise Moved Fast in the Driverless Race. It Got Ugly.” New York Times, 3 November 2023.

9 Mark Gurman, “Apple Dials Back Car’s Self-Driving Features and Delays Launch to 2028”, Bloomberg, 23 January 2024.

10 Steve Hanley, “Apple Car Pushed Back To 2028. Autonomous Driving? Forget About It!” CleanTechnica.com, 27 January 2024.

11 Marcus Law, “Self-driving trucks leading the way to an autonomous future,” Technology, 6 October 2023.

12 Charles Babbage, On the Economy of Machinery and Manufactures, 1832; quoted in Pasquinelli, The Eye of the Master, 2023.

13 Pasquinelli, The Eye of the Master.

14 Christopher Mims, Arriving Today, 2021.

15 Neil Thompson et al., “Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?”, MIT FutureTech, 22 January 2024.

16 Gad Levanon, Generative Artificial Intelligence and the Workforce, The Burning Glass Institute, 1 February 2024.

17 Pasquinelli, The Eye of the Master.

18 Crawford, Kate, Atlas of AI, Yale University Press, 2021.

19 Christopher Mims, Arriving Today, 2021.

20 Dan McQuillan, Resisting AI: An Anti-Fascist Approach to Artificial Intelligence, Bristol University Press, 2022.

21 Charlie Stross, “Dude, you broke the future!”, Charlie’s Diary, December 2017.

 


Image at top of post: “Mechanically controlled eyes see the controlled eyes in the mirror looking back”, photo from “human (un)limited”, 2019, a joint exhibition project of Hyundai Motorstudio and Ars Electronica, licensed under CC BY-NC-ND 2.0 DEED, accessed via flickr.

“Warning. Data Inadequate.”

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part three
Also published on Resilience.

“The Navy revealed the embryo of an electronic computer today,” announced a New York Times article, “that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”1

A few paragraphs into the article, “the Navy” was quoted as saying the new “perceptron” would be the first non-living mechanism “capable of receiving, recognizing and identifying its surroundings without any human training or control.”

This example of AI hype wasn’t the first and won’t be the last, but it is a bit dated. To be precise, the Times story was published on July 8, 1958.

Due to its incorporation of a simple “neural network” loosely analogous to the human brain, the perceptron of 1958 is recognized as a forerunner of today’s most successful “artificial intelligence” projects – from facial recognition systems to text extruders like ChatGPT. It’s worth considering this early device in some detail.

In particular, what about the claim that the perceptron could identify its surroundings “without any human training or control”? More than sixty years on, the descendants of the perceptron have “learned” a great deal, and can now identify, describe and even transform millions of images. But that “learning” has involved not only billions of transistors and trillions of watt-hours, but also millions of hours of labour in “human training and control.”

Seeing is not perceiving

When we look at a real-world object – for example, a tree – sensors in our eyes pass messages through a network of neurons and through various specialized areas of the brain. Eventually, assuming we are old enough to have learned what a tree looks like, and both our eyes and the required parts of our brains are functioning well, we might say “I see a tree.” In short, our eyes see a configuration of light, our neural network processes that input, and the result is that our brains perceive and identify a tree.

Accomplishing the perception with electronic computing, it turns out, is no easy feat.

The perceptron invented by Dr. Frank Rosenblatt in the 1950s used a 20 pixel by 20 pixel image sensor, paired with an IBM 704 computer. Let’s look at some simple images, and how a perceptron might process the data to produce a perception. 

Images created by the author.

In the illustration at left above, what the camera “sees” at the most basic level is a column of pixels that are “on”, with all the other pixels “off”. However, if we train the computer by giving it nothing more than labelled images of the numerals from 0 to 9, the perceptron can recognize the input as matching the numeral “1”. If we then add training data in the form of labelled images of the characters in the Latin-script alphabet in a sans serif font, the perceptron can determine that it matches, equally well, the numeral “1”, the lower-case letter “l”, or an upper-case letter “I”.

The figure at right is considerably more complex. Here our perceptron is still working with a low-resolution grid, but pixels can be not only “on” or “off” – black or white – but various shades of grey. To complicate things further, suppose more training data has been added, in the form of hand-written letters and numerals, plus printed letters and numerals in an oblique sans serif font. The perceptron might now determine the figure is a numeral “1” or a lower-case “l” or upper-case “I”, either hand-written or printed in an oblique font, each with an equal probability. The perceptron is learning how to be an optical character recognition (OCR) system, though to be very good at the task it would need the ability to use context to rank the probabilities of a numeral “1”, a lower-case “l”, or an upper-case “I”.
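For readers who like to see the mechanics, here is a minimal modern sketch of the kind of computation described above. It is purely illustrative: Rosenblatt’s machine was analogue hardware with randomly wired “association units”, not tidy Python, and the class list here is invented for the example.

```python
# How a perceptron-style classifier "sees" the 20 x 20 grid: one weight per pixel per
# class, a weighted sum for each class, and the highest score wins.
# Purely illustrative; not a reconstruction of Rosenblatt's hardware.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 20 * 20                      # the 20 x 20 photocell "retina"
classes = ["1", "l", "I", "7", "T"]     # a handful of labels from hypothetical training data

# One weight vector per class. In a trained perceptron these are learned from
# labelled examples; here they start as small random values.
weights = {c: rng.normal(0.0, 0.01, n_pixels) for c in classes}

def class_scores(image: np.ndarray) -> dict:
    """Weighted sum of pixel activations for each candidate class."""
    x = image.reshape(-1)
    return {c: float(w @ x) for c, w in weights.items()}

# A single vertical bar of "on" pixels, like the left-hand figure above.
bar = np.zeros((20, 20))
bar[:, 10] = 1.0
print(class_scores(bar))
# A perceptron trained only on straight sans-serif glyphs would give "1", "l" and "I"
# nearly identical scores for this input -- the ambiguity described in the text.
```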

The possibilities multiply infinitely when we ask the perceptron about real-world objects. In the figure below, a bit of context, in the form of a visual ground, is added to the images. 

Images created by the author.

Depending, again, on the labelled training data already input to the computer, the perceptron may “see” the image at left as a tall tower, a bare tree trunk, or the silhouette of a person against a bright horizon. The perceptron might see, on the right, a leaning tree or a leaning building – perhaps the Leaning Tower of Pisa. With more training images and with added context in the input image – shapes of other buildings, for example – the perceptron might output with high statistical confidence that the figure is actually the Leaning Tower of Leeuwarden.

Today’s perceptrons can and do, with widely varying degrees of accuracy and reliability, identify and name faces in crowds, label the emotions shown by someone in a recorded job interview, analyse images from a surveillance drone and indicate that a person’s activities and surroundings match the “signature” of terrorist operations, or identify a crime scene by comparing an unlabelled image with photos of known settings from around the world. Whether right or wrong, the systems’ perceptions sometimes have critical consequences: people can be monitored, hired, fired, arrested – or executed in an instant by a US Air Force Reaper drone.

As we will discuss below, these capabilities have been developed with the aid of millions of hours of poorly-paid or unpaid human labour.

The Times article of 1958, however, described Dr. Rosenblatt’s invention this way: “the machine would be the first device to think as the human brain. As do human beings, Perceptron will make mistakes at first, but will grow wiser as it gains experience ….” The kernel of truth in that claim lies in the concept of a neural network.

Rosenblatt told the Times reporter “he could explain why the machine learned only in highly technical terms. But he said the computer had undergone a ‘self-induced change in the wiring diagram.’”

I can empathize with that Times reporter. I still hope to find a person sufficiently intelligent to explain the machine learning process so clearly that even a simpleton like me can fully understand. However, New Yorker magazine writers in 1958 made a good attempt. As quoted in Matteo Pasquinelli’s book The Eye of the Master, the authors wrote:

“If a triangle is held up to the perceptron’s eye, the association units connected with the eye pick up the image of the triangle and convey it along a random succession of lines to the response units, where the image is registered. The next time the triangle is held up to the eye, its image will travel along the path already travelled by the earlier image. Significantly, once a particular response has been established, all the connections leading to that response are strengthened, and if a triangle of a different size and shape is held up to the perceptron, its image will be passed along the track that the first triangle took.”2

With hundreds, thousands, millions and eventually billions of steps in the perception process, the computer gets better and better at interpreting visual inputs.
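In modern notation, the “strengthening” the New Yorker writers describe is the perceptron learning rule. The sketch below restates it in plain Python – a paraphrase of the process, not a reconstruction of Rosenblatt’s wiring:

```python
# Classic perceptron learning rule: when the machine responds wrongly, strengthen the
# connections leading to the correct response and weaken those that led to the error.
# A modern paraphrase of the process described above, not Rosenblatt's circuitry.

def train_step(pixels, label, weights, lr=0.1):
    """pixels: flat list of 0/1 values; weights: dict mapping each class to a weight list."""
    scores = {c: sum(w_i * x_i for w_i, x_i in zip(w, pixels)) for c, w in weights.items()}
    predicted = max(scores, key=scores.get)
    if predicted != label:
        for i, x_i in enumerate(pixels):
            weights[label][i] += lr * x_i      # strengthen the path to the right answer
            weights[predicted][i] -= lr * x_i  # weaken the path that was wrongly taken
    return predicted
```

Repeat that step over enough labelled images and the weights settle into a pattern in which familiar shapes reliably trigger the responses they were paired with in training – Rosenblatt’s “self-induced change in the wiring diagram.”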

Yet this improvement in machine perception comes at a high ecological cost. A September 2021 article entitled “Deep Learning’s Diminishing Returns” explained:

“[I]n 2012 AlexNet, the model that first showed the power of training deep-learning systems on graphics processing units (GPUs), was trained for five to six days using two GPUs. By 2018, another model, NASNet-A, had cut the error rate of AlexNet in half, but it used more than 1,000 times as much computing to achieve this.”

The authors concluded that, “Like the situation that Rosenblatt faced at the dawn of neural networks, deep learning is today becoming constrained by the available computational tools.”3
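A rough conversion of the quoted figures – my own arithmetic, ignoring the large differences between 2012 and 2018 GPU hardware – gives a sense of the scale involved:

```python
# Rough scale of "more than 1,000 times as much computing" (my own back-of-envelope
# arithmetic; it ignores the large differences between 2012 and 2018 GPU hardware).
alexnet_gpu_days = 2 * 5.5                      # two GPUs for five to six days
nasnet_gpu_days = alexnet_gpu_days * 1000
print(f"AlexNet:  ~{alexnet_gpu_days:.0f} GPU-days")
print(f"NASNet-A: ~{nasnet_gpu_days:,.0f} GPU-days, i.e. ~{nasnet_gpu_days / 365:.0f} GPU-years")
# AlexNet:  ~11 GPU-days
# NASNet-A: ~11,000 GPU-days, i.e. ~30 GPU-years
```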

The steep increase in the computing demands of AI is illustrated in a graph by Anil Ananthaswamy.

“The Drive to Bigger AI Models” shows that AI models used for language and image generation have grown in size by several orders of magnitude since 2010.  Graphic from “In AI, is Bigger Better?”, by Anil Ananthaswamy, Nature, 9 March 2023.

Behold the Mechanical Turk

In the decades since Rosenblatt built the first perceptron, there were periods when progress in this field seemed stalled. Additional theoretical advances in machine learning, a many orders-of-magnitude increase in computer processing capability, and vast quantities of training data were all prerequisites for today’s headline-making AI systems. In Atlas of AI, Kate Crawford gives a fascinating account of the struggle to acquire that data.

Up to the 1980s artificial intelligence researchers didn’t have access to large quantities of digitized text or digitized images, and the type of machine learning that makes news today was not yet possible. The lengthy antitrust proceedings against IBM provided an unexpected boost to AI research, in the form of a hundred million digital words from legal proceedings. In the 1990s, court proceedings against Enron collected more than half a million email messages sent among Enron employees. This provided text exchanges in everyday English, though Crawford notes that the wording “represented the gender, race, and professional skews of those 158 workers.”

And the data floodgates were just beginning to open. As Crawford describes the change,

“The internet, in so many ways, changed everything; it came to be seen in the AI research field as something akin to a natural resource, there for the taking. As more people began to upload their images to websites, to photo-sharing services, and ultimately to social media platforms, the pillaging began in earnest. Suddenly, training sets could reach a size that scientists in the 1980s could never have imagined.”4

It took two decades for that data flood to become a tsunami. Even then, although images were often labelled and classified for free by social media users, the labels and classifications were not always consistent or even correct. There remained a need for humans to look at millions of images and create or check the labels and classifications.

Developers of the image database ImageNet collected 14 million images and eventually organized them into over twenty thousand categories. They initially hired students in the US for labelling work, but concluded that even at $10/hour, this work force would quickly exhaust the budget.

Enter the Mechanical Turk.

The original Mechanical Turk was a chess-playing scam set up in 1770 by a Hungarian inventor. An apparently autonomous mechanical human model, dressed in the Ottoman fashion of the day, moved chess pieces and could beat most human chess players. Decades went by before it was revealed that a skilled human chess player was concealed inside the machine for each exhibition, controlling all the motions.

In the early 2000s, Amazon developed a web platform by which AI developers, among others, could contract gig workers for many tasks that were ostensibly being done by artificial intelligence. These tasks might include, for example, labelling and classifying photographic images, or making judgements about outputs from AI-powered chat experiments. In a rare fit of honesty, Amazon labelled the process “artificial artificial intelligence”5 and launched its service, Amazon Mechanical Turk, in 2005.

screen shot taken 3 February 2024, from opening page at mturk.com.

Crawford writes,

“ImageNet would become, for a time, the world’s largest academic user of Amazon’s Mechanical Turk, deploying an army of piecemeal workers to sort an average of fifty images a minute into thousands of categories.”6

Chloe Xiang described this organization of work for Motherboard in an article entitled “AI Isn’t Artificial or Intelligent”:

“[There is a] large labor force powering AI, doing jobs that include looking through large datasets to label images, filter NSFW content, and annotate objects in images and videos. These tasks, deemed rote and unglamorous for many in-house developers, are often outsourced to gig workers and workers who largely live in South Asia and Africa ….”7

Laura Forlano, Associate Professor of Design at Illinois Institute of Technology, told Xiang “what human labor is compensating for is essentially a lot of gaps in the way that the systems work.”

Xiang concluded,

“Like other global supply chains, the AI pipeline is greatly imbalanced. Developing countries in the Global South are powering the development of AI systems by doing often low-wage beta testing, data annotating and labeling, and content moderation jobs, while countries in the Global North are the centers of power benefiting from this work.”

In a study published in late 2022, Kelle Howson and Hannah Johnston described why “platform capitalism”, as embodied in Mechanical Turk, is an ideal framework for exploitation, given that workers bear nearly all the costs while contractors take no responsibility for working conditions. The platforms are able to enroll workers from many countries in large numbers, so that workers are constantly low-balling to compete for ultra-short-term contracts. Contractors are also able to declare that the work submitted is “unsatisfactory” and therefore will not be paid, knowing the workers have no effective recourse and can be replaced by other workers for the next task. Workers are given an estimated “time to complete” before accepting a task, but if the work turns out to require two or three times as many hours, the workers are still only paid for the hours specified in the initial estimate.8

A survey of 700 cloudwork employees (or “independent contractors” in the fictive lingo of the gig work platforms) found that about 34% of the time they spent on these platforms was unpaid. “One key outcome of these manifestations of platform power is pervasive unpaid labour and wage theft in the platform economy,” Howson and Johnston wrote.9 From the standpoint of major AI ventures at the top of the extraction pyramid, pervasive wage theft is not a bug in the system, it is a feature.
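The effect of that unpaid share on real earnings is simple to calculate. The hourly rate below is hypothetical; only the 34 per cent figure comes from the survey:

```python
# Effective hourly pay when about a third of platform time earns nothing.
# The advertised rate is hypothetical; the 34% unpaid share is the survey's figure.
advertised_rate = 10.00     # $/hour for paid tasks (hypothetical)
unpaid_share = 0.34         # share of total platform hours that go unpaid

effective_rate = advertised_rate * (1 - unpaid_share)
print(f"Effective pay: ${effective_rate:.2f}/hour")   # $6.60/hour
```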

The apparently dazzling brilliance of AI-model creators and semi-conductor engineers gets the headlines in western media. But without low-paid or unpaid work by employees in the Global South, “AI systems won’t function,” Crawford writes. “The technical AI research community relies on cheap, crowd-sourced labor for many tasks that can’t be done by machines.”10

Whether vacuuming up data that has been created by the creative labour of hundreds of millions of people, or relying on tens of thousands of low-paid workers to refine the perception process for reputedly super-intelligent machines, the AI value chain is another example of extractivism.

“AI image and text generation is pure primitive accumulation,” James Bridle writes, “expropriation of labour from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners.”11

“All seven emotions”

New AI implementations don’t usually start with a clean slate, Crawford says – they typically borrow classification systems from earlier projects.

“The underlying semantic structure of ImageNet,” Crawford writes, “was imported from WordNet, a database of word classifications first developed at Princeton University’s Cognitive Science Laboratory in 1985 and funded by the U.S. Office of Naval Research.”12

But classification systems are unavoidably political when it comes to slotting people into categories. In the ImageNet groupings of pictures of humans, Crawford says, “we see many assumptions and stereotypes, including race, gender, age, and ability.”

She explains,

“In ImageNet the category ‘human body’ falls under the branch Natural Object → Body → Human Body. Its subcategories include ‘male body,’ ‘person,’ ‘juvenile body,’ ‘adult body,’ and ‘female body.’ The ‘adult body’ category contains the subclasses ‘adult female body’ and ‘adult male body.’ There is an implicit assumption here that only ‘male’ and ‘female’ bodies are recognized as ‘natural.’”13

Readers may have noticed that US military agencies were important funders of some key early AI research: Frank Rosenblatt’s perceptron in the 1950s, and the WordNet classification scheme in the 1980s, were both funded by the US Navy.

For the past six decades, the US Department of Defense has also been interested in systems that might detect and measure the movements of muscles in the human face, and in so doing, identify emotions. Crawford writes, “Once the theory emerged that it is possible to assess internal states by measuring facial movements and the technology was developed to measure them, people willingly adopted the underlying premise. The theory fit what the tools could do.”14

Several major corporations now market services with roots in this military-funded research into machine recognition of human emotion – even though, as many people have insisted, the emotions people express on their faces don’t always match the emotions they are feeling inside.

Affectiva is a corporate venture spun out of the Media Lab at Massachusetts Institute of Technology. On their website they claim “Affectiva created and defined the new technology category of Emotion AI, and evangelized its many uses across industries.” The opening page of affectiva.com spins their mission as “Humanizing Technology with Emotion AI.”

Who might want to contract services for “Emotion AI”? Media companies, perhaps, want to “optimize content and media spend by measuring consumer emotional responses to videos, ads, movies and TV shows – unobtrusively and at scale.” Auto insurance companies, perhaps, might want to keep their (mechanical) eyes on you while you drive: “Using in-cabin cameras our AI can detect the state, emotions, and reactions of drivers and other occupants in the context of a vehicle environment, as well as their activities and the objects they use. Are they distracted, tired, happy, or angry?”

Affectiva’s capabilities, the company says, draw on “the world’s largest emotion database of more than 80,000 ads and more than 14.7 million faces analyzed in 90 countries.”15 As reported by The Guardian, the videos are screened by workers in Cairo, “who watch the footage and translate facial expressions to corresponding emotions.”16

There is a slight problem: there is no clear and generally accepted definition of an emotion, nor general agreement on just how many emotions there might be. But “emotion AI” companies don’t let those quibbles get in the way of business.

Amazon’s Rekognition service announced in 2019 “we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’)” – but they were proud to have “added a new emotion: ‘Fear’.”17

Facial- and emotion-recognition systems, with deep roots in military and intelligence agency research, are now widely employed not only by these agencies but also by local police departments. Their use is not confined to governments: they are used in the corporate world for a wide range of purposes. And their production and operation likewise crosses public-private lines; though much of the initial research was government-funded, the commercialization of the technologies today allows corporate interests to sell the resulting services to public and private clients around the world.

What is the likely impact of these AI-aided surveillance tools? Dan McQuillan sees it this way:

“We can confidently say that the overall impact of AI in the world will be gendered and skewed with respect to social class, not only because of biased data but because engines of classification are inseparable from systems of power.”18

In our next installment we’ll see that biases in data sources and classification schemes are reflected in the outputs of the GPT large language model.


Image at top of post: The Senture computer server facility in London, Ky, on July 14, 2011, photo by US Department of Agriculture, public domain, accessed on flickr.

Title credit: the title of this post quotes a lyric of “Data Inadequate”, from the 1998 album Live at Glastonbury by Banco de Gaia.


Notes

1 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

2 “Rival”, in The New Yorker, by Harding Mason, D. Stewart, and Brendan Gill, November 28, 1958, synopsis here. Quoted by Matteo Pasquinelli in The Eye of the Master: A Social History of Artificial Intelligence, Verso Books, October 2023, page 137.

3 “Deep Learning’s Diminishing Returns,” by Neil C. Thompson, Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso, IEEE Spectrum, 24 September 2021.

4 Crawford, Kate, Atlas of AI, Yale University Press, 2021.

5 This phrase is cited by Elizabeth Stevens and attributed to Jeff Bezos, in “The mechanical Turk: a short history of ‘artificial artificial intelligence’”, Cultural Studies, 08 March 2022.

6 Crawford, Atlas of AI.

7 Chloe Xiang, “AI Isn’t Artificial or Intelligent: How AI innovation is powered by underpaid workers in foreign countries,” Motherboard, 6 December 2022.

8 Kelle Howson and Hannah Johnston, “Unpaid labour and territorial extraction in digital value networks,” Global Network, 26 October 2022.

9 Howson and Johnston, “Unpaid labour and territorial extraction in digital value networks.”

10 Crawford, Atlas of AI.

11 James Bridle, “The Stupidity of AI”, The Guardian, 16 Mar 2023.

12 Crawford, Atlas of AI.

13 Crawford, Atlas of AI.

14 Crawford, Atlas of AI.

15 Quotes from Affectiva taken from www.affectiva.com on 5 February 2024.

16 Oscar Schwarz, “Don’t look now: why you should be worried about machines reading your emotions,” The Guardian, 6 March 2019.

17 From Amazon Web Services Rekognition website, accessed on 5 February 2024; italics added.

18 Dan McQuillan, “Post-Humanism, Mutual Aid,” in AI for Everyone? Critical Perspectives, University of Westminster Press, 2021.

Bodies, Minds, and the Artificial Intelligence Industrial Complex

Also published on Resilience.

This year may or may not be the year the latest wave of AI-hype crests and subsides. But let’s hope this is the year mass media slow their feverish speculation about the future dangers of Artificial Intelligence, and focus instead on the clear and present, right-now dangers of the Artificial Intelligence Industrial Complex.

Lost in most sensational stories about Artificial Intelligence is that AI does not and can not exist on its own, any more than other minds, including human minds, can exist independent of bodies. These bodies have evolved through billions of years of coping with physical needs, and intelligence is linked to and inescapably shaped by these physical realities.

What we call Artificial Intelligence is likewise shaped by physical realities. Computing infrastructure necessarily reflects the properties of physical materials that are available to be formed into computing machines. The infrastructure is shaped by the types of energy and the amounts of energy that can be devoted to building and running the computing machines. The tasks assigned to AI reflect those aspects of physical realities that we can measure and abstract into “data” with current tools. Last but certainly not least, AI is shaped by the needs and desires of all the human bodies and minds that make up the Artificial Intelligence Industrial Complex.

As Kate Crawford wrote in Atlas of AI,

“AI can seem like a spectral force — as disembodied computation — but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.”1

The metaphors we use for high-tech phenomena influence how we think of these phenomena. Take, for example, “the Cloud”. When we store a photo “in the Cloud” we imagine that photo as floating around the ether, simultaneously everywhere and nowhere, unconnected to earth-bound reality.

But as Steven Gonzalez Monserrate reminded us, “The Cloud is Material”. The Cloud is tens of thousands of kilometers of data cables, tens of thousands of server CPUs in server farms, hydroelectric and wind-turbine and coal-fired and nuclear generating stations, satellites, cell-phone towers, hundreds of millions of desktop computers and smartphones, plus all the people working to make and maintain the machinery: “the Cloud is not only material, but is also an ecological force.”2

It is possible to imagine “the Cloud” without an Artificial Intelligence Industrial Complex, but the AIIC, at least in its recent news-making forms, could not exist without the Cloud.

The AIIC relies on the Cloud as a source of massive volumes of data used to train Large Language Models and image recognition models. It relies on the Cloud to sign up thousands of low-paid gig workers for work on crucial tasks in refining those models. It relies on the Cloud to rent out computing power to researchers and to sell AI services. And it relies on the Cloud to funnel profits into the accounts of the small number of huge corporations at the top of the AI pyramid.

So it’s crucial that we reimagine both the Cloud and AI to escape from mythological nebulous abstractions, and come to terms with the physical, energetic, flesh-and-blood realities. In Crawford’s words,

“[W]e need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it.”3

Through a series of posts we’ll take a deeper look at key aspects of the Artificial Intelligence Industrial Complex, including:

  • the AI industry’s voracious and growing appetite for energy and physical resources;
  • the AI industry’s insatiable need for data, the types and sources of data, and the continuing reliance on low-paid workers to make that data useful to corporations;
  • the biases that come with the data and with the classification of that data, which both reflect and reinforce current social inequalities;
  • AI’s deep roots in corporate efforts to measure, control, and more effectively extract surplus value from human labour;
  • the prospect of “superintelligence”, or an AI that is capable of destroying humanity while living on without us;
  • the results of AI “falling into the wrong hands” – that is, into the hands of the major corporations that dominate AI, and which, as part of our corporate-driven economy, are driving straight towards the cliff of ecological suicide.

One thing this series will not attempt is providing a definition of “Artificial Intelligence”, because there is no workable single definition. The phrase “artificial intelligence” has come into and out of favour as different approaches prove more or less promising, and many computer scientists in recent decades have preferred to avoid the phrase altogether. Different programming and modeling techniques have shown useful benefits and drawbacks for different purposes, but it remains debatable whether any of these results are indications of intelligence.

Yet “artificial intelligence” keeps its hold on the imaginations of the public, journalists, and venture capitalists. Matteo Pasquinelli cites a popular Twitter quip that sums it up this way:

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”4

Computers, be they boxes on desktops or the phones in pockets, are the most complex of tools to come into common daily use. And the computer network we call the Cloud is the most complex socio-technical system in history. It’s easy to become lost in the detail of any one of a billion parts in that system, but it’s important to also zoom out from time to time to take a global view.

The Artificial Intelligence Industrial Complex sits at the apex of a pyramid of industrial organization. In the next installment we’ll look at the vast physical needs of that complex.


Notes

1 Kate Crawford, Atlas of AI, Yale University Press, 2021.

2 Steven Gonzalez Monserrate, “The Cloud is Material: Environmental Impacts of Computation and Data Storage,” MIT Schwarzman College of Computing, January 2022.

3 Crawford, Atlas of AI, Yale University Press, 2021.

4 Quoted by Matteo Pasquinelli in “How A Machine Learns And Fails – A Grammar Of Error For Artificial Intelligence”, Spheres, November 2019.


Image at top of post: Margaret Henschel in Intel wafer fabrication plant, photo by Carol M. Highsmith, part of a collection placed in the public domain by the photographer and donated to the Library of Congress.

A road map that misses some turns

A review of No Miracles Needed

Also published on Resilience.

Mark Jacobson’s new book, greeted with hosannas by some leading environmentalists, is full of good ideas – but the whole is less than the sum of its parts.

No Miracles Needed, by Mark Z. Jacobson, published by Cambridge University Press, Feb 2023. 437 pages.

The book is No Miracles Needed: How Today’s Technology Can Save Our Climate and Clean Our Air (Cambridge University Press, Feb 2023).

Jacobson’s argument is both simple and sweeping: We can transition our entire global economy to renewable energy sources, using existing technologies, fast enough to reduce annual carbon dioxide emissions at least 80% by 2030, and 100% by 2050. Furthermore, we can do all this while avoiding any major economic disruption such as a drop in annual GDP growth, a rise in unemployment, or any drop in creature comforts. But wait – there’s more! In so doing, we will also completely eliminate pollution.

Just don’t tell Jacobson that this future sounds miraculous.

The energy transition technologies we need – based on Wind, Water and Solar power, abbreviated to WWS – are already commercially available, Jacobson insists. He contrasts the technologies he favors with “miracle technologies” such as geoengineering, Carbon Capture, Utilization and Storage (CCUS), or Direct Air Capture of carbon dioxide (DAC). These latter technologies, he argues, are unneeded, unproven, expensive, and will take far too long to implement at scale; we shouldn’t waste our time on such schemes.

The final chapter helps to understand both the hits and misses of the previous chapters. In “My Journey”, a teenage Jacobson visits the smog-cloaked cities of southern California and quickly becomes aware of the damaging health effects of air pollution:

“I decided then and there, that when I grew up, I wanted to understand and try to solve this avoidable air pollution problem, which affects so many people. I knew what I wanted to do for my career.” (No Miracles Needed, page 342)

His early academic work focused on the damages of air pollution to human health. Over time, he realized that the problem of global warming emissions was closely related. The increasingly sophisticated computer models he developed were designed to elucidate the interplay between greenhouse gas emissions, and the particulate emissions from combustion that cause so much sickness and death.

These modeling efforts won increasing recognition and attracted a range of expert collaborators. Over the past 20 years, Jacobson’s work moved beyond academia into political advocacy. “My Journey” describes the growth of an organization capable of developing detailed energy transition plans for presentation to US governors, senators, and CEOs of major tech companies. Eventually that led to Jacobson’s publication of transition road maps for states, countries, and the globe – road maps that have been widely praised and widely criticized.

In my reading, Jacobson’s personal journey casts light on key features of No Miracles Needed in two ways. First, there is a singular focus on air pollution, to the omission or dismissal of other types of pollution. Second, it’s not likely Jacobson would have received repeat audiences with leading politicians and business people if he challenged the mainstream orthodox view that GDP can and must continue to grow.

Jacobson’s road map, then, is based on the assumption that all consumer products and services will continue to be produced in steadily growing quantities – but they’ll all be WWS based.

Does he prove that a rapid transition is a realistic scenario? Not in this book.

Hits and misses

Jacobson gives us brief but marvelously lucid descriptions of many WWS generating technologies, plus storage technologies that will smooth the intermittent supply of wind- and sun-based energy. He also goes into considerable detail about the chemistry of solar panels, the physics of electricity generation, and the amount of energy loss associated with each type of storage and transmission.

These sections are aimed at a lay readership and they succeed admirably. There is more background detail, however, than is needed to explain the book’s central thesis.

The transition road map, on the other hand, is not explained in much detail. There are many references to the scientific papers in which Jacobson outlines his road maps. Readers can take his word that the model is a suitable representation, or they can track down those articles in academic journals – but they won’t find the needed details in this book.

Jacobson explains why, at the level of a device such as a car or a heat pump, electricity produces motion or heat far more efficiently than an internal combustion engine or a gas furnace does. Less convincingly, he argues that electric technologies are far more energy-efficient than combustion for the production of industrial heat – while nevertheless conceding that some of the WWS technologies needed for industrial heat are, at best, in prototype stages.
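The device-level claim is easy to check against rough, commonly cited efficiency figures. The comparison below uses illustrative round values of my own choosing – not numbers taken from the book – but the general pattern holds:

# Illustrative end-use efficiency comparison; the percentages are typical
# round values, not figures from No Miracles Needed.
def useful_output(input_kwh, efficiency):
    """Useful motion or heat delivered per unit of energy supplied."""
    return input_kwh * efficiency

ice_car      = useful_output(100, 0.25)  # roughly a quarter of gasoline energy becomes motion
electric_car = useful_output(100, 0.85)  # most battery energy reaches the wheels
gas_furnace  = useful_output(100, 0.95)  # a high-efficiency furnace turns ~95% of fuel energy into heat
heat_pump    = useful_output(100, 3.0)   # COP of ~3: about 3 kWh of heat delivered per kWh of electricity

print(f"Motion from 100 kWh: {ice_car} (ICE car) vs {electric_car} (electric car)")
print(f"Heat from 100 kWh: {gas_furnace} (gas furnace) vs {heat_pump} (heat pump)")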

Yet Jacobson expresses serene confidence that hard-to-electrify technologies, including some industrial processes and long-haul aviation, will successfully transition to WWS processes – perhaps including green hydrogen fuel cells, but not hydrogen combustion – by 2035.

The confidence in complex global projections is often jarring. For example, Jacobson tells us repeatedly that the fully WWS energy system of 2050 “reduces end-use energy requirements by 56.4 percent” (pages 271, 275).1 The expressed precision notwithstanding, nobody yet knows the precise mix of generation, storage, and transmission types – each with its own degree of energy efficiency – that will constitute a future global WWS system. What we should take from Jacobson’s statements is that, based on the subset of factors and assumptions he has included in his model – drawn from an almost infinitely complex global energy ecosystem – the calculated outcome is a 56 percent reduction in end-use energy.

Canada’s Premiers visit Muskrat Falls dam construction site, 2015. Photo courtesy of Government of Newfoundland and Labrador; CC BY-NC-ND 2.0 license, via Flickr.

Also jarring is the almost total disregard of any type of pollution other than that which comes from fossil fuel combustion. Jacobson does briefly mention the particles that grind off the tires of all vehicles, including typically heavier EVs. But rather than concede that these particles are toxic and can harm human and ecosystem health, he merely notes that the relatively large particles “do not penetrate so deep into people’s lungs as combustion particles do.” (page 49)

He claims, without elaboration, that “Environmental damage due to lithium mining can be averted almost entirely.” (page 64) Near the end of the book, he states that “In a 2050 100 percent WWS world, WWS energy private costs equal WWS energy social costs because WWS eliminates all health and climate costs associated with energy.” (page 311; emphasis mine)

In a culture which holds continual economic growth to be sacred, it would be convenient to believe that business-as-usual can continue through 2050, with the only change required being a switch to WWS energy.

Imagine, then, that climate-changing emissions were the only critical flaw in the global economic system. Given that assumption, is Jacobson’s timetable for transition plausible?

No. First, Jacobson proposes that “by 2022”, no new power plants be built that use coal, methane, oil or biomass combustion; and that all new appliances for heating, drying and cooking in the residential and commercial sectors “should be powered by electricity, direct heat, and/or district heating.” (page 319) That deadline has passed, and products that rely on combustion continue to be made and sold. It is a mystery why Jacobson or his editors would retain a 2022 transition deadline in a book slated for publication in 2023.

Other sections of the timeline also strain credulity. “By 2023”, the timeline says, all new vehicles in the following categories should be either electric or hydrogen fuel-cell: rail locomotives, buses, nonroad vehicles for construction and agriculture, and light-duty on-road vehicles. This is now possible only in a purely theoretical sense. Batteries adequate for powering heavy-duty locomotives and tractors are not yet in production. Even if they were in production, and that production could be scaled up within a year, the charging infrastructure needed to quickly recharge massive tractor batteries could not be installed, almost overnight, at large farms or remote construction sites around the world.

While electric cars, pick-ups and vans now roll off assembly lines, the global auto industry is not even close to being ready to switch the entire product lineup to EV only. Unless, of course, they were to cut back auto production by 75% or more until production of EV motors, batteries, and charging equipment can scale up. Whether you think that’s a frightening prospect or a great idea, a drastic shrinkage in the auto industry would be a dramatic departure from a business-as-usual scenario.

What’s the harm, though, if Jacobson’s ambitious timeline is merely pushed back by two or three years?

If we were having this discussion in 2000 or 2010, pushing back the timeline by a few years would not be as consequential. But as Jacobson explains effectively in his outline of the climate crisis, we now need both drastic and immediate actions to keep cumulative carbon emissions low enough to avoid global climate catastrophe. His timeline is constructed with the goal of reducing carbon emissions by 80% by 2030, not because those are nice round figures, but because he (and many others) calculate that reductions of that scale and rapidity are truly needed. Even one or two more years of emissions at current rates may make the 1.5°C warming limit an impossible dream.

The picture is further complicated by a factor Jacobson mentions only in passing. He writes,

“During the transition, fossil fuels, bioenergy, and existing WWS technologies are needed to produce the new WWS infrastructure. … [A]s the fraction of WWS energy increases, conventional energy generation used to produce WWS infrastructure decreases, ultimately to zero. … In sum, the time-dependent transition to WWS infrastructure may result in a temporary increase in emissions before such emissions are eliminated.” (page 321; emphasis mine)

Others have explained this “temporary increase in emissions” at greater length. Assuming, as Jacobson does, that a “business-as-usual” economy keeps growing, the vast majority of goods and services will continue, in the short term, to be produced and/or operated using fossil fuels. If we embark on an intensive, global-scale, rapid build-out of WWS infrastructures at the same time, a substantial increment in fossil fuels will be needed to power all the additional mines, smelters, factories, container ships, trucks and cranes which build and install the myriad elements of a new energy infrastructure. If all goes well, that new energy infrastructure will eventually be large enough to power its own further growth, as well as to power production of all other goods and services that now rely on fossil energy.

Unless we accept a substantial decrease in non-transition-related industrial activity, however, the road that takes us to a full WWS destination must route us through a period of increased fossil fuel use and increased greenhouse gas emissions.

It would be great if Jacobson modeled this increase, to give us some guidance on how big this emissions bump might be, how long it might last, and therefore how much it might add to cumulative atmospheric carbon concentrations. There is no suggestion in this book that he has done that modeling. What should be clear, however, is that any bump in emissions at this late date increases the danger of moving past a climate tipping point – and this danger increases dramatically with every passing year.
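To see the shape of the problem, consider a deliberately crude toy calculation. Every number below is a hypothetical placeholder of my own – this is not Jacobson’s model and not a forecast – but it shows how fossil-powered build-out activity pushes emissions above the pre-transition baseline in the early years, even as the WWS share of supply grows:

# Toy illustration only: all values are hypothetical placeholders chosen to
# show the shape of a transition-driven emissions bump, not its actual size.
BASELINE = 40.0        # assumed pre-transition fossil emissions, Gt CO2 per year
BUILDOUT_SHARE = 0.08  # assumed extra industrial activity for the WWS build-out, as a share of baseline

for year in range(2025, 2051):
    wws_fraction = (year - 2025) / 25  # assume WWS supply grows linearly to 100% by 2050
    # The build-out itself consumes energy, and early in the transition most of that energy is still fossil.
    buildout_emissions = BASELINE * BUILDOUT_SHARE * (1 - wws_fraction)
    ongoing_emissions = BASELINE * (1 - wws_fraction)  # remaining fossil-powered production of everything else
    print(year, round(ongoing_emissions + buildout_emissions, 1))  # starts above the 40 Gt baseline, falls to zero by 2050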


1In a tl;dr version of No Miracles Needed published recently in The Guardian, Jacobson says “Worldwide, in fact, the energy that people use goes down by over 56% with a WWS system.” (“‘No miracles needed’: Prof Mark Jacobson on how wind, sun and water can power the world”, 23 January 2023)

 


Photo at top of page by Romain Guy, 2009; public domain, CC0 1.0 license, via Flickr.

Profits of Utopia

Also published on Resilience

What led to the twentieth century’s rapid economic growth? And what are the prospects for that kind of growth to return?

Slouching Towards Utopia: An Economic History of the Twentieth Century was published by Basic Books, Sept 2022; 605 pages.

Taken together, two new books go a long way toward answering the first of those questions.

Bradford J. DeLong intends his Slouching Towards Utopia to be a “grand narrative” of what he calls “the long twentieth century”.

Mark Stoll summarizes his book Profit as “a history of capitalism that seeks to explain both how capitalism changed the natural world and how the environment shaped capitalism.”

By far the longer of the two books, DeLong’s tome primarily concerns the years from 1870 to 2010. Stoll’s slimmer volume goes back thousands of years, though the bulk of his coverage concerns the past seven centuries.

Both books are well organized and well written. Both make valuable contributions to an understanding of our current situation. In my opinion Stoll casts a clearer light on the key problems we now face.

Although neither book explicitly addresses the prospects for future prosperity, Stoll’s concluding verdict offers a faint hope.

Let’s start with Slouching Towards Utopia. Bradford J. DeLong, a professor of economics at the University of California, Berkeley, describes “the long twentieth century” – from 1870 to 2010 – as “the first century in which the most important historical thread was what anyone would call the economic one, for it was the century that saw us end our near-universal dire material poverty.” (Slouching Towards Utopia, page 2; emphasis mine) Unfortunately that is as close as he gets in this book to defining just what he means by “economics”.

On the other hand he does tell us what “political economics” means:

“There is a big difference between the economic and the political economic. The latter term refers to the methods by which people collectively decide how they are going to organize the rules of the game within which economic life takes place.” (page 85; emphasis in original)

Discussion of the political economics of the long twentieth century, in my opinion, accounts for most of the bulk and most of the value in this book.

DeLong weaves into his narratives frequent – but also clear and concise – explanations of the work of John Maynard Keynes, Friedrich Hayek, and Karl Polanyi. These three very different theorists responded to, and helped bring about, major changes in “the rules of the game within which economic life takes place”.

DeLong uses their work to good effect in explaining how policymakers and economic elites navigated and tried to influence the changing currents of market fundamentalism, authoritarian collectivism, social democracy, the New Deal, and neoliberalism.

With each swing of the political economic pendulum, the industrial, capitalist societies either slowed, or sped up, the advance “towards utopia” – a society in which all people, regardless of class, race, or sex, enjoy prosperity, human rights and a reasonably fair share of the society’s wealth.

DeLong and Stoll present similar perspectives on the “Thirty Glorious Years” from the mid-1940s to the mid-1970s, and a similarly dim view of the widespread turn to neoliberalism since then.

They also agree that while a “market economy” plays an important role in generating prosperity, a “market society” rapidly veers into disaster. That is because the market economy, left to its own devices, exacerbates inequalities so severely that social cohesion falls apart. The market must be governed by social democracy, and not the other way around.

DeLong provides one tragic example:

“With unequal distribution, a market economy will generate extraordinarily cruel outcomes. If my wealth consists entirely of my ability to work with my hands in someone else’s fields, and if the rains do not come, so that my ability to work with my hands has no productive market value, then the market will starve me to death – as it did to millions of people in Bengal in 1942 and 1943.” (Slouching Towards Utopia, p 332)

Profit: An Environmental History was published by Polity Books, January 2023; 280 pages.

In DeLong’s and Stoll’s narratives, during the period following World War II “the rules of the economic game” in industrialized countries were set in a way that promoted widespread prosperity and rising wealth for nearly all classes, without a concomitant rise in inequality.

As a result, economic growth during that period was far higher than it had been from 1870 to 1940, before the widespread influence of social democracy, and far higher than it has been since about 1975 during the neoliberal era.

During the Thirty Glorious Years, incomes from the factory floor to the CEO’s office rose at roughly the same rate. Public funding of advanced education, an income for retired workers, unemployment insurance, strong labor unions, and (in countries more civilized than the US) public health insurance – these social democratic features ensured that a large and growing number of people could continue to buy the ever-increasing output of the consumer economy. High marginal tax rates ensured that government war debts would be retired without cutting off the purchasing power of lower and middle classes.

Stoll explains that long-time General Motors chairman Alfred Sloan played a key role in the transition to a consumer economy. Under his leadership GM pioneered a line-up ranging from economy cars to luxury cars; the practice of regularly introducing new models whose primary features were differences in fashion; heavy spending on advertising to promote the constantly-changing lineup; and auto financing which allowed consumers to buy new cars without first saving up the purchase price.

By then the world’s largest corporation, GM flourished during the social democratic heyday of the Thirty Glorious Years. But in Stoll’s narrative, executives like Alfred Sloan couldn’t resist meddling with the very conditions that had made their version of capitalism so successful:

“There was a worm in the apple of postwar prosperity, growing out of sight until it appeared in triumph in the late 1970s. The regulations and government activism of the New Deal … so alarmed certain wealthy corporate leaders, Alfred Sloan among them, that they began to develop a propaganda network to promote weak government and low taxes.” (Profit, page 176)

This propaganda network achieved hegemony in the 1980s as Ronald Reagan and Margaret Thatcher took the helm in the US and the UK. DeLong and Stoll concur that the victory of neoliberalism resulted in a substantial drop in the economic growth rate, along with a rapid growth in inequality. As DeLong puts it, the previous generation’s swift march towards utopia slowed to a crawl.

DeLong and Stoll, then, share a great deal when it comes to political economics – the political rules that govern how economic wealth is distributed.

On the question of how that economic wealth is generated, however, DeLong is weak and Stoll makes a better guide.

DeLong introduces his discussion of the long twentieth century with the observation that between 1870 and 2010, economic growth far outstripped population growth for the first time in human history. What led to that economic acceleration? There were three key factors, DeLong says:

“Things changed starting around 1870. Then we got the institutions for organization and research and the technologies – we got full globalization, the industrial research laboratory, and the modern corporation. These were the keys. These unlocked the gate that had previously kept humanity in dire poverty.” (Slouching Towards Utopia, p. 3)

Thomas Edison’s research lab in West Orange, New Jersey. Cropped from photo by Anita Gould, 2010, CC BY-SA 2.0 license, via Flickr.

These may have been necessary conditions for a burst of economic growth, but were they sufficient? If they were, why should we believe that the long twentieth century is conclusively over? If DeLong’s three keys are still in place, and only the misguided leadership of neoliberalism has spoiled the party, wouldn’t a swing of the political economic pendulum restore the conditions for rapid economic growth?

Indeed, in one of DeLong’s few remarks directly addressing the future he says “there is every reason to believe prosperity will continue to grow at an exponential rate in the centuries to come.” (page 11)

Stoll, by contrast, deals with the economy as inescapably embedded in the natural environment, and he emphasizes the revolutionary leap forward in energy production in the second half of the 19th century.

Energy and environment

Stoll’s title and subtitle are apt – Profit: An Environmental History. He says that “economic activity has always degraded environments” (p. 6) and he provides examples from ancient history as well as from the present.

Economic development in this presentation is “the long human endeavor to use resources more intensively.” (p. 7) In every era, tapping energy sources has been key.

European civilization reached for the resources of other regions in the late medieval era. Technological developments such as improved ocean-going vessels allowed incipient imperialism, but additional energy sources were also essential. Stoll explains that the Venetian, Genoese and Portuguese traders who pioneered a new stage of capitalism all relied in part on the slave trade:

“By the late fifteenth century, slaves made up over ten percent of the population of Lisbon, Seville, Barcelona, and Valencia and remained common in southern coastal Portugal and Spain for another century or two.” (p. 40)

The slave trade went into high gear after Columbus chanced across the Americas. That is because, even after they had confiscated two huge continents rich in resources, European imperial powers still relied on the consumption of other humans’ lives as an economic input:

“Free-labor colonies all failed to make much profit and most failed altogether. Colonizers resorted to slavery to people colonies and make them pay. For this reason Africans would outnumber Europeans in the Americas until the 1840s.” (p. 47)

While the conditions of slavery in Brazil were “appallingly brutal”, Stoll writes, Northern Europeans made slavery even more severe. As a result “Conditions in slave plantations were so grueling and harsh that birthrates trailed deaths in most European plantation colonies.” (p 49)

‘Shipping Sugar’ from William Clark’s ‘Ten views in the island of Antigua’ (Thomas Clay, London, 1823). Public domain image via Picryl.com.

Clearly, then, huge numbers of enslaved workers played a major and fundamental role in rising European wealth between 1500 and 1800. It is perhaps no coincidence that in the 19th century, as slavery was being outlawed in colonial empires, European industries were learning how to make effective use of a new energy source: coal. By the end of that century, the fossil fuel economy had begun its meteoric climb.

Rapid increases in scientific knowledge, aided by such organizations as modern research laboratories, certainly played a role in commercializing methods of harnessing the energy in coal and oil. Yet this technological knowhow on its own, without abundant quantities of readily-extracted coal and oil, would not have led to an explosion of economic growth.

Where DeLong is content to list “three keys to economic growth” that omit fossil fuels, Stoll adds a fourth key – not merely the technology to use fossil fuels, but the material availability of those fuels.

By 1900, coal-powered engines had transformed factories, mines, ocean transportation via steamships, land transportation via railroads, and the beginnings of electrical grids. The machinery of industry could supply more goods than most people had ever thought they might want, a development Stoll explains as a transition from an industrial economy to a consumer economy.

Coal, however, could not have powered the car culture that swept across North America before World War II, and across the rest of the industrialized world after the War. To shift the consumer economy into overdrive, an even richer and more flexible energy source was needed: petroleum.

By 1972, Stoll notes, the global demand for petroleum was five-and-a-half times as great as in 1949.

Like DeLong, Stoll marks the high point of the economic growth rate at about 1970. And like DeLong, he sees the onset of neoliberalism as one factor slowing and eventually stalling the consumer economy. Unlike DeLong, however, Stoll also emphasizes the importance of energy sources in this trajectory. In the period leading up to 1970 net energy availability was skyrocketing, making rapid economic growth achievable. After 1970 net energy availability grew more slowly, and increasing amounts of energy had to be used up in the process of finding and extracting energy. In other words, the Energy Return on Energy Invested, which increased rapidly between 1870 and 1970, peaked and started to decline over recent decades.
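The arithmetic behind that claim is simple. The sketch below uses illustrative numbers – they are not drawn from Stoll’s book – but it shows how a falling Energy Return on Energy Invested shrinks the energy left over for the rest of the economy, even when gross output holds steady:

# Energy Return on Energy Invested (EROI): usable energy delivered per unit of
# energy spent finding, extracting, and delivering it. Example values are illustrative.
def net_energy(gross_output, eroi):
    """Energy left for the rest of the economy after paying the energy cost of supply."""
    energy_invested = gross_output / eroi
    return gross_output - energy_invested

print(net_energy(100, 50))  # easily tapped mid-century oil (EROI ~50): 98 units left for society
print(net_energy(100, 10))  # harder-to-reach resources (EROI ~10): only 90 units left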

This gradual turnaround in net energy, along with the pervasive influence of neoliberal ideologies, contributed to the faltering of economic growth. The rich got richer at an even faster pace, but most of society gained little or no ground.

Stoll pays close attention to the kind of resources needed to produce economic growth – the inputs. He also emphasizes the anti-goods that our economies turn out on the other end, be they toxic wastes from mining and smelting, petroleum spills, smog, pervasive plastic garbage, or climate-disrupting carbon dioxide emissions.

Stoll writes, 

“The relentless, rising torrent of consumer goods that gives Amazon.com its apt name places unabating demand on extractive industries for resources and energy. Another ‘Amazon River’ of waste flows into the air, water, and land.” (Profit, p. 197)

Can the juggernaut be turned around before it destroys both our society and our ecological life-support systems, and can a fair, sustainable economy take its place? On this question, Stoll’s generally excellent book disappoints.

While he appears to criticize the late-twentieth century environmental movement for not daring to challenge capitalism itself, in Profit’s closing pages he throws cold water on any notion that capitalism could be replaced.

“Capitalism … is rooted in human nature and human history. These deep roots, some of which go back to our remotest ancestors, make capitalism resilient and adaptable to time and circumstances, so that the capitalism of one time and place is not that of another. These roots also make it extraordinarily difficult to replace.” (Profit, p. 253)

He writes that “however much it might spare wildlife and clean the land, water, and air, we stop the machinery of consumer capitalism at our peril.” (p. 254) If we are to avoid terrible social and economic unrest and suffering, we must accept that “we are captives on this accelerating merry-go-round of consumer capitalism.” (p. 256)

It’s essential to curb the power of big corporations and switch to renewable energy sources, he says. But in a concluding hint at the so-far non-existent phenomenon of “absolute decoupling”, he writes,

“The only requirement to keep consumer capitalism running is to keep as much money flowing into as many pockets as possible. The challenge may be to do so with as little demand for resources as possible.” (Profit, p. 256)

Are all these transformations possible, and can they happen in time? Stoll’s final paragraph says “We can only hope it will be possible.” Given the rest of his compelling narrative, that seems a faint hope indeed.

* * *

Coming next: another new book approaches the entanglements of environment and economics with a very different perspective, telling us with cheerful certainty that we can indeed switch the industrial economy to clean, renewable energies, rapidly, fully, and with no miracles needed.



Image at top of page: ‘The Express Train’, by Charles Parsons, 1859, published by Currier and Ives. Image donated to Wikimedia Commons by Metropolitan Museum of Art.

 

Segregation, block by block

Also published on Resilience

Is the purpose of zoning to ensure that towns and cities develop according to a rational plan? Does zoning protect the natural environment? Does zoning help promote affordable housing? Does zoning protect residents from the air pollution, noise pollution, and dangers of industrial complexes or busy highways?

To begin to answer these questions, consider this example from M. Nolan Gray’s new book Arbitrary Lines:

“It remains zoning ‘best practice’ that single-family residential districts should be ‘buffered’ from bothersome industrial and commercial districts by multifamily residential districts. This reflects zoning’s modus operandi of protecting single-family houses at all costs, but it makes no sense from a land-use compatibility perspective. While a handful of generally more affluent homeowners may be better off, it comes at the cost of many hundreds more less affluent residents suffering a lower quality of life.” (M. Nolan Gray, page 138)

Arbitrary Lines by M. Nolan Gray is published by Island Press, June 2022.

The intensification of inequality, Gray argues, is not an inadvertent side-effect of zoning, but its central purpose.

If you are interested in affordable housing, housing equity, environmental justice, reduction of carbon emissions, adequate public transit, or streets that are safe for walking and cycling, Arbitrary Lines is an excellent resource for understanding how American cities got the way they are and how they might be changed for the better. (The book doesn’t discuss Canada, but much of Gray’s argument seems readily applicable to Canadian cities and suburbs.)

In part one and part two of this series, we looked at the complex matrix of causes that explain why “accidents”, far from being randomly distributed, happen disproportionately to disadvantaged people. In There Are No Accidents Jessie Singer writes, “Accidents are the predictable result of unequal power in every form – physical and systemic. Across the United States, all the places where a person is most likely to die by accident are poor. America’s safest corners are all wealthy.” (Singer, page 13)

Gray does not deal directly with traffic accidents, or mortality due in whole or part to contaminants from pollution sources close to poor neighbourhoods. His lucid explanation of zoning, however, helps us understand one key mechanism by which disadvantaged people are confined to unhealthy, dangerous, unpleasant places to live.

‘Technocratic apartheid’

Zoning codes in the US today make no mention of race, but Gray traces the history of zoning back to explicitly racist goals. In the early 20th century, he says, zoning laws were adopted most commonly in southern cities for the express purpose of enforcing racial segregation. As courts became less tolerant of open racism, they nonetheless put a stamp of approval on economic segregation. Given the skewed distribution of wealth, economic segregation usually resulted in or preserved de facto racial segregation as well.

The central feature and overriding purpose of zoning was to restrict the best housing districts to affluent people. Zoning accomplishes this in two ways. First, in large areas of cities and especially of suburbs the only housing allowed is single-family housing, one house per lot. Second, minimum lot sizes and minimum floor space sizes ensure that homes are larger and more expensive than they would be if left to the “free market”.

The result, across vast swaths of urban America, is that low-density residential areas have been mandated to remain low-density. People who can’t afford to buy a house, but have the means to rent an apartment, are unsubtly told to look in other parts of town.

Gray terms this segregation “a kind of technocratic apartheid,” and notes that “Combined with other planning initiatives, zoning largely succeeded in preserving segregation where it existed and instituting segregation where it didn’t.” (Gray, page 81) He cites one study that found “over 80 percent of all large metropolitan areas in the US were more racially segregated in 2019 than they were in 1990. Today, racial segregation is most acute not in the South but in the Midwest and mid-Atlantic regions.” (Gray, page 169)

Public transit? The numbers don’t add up.

From an environmental and transportation equity point of view, a major effect of zoning is that it makes good public transit unfeasible in most urban areas. Gray explains:

“There is a reasonable consensus among transportation planners that a city needs densities of at least seven dwelling units per acre to support the absolute baseline of transit: a bus that stops every thirty minutes. To get more reliable service, like bus rapid transit or light-rail service, a city needs … approximately fifteen units per acre. The standard detached single-family residential district—which forms the basis of zoning and remains mapped in the vast majority of most cities—supports a maximum density of approximately five dwelling units per acre. That is to say, zoning makes efficient transit effectively illegal in large swaths of our cities, to say nothing of our suburbs.” (Gray, page 101)

Coupled with the nearly ubiquitous adoption of rules mandating more parking space than would otherwise be built, the single-family housing and minimum lot size provisions of zoning are a disaster both for affordable housing and for environmentally-friendly housing. Typical American zoning, Gray says, “assumes universal car ownership and prohibits efficient apartment living. But it also just plain wastes space: if you didn’t know any better, you might be forgiven for thinking that your local zoning ordinance was carefully calibrated to use up as much land as possible.” (Gray, page 96)
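Gray’s density thresholds make the conflict easy to see. The small sketch below simply encodes the numbers he quotes – seven dwelling units per acre for baseline bus service, roughly fifteen for rapid transit, and a maximum of about five in a standard single-family district; the helper function is my own illustration, not Gray’s:

# Thresholds as quoted by Gray (dwelling units per acre); the helper function is my own.
BASIC_BUS = 7        # minimum density for a bus every thirty minutes
RAPID_TRANSIT = 15   # minimum density for bus rapid transit or light rail

def supported_service(dwellings_per_acre):
    if dwellings_per_acre >= RAPID_TRANSIT:
        return "bus rapid transit or light rail"
    if dwellings_per_acre >= BASIC_BUS:
        return "a bus every thirty minutes"
    return "no viable transit service"

print(supported_service(5))   # typical single-family zoning: "no viable transit service"
print(supported_service(15))  # the density most single-family districts make illegal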

Zoning regimes came into wide use in the mid-twentieth century and became notably stricter in the 1970s. In Gray’s view the current housing affordability crisis is the result of cities spending “the past fifty years using zoning to prevent new housing supply from meeting demand.” This succeeded in boosting values of properties owned by the already affluent, but eventually housing affordability became a problem not only for those at the bottom of the housing market but for most Americans. That is one impetus, Gray explains, for a recent movement to curb the worst features of zoning. While this movement is a welcome development, Gray argues zoning should be abolished, not merely reformed. Near the end of Arbitrary Lines, he explains many other planning and regulatory frameworks that can do much more good and much less harm than zoning.

There is one part of his argument that I found shaky. He believes that the abolition of zoning will restore economic growth by promoting movement to the “most productive” cities, and that “there is no reason to believe that there is an upper bound to the potential innovation that could come from growing cities.” (Gray, page 72) At root the argument is based on his acceptance that income is “a useful proxy for productivity” – a dubious proposition in my view. That issue aside, Arbitrary Lines is well researched, well illustrated, well reasoned and well written.

The book is detailed and wide-ranging, but unlike a typical big-city zoning document it is never boring or obscure. For environmentalists and urban justice activists Arbitrary Lines is highly recommended.


Image at top of page: detail from Winnipeg zoning map, 1947, accessed via Wikimedia Commons.