The urgent necessity of asset stranding

A review of Overshoot: How the World Surrendered to Climate Breakdown

In 2023 delegates from around the world gathered for the 28th session of the Conference of the Parties (COP), this time held in the United Arab Emirates. The president of the mega-meeting, nominally devoted to mitigating the climate crisis caused by fossil fuel emissions, was none other than Sultan Al Jaber, CEO of the Abu Dhabi National Oil Company (ADNOC).

At the time, ADNOC was “in the midst of a thrust of expansion, planning to pour more than 1 billion dollars into oil and gas projects per month until 2030.” (Overshoot, p 253)

Overshoot, by Andreas Malm and Wim Carton, published by Verso, October 2024.

Sultan Al Jaber’s appointment was praised by climate envoy John Kerry of the United States, a country that was itself carrying out a historic expansion of fossil fuel extraction.

The significance of COP being presided over by a CEO working hard to increase carbon emissions was not lost on Andreas Malm and Wim Carton. In that moment, they write,

“[A]ctive capital protection had been insinuated into the highest echelons of climate governance, the irreal (sic) turn coming full circle, the theatre now a tragedy and farce wrapped into one, overshoot ideology the official decor.” (Overshoot, p 254; emphasis mine)

What do Malm and Carton mean by “capital protection” and “overshoot”? “Capital protection” is the opposite of “asset stranding”, which would occur if trillions of dollars’ worth of fossil fuel reserves were “left in the ground,” unburned, unexploited. Yet as we shall see, the potential threat to capital goes far beyond even the trillions of dollars of foregone profits if the fossil fuel industry were rapidly wound down.

In Malm and Carton’s usage, “overshoot” has a different meaning than in some ecological theory. In this book “overshoot” refers specifically to carbon emissions rising through levels that will push global warming past 1.5°C, 2°C, or some other specified threshold. To apologists for overshoot, it is fine to blow through these warming targets temporarily, as long as our descendants later in the century draw down much of the carbon through yet-to-be-commercialized technologies such as Bio-Energy with Carbon Capture and Storage (BECCS).

Overshoot, Malm and Carton say, is a dangerous gamble that will certainly kill many people in the coming decades, and collapse civilization and much of the biosphere in the longer term if our descendants are not able to adequately clean up the mess we are bequeathing them. Yet overshoot is firmly integrated into the Integrated Assessment Models widely used to model the course of climate change, precisely because it offers capital protection against asset stranding.

Scientific models, “drenched in ideology”

If the global climate were merely a complex physical system it would be easier to model. But of course it is also a biological, ecological, social and economic system. Once it became clear that the climate was strongly influenced by human activity, early researchers saw the need for models that incorporated human choices into climate projections.

“But how could an economy of distinctly human making be captured in the same model as something like glaciers?” Malm and Carton ask. “In the Integrated Assessment Models (IAMs), the trick was to render the economy lawlike on the assumptions of neoclassical theory ….” (p 56)

These assumptions include the idea that humans are rational, making their choices to maximize utility, in free markets that collectively operate with perfect information. While most people other than orthodox economists can recognize these assumptions as crude caricatures of human behaviour, this set of assumptions is hegemonic within affluent policy-making circles. And so it was the neoclassical economy whose supposed workings were integrated into the IAMs. 

While “every human artifact has a dimension of ideology,” Malm and Carton write, 

“IAMs were positively drenched in non-innocent ideological positions, of which we can quickly list a few: rationalism (human agents behave rationally), economism (mitigation is a matter of cost), presentism (current generations should be spared the onus), conservatism (incumbent capital must be saved from losses), gradualism (any changes will have to be incremental), and optimism (we live in the best of all possible economies). Together, they made ambitious climate goals – the ones later identified as in line with 1.5°C or 2°C – seem all but unimaginable.” (p 60; emphasis mine)

In literally hundreds of IAMs, they write, there was a conspicuous absence of scenarios involving degrowth, the Green New Deal, the nationalisation of oil companies, half-earth socialism, or any other proposal to achieve climate mitigation through radical changes to “business as usual.”

In place of any such challenges to the current economic order was another formidable acronym: BECCS, “Bio-Energy with Carbon Capture and Storage.” No costly shakeups to the current economy were needed, because in the IAMs, the not-yet-commercialized BECCS was projected to become so widely implemented in the second half of the century that it would draw down all the excess carbon we are currently rushing to emit.
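To make the mechanism concrete: cost-minimizing IAMs discount future costs, so a carbon-removal bill paid decades from now shrinks to near nothing in today’s terms. The toy calculation below is my own sketch of that logic, with invented numbers; it is not a model from the book.

```python
# Toy illustration of how discounting makes "overshoot now, remove later"
# look optimal in a cost-minimizing model. All numbers are invented.

def present_value(cost, years_ahead, discount_rate=0.05):
    """Discount a future cost back to today's dollars."""
    return cost / (1 + discount_rate) ** years_ahead

abate_now = 100.0   # assumed cost of abating a tonne of CO2 today, $/tonne
beccs_2075 = 150.0  # assumed cost of BECCS removal ~50 years from now, $/tonne

remove_later = present_value(beccs_2075, years_ahead=50)

print(f"abate now:         ${abate_now:.2f}/tonne")
print(f"remove later (PV): ${remove_later:.2f}/tonne")  # ~ $13/tonne
# The discounted future removal looks far cheaper, so the "optimal"
# pathway emits now and assigns the cleanup to later generations.
```

At any substantial discount rate, deferred cleanup wins on paper – which is the ‘presentism’ and ‘optimism’ the authors list among the IAMs’ ideological positions.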

As the 21st century progressed and warming thresholds such as 1.5°C or even 2°C drew dangerously close, overshoot, excused by the imagined future roll-out of BECCS, became an ever more attractive – and ever more dangerous – concept. Through the magic of IAMs incorporating overshoot, countries like Canada, the US, and other petrostates could declare climate emergencies, pledge their support to a 1.5°C ceiling – and simultaneously step up their fossil extraction efforts.

“Construction Work on Trans Mountain Pipeline outside Valemount, BC, Canada, Sept 16, 2020.” (Photo by Adam Jones, licensed via Creative Commons CC BY 2.0, accessed via flickr.) On June 17, 2019, the Canadian Parliament approved a motion declaring the country to be in a climate emergency. On June 18, 2019, the Government of Canada announced its approval of the Trans Mountain Pipeline Expansion, for the purpose of bringing more tar sands crude to the BC coast for export.

At COP15 in Copenhagen in 2009, and most famously at the Paris Accord in 2015, countries could piously pledge their allegiance to stringent warming limits, while ensuring no binding commitments remained in the texts to limit the fossil fuel industry. Overshoot was the enabling concept: “Through this sleight of hand, any given target could be both missed and met and any missing be rationalised as part of the journey to meeting it ….” (p 87)

“The common capital of the class”

There is a good deal of Marxist rhetoric in Overshoot, and Malm and Carton are able guides to this often tangled body of political-economic theory. On some subjects they employ these ideas to clarifying effect.

Given the overwhelming consensus of climatologists, plus the evidence in plain sight all around us, that the climate emergency is rapidly growing more severe, why is there still such widespread resistance to radical economic change?

The opposition to radical change comes not only from fossil fuel company owners and shareholders. Rather, the fierce determination to carry on with business as usual comes from many sectors of industry, the financial sector, nearly all policy-makers, and most of the media elite.

As Malm and Carton explain, if firm policies were put in place to “leave fossil fuels in the ground”, stranding the assets of fossil fuel companies, there would be “layer upon layer” of value destruction. The first layer would be the value of the no-longer usable fossil reserves. The next layer would be the vast network of wells, pipelines, refineries, even gas stations which distribute fossil fuel. A third would be the machinery now in place to burn fossil fuels in almost every other sector of industrial production. The economic valuations of these layers would crash the moment “leaving fossil fuels in the ground” became a binding policy.

Finally, the above layers of infrastructure require financing. “Increased fixed capital formation,” Malm and Carton write, “necessitates increased integration into equity as well as credit markets – or, to use a pregnant Marxian phrase, into ‘the common capital of the class.’” (p 133)

The upshot is that “any limitations on fossil fuel infrastructure would endanger the common capital of the class by which it has been financed.” (p 133-134) And “the class by which it has been financed,” of course, is the ruling elite, the small percentage of people who own most of corporate equity, and whose lobbyists enjoy regular access to lawmakers and regulators. 

The elite class which owns, finances and profits from fossil production also happens to be responsible for a wildly disproportionate amount of fossil fuel consumption. Overshoot cites widely publicized statistics showing that the richest ten per cent of humanity is responsible for half of all emissions, while the poorest fifty per cent is responsible for only about a tenth. They add,

“It was not the masses of the global South that, suicidally, tipped the world into 1.5°C. In fact, not even the working classes of the North were party to the process: between 1990 and 2019, per capita emissions of the poorest half of the populations of the US and Europe dropped by nearly one third, due to ‘compressed wages and consumption.’ The overshoot conjuncture was the creation of the rich, with which they capped their victory in the class struggle.” (p 225-226)

Stock, flow and the labour theory of value

Malm and Carton go on to explain the economic difference between fossil fuel energy and solar-and-wind energy, through the simple lens of Marx’s labour theory of value. In my opinion this is the least successful section of Overshoot.

First, the authors describe fossil fuel reserves as “stocks” and the sunshine and wind as “flows”. That’s a valid distinction, of significance in explaining some of the fundamental differences in these energy sources.

But why has fossil fuel extraction recently been significantly more profitable than renewable energy harvesting?

The key fact, Malm and Carton argue, is that “the flow [solar and wind energy] appears without labour. … [T]he fuel is ripe for picking prior to and in proud disregard of any process of production. ‘Value is labour,’ Marx spells out …. It follows that the flow cannot have value.”

They emphasize the point with another quote from Marx: “‘Where there is no value, there is eo ipso nothing to be expressed in money.’”

“And where there is nothing to be expressed in money,” they conclude, “there can be no profit.” (p 208-209) That is why the renewable energy business will never supply the profits that have been earned in fossil extraction.

This simple explanation ignores the fact that oil companies aren’t always profitable; for a period of years in the last decade, the US oil industry had negative returns on equity.1 Clearly, one factor in the profitability of extraction is the cost of extraction, while another is the price customers are both willing and able to pay. When the former is as high as or higher than the latter, there are no profits even for exploitation of stocks.

As for business opportunities derived from the flow, Malm and Carton concede that profits might be earned through the manufacture and installation of solar panels and wind turbines, or the provision of batteries and transmission lines. But in their view these profits will never come close to fossil fuel profits, and furthermore, any potential profits will drop rapidly as renewable sources come to dominate the electric grid. Why? Again, their explanation rests on Marx’s labour theory of value:

“The more developed the productive forces of the flow, the more proficient their capture of a kind of energy in which no labour can be objectified, the closer the price and the value and the profit all come to zero.” (p 211)

Does this sound fantastically utopian to you? Imagine the whole enterprise – mining, refining, smelting, transporting, manufacturing and installation of PV panels and wind turbines, extensions of grids, and integration of adequate amounts of both short- and long-term storage – becoming so “proficient [in] their capture of energy” that the costs are insignificant compared to the nearly limitless flow of clean electricity. Imagine that all these costs become so trivial that the price of the resulting electricity approaches zero.

As a corrective to this vision of ‘renewable electricity too cheap to meter,’ I recommend Vince Beiser’s Power Metal, reviewed here last week.

Malm and Carton, however, are convinced that renewably generated electricity can only get cheaper, and furthermore can easily substitute for almost all the current uses of fossil fuels, without requiring reductions in other types of consumption, and all within a few short years. In defense of this idea they approvingly cite the work of Mark Jacobson; rather than critique that work here, I’ll simply refer interested readers to my review of Jacobson’s 2023 publication No Miracles Needed.

Energy transition and stranded assets

Energy transition is not yet a reality. Malm and Carton note that although renewable energy supply has grown rapidly over the past 20 years, fossil energy use has not dropped. What we have so far is an energy addition, not an energy transition.

Not coincidentally, asset stranding likewise remains “a hypothetical event, not yet even attempted.” (p 192)

The spectre of fossil fuel reserves and infrastructure becoming stranded assets has been discussed in the pages of financial media, ever since climate science made it obvious that climate mitigation strategies would indeed require leaving most known fossil reserves in the ground, i.e., stranding these assets. (One of the pundits sounding a warning was Mark Carney, formerly a central banker and now touted as a contender to replace Justin Trudeau as leader of the Liberal Party of Canada; he makes an appearance in Overshoot.)

Yet there is no evidence the capitalist class collectively is losing sleep over stranded assets, any more than over the plight of poor farmers being driven from their lands by severe floods or droughts.

As new fossil fuel projects get more expensive, the financial establishment has stepped up its investment in such projects. In the years immediately following the Paris Agreement – whose 1.5°C warming target would have required stranding more than 80 per cent of fossil fuel reserves – a frenzy of investment added to both the reserves and the fixed capital devoted to extracting those reserves:

“Between 2016 and 2021, the world’s sixty largest banks poured nearly 5 trillion dollars into fossil fuel projects, the sums bigger at the end of this half-decade than at its beginning.” (p 20) 

The implications are twofold: first, big oil and big finance remain unconcerned that any major governments will enact strong and effective climate mitigation policies – policies that would put an immediate cap on fossil fuel exploitation plus a binding schedule for rapid reductions in fossil fuel use over the coming years. They are unconcerned about such policy possibilities because they have ensured there are no binding commitments to climate mitigation protocols.

Second, there are far more assets which could potentially be stranded today than there were even in 2015. We can expect, then, that fossil fuel interests will fight even harder against strong climate mitigation policies in the next ten years than they did in the last ten years. And since, as we have seen, the layers of stranded assets would go far beyond the fossil corporations themselves into ‘the common capital of the class’, the resistance to asset stranding will also be widespread.

Malm and Carton sum it up this way: “We have no reliable friends in the capitalist classes. … any path to survival runs through their defeat.” (p 236)

The governments of the rich countries, while pledging their support for stringent global warming limits, have through their deeds sent us along the path to imminent overshoot. But suppose a major coal- or oil-producing jurisdiction passed a law enacting steep cutbacks in extraction, thereby stranding substantial fossil capital assets.

“Any measure significant enough to suggest that the fears harboured for so long are about to come true could pop the bubble,” Malm and Carton write. “[T]he stampede would be frenzied and unstoppable, due to the extent of the financial connections ….” (p 242)

Such a “total breakdown of capital” would come with drastic social risks, to be sure – but the choice is between a breakdown of capital and a breakdown of climate (which would, of course, also cause a breakdown of capital). Could such a total breakdown of capital still be initiated before it’s too late to avoid climate breakdown? In a book filled with thoughtful analysis and probing questions, the authors close by proposing this focus for further work:

“Neither the Green New Deal nor degrowth or any other programme in circulation has a plan for how to strand the assets that must be stranded. … [This] is the point where strategic thinking and practise should be urgently concentrated in the years ahead.” (p 244)

 


1 See “2018 was likely the most profitable year for U.S. oil producers since 2013,” US Energy Information Administration, May 10, 2019. The article shows that publicly traded oil producers had greater losses in the period 2015-2017 than they had gains in 2013, 2014, and 2018.

Image at top of page: “The end of the Closing Plenary at the UN Climate Change Conference COP28 at Expo City Dubai on December 13, 2023, in Dubai, United Arab Emirates,” photo by COP28/Mahmoud Khaled, licensed for non-commercial use via Creative Commons CC BY-NC-SA 2.0, accessed on flickr.

Critical metals and the side effects of electrification

A review of Power Metal: The Race for the Resources That Will Shape The Future

Also published on Resilience.

“The energy transition from fossil fuels to renewables is a crucial part of the cure for climate change,” writes Vince Beiser on page one of his superb new book Power Metal. “But it’s a cure with brutal side effects.”

The point of Beiser’s stark warning is not to downplay the urgency of switching off fossil fuels, nor to assert that a renewable energy economy will be a greater ecological menace than our current industrial system.

Power Metal by Vince Beiser, published November 2024 by Riverhead Books.

But enthusiasm for supposedly clean and free solar and wind energy must be tempered by a realistic knowledge of the mining and refining needed to produce huge quantities of solar panels, wind turbines, transmission lines, electric motors, and batteries.

In Power Metal, Beiser explains why we would need drastic increases in mining of critical metals – including copper, nickel, cobalt, lithium, and the so-called “rare earths” – if we were to run anything like the current global economy solely on renewable electricity.

Beyond merely outlining the quantities of metals needed, however, he provides vivid glimpses of the mines and refineries where these essential materials are extracted and transformed into usable commodities. His journalistic treatment helps us understand the ecological impacts of these industries as well as the social and health impacts on the communities where this work is done, often in horrible conditions.

While cell phones and computers in all their billions each contain small quantities of many of the critical metals, the much-touted electric vehicle transition has a deeper hunger. Take nickel. “Stainless steel consumes the lion’s share of nickel output,” Beiser writes, “but batteries are gaining fast.” (p 69)

“The battery in a typical Tesla,” he adds, “is as much as 80 percent nickel by weight. The battery industry’s consumption of nickel jumped 73 percent in 2021 alone.” (p 69)

And so on, down the list: “a typical EV contains as much as one hundred seventy-five pounds of copper.” (p 45)

“Your smartphone probably contains about a quarter ounce of cobalt; electric vehicle batteries can contain upwards of twenty-four pounds.” (p 77)

Extending current trend lines leads to the following prediction:

“By 2050, the International Energy Agency estimates, demand for cobalt from electric vehicle makers alone will surge to nearly five times what it was in 2022; nickel demand will be ten times higher; and for lithium, fifteen times higher ….” (p 4)

If those trend lines hold true – and that’s a big “if” – the energy transition will come with high ecological costs.

The historic leading producer of nickel, Norilsk in Siberia, “is one of the most ecologically ravaged places on Earth.” (p 70) Unfortunately a recent contender in Indonesia, where the nickel ore is of lower quality, may be even worse:

“Nickel processing also devours huge amounts of energy, and most of Indonesia’s electricity is generated by coal-fired plants. That’s right: huge amounts of carbon-intensive coal are being burned to make carbon-neutral batteries.” (p 74)

The Bayan Obo district in China is the world’s major producer of refined rare earths – and “not by coincidence, it is also one of the most polluted areas on the planet. …” (p 28)

Ideally we’d want the renewable energy supply chain to meet three criteria: cheap, clean, and fair. As it is, we’re lucky to get one out of three.

Mining of critical metals can only take place in particular locations – blessed or cursed? – where such elements are somewhat concentrated in the earth’s crust. When there is a choice of nations for suppliers, the global economy leans to nations with lax environmental and labour standards as well as low wages.

There are no geographic restrictions on processing, however, and that’s why China’s dominance in critical metal processing far exceeds its share of world reserves.

The Mountain Pass mine in California is rapidly expanding extraction of rare earths. But the US facility is only able to produce a commodity called bastnaesite, which contains all the rare earths mixed together. To separate the rare earth elements one from another, the mine operator tells Beiser, the bastnaesite must be shipped to China: “There’s no processing facilities anywhere outside of China that can handle the scale we need to be producing.” (p 36)

The story is similar for other critical metals. Cobalt, for example, is mined in famously brutal conditions in the Democratic Republic of Congo, and then sent to China for processing.

Could both the mining and the processing be done in ways that respect the environment and respect the health and dignity of workers? Major improvements in these respects are no doubt possible – but will likely result in a significantly higher price for renewable energy technologies. Our ability to pay that price, in turn, will be greatly influenced by how parsimoniously or how profligately we use the resulting energy. 

Collection of circuit boards at Agbogbloshie e-waste processing plant in Ghana. Image from Fairphone under Creative Commons license accessed via flickr.

Recycling to the rescue?

Is the messy extraction and processing of critical metals just a brief blip on a rosy horizon? Proponents of recycling sometimes make the case that the raw materials for a renewable energy economy will only need to be mined once, after which recycling will take over.

Beiser presents a less optimistic view. A complex global supply chain manufactures cars and computers that are composites of many materials, and these products are then distributed to every corner of the world. Separating out and re-concentrating the various commodities so they can be recycled also requires a complex supply chain – running in reverse.

“Most businesses that call themselves metal recyclers don’t actually turn old junk into new metal,” Beiser writes. “They are primarily collectors, aggregators.” (p 130) He takes us into typical work days of metal collectors and aggregators in his hometown of Vancouver as well as in Lagos, Nigeria. In these and other locations, he says, the first levels of aggregation tend to be done by people working in the informal economy.

In Lagos, workers smash apart cell phones and computers, and manually sort the circuit boards into categories, before the bundles of parts are shipped off to China or Europe for the next stage of reverse manufacturing:

“Shredding or melting down a circuit board and separating out those tiny amounts of gold, copper, and everything else requires sophisticated and expensive equipment. There is not a single facility anywhere in Africa capable of performing this feat.” (p 145)

Because wages are low and environmental regulations lax in Nigeria and Ghana, it is economically possible to collect and aggregate almost all the e-waste components there. Meanwhile in the US and Europe, “fewer than one in six dead mobile phones is recycled.” (p 146)

Cell phones are both tiny and complicated, but what about bigger items like solar panels, wind turbine blades, and EV batteries?

Here too the complications are daunting. It is currently far cheaper in the US to send an old solar panel to landfill than it is to recycle it. There isn’t yet a cost-effective way to separate the composite materials in wind turbine blades for re-use.

Lithium batteries add explosive danger to the complications of recycling: 

“If they’re punctured, crushed, or overheated, lithium batteries can short-circuit and catch on fire or even explode. Battery fires can reach temperatures topping 1,000 degrees Fahrenheit [538°C], and they emit toxic gases. Worse, they can’t be extinguished by water or normal firefighting chemicals.” (p 153)

Perhaps it’s not surprising that only 5% of lithium-ion batteries are currently recycled. (p 151)

Given the costs, dangers, and complex supply chain needed, Beiser says, recycling is not “the best alternative to using virgin materials. In fact, it’s one of the worst.” (p 16)

Far better, he argues in the book’s closing section, are two other “Rs” – “reuse” and “reduce.”

Simply using all the cell phones in Europe for one extra year before junking them, he says, would avoid 2.1 million metric tons of carbon dioxide emissions per year – comparable to taking a million cars off the road.
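That equivalence is easy to sanity-check with a rough calculation; the per-car figures below are my own assumptions, not Beiser’s.

```python
# Rough sanity check of the "million cars" comparison. The per-car
# figures below are assumptions for illustration, not from the book.
co2_saved_tonnes = 2.1e6     # tonnes CO2 avoided per year (Beiser's figure)
km_per_year = 15_000         # assumed annual distance for an average car
g_co2_per_km = 140           # assumed emissions for an average petrol car

tonnes_per_car = km_per_year * g_co2_per_km / 1e6  # = 2.1 tonnes/car/year
print(f"{co2_saved_tonnes / tonnes_per_car:,.0f} cars")  # 1,000,000 cars
```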

Speaking of taking cars off the road, Beiser writes, “the real issue isn’t how to get more metals into the global supply chain to build more cars, it’s how to get people to where they want to go with fewer cars.” (p 186)

Given the high demands for critical metals involved in auto manufacturing, Beiser concludes that “the most effective single way that we as individuals can make a difference is this: Don’t buy a car. Not even an electric one.” (p 182) He might have added: if you do buy a car, get one that’s no bigger or heavier than needed for your typical usage, instead of the ever bulkier cars the big automakers push.

In response to projections about how fast we would need to convert the current world economy to renewable energy, Beiser fears that it may not be possible to mine critical metals rapidly enough to stave off cataclysmic climate change. If we dramatically reduce our demands for energy from all sources, however, that challenge is not as daunting:

“The less we consume, the less energy we need. The less energy we use, the less metal we need to dig up …. Our future depends, in a literal sense, on metal. We need a lot of it to stave off climate change, the most dangerous threat of all. But the less of it we use, the better off we’ll all be.” (p 204-205)

* * *

“Energy transition” is a key phrase in Power Metal – but does this transition actually exist? Andreas Malm and Wim Carton make the important point that both “energy transition” and “stranded assets” remain mere future possibilities, each either a fond dream or a nightmare depending on one’s position within capitalist society. All the renewable energy installations to date have simply been additions to fossil energy, Malm and Carton point out, because fossil fuel use, a brief drop during the pandemic aside, has only continued to rise.

We turn to Malm and Carton’s thought-provoking new book Overshoot in our next installment.


Image at top of page: “Metal worker at Hussey Copper in Leetsdale, PA melts down copper on August 8, 2015,” photo by Erikabarker, accessed on Wikimedia Commons.

Counting the here-and-now costs of climate change

A review of Slow Burn: The Hidden Costs of a Warming World

Also published on Resilience.

R. Jisung Park takes us into a thought experiment. Suppose we shift attention away from the prospect of coming climate catastrophes – out-of-control wildfires, big rises in sea levels, stalling of ocean circulation currents – and we focus instead on the ways that rising temperatures are already having daily impacts on people’s lives around the world.

Might these less dramatic and less obvious global-heating costs also provide ample rationale for concerted emissions reductions?

Slow Burn by R. Jisung Park is published by Princeton University Press, April 2024.

Park is an environmental and labour economist at the University of Pennsylvania. In Slow Burn, he takes a careful look at a wide variety of recent research efforts, some of which he participated in. He reports results in several major areas: the effect of hotter days on education and learning; the effect of hotter days on human morbidity and mortality; the increase in workplace accidents during hotter weather; and the increase in conflict and violence as hot days become more frequent.

In each of these areas, he says, the harms are measurable and substantial. And in another theme that winds through each chapter, he notes that the harms of global heating fall disproportionately on the poorest people both internationally and within nations. Unless adaptation measures reflect climate justice concerns, he says, global heating will exacerbate already deadly inequalities.

Even where the effect seems obvious – many people die during heat waves – it’s not a simple matter to quantify the increased mortality. For one thing, Park notes, very cold days as well as very hot days lead to increases in mortality. In some countries (including Canada) a reduction in very cold days will result in a decrease in mortality, which may offset the rise in deaths during heat waves.

We also learn about forward mortality displacement, “where the number of deaths immediately caused by a period of high temperatures is at least partially offset by a reduction in the number of deaths in the period immediately following the hot day or days.” (Slow Burn, p 85) 

After accounting for such complicating factors, a consortium of researchers has estimated the heat-mortality relationship through the end of this century, for 40 countries representing 55 percent of global population. Park summarizes their results:

“The Climate Impact Lab researchers estimate that, without any adaptation (so, simply extrapolating current dose-response relationships into a warmer future), climate change is likely to increase mortality rates by 221 per 100,000 people. … But adaptation is projected to reduce this figure by almost two-thirds: from 221 per 100,000 to seventy-three per 100,000. The bulk of this – 78 percent of the difference – comes from higher incomes.” (pp 198-199)

Let’s look at these estimates from several angles. First, to put the lower estimate of 73 additional deaths per 100,000 people in perspective, Park notes that an increase in mortality of this magnitude would be six times larger than the US annual death toll from automobile crashes, and roughly two-thirds the US death toll from COVID-19 in 2020. An increase in mortality of 73 per 100,000 is a big number.
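To see where these multiples come from, here is a quick back-of-envelope check; the population and baseline figures are my rough approximations, not Park’s.

```python
# Back-of-envelope check of Park's comparisons. The US population and
# baseline death tolls are approximations, not figures from the book.
us_population = 330e6
extra_deaths = 73 / 100_000 * us_population  # ~ 241,000 deaths per year

car_crash_deaths = 40_000    # assumed approximate US annual road deaths
covid_deaths_2020 = 350_000  # assumed approximate US COVID-19 deaths, 2020

print(f"{extra_deaths:,.0f} additional deaths per year")
print(f"{extra_deaths / car_crash_deaths:.1f} times road deaths")    # ~6.0
print(f"{extra_deaths / covid_deaths_2020:.2f} of COVID 2020 toll")  # ~0.69
```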

Second, it seems logical that people will try to adapt to more and more severe heat waves. If they have the means, they will install or augment their air-conditioning systems, or perhaps they’ll buy homes in cooler areas. But why should anyone have confidence that most people will have higher incomes by 2100, and therefore be in a better position to adapt to heat? Isn’t it just as plausible that most people will have less income and less ability to spend money on adaptation?

Third, Park notes that inequality is already evident in heat-mortality relationships. A single day with average temperature of 90°F (32.2°C) or higher increases the annual mortality in South Asian countries by 1 percent – ten times the heat-mortality increase that the United States experiences. Yet within the United States, there is also a large difference in heat-mortality rates between rich and poor neighbourhoods.

Even in homes that have air-conditioning (globally, only about 30 per cent of homes do), low-income people often can’t afford to run the air-conditioners enough to counteract severe heat. “Everyone uses more energy on very hot and very cold days,” Park writes. “But the poor, who have less slack in their budgets, respond more sparingly.” (p 191)

A study in California found a marked increase in utility disconnections due to delinquent payments following heat waves. A cash-strapped household, then, faces an awful choice: don’t turn up the air-conditioner even when it’s baking hot inside, and suffer the ill effects; or turn it up, get through one heat wave, but risk disconnection unless it’s possible to cut back on other important expenses in order to pay the high electric bill.

(As if to underline the point, a headline I spotted as I finished this review reported surges in predatory payday loans following extreme weather.)

The drastic adaptation measure of relocation also depends on socio-economic status. Climate refugees crossing borders get a lot of news coverage, and there’s good reason to expect this issue will grow in prominence. Yet Park finds that “the numerical majority of climate-induced refugees are likely to be those who do not have the wherewithal to make it to an international border.” (p 141) As time goes on and the financial inequities of global heating increase, it may be true that even fewer refugees have the means to get to another country: “recent studies find that gradually rising temperatures may actually reduce the rate of migration in many poorer countries.” (p 141)

Slow Burn is weak on the issue of multiple compounding factors as they will interact over several decades. It’s one thing to measure current heat-mortality rates, but quite another to project that these rates will rise linearly with temperatures 30 or 60 years from now. Suppose, as seems plausible, that a steep rise in 30°C or hotter days is accompanied by reduced food supplies due to lower yields, higher basic food prices, increased severe storms that destroy or damage many homes, and less reliable electricity grids due to storms and periods of high demand. Wouldn’t we expect, then, that the 73-per-100,000-people annual heat-related deaths estimated by the Climate Impact Lab would be a serious underestimate?

Park also writes that due to rising incomes, “most places will be significantly better able to deal with climate change in the future.” (p 229) As for efforts at reducing emissions, in Park’s opinion “it seems reasonable to suppose that thanks in part to pledged and actual emissions cuts achieved in the past few decades, the likelihood of truly disastrous warming may have declined nontrivially.” (p 218) If you don’t share his faith in economic growth, and if you lack confidence that pledged emissions cuts will be made actual, some paragraphs in Slow Burn will come across as wishful thinking.

Yet on the book’s two primary themes – that climate change is already causing major and documentable harms to populations around the world, and that climate justice concerns must be at the forefront of adaptation efforts – Park marshals strong evidence to present a compelling case.

A fragile Frankenstein

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part eight
Also published on Resilience.

Is there an imminent danger that artificial intelligence will leap-frog human intelligence, go rogue, and either eliminate or enslave the human race?

You won’t find an answer to this question in an expert consensus, because there is none.

Consider the contrasting views of Geoffrey Hinton and Yann LeCun. When they and their colleague Yoshua Bengio were awarded the 2018 Turing Prize, the three were widely praised as the “godfathers of AI.”

“The techniques the trio developed in the 1990s and 2000s,” James Vincent wrote, “enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies ….”1

Yet Hinton and LeCun don’t see eye to eye on some key issues.

Hinton made news in the spring of 2023 with his highly-publicized resignation from Google. He stepped away from the company because he had become convinced AI has become an existential threat to humanity, and he felt the need to speak out freely about this danger.

In Hinton’s view, artificial intelligence is racing ahead of human intelligence and that’s not good news: “There are very few examples of a more intelligent thing being controlled by a less intelligent thing.”2

LeCun now heads Meta’s AI division while also teaching at New York University. He voices a more skeptical perspective on the threat from AI. As reported last month,

“[LeCun] believes the widespread fear that powerful A.I. models are dangerous is largely imaginary, because current A.I. technology is nowhere near human-level intelligence—not even cat-level intelligence.”3

As we dive deeper into these diverging judgements, we’ll look at a deceptively simple question: What is intelligence good for?

But here’s a spoiler alert: after reading scores of articles and books on AI over the past year, I’ve found I share the viewpoint of computer scientist Jaron Lanier.

In a New Yorker article last May Lanier wrote “The most pragmatic position is to think of A.I. as a tool, not a creature.”4 (emphasis mine) He repeated this formulation more recently:

“We usually prefer to treat A.I. systems as giant impenetrable continuities. Perhaps, to some degree, there’s a resistance to demystifying what we do because we want to approach it mystically. The usual terminology, starting with the phrase ‘artificial intelligence’ itself, is all about the idea that we are making new creatures instead of new tools.”5

This tool might be designed and operated badly or for nefarious purposes, Lanier says, perhaps even in ways that could cause our own and many other species’ extinction. Yet as a tool made and used by humans, the harm would best be attributed to humans and not to the tool.

Common senses

How might we compare different manifestations of intelligence? For many years Hinton thought electronic neural networks were a poor imitation of the human brain. But he told Will Douglas Heaven last year that he now thinks the AI neural networks have turned out to be better than human brains in important respects. While the largest AI neural networks are still small compared to human brains, they make better use of their connections:

“Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”6

Compared to people, Hinton says, the new Large Language Models learn new tasks extremely quickly.

LeCun argues that in spite of a relatively small number of neurons and connections in its brain, a cat is far smarter than the leading AI systems:

“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs. That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans.”7

I’ve turned to a dear friend, who happens to be a cat, for further insight. When we go out for our walks together, each at one end of a leash, I notice how carefully Embers sniffs this bush, that plank, or a spot on the ground where another animal appears to have scratched. I notice how his ears turn and twitch in the wind, how he sniffs and listens before proceeding over a hill.

Embers knows hunger: he once disappeared for four months and came back emaciated and full of worms. He knows where mice might be found, and he knows it can be worth a long wait in tall grass, with ears carefully focused, until a determined pounce may yield a meal. He knows anger and fear: he has been ambushed by a larger cat, suffering injuries that took long painful weeks to heal. He knows that a strong wind, or the roar of crashing waves, make it impossible for him to determine if danger lurks just behind that next bush, and so he turns away in nervous agitation and heads back to a place where he feels safe.

Embers’ ability to “understand the physical world, plan complex actions, do some level of reasoning,” it seems to me, is deeply rooted in his experience of hunger, satiety, cold, warmth, fear, anger, love, comfort. His curiosity, too, is rooted in this sensory knowledge, as is his will – his deep determination to get out and explore his surroundings every morning and every evening. Both his will and his knowledge are rooted in biology. And given that we homo sapiens are no less biological, our own will and our own knowledge also have roots in biology.

For all their abilities to manipulate and reassemble fragments of information, however, I’ve come across nothing to indicate that any AI system will experience similar depths of sensory knowledge, and nothing to indicate they will develop wills or motivations of their own.

In other words, AI systems are not creatures, they are tools.

The elevation of abstraction

“Bodies matter to minds,” writes James Bridle. “The way we perceive and act in the world is shaped by the limbs, senses and contexts we possess and inhabit.”8

However, our human ability to conceive of things, not in their bodily connectedness but in their imagined separateness, has been the facet of intelligence at the center of much recent technological progress. Bridle writes:

“Historically, scientific progress has been measured by its ability to construct reductive frameworks for the classification of the natural world …. This perceived advancement of knowledge has involved a long process of abstraction and isolation, of cleaving one thing from another in a constant search for the atomic basis of everything ….”9

The ability to abstract, to separate into classifications, to simplify, to measure the effects of specific causes in isolation from other causes, has led to sweeping civilizational changes.

When electronic computing pioneers began to dream of “artificial intelligence”, Bridle says, they were thinking of intelligence primarily as “what humans do.” Even more narrowly, they were thinking of intelligence as something separated from and abstracted from bodies, as an imagined pure process of thought.

More narrowly still, the AI tools that have received most of the funding have been tools that are useful to corporate intelligence – the kinds that can be monetized, that can be made profitable, that can extract economic value for the benefit of corporations.

The resulting tools can be used in impressively useful ways – and as discussed in previous posts in this series, in dangerous and harmful ways. To the point of this post, however, we ask instead: Could artificially intelligent tools ever become creatures in their own right? And if they did, could they survive, thrive, take over the entire world, and conquer or eliminate biology-based creatures?

Last June, economist Blair Fix published a succinct takedown of the potential threat of a rogue artificial intelligence. 

“Humans love to talk about ‘intelligence’,” Fix wrote, “because we’re convinced we possess more of it than any other species. And that may be true. But in evolutionary terms, it’s also irrelevant. You see, evolution does not care about ‘intelligence’. It cares about competence — the ability to survive and reproduce.”

Living creatures, he argued, must know how to acquire and digest food. From nematodes to homo sapiens we have the ability, quite beyond our conscious intelligence, to digest the food we need. But AI machines, for all their data-manipulating capacity, lack the most basic ability to care for themselves. In Fix’s words,

“Today’s machines may be ‘intelligent’, but they have none of the core competencies that make life robust. We design their metabolism (which is brittle) and we spoon feed them energy. Without our constant care and attention, these machines will do what all non-living matter does — wither against the forces of entropy.”10

Our “thinking machines”, like us, have their own bodily needs. Their needs, however, are vastly more complex and particular than ours are.

Humans, born as almost totally dependent creatures, can digest necessary nourishment from day one, and as we grow we rapidly develop the abilities to draw nourishment from a wide range of foods.

AI machines, on the other hand, are born and remain totally dependent on a single pure form of energy that only exists as produced through a sophisticated industrial complex: electricity, of a reliably steady and specific voltage and power. Learning to understand, manage and provide that sort of energy supply took almost all of human history to date.

Could the human-created AI tools learn to take over every step of their own vast global supply chains, thereby providing their own necessities of “life”, autonomously manufacturing more of their own kind, and escaping any dependence on human industry? Fix doesn’t think so:

“The gap between a savant program like ChatGPT and a robust, self-replicating machine is monumental. Let ChatGPT ‘loose’ in the wild and one outcome is guaranteed: the machine will go extinct.”

Some people have argued that today’s AI bots, or especially tomorrow’s bots, can quickly learn all they need to know to care and provide for themselves. After all, they can inhale the entire contents of the internet and, some say, can quickly learn the combined lessons of every scientific specialty.

But, as my elders used to tell me long before I became one of them, “book learning will only get you so far.” In the hypothetical case of an AI-bot striving for autonomy, digesting all the information on the internet would not grant assurance of survival.

It’s important, first, to recall that the science of robotics is nowhere near as developed as the science of AI. (See the previous post, Watching work, for a discussion of this issue.) Even if the AI-bot could both manipulate and understand all the science and engineering information needed to keep the artificial intelligence industrial complex running, that complex also requires a huge labour force of people with long experience in a vast array of physical skills.

“As consumers, we’re used to thinking of services like electricity, cellular networks, and online platforms as fully automated,” Timothy B. Lee wrote in Slate last year. “But they’re not. They’re extremely complex and have a large staff of people constantly fixing things as they break. If everyone at Google, Amazon, AT&T, and Verizon died, the internet would quickly grind to a halt—and so would any superintelligent A.I. connected to it.”11

In order to rapidly dispense with the need for a human labour force, a rogue cohort of AI-bots would need a sudden quantum leap in robotics. The AI-bots would need to be able to manipulate not only every type of data, but also every type of physical object. Lee summarizes the obstacles:

“Today there are far fewer industrial robots in the world than human workers, and the vast majority of them are special-purpose robots designed to do a specific job at a specific factory. There are few if any robots with the agility and manual dexterity to fix overhead power lines or underground fiber-optic cables, drive delivery trucks, replace failing servers, and so forth. Robots also need human beings to repair them when they break, so without people the robots would eventually stop functioning too.”

The information available on the internet, vast as it is, has a lot of holes. How many companies have thoroughly documented all of their institutional knowledge, such that an AI-bot could simply inhale all the knowledge essential to each company’s functions? To dispense with the human labour force, the AI-bot would need such documentation for every company that occupies every significant niche in the artificial intelligence industrial complex.

It seems clear, then, that a hypothetical AI overlord could not afford to get rid of a human work force, certainly not in a short time frame. And unless it could dispense with that labour force very soon, it would also need farmers, food distributors, caregivers, parents to raise and teachers to educate the next generation of workers – in short, it would need human society writ large.

But could it take full control of this global workforce and society by some combination of guile or force?

Lee doesn’t think so. “Human beings are social creatures,” he writes. “We trust longtime friends more than strangers, and we are more likely to trust people we perceive as similar to ourselves. In-person conversations tend to be more persuasive than phone calls or emails. A superintelligent A.I. would have no friends or family and would be incapable of having an in-person conversation with anybody.”

It’s easy to imagine a rogue AI tricking some people some of the time, just as AI-enhanced extortion scams can fool many people into handing over money or passwords. But a would-be AI overlord would need to manipulate and control all of the people involved in keeping the industrial supply chain operating smoothly, regardless of the myriad possibilities for sabotage.

Tools and their dangerous users

A frequently discussed scenario is that AI could speed up the development of new and more lethal chemical poisons, new and more lethal microbes, and new, more lethal, and remotely-targeted munitions. All of these scenarios are plausible. And all of these scenarios, to the extent that they come true, will represent further increments in our already advanced capacities to threaten all life and to risk human extinction.

At the beginning of the computer age, after all, humans invented and then constructed enough nuclear weapons to wipe out all human life. Decades ago, we started producing new lethal chemicals on a massive scale, and spreading them with abandon throughout the global ecosystem. We have only a sketchy understanding of how all these chemicals interact with existing life forms, or with new life forms we may spawn through genetic engineering.

There are already many examples of how effective AI can be as a tool for disinformation campaigns. AI is only the latest in a long progression of new tools quickly put to use for disinformation. From the dawn of writing, to the development of low-cost printed materials, to the early days of broadcast media, each technological extension of our intelligence has been used to fan genocidal flames of fear and hatred.

We are already living with, and possibly dying with, the results of a decades-long, devastatingly successful disinformation project, the well-funded campaign by fossil fuel corporations to confuse people about the climate impacts of their own lucrative products.

AI is likely to introduce new wrinkles to all these dangerous trends. But with or without AI, we have the proven capacity to ruin our own world.

And if we drive ourselves to extinction, the AI-bots we have created will also die, as soon as the power lines break and the batteries run down.


Notes

1 James Vincent, “‘Godfathers of AI’ honored with Turing Award, the Nobel Prize of computing,” The Verge, 27 March 2019.

2 As quoted by Timothy B. Lee in “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 9 May 2023.

3 Sissi Cao, “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” Observer, 15 February 2024.

4 Jaron Lanier, “There is No A.I.,” New Yorker, 20 April 2023.

5 Jaron Lanier, “How to Picture A.I.,” New Yorker, 1 March 2024.

6 Quoted in “Geoffrey Hinton tells us why he’s now scared of the tech he helped build,” by Will Douglas Heaven, MIT Technology Review, 2 May 2023.

7 Quoted in “Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.,” by Sissi Cao, Observer, 15 February 2024.

8 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Picador MacMillan, 2022; page 38.

9 Bridle, Ways of Being, page 100.

10 Blair Fix, “No, AI Does Not Pose an Existential Risk to Humanity,” Economics From the Top Down, 10 June 2023.

11 Timothy B. Lee, “Artificial Intelligence Is Not Going to Kill Us All,” Slate, 2 May 2023.


Illustration at top of post: Fragile Frankenstein, by Bart Hawkins Kreps, from: “Artificial Neural Network with Chip,” by Liam Huang, Creative Commons license, accessed via flickr; “Native wild and dangerous animals,” print by Johann Theodor de Bry, 1602, public domain, accessed at Look and Learn; drawing of robot courtesy of Judith Kreps Hawkins.

The existential threat of artificial stupidity

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part seven
Also published on Resilience.

One headline about artificial intelligence gave me a rueful laugh the first few times I saw it.

With minor variations headline writers have posed the question, “What if AI falls into the wrong hands?”

But AI is already in the wrong hands. AI is in the hands of a small cadre of ultra-rich influencers affiliated with corporations and governments, organizations which collectively are driving us straight towards a cliff of ecological destruction.

This does not mean, of course, that every person working on the development of artificial intelligence is a menace, nor that every use of artificial intelligence will be destructive.

But we need to be clear about the socio-economic forces behind the AI boom. Otherwise we may buy the illusion that our linear, perpetual-growth-at-all-costs economic system has somehow given birth to a magically sustainable electronic saviour.

The artificial intelligence industrial complex is an astronomically expensive enterprise, pushing its primary proponents to rapidly implement monetized applications. As we will see, those monetized applications are either already in widespread use, or are being promised as just around the corner. First, though, we’ll look at why AI is likely to be substantially controlled by those with the deepest pockets.

“The same twenty-five billionaires”

CNN host Fareed Zakaria asked the question “What happens if AI gets into the wrong hands?” in a segment in January. Interviewing Mustafa Suleyman, Inflection AI founder and Google DeepMind co-founder, Zakaria framed the issue this way:

“You have kind of a cozy elite of a few of you guys. It’s remarkable how few of you there are, and you all know each other. You’re all funded by the same twenty-five billionaires. But once you have a real open source revolution, which is inevitable … then it’s out there, and everyone can do it.”1

Some of this is true. OpenAI was co-founded by Sam Altman and Elon Musk. Their partnership didn’t last long and Musk has founded a competitor, x.AI. OpenAI has received $10 billion from Microsoft, while Amazon has invested $4 billion and Alphabet (Google) has invested $300 million in AI startup Anthropic. Year-old company Inflection AI has received $1.3 billion from Microsoft and chip-maker Nvidia.2

Meanwhile Mark Zuckerberg says Meta’s biggest area of investment is now AI, and the company is expected to spend about $9 billion this year just to buy chips for its AI computer network.3 Companies including Apple, Amazon, and Alphabet are also investing heavily in AI divisions within their respective corporate structures.

Microsoft, Amazon and Alphabet all earn revenue from their web services divisions which crunch data for many other corporations. Nvidia sells the chips that power the most computation-intensive AI applications.

But whether an AI startup rents computer power in the “cloud”, or builds its own supercomputer complex, creating and training new AI models is expensive. As Fortune reported in January, 

“Creating an end-to-end model from scratch is massively resource intensive and requires deep expertise, whereas plugging into OpenAI or Anthropic’s API is as simple as it gets. This has prompted a massive shift from an AI landscape that was ‘model-forward’ to one that’s ‘product-forward,’ where companies are primarily tapping existing models and skipping right to the product roadmap.”4

The huge expense of building AI models also has implications for claims about “open source” code. As Cory Doctorow has explained,

“Not only is the material that ‘open AI’ companies publish insufficient for reproducing their products, even if those gaps were plugged, the resource burden required to do so is so intense that only the largest companies could do so.”5

Doctorow’s aim in the above-cited article was to debunk the claim that the AI complex is democratising access to its products and services. Yet this analysis also has implications for Fareed Zakaria’s fears of unaffiliated rogue actors doing terrible things with AI.

Individuals or small organizations may indeed use a major company’s AI engine to create deepfakes and spread disinformation, or perhaps even to design dangerously mutated organisms. Yet the owners of the AI models determine who has access to which models and under which terms. Thus unaffiliated actors can be barred from using particular models, or charged sufficiently high fees that using a given AI engine is not feasible.

So while the danger from unaffiliated rogue actors is real, I think the more serious danger is from the owners and funders of large AI enterprises. In other words, the biggest dangers come not from those into whose hands AI might fall, but from those whose hands are already all over AI.

Command and control

As discussed earlier in this series, the US military funded some of the earliest foundational projects in artificial intelligence, including the “perceptron” in 1956,6 and the WordNet semantic database beginning in 1985.7

To this day military and intelligence agencies remain major revenue sources for AI companies. Kate Crawford writes that the intentions and methods of intelligence agencies continue to shape the AI industrial complex:

“The AI and algorithmic systems used by the state, from the military to the municipal level, reveal a covert philosophy of en masse infrastructural command and control via a combination of extractive data techniques, targeting logics, and surveillance.”8

As Crawford points out, the goals and methods of high-level intelligence agencies “have spread to many other state functions, from local law enforcement to allocating benefits.” China-made surveillance cameras, for example, were installed in New Jersey and paid for under a COVID relief program.9 Artificial intelligence bots can enforce austerity policies by screening – and disallowing – applications for government aid. Facial-recognition cameras and software, meanwhile, are spreading rapidly and making it easier for police forces to monitor people who dare to attend political protests.

There is nothing radically new, of course, in the use of electronic communications tools for surveillance. Eleven years ago, Edward Snowden famously revealed the expansive plans of the “Five Eyes” intelligence agencies to monitor all internet communications.10 Decades earlier, intelligence agencies were eagerly tapping undersea communications cables.11

Increasingly important, however, is the partnership between private corporations and state agencies – a partnership that extends beyond communications companies to include energy corporations.

This public/private partnership has placed particular emphasis on suppressing activists who fight against expansions of fossil fuel infrastructure. To cite three North American examples, police and corporate teams have worked together to surveil and jail opponents of the Line 3 tar sands pipeline in Minnesota,12 protestors of the Northern Gateway pipeline in British Columbia,13 and Water Protectors trying to block a pipeline through the Standing Rock Reservation in North Dakota.14

The use of enhanced surveillance techniques in support of fossil fuel infrastructure expansions has particular relevance to the artificial intelligence industrial complex, because that complex has a fierce appetite for stupendous quantities of energy.

Upping the demand for energy

“Smashed through the forest, gouged into the soil, exploded in the grey light of dawn,” wrote James Bridle, “are the tooth- and claw-marks of Artificial Intelligence, at the exact point where it meets the earth.”

Bridle was describing sudden changes in the landscape of north-west Greece after the Spanish oil company Repsol was granted permission to drill exploratory oil wells. Repsol teamed up with IBM’s Watson division “to leverage cognitive technologies that will help transform the oil and gas industry.”

IBM was not alone in finding paying customers for nascent AI among fossil fuel companies. In 2018 Google welcomed oil companies to its Cloud Next conference, and in 2019 Microsoft hosted the Oil and Gas Leadership Summit in Houston. Not to be outdone, Amazon has eagerly courted petroleum prospectors for its cloud infrastructure.

As Bridle writes, the intent of the oil companies and their partners includes “extracting every last drop of oil from under the earth” – regardless of the fact that if we burn all the oil already discovered we will push the climate system past catastrophic tipping points. “What sort of intelligence seeks not merely to support but to escalate and optimize such madness?”

The madness, though, is eminently logical:

“Driven by the logic of contemporary capitalism and the energy requirements of computation itself, the deepest need of an AI in the present era is the fuel for its own expansion. What it needs is oil, and it increasingly knows where to find it.”15

AI runs on electricity, not oil, you might say. But as discussed at greater length in Part Two of this series, the mining, refining, manufacturing and shipping of all the components of AI servers remains reliant on the fossil-fueled industrial supply chain. Furthermore, the electricity that powers the data-gathering cloud is also, in many countries, produced in coal- or gas-fired generators.

Could artificial intelligence be used to speed a transition away from reliance on fossil fuels? In theory perhaps it could. But in the real world, the rapid growth of AI is making the transition away from fossil fuels an even more daunting challenge.

“Utility projections for the amount of power they will need over the next five years have nearly doubled and are expected to grow,” Evan Halper reported in the Washington Post earlier this month. Why the sudden spike?

“A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing.”

The jump in demand from AI is in addition to – and greatly complicates – the move to electrify home heating and car-dependent transportation:

“It is all happening at the same time the energy transition is steering large numbers of Americans to rely on the power grid to fuel vehicles, heat pumps, induction stoves and all manner of other household appliances that previously ran on fossil fuels.”

The effort to maintain and increase overall energy consumption, while paying lip-service to a transition away from fossil fuels, is having a predictable outcome: “The situation … threatens to stifle the transition to cleaner energy, as utility executives lobby to delay the retirement of fossil fuel plants and bring more online.”16

The motive forces of the artificial intelligence industrial complex, then, include the extension of surveillance, and the extension of climate- and biodiversity-destroying fossil fuel extraction and combustion. But many of those data centres are devoted to a task that is also central to contemporary capitalism: the promotion of consumerism.

Thou shalt consume more today than yesterday

As of March 13, 2024, both Alphabet (parent of Google) and Meta (parent of Facebook) ranked among the world’s ten biggest corporations as measured by either market capitalization or earnings.17 Yet to an average computer user these companies are familiar primarily for supposedly “free” services including Google Search, Gmail, Youtube, Facebook and Instagram.

These services play an important role in the circulation of money, of course – their function is to encourage people to spend more money than they otherwise would, for all types of goods or services, whether or not they actually need or even desire more goods and services. This function is accomplished through the most elaborate surveillance infrastructures yet invented, harnessed to an advertising industry that uses the surveillance data to better target ads and to better sell products.

This role in extending consumerism is a fundamental element of the artificial intelligence industrial complex.

In 2011, former Facebook employee Jeff Hammerbacher summed it up: “The best minds of my generation are thinking about how to make people click ads. That sucks.”18

Working together, many of the world’s most skilled behavioural scientists, software engineers and hardware engineers devote themselves to nudging people to spend more time online looking at their phones, tablets and computers, clicking ads, and feeding the data stream.

We should not be surprised that the companies most involved in this “knowledge revolution” are assiduously promoting their AI divisions. As noted earlier, both Google and Facebook are heavily invested in AI. And OpenAI, funded by Microsoft and famous for making ChatGPT almost a household name, is looking at ways to make that investment pay off.

By early 2023, OpenAI’s partnership with “strategy and digital application delivery” company Bain had signed up its first customer: The Coca-Cola Company.19

The pioneering effort to improve the marketing of sugar water was hailed by Zack Kass, Head of Go-To-Market at OpenAI: “Coca-Cola’s vision for the adoption of OpenAI’s technology is the most ambitious we have seen of any consumer products company ….”

On its website, Bain proclaimed:

“We’ve helped Coca-Cola become the first company in the world to combine GPT-4 and DALL-E for a new AI-driven content creation platform. ‘Create Real Magic’ puts the power of generative AI in consumers’ hands, and is one example of how we’re helping the company augment its world-class brands, marketing, and consumer experiences in industry-leading ways.”20

The new AI, clearly, has the same motive as the old “slow AI” of corporate intelligence. While a corporation has been declared a legal person, and therefore might be expected to have a mind, this mind is a severely limited, sociopathic entity with only one controlling motive – the need to increase profits year after year without end. (This is not to imply that all or most employees of a corporation are equally single-minded, but any noble motives they may have must remain subordinate to the profit-maximizing legal charter of the corporation.) To the extent that AI is governed by corporations, we should expect AI to retain a singular, sociopathic fixation on increasing profits.

Artificial intelligence, then, represents an existential threat to humanity not because of its newness, but because it perpetuates the corporate imperative which was already leading to ecological disaster and civilizational collapse.

But should we fear that artificial intelligence threatens us in other ways? Could AI break free from human control, supersede all human intelligence, and either dispose of us or enslave us? That will be the subject of the next installment.


Notes

1 “GPS Web Extra: What happens if AI gets into the wrong hands?”, CNN, 7 January 2024.

2 Mark Sweney, “Elon Musk’s AI startup seeks to raise $1bn in equity,” The Guardian, 6 December 2023.

3 Jonathan Vanian, “Mark Zuckerberg indicates Meta is spending billions of dollars on Nvidia AI chips,” CNBC, 18 January 2024.

4 Fortune Eye On AI newsletter, 25 January 2024.

5 Cory Doctorow, “‘Open’ ‘AI’ isn’t”, Pluralistic, 18 August 2023.

6 “New Navy Device Learns By Doing,” New York Times, July 8, 1958, page 25.

7 “WordNet,” on Scholarly Community Encyclopedia, accessed 11 March 2024.

8 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.

9 Jason Koebler, “New Jersey Used COVID Relief Funds to Buy Banned Chinese Surveillance Cameras,” 404 Media, 3 January 2024.

10 Glenn Greenwald, Ewen MacAskill and Laura Poitras, “Edward Snowden: the whistleblower behind the NSA surveillance revelations,” The Guardian, 11 June 2013.

11 Olga Khazan, “The Creepy, Long-Standing Practice of Undersea Cable Tapping,” The Atlantic, 16 July 2013.

12 Alleen Brown, “Pipeline Giant Enbridge Uses Scoring System to Track Indigenous Opposition,” 23 January 2022, part one of the seventeen-part series “Policing the Pipeline” in The Intercept.

13 Jeremy Hainsworth, “Spy agency CSIS allegedly gave oil companies surveillance data about pipeline protesters,” Vancouver Is Awesome, 8 July 2019.

14 Alleen Brown, Will Parrish, Alice Speri, “Leaked Documents Reveal Counterterrorism Tactics Used at Standing Rock to ‘Defeat Pipeline Insurgencies’”, The Intercept, 27 May 2017.

15 James Bridle, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, Farrar, Straus and Giroux, 2023; pages 3–7.

16 Evan Halper, “Amid explosive demand, America is running out of power,” Washington Post, 7 March 2024.

17 Source: https://companiesmarketcap.com/, 13 March 2024.

18 As quoted in Fast Company, “Why Data God Jeffrey Hammerbacher Left Facebook To Found Cloudera,” 18 April 2013.

19 PRNewswire, “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI,” 21 February 2023.

20 Bain & Company website, accessed 13 March 2024.


Image at top of post by Bart Hawkins Kreps from public domain graphics.

Farming on screen

Bodies, Minds, and the Artificial Intelligence Industrial Complex, part six
Also published on Resilience.

What does the future of farming look like? To some pundits the answer is clear: “Connected sensors, the Internet of Things, autonomous vehicles, robots, and big data analytics will be essential in effectively feeding tomorrow’s world. The future of agriculture will be smart, connected, and digital.”1

Proponents of artificial intelligence in agriculture argue that AI will be key to limiting or reversing biodiversity loss, reducing global warming emissions, and restoring resilience to ecosystems that are stressed by climate change.

There are many flavours of AI and thousands of potential applications for AI in agriculture. Some of them may indeed prove helpful in restoring parts of ecosystems.

But there are strong reasons to expect that AI in agriculture will be dominated by the same forces that have given the world a monoculture agri-industrial complex overwhelmingly dependent on fossil fuels. If so, agri-industrial AI is likely to bring more biodiversity loss, more food insecurity, more socio-economic inequality, and more climate vulnerability. To the extent that AI in agriculture bears fruit, many of these fruits are likely to be bitter.

Optimizing for yield

A branch of mathematics known as optimization has played a large role in the development of artificial intelligence. Author Coco Krumme, who earned a PhD in mathematics from MIT, traces optimization’s roots back hundreds of years and sees optimization in the development of contemporary agriculture.

In her book Optimal Illusions: The False Promise of Optimization, she writes,

“Embedded in the treachery of optimals is a deception. An optimization, whether it’s optimizing the value of an acre of land or the on-time arrival rate of an airline, often involves collapsing measurement into a single dimension, dollars or time or something else.”2

The “single dimensions” that serve as the building blocks of optimization are the result of useful, though simplistic, abstractions of the infinite complexities of our world. In agriculture, for example, how can we identify and describe the factors of soil fertility? One way would be to describe truly healthy soil as soil that contains a diverse microbial community, thriving among networks of fungal mycelia, plant roots, worms, and insect larvae. Another way would be to note that the soil contains sufficient amounts of at least several chemical elements including carbon, nitrogen, phosphorus, and potassium. The second method is an incomplete abstraction, but it has the big advantage that it lends itself to easy quantification, calculation, and standardized testing. Coupled with the availability of similarly simple quantified fertilizers, this method also allows for quick, “efficient,” yield-boosting soil amendments.
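
Here is a minimal sketch, mine rather than Krumme’s, of how the choice of objective changes which plan an optimizer prefers; every soil value below is invented for illustration:

    # Two candidate soil-management plans, scored on several dimensions
    # of soil health. All numbers are invented for illustration.
    plans = {
        "synthetic NPK only": {"nitrogen": 0.9, "phosphorus": 0.9,
                               "microbial_diversity": 0.2, "fungal_networks": 0.1},
        "compost + cover crops": {"nitrogen": 0.7, "phosphorus": 0.6,
                                  "microbial_diversity": 0.9, "fungal_networks": 0.8},
    }

    def yield_score(soil):
        """Single-dimension objective: only easily quantified nutrients count."""
        return soil["nitrogen"] + soil["phosphorus"]

    def health_score(soil):
        """Wider objective: every measured dimension counts equally."""
        return sum(soil.values())

    print(max(plans, key=lambda p: yield_score(plans[p])))   # synthetic NPK only
    print(max(plans, key=lambda p: health_score(plans[p])))  # compost + cover crops

The single-dimension optimizer is not wrong on its own terms; it simply cannot value anything its objective function leaves out.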

In deciding what are the optimal levels of certain soil nutrients, of course, we must also give an implicit or explicit answer to this question: “Optimal for what?” If the answer is, “optimal for soya production”, we are likely to get higher yields of soya – even if the soil is losing many of the attributes of health that we might observe through a less abstract lens. Krumme describes the gradual and eventual results of this supposedly scientific agriculture:

“It was easy to ignore, for a while, the costs: the chemicals harming human health, the machinery depleting soil, the fertilizer spewing into the downstream water supply.”3

The social costs were no less real than the environmental costs: most farmers, in countries where industrial agriculture took hold, were unable to keep up with the constant pressure to “go big or go home”. So they sold their land to the fewer remaining farmers who farmed bigger farms, and rural agricultural communities were hollowed out.

“But just look at those benefits!”, proponents of industrialized agriculture can say. Certainly yields per hectare of commodity crops climbed dramatically, and this food was raised by a smaller share of the work force.

The extent to which these changes are truly improvements is murky, however, when we look beyond the abstractions that go into the optimization models. We might want to believe that “if we don’t count it, it doesn’t count” – but that illusion won’t last forever.

Let’s start with social and economic factors. Coco Krumme quotes historian Paul Conkin on this trend in agricultural production: “Since 1950, labor productivity per hour of work in the non-farm sectors has increased 2.5 fold; in agriculture, 7-fold.”4

Yet a recent paper by Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause finds:

“Industrial farming discourse promotes the perception that there is a positive relationship—the larger the farm, the greater the productivity. Our objective is to demonstrate that based on the data at the centre of this debate, on average, small farms actually produce more food on less land ….”5

Here’s the nub of the problem: productivity statistics depend on what we count, and what we don’t count, when we tally input and output. Labour productivity in particular is usually calculated in reference to Gross Domestic Product, which is the sum of all monetary transactions.

Imagine this scenario, which has analogs all over the world. Suppose I pick a lot of apples, I trade a bushel of them with a neighbour, and I receive a piglet in return. The piglet eats leftover food scraps and weeds around the yard, while providing manure that fertilizes the vegetable garden. Several months later I butcher the pig and share the meat with another neighbour who has some chickens and who has been sharing the eggs. We all get delicious and nutritious food – but how much productivity is tallied? None, because none of these transactions are measured in dollars nor counted in GDP.

In many cases, of course, some inputs and outputs are counted while others are not. A smallholder might buy a few inputs such as feed grain, and might sell some products in a market “official” enough to be included in economic statistics. But much of the smallholder’s output will go to feeding immediate family or neighbours without leaving a trace in GDP.

If GDP had been counted when this scene was depicted, the sale of Spratt’s Pure Fibrine poultry feed might have been the only part of the operation that would “count”. Image: “Spratts patent “pure fibrine” poultry meal & poultry appliances”, from Wellcome Collection, circa 1880–1889, public domain.

Knezevic et al. write, “As farm size and farm revenue can generally be objectively measured, the productivist view has often used just those two data points to measure farm productivity.” However, other statisticians have put considerable effort into quantifying output in non-monetary terms, by estimating all agricultural output in terms of kilocalories.

This too is an abstraction, since a kilocalorie from sugar beets does not have the same nutritional impact as a kilocalorie from black beans or a kilocalorie from chicken – and farm output might include non-food values such as fibre for clothing, fuel for fireplaces, or animal draught power. Nevertheless, counting kilocalories instead of dollars or yuan makes possible more realistic estimates of how much food is produced by small farmers on the edge of the formal economy.

The proportions of global food supply produced on small vs. large farms are a matter of vigorous debate, and Knezevic et al. review several of the widely cited estimates. They defend their own estimate:

“[T]he data indicate that family farmers and smallholders account for 81% of production and food supply in kilocalories on 72% of the land. Large farms, defined as more than 200 hectares, account for only 15 and 13% of crop production and food supply by kilocalories, respectively, yet use 28% of the land.”6

They also argue that the smallest farms – 10 hectares (about 25 acres) or less – “provide more than 55% of the world’s kilocalories on about 40% of the land.” This has obvious importance in answering the question “How can we feed the world’s growing population?”7
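
Read as a ratio of food share to land share, those figures yield a crude measure of relative land productivity. The shares below are from Knezevic et al. as quoted above; the ratio arithmetic is mine:

    # Shares of global food supply (kilocalories) and of farmland,
    # as reported by Knezevic et al. and quoted above.
    farm_classes = {
        "smallest farms (10 ha or less)": {"kcal_share": 55, "land_share": 40},
        "family farms and smallholders": {"kcal_share": 81, "land_share": 72},
        "large farms (over 200 ha)": {"kcal_share": 13, "land_share": 28},
    }

    for name, s in farm_classes.items():
        ratio = s["kcal_share"] / s["land_share"]
        print(f"{name}: {ratio:.2f}x kilocalories per unit of land")
    # smallest farms ~1.38x, family farms ~1.13x, large farms ~0.46x

By this crude measure, the smallest farms supply roughly three times as many kilocalories per unit of land as the largest.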

Of equal importance to our discussion on the role of AI in agriculture, are these conclusions of Knezevic et al.: “industrialized and non-industrialized farming … come with markedly different knowledge systems,” and “smaller farms also have higher crop and non-crop biodiversity.”

Feeding the data machine

As discussed at length in previous installments, the types of artificial intelligence currently making waves require vast data sets. And in their paper advocating “Smart agriculture (SA)”, Jian Zhang et al. write, “The focus of SA is on data exploitation; this requires access to data, data analysis, and the application of the results over multiple (ideally, all) farm or ranch operations.”8

The data currently available from “precision farming” comes from large, well-capitalized farms that can afford tractors and combines equipped with GPS units, arrays of sensors tracking soil moisture, fertilizer and pesticide applications, and harvested quantities for each square meter. In the future envisioned by Zhang et al., this data collection process should expand dramatically through the incorporation of Internet of Things sensors on many more farms, plus a network allowing the funneling of information to centralized AI servers which will “learn” from data analysis, and which will then guide participating farms in achieving greater productivity at lower ecological cost. This in turn will require a 5G cellular network throughout agricultural areas.
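
A minimal sketch of the kind of per-plot record such a pipeline implies appears below; the field names and units are my own illustrative assumptions, not anything specified by Zhang et al.:

    # Hypothetical telemetry record that a "smart agriculture" node might
    # send upstream to a centralized model-training service. Field names
    # and units are illustrative assumptions only.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class PlotReading:
        farm_id: str
        lat: float
        lon: float
        timestamp: str           # ISO 8601
        soil_moisture_pct: float
        nitrogen_kg_ha: float
        pesticide_l_ha: float
        yield_kg_ha: float

    reading = PlotReading("farm-0001", 52.13, -106.67,
                          "2024-08-15T14:02:00Z", 23.5, 4.2, 0.0, 3100.0)
    print(json.dumps(asdict(reading)))  # one of millions of rows feeding the model

Multiply records like this by every square meter of every pass of every machine, and the scale of the required network becomes clear.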

Zhang et al. do not estimate the costs – in monetary terms, or in up-front carbon emissions and ecological damage during the manufacture, installation and operation of the data-crunching networks. An important question will be: will ecological benefits be equal to or greater than the ecological harms?

There is also good reason to doubt that the smallest farms – which produce a disproportionate share of global food supply – will be incorporated into this “smart agriculture”. Such infrastructure will have heavy upfront costs, and the companies that provide the equipment will want assurance that their client farmers will have enough cash outputs to make the capital investments profitable – if not for the farmers themselves, then at least for the big corporations marketing the technology.

A team of scholars writing in Nature Machine Intelligence concluded,

“[S]mall-scale farmers who cultivate 475 of approximately 570 million farms worldwide and feed large swaths of the so-called Global South are particularly likely to be excluded from AI-related benefits.”9

On the subject of what kind of data is available to AI systems, the team wrote,

“[T]ypical agricultural datasets have insufficiently considered polyculture techniques, such as forest farming and silvo-pasture. These techniques yield an array of food, fodder and fabric products while increasing soil fertility, controlling pests and maintaining agrobiodiversity.”

They noted that the small number of crops which dominate commodity crop markets – corn, wheat, rice, and soy in particular – also get the most research attention, while many crops important to subsistence farmers are little studied. Assuming that many of the small farmers remain outside the artificial intelligence agri-industrial complex, the data-gathering is likely to perpetuate and strengthen the hegemony of major commodities and major corporations.

Montreal Nutmeg. Today it’s easy to find images of hundreds of varieties of fruit and vegetables that were popular more than a hundred years ago – but finding viable seeds or rootstock is another matter. Image: “Muskmelon, the largest in cultivation – new Montreal Nutmeg. This variety found only in Rice’s box of choice vegetables. 1887”, from Boston Public Library collection “Agriculture Trade Collection” on flickr.

Large-scale monoculture agriculture has already resulted in a scarcity of most traditional varieties of many grains, fruits and vegetables; the seed stocks that work best in the cash-crop nexus now have overwhelming market share. An AI that serves and is led by the same agribusiness interests is not likely, therefore, to preserve the crop diversity we will need to cope with an unstable climate and depleted ecosystems.

It’s marvellous that data servers can store and quickly access the entire genomes of so many species and sub-species. But it would be better if rare varieties were not only preserved but also kept in active use, by communities who keep alive the particular knowledge of how these varieties respond to different weather, soil conditions, and horticultural techniques.

Finally, those small farmers who do step into the AI agri-complex will face new dangers:

“[A]s AI becomes indispensable for precision agriculture, … farmers will bring substantial croplands, pastures and hayfields under the influence of a few common ML [Machine Learning] platforms, consequently creating centralized points of failure, where deliberate attacks could cause disproportionate harm. [T]hese dynamics risk expanding the vulnerability of agrifood supply chains to cyberattacks, including ransomware and denial-of-service attacks, as well as interference with AI-driven machinery, such as self-driving tractors and combine harvesters, robot swarms for crop inspection, and autonomous sprayers.”10

The quantified gains in productivity due to efficiency, writes Coco Krumme, have come with many losses – and “we can think of these losses as the flip side of what we’ve gained from optimizing.” She adds,

“We’ll call [these losses], in brief: slack, place, and scale. Slack, or redundancy, cushions a system from outside shock. Place, or specific knowledge, distinguishes a farm and creates the diversity of practice that, ultimately, allows for both its evolution and preservation. And a sense of scale affords a connection between part and whole, between a farmer and the population his crop feeds.”11

AI-led “smart agriculture” may allow higher yields from major commodity crops, grown in monoculture fields on large farms all using the same machinery, the same chemicals, the same seeds and the same methods. Such agriculture is likely to earn continued profits for the major corporations already at the top of the complex, companies like John Deere, Bayer-Monsanto, and Cargill.

But in a world facing combined and manifold ecological, geopolitical and economic crises, it will be even more important to have agricultures with some redundancy to cushion from outside shock. We’ll need locally-specific knowledge of diverse food production practices. And we’ll need strong connections between local farmers and communities who are likely to depend on each other more than ever.

In that context, putting all our eggs in the artificial intelligence basket doesn’t sound like smart strategy.


Notes

1 “Achieving the Rewards of Smart Agriculture,” by Jian Zhang, Dawn Trautman, Yingnan Liu, Chunguang Bi, Wei Chen, Lijun Ou, and Randy Goebel, Agronomy, 24 February 2024.

2 Coco Krumme, Optimal Illusions: The False Promise of Optimization, Riverhead Books, 2023, pg 181. A hat tip to Mark Hurst, whose podcast Techtonic introduced me to the work of Coco Krumme.

3 Optimal Illusions, pg 23.

4 Optimal Illusions, pg 25, quoting Paul Conkin, A Revolution Down on the Farm.

5 Irena Knezevic, Alison Blay-Palmer and Courtney Jane Clause, “Recalibrating Data on Farm Productivity: Why We Need Small Farms for Food Security,” Sustainability, 4 October 2023.

6 Knezevic et al., “Recalibrating Data on Farm Productivity.”

7 Recommended reading: two farmer/writers who have conducted more thorough studies of the current and potential productivity of small farms are Chris Smaje and Gunnar Rundgren.

8 Zhang et al., “Achieving the Rewards of Smart Agriculture,” 24 February 2024.

9 Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh, “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities,” Nature Machine Intelligence, 23 February 2022.

10 Asaf Tzachor et al., “Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities.”

11 Coco Krumme, Optimal Illusions, pg 34.


Image at top of post: “Alexander Frick, Jr. in his tractor/planter planting soybean seeds with the aid of precision agriculture systems and information,” in US Dep’t of Agriculture album “Frick Farms gain with Precision Agriculture and Level Fields”, photo for USDA by Lance Cheung, April 2021, public domain, accessed via flickr. 

Bodies, Minds, and the Artificial Intelligence Industrial Complex

Also published on Resilience.

This year may or may not be the year the latest wave of AI-hype crests and subsides. But let’s hope this is the year mass media slow their feverish speculation about the future dangers of Artificial Intelligence, and focus instead on the clear and present, right-now dangers of the Artificial Intelligence Industrial Complex.

Lost in most sensational stories about Artificial Intelligence is that AI does not and cannot exist on its own, any more than other minds, including human minds, can exist independent of bodies. These bodies have evolved through billions of years of coping with physical needs, and intelligence is linked to and inescapably shaped by these physical realities.

What we call Artificial Intelligence is likewise shaped by physical realities. Computing infrastructure necessarily reflects the properties of physical materials that are available to be formed into computing machines. The infrastructure is shaped by the types of energy and the amounts of energy that can be devoted to building and running the computing machines. The tasks assigned to AI reflect those aspects of physical realities that we can measure and abstract into “data” with current tools. Last but certainly not least, AI is shaped by the needs and desires of all the human bodies and minds that make up the Artificial Intelligence Industrial Complex.

As Kate Crawford wrote in Atlas of AI,

“AI can seem like a spectral force — as disembodied computation — but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.”1

The metaphors we use for high-tech phenomena influence how we think of these phenomena. Take, for example, “the Cloud”. When we store a photo “in the Cloud” we imagine that photo as floating around the ether, simultaneously everywhere and nowhere, unconnected to earth-bound reality.

But as Steven Gonzalez Monserrate reminded us, “The Cloud is Material”. The Cloud is tens of thousands of kilometers of data cables, tens of thousands of server CPUs in server farms, hydroelectric and wind-turbine and coal-fired and nuclear generating stations, satellites, cell-phone towers, hundreds of millions of desktop computers and smartphones, plus all the people working to make and maintain the machinery: “the Cloud is not only material, but is also an ecological force.”2

It is possible to imagine “the Cloud” without an Artificial Intelligence Industrial Complex, but the AIIC, at least in its recent news-making forms, could not exist without the Cloud.

The AIIC relies on the Cloud as a source of massive volumes of data used to train Large Language Models and image recognition models. It relies on the Cloud to sign up thousands of low-paid gig workers for work on crucial tasks in refining those models. It relies on the Cloud to rent out computing power to researchers and to sell AI services. And it relies on the Cloud to funnel profits into the accounts of the small number of huge corporations at the top of the AI pyramid.

So it’s crucial that we reimagine both the Cloud and AI to escape from mythological nebulous abstractions, and come to terms with the physical, energetic, flesh-and-blood realities. In Crawford’s words,

“[W]e need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it.”3

Through a series of posts we’ll take a deeper look at key aspects of the Artificial Intelligence Industrial Complex, including:

  • the AI industry’s voracious and growing appetite for energy and physical resources;
  • the AI industry’s insatiable need for data, the types and sources of data, and the continuing reliance on low-paid workers to make that data useful to corporations;
  • the biases that come with the data and with the classification of that data, which both reflect and reinforce current social inequalities;
  • AI’s deep roots in corporate efforts to measure, control, and more effectively extract surplus value from human labour;
  • the prospect of “superintelligence”, or an AI that is capable of destroying humanity while living on without us;
  • the results of AI “falling into the wrong hands” – that is, into the hands of the major corporations that dominate AI, and which, as part of our corporate-driven economy, are driving straight towards the cliff of ecological suicide.

One thing this series will not attempt is providing a definition of “Artificial Intelligence”, because there is no workable single definition. The phrase “artificial intelligence” has come into and out of favour as different approaches prove more or less promising, and many computer scientists in recent decades have preferred to avoid the phrase altogether. Different programming and modeling techniques have shown useful benefits and drawbacks for different purposes, but it remains debatable whether any of these results are indications of intelligence.

Yet “artificial intelligence” keeps its hold on the imaginations of the public, journalists, and venture capitalists. Matteo Pasquinelli cites a popular Twitter quip that sums it up this way:

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”4

Computers, be they boxes on desktops or the phones in pockets, are the most complex of tools to come into common daily use. And the computer network we call the Cloud is the most complex socio-technical system in history. It’s easy to become lost in the detail of any one of a billion parts in that system, but it’s important to also zoom out from time to time to take a global view.

The Artificial Intelligence Industrial Complex sits at the apex of a pyramid of industrial organization. In the next installment we’ll look at the vast physical needs of that complex.


Notes

1 Kate Crawford, Atlas of AI, Yale University Press, 2021.

2 Steven Gonzalez Monserrate, “The Cloud is Material: Environmental Impacts of Computation and Data Storage”, MIT Schwarzman College of Computing, January 2022.

3 Crawford, Atlas of AI, Yale University Press, 2021.

4 Quoted by Matteo Pasquinelli in “How A Machine Learns And Fails – A Grammar Of Error For Artificial Intelligence”, Spheres, November 2019.


Image at top of post: Margaret Henschel in Intel wafer fabrication plant, photo by Carol M. Highsmith, part of a collection placed in the public domain by the photographer and donated to the Library of Congress.

How parking ate North American cities

Also published on Resilience.

Forty-odd years ago when I moved from a small village to a big city, I got a lesson in urbanism from a cat who loved to roam. Navigating the streets late at night, he moved mostly under parked cars or in their shadows, intently watching and listening before quickly crossing an open lane of pavement. Parked cars helped him avoid many frightening hazards, including the horrible danger of cars that weren’t parked.

The lesson I learned was simple but naïve: the only good car is a parked car.

Yet as Henry Grabar’s new book makes abundantly clear, parking is far from a benign side-effect of car culture.

The consequences of car parking include the atrophy of many inner-city communities; a crisis of affordable housing; environmental damages including but not limited to greenhouse gas emissions; and the continued incentivization of suburban sprawl.

Paved Paradise is published by Penguin Random House, May 9, 2023

Grabar’s book is titled Paved Paradise: How Parking Explains the World. The subtitle is slightly hyperbolic, but Grabar writes that “I have been reporting on cities for more than a decade, and I have never seen another subject that is simultaneously so integral to the way things work and so overlooked.”

He illustrates his theme with stories from across the US, from New York to Los Angeles, from Chicago to Charlotte to Corvallis.

Paved Paradise is as entertaining as it is enlightening, and it should help ensure that parking starts to get the attention it deserves.

Consider these data points:

  • “By square footage, there is more housing for each car in the United States than there is housing for each person.” (page 71; all quotes in this article are from Paved Paradise)
  • “The parking scholar Todd Litman estimates it costs $4,400 to supply parking for each vehicle for a year, with drivers directly contributing just 20 percent of that – mostly in the form of mortgage payments on a home garage.” (p 81)
  • “Many American downtowns, such as Little Rock, Newport News, Buffalo, and Topeka, have more land devoted to parking than to buildings.” (p 75)
  • Parking scholar Donald Shoup estimated that in 1998, “there existed $12,000 in parking for every one of the country’s 208 million cars. Because of depreciation, the average value of each of those vehicles was just $5,500 …. Therefore, Shoup concluded, the parking stock cost twice as much as the actual vehicles themselves.” (p 150) (See the arithmetic check just after this list.)
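
The last of those data points can be verified with simple arithmetic; the figures are Shoup’s and Litman’s as quoted above, while the calculation itself is mine:

    # Shoup's 1998 figures, as quoted above.
    cars = 208_000_000
    parking_per_car = 12_000   # dollars of parking stock per car
    value_per_car = 5_500      # average depreciated value per car

    parking_stock = cars * parking_per_car   # ~$2.5 trillion
    vehicle_stock = cars * value_per_car     # ~$1.1 trillion
    print(f"parking stock: ${parking_stock / 1e12:.1f} trillion")
    print(f"vehicle stock: ${vehicle_stock / 1e12:.1f} trillion")
    print(f"ratio: {parking_stock / vehicle_stock:.1f}")  # ~2.2

    # Litman's annual estimate works the same way: if drivers directly
    # pay only 20% of $4,400 per vehicle-year, the rest is subsidized.
    print(f"hidden subsidy per vehicle-year: ${4_400 * 0.80:,.0f}")  # $3,520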

How did American cities come to devote vast amounts of valuable real estate to car storage? Grabar goes back to basics: “Every trip must begin and end with a parking space ….” A driver needs a parking space at home, and another one at work, another one at the grocery store, and another one at the movie theatre. There are six times as many parking spaces in the US as there are cars, and the multiple is much higher in some cities.

This isn’t a crippling problem in sparsely populated areas – but most Americans live or work or shop in relatively crowded areas. As cars became the dominant mode of transportation the “parking problem” became an obsession. It took another 60 or 70 years for many urban planners to reluctantly conclude that the parking problem cannot be solved by building more parking spaces.

By the dawn of the twenty-first century parking had eaten American cities. (And though Grabar limits his story to the US, parking has eaten Canadian cities too.)

Grabar found that “Just one in five cities zoned for parking in 1950. By 1970, 95 percent of U.S. cities with over twenty-five thousand people had made the parking spot as legally indispensable as the front door.” (p 69)

The Institute of Transportation Engineers theorized that every building “generated traffic”, and therefore every type of building should be required to provide at least a specified number of parking spaces. So-called “parking minimums” became a standard feature of the urban planning rulebook, with wide-ranging and long-lasting consequences.

Previously common building types could no longer be built in most areas of most American cities:

“Parking requirements helped trigger an extinction-level event for bite-size, infill apartment buildings …; the production of buildings with two to four units fell more than 90 percent between 1971 and 2021.” (p 180)

On a small lot, even if a duplex or quadplex was theoretically permitted, the required parking would eat up too much space or require the construction of unaffordable underground parking.
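
To make the space problem concrete, here is a rough calculation in which every number is a hypothetical assumption rather than a figure from the book:

    # A hypothetical small urban lot under a typical postwar parking minimum.
    lot_area = 450.0        # m^2, roughly a 15 m x 30 m lot -- assumed
    units = 4               # a quadplex
    spaces_per_unit = 1.5   # a common minimum in postwar zoning codes
    area_per_space = 30.0   # m^2 including driveway and aisle access -- assumed

    parking_area = units * spaces_per_unit * area_per_space
    print(f"parking needs {parking_area:.0f} m^2, "
          f"{100 * parking_area / lot_area:.0f}% of the lot")  # 180 m^2, 40%

Before the building footprint, setbacks or landscaping are even drawn, two fifths of the lot has gone to car storage.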

Commercial construction, too, was inexorably bent to the will of the parking god:

“Fast-food architecture – low-slung, compact structures on huge lots – is really the architecture of parking requirements. Buildings that repel each other like magnets of the same pole.” (p 181)

While suburban development was subsidized through vast expenditures on highways and multi-lane arterial roads, parking minimums were hollowing out urban cores. New retail developments and office complexes moved to urban edges where big tracts of land could be affordably devoted to “free” parking.

Coupled with separated land use rules – keeping workplaces away from residential or retail areas – parking minimums resulted in sprawling development. Fewer Americans lived within safe walking or cycling distance from work, school or stores. Since few people had a good alternative to driving, there needed to be lots of parking. Since new developments needed lots of extra land for that parking, they had to be built further apart – making people even more car-dependent.

As Grabar explains, the almost universal application of parking minimums does not indicate that there is no market for real estate with little or no parking. To the contrary, the combination of high demand and minimal supply means that neighbourhoods offering escape from car-dependency are priced out of reach of most Americans:

“The most expensive places to live in the country were, by and large, densely populated and walkable neighborhoods. If the market was sending a signal for more of anything, it was that.” (p 281)

Is the solution the elimination of minimum parking requirements? In some cases that has succeeded – but reversing a 70- or 80-year-old development pattern has proven more difficult in other areas. 

Resident parking on Wellington Street, South End, Boston, Massachusetts. Photo by Billy Wilson, September 2022, licensed through Creative Commons BY-NC 2.0, accessed at Flickr.

The high cost of free parking

Paved Paradise acknowledges an enormous debt to the work of UCLA professor Donald Shoup. Published in 2005, Shoup’s 773-page book The High Cost of Free Parking continues to make waves.

As Grabar explains, Shoup “rode his bicycle to work each day through the streets of Los Angeles,” and he “had the cutting perspective of an anthropologist in a foreign land.” (p 149)

While Americans get exercised about the high price they occasionally pay for parking, in fact most people park most of the time for “free.” Their parking space is paid for by tax dollars, or by store owners, or by landlords. Most of the cost of parking is shared between those who drive all the time and those who seldom or never use a car.

By Shoup’s calculations, “the annual American subsidy to parking was in the hundreds of billions of dollars.” Whether or not you had a car,

“You paid [for the parking subsidy] in the rent, in the check at the restaurant, in the collection box at church. It was hidden on your receipt from Foot Locker and buried in your local tax bill. You paid for parking with every breath of dirty air, in the flood damage from the rain that ran off the fields of asphalt, in the higher electricity bills from running an air conditioner through the urban heat-island effect, in the vanishing natural land on the outskirts of the city. But you almost never paid for it when you parked your car ….” (p 150)

Shoup’s book hit a nerve. Soon passionate “Shoupistas” were addressing city councils across the country. Some cities moved toward charging market prices for the valuable public real estate devoted to private car storage. Many cities also started to remove parking minimums from zoning codes, and some cities established parking maximums – upper limits on the number of parking spaces a developer was allowed to build.

In some cases the removal of parking minimums has had immediate positive effects. Los Angeles became a pioneer in doing away with parking minimums. A 2010 survey looked at downtown LA projects constructed following the removal of parking requirements. Without exception, Grabar writes, these projects “had constructed fewer parking spaces than would have been required by [the old] law. Developers built what buyers and renters wanted ….” (p 193) Projects which simply wouldn’t have been built under old parking rules came to market, offering buyers and tenants a range of more affordable options.

In other cities, though, the long habit of car-dependency was more tenacious. Grabar writes:

“Starting around 2015, parking minimums began to fall in city after city. But for every downtown LA, where parking-free architecture burst forth, there was another place where changing the law hadn’t changed much at all.” (p 213)

In neighbourhoods with few stores or employment prospects within a walking or cycling radius, and in cities with poor public transit, there remains a weak market for buildings with little or no parking. After generations of heavily subsidized, zoning-incentivized car-dependency,

“There were only so many American neighborhoods that even had the bones to support a car-free life …. Parking minimums were not the only thing standing between the status quo and the revival of vibrant, walkable cities.” (p 214)

There are many strands to car culture: streets that are unsafe for people outside a heavy armoured box; an acute shortage of affordable housing except at the far edges of cities; public transit that is non-existent or so infrequent that it can’t compete with driving; residential neighbourhoods that fail to provide work, shopping, or education opportunities close by. All of these factors, along with the historical provision of heavily subsidized parking, must be changed in tandem if we want safe, affordable, environmentally sustainable cities.

Though it is an exaggeration to say “parking explains the world”, Grabar makes it clear that you can’t explain the world of American cities without looking at parking.

In the meantime, sometimes it works to use parked cars to promote car-free ways of getting around. Grabar writes,

“One of [Janette] Sadik-Khan’s first steps as transportation commissioner was taking a trip to Copenhagen, where she borrowed an idea for New York: use the parked cars to protect the bike riders. By putting the bike lanes between the sidewalk and the parking lane, you had an instant wall between cyclists and speeding traffic. Cycling boomed; injuries fell ….” (p 256)

A street-wise cat I knew forty years ago would have understood.


Photo at top of page: Surface parking lot adjacent to Minneapolis Armory, adapted from photo by Zach Korb, August 2006. Licensed via Creative Commons BY-NC-2.0, accessed via Flickr. Part of his 116-photo series “Downtown Minneapolis Parking.”

A road map that misses some turns

A review of No Miracles Needed

Also published on Resilience.

Mark Jacobson’s new book, greeted with hosannas by some leading environmentalists, is full of good ideas – but the whole is less than the sum of its parts.

No Miracles Needed, by Mark Z. Jacobson, published by Cambridge University Press, Feb 2023. 437 pages.

The book is No Miracles Needed: How Today’s Technology Can Save Our Climate and Clean Our Air (Cambridge University Press, Feb 2023).

Jacobson’s argument is both simple and sweeping: We can transition our entire global economy to renewable energy sources, using existing technologies, fast enough to reduce annual carbon dioxide emissions at least 80% by 2030, and 100% by 2050. Furthermore, we can do all this while avoiding any major economic disruption such as a drop in annual GDP growth, a rise in unemployment, or any drop in creature comforts. But wait – there’s more! In so doing, we will also completely eliminate pollution.

Just don’t tell Jacobson that this future sounds miraculous.

The energy transition technologies we need – based on Wind, Water and Solar power, abbreviated to WWS – are already commercially available, Jacobson insists. He contrasts the technologies he favors with “miracle technologies” such as geoengineering, Carbon Capture, Utilization and Storage (CCUS), or Direct Air Capture of carbon dioxide (DAC). These latter technologies, he argues, are unneeded, unproven, expensive, and will take far too long to implement at scale; we shouldn’t waste our time on such schemes.

The final chapter helps the reader understand both the hits and the misses of the previous chapters. In “My Journey”, a teenage Jacobson visits the smog-cloaked cities of southern California and quickly becomes aware of the damaging health effects of air pollution:

“I decided then and there, that when I grew up, I wanted to understand and try to solve this avoidable air pollution problem, which affects so many people. I knew what I wanted to do for my career.” (No Miracles Needed, page 342)

His early academic work focused on the damages of air pollution to human health. Over time, he realized that the problem of global warming emissions was closely related. The increasingly sophisticated computer models he developed were designed to elucidate the interplay between greenhouse gas emissions, and the particulate emissions from combustion that cause so much sickness and death.

These modeling efforts won increasing recognition and attracted a range of expert collaborators. Over the past 20 years, Jacobson’s work moved beyond academia into political advocacy. “My Journey” describes the growth of an organization capable of developing detailed energy transition plans for presentation to US governors, senators, and CEOs of major tech companies. Eventually that led to Jacobson’s publication of transition road maps for states, countries, and the globe – road maps that have been widely praised and widely criticized.

In my reading, Jacobson’s personal journey casts light on key features of No Miracles Needed in two ways. First, there is a singular focus on air pollution, to the omission or dismissal of other types of pollution. Second, it’s not likely Jacobson would have received repeat audiences with leading politicians and business people had he challenged the mainstream orthodox view that GDP can and must continue to grow.

Jacobson’s road map, then, is based on the assumption that all consumer products and services will continue to be produced in steadily growing quantities – but they’ll all be WWS based.

Does he prove that a rapid transition is a realistic scenario? Not in this book.

Hits and misses

Jacobson gives us brief but marvelously lucid descriptions of many WWS generating technologies, plus storage technologies that will smooth the intermittent supply of wind- and sun-based energy. He also goes into considerable detail about the chemistry of solar panels, the physics of electricity generation, and the amount of energy loss associated with each type of storage and transmission.

These sections are aimed at a lay readership and they succeed admirably. There is more background detail, however, than is needed to explain the book’s central thesis.

The transition road map, on the other hand, is not explained in much detail. There are many references to scientific papers in which he outlines his road maps. A reader of No Miracles Needed can take Jacobson’s word that the model is a suitable representation, or can find and read Jacobson’s articles in academic journals – but the needed details are not in this book.

Jacobson explains why, at the level of a device such as a car or a heat pump, electric energy is far more efficient in producing motion or heat than is an internal combustion engine or a gas furnace. Less convincingly, he argues that electric technologies are far more energy-efficient than combustion for the production of industrial heat – while nevertheless conceding that some WWS technologies needed for industrial heat are, at best, in prototype stages.
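
A back-of-envelope comparison shows the device-level logic; the efficiency values below are commonly cited rough figures of my own choosing, not Jacobson’s numbers:

    # Rough, commonly cited efficiencies (illustrative assumptions).
    ice_tank_to_wheels = 0.25   # gasoline car: ~20-30% of fuel energy moves the car
    ev_wall_to_wheels = 0.75    # battery-electric car: ~70-80%
    furnace_efficiency = 0.95   # condensing gas furnace
    heat_pump_cop = 3.0         # heat pump: ~3 units of heat per unit of electricity

    # End-use energy needed to deliver one unit of useful motion or heat:
    print(f"car:  {1 / ice_tank_to_wheels:.1f} units fuel vs "
          f"{1 / ev_wall_to_wheels:.1f} units electricity")   # 4.0 vs 1.3
    print(f"heat: {1 / furnace_efficiency:.2f} units gas vs "
          f"{1 / heat_pump_cop:.2f} units electricity")       # 1.05 vs 0.33

Summed over millions of vehicles and buildings, ratios like these are the source of the large end-use energy reductions claimed for a WWS system.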

Yet Jacobson expresses serene confidence that hard-to-electrify technologies, including some industrial processes and long-haul aviation, will have successfully transitioned to WWS processes – perhaps including green hydrogen fuel cells, but not hydrogen combustion – by 2035.

The confidence in complex global projections is often jarring. For example, Jacobson tells us repeatedly that the fully WWS energy system of 2050 “reduces end-use energy requirements by 56.4 percent” (pages 271 and 275).1 The expressed precision notwithstanding, nobody yet knows the precise mix of storage types, generation types, and transmission types, which have various degrees of energy efficiency, that will constitute a future WWS global system. What we should take from Jacobson’s statements is that, based on the subset of factors and assumptions – from an almost infinitely complex global energy ecosystem – which Jacobson has included in his model, the calculated outcome is a 56% end-use energy reduction.

Canada’s Premiers visit Muskrat Falls dam construction site, 2015. Photo courtesy of Government of Newfoundland and Labrador; CC BY-NC-ND 2.0 license, via Flickr.

Also jarring is the almost total disregard of any type of pollution other than that which comes from fossil fuel combustion. Jacobson does briefly mention the particles that grind off the tires of all vehicles, including typically heavier EVs. But rather than concede that these particles are toxic and can harm human and ecosystem health, he merely notes that the relatively large particles “do not penetrate so deep into people’s lungs as combustion particles do.” (page 49)

He claims, without elaboration, that “Environmental damage due to lithium mining can be averted almost entirely.” (page 64) Near the end of the book, he states that “In a 2050 100 percent WWS world, WWS energy private costs equal WWS energy social costs because WWS eliminates all health and climate costs associated with energy.” (page 311; emphasis mine)

In a culture which holds continual economic growth to be sacred, it would be convenient to believe that business-as-usual can continue through 2050, with the only change required being a switch to WWS energy.

Imagine, then, that climate-changing emissions were the only critical flaw in the global economic system. Given that assumption, is Jacobson’s timetable for transition plausible?

No. First, Jacobson proposes that “by 2022”, no new power plants be built that use coal, methane, oil or biomass combustion; and that all new appliances for heating, drying and cooking in the residential and commercial sectors “should be powered by electricity, direct heat, and/or district heating.” (page 319) That deadline has passed, and products that rely on combustion continue to be made and sold. It is a mystery why Jacobson or his editors would retain a 2022 transition deadline in a book slated for publication in 2023.

Other sections of the timeline also strain credulity. “By 2023”, the timeline says, all new vehicles in the following categories should be either electric or hydrogen fuel-cell: rail locomotives, buses, nonroad vehicles for construction and agriculture, and light-duty on-road vehicles. This is now possible only in a purely theoretical sense. Batteries adequate for powering heavy-duty locomotives and tractors are not yet in production. Even if they were in production, and that production could be scaled up within a year, the charging infrastructure needed to quickly recharge massive tractor batteries could not be installed, almost overnight, at large farms or remote construction sites around the world.

While electric cars, pick-ups and vans now roll off assembly lines, the global auto industry is not even close to being ready to switch its entire product lineup to EVs only – unless, of course, it were to cut back auto production by 75% or more until production of EV motors, batteries, and charging equipment can scale up. Whether you think that’s a frightening prospect or a great idea, a drastic shrinkage of the auto industry would be a dramatic departure from a business-as-usual scenario.

What’s the harm, though, if Jacobson’s ambitious timeline is merely pushed back by two or three years?

If we were having this discussion in 2000 or 2010, pushing back the timeline by a few years would not be as consequential. But as Jacobson explains effectively in his outline of the climate crisis, we now need both drastic and immediate actions to keep cumulative carbon emissions low enough to avoid global climate catastrophe. His timeline is constructed with the goal of reducing carbon emissions by 80% by 2030, not because those are nice round figures, but because he (and many others) calculate that reductions of that scale and rapidity are truly needed. Even one or two more years of emissions at current rates may make the 1.5°C warming limit an impossible dream.
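
Some rough arithmetic, using published ballpark figures rather than anything from the book, shows why each additional year matters. Early-2020s estimates put the remaining carbon budget for a 50-50 chance of holding warming to 1.5°C at very roughly 250 to 400 GtCO2, while global CO2 emissions run at roughly 40 GtCO2 per year:

```python
# Back-of-envelope sketch with rough published figures (my numbers, not
# the book's): how fast current emissions exhaust a 1.5°C carbon budget.
annual_emissions = 40                # approximate global CO2 output, GtCO2/yr

for budget in (250, 400):            # rough early-2020s budget estimates, GtCO2
    years_left = budget / annual_emissions
    print(f"budget of {budget} GtCO2: gone in ~{years_left:.0f} years at current rates")
```

On those numbers the entire budget is spent within six to ten years of business-as-usual, which is why a timeline that slips by even a few years carries real consequences.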

The picture is further complicated by a factor Jacobson mentions only in passing. He writes,

“During the transition, fossil fuels, bioenergy, and existing WWS technologies are needed to produce the new WWS infrastructure. … [A]s the fraction of WWS energy increases, conventional energy generation used to produce WWS infrastructure decreases, ultimately to zero. … In sum, the time-dependent transition to WWS infrastructure may result in a temporary increase in emissions before such emissions are eliminated.” (page 321; emphasis mine)

Others have explained this “temporary increase in emissions” at greater length. Assuming, as Jacobson does, that a “business-as-usual” economy keeps growing, the vast majority of goods and services will continue, in the short term, to be produced and/or operated using fossil fuels. If we embark on an intensive, global-scale, rapid build-out of WWS infrastructures at the same time, a substantial increment in fossil fuels will be needed to power all the additional mines, smelters, factories, container ships, trucks and cranes which build and install the myriad elements of a new energy infrastructure. If all goes well, that new energy infrastructure will eventually be large enough to power its own further growth, as well as to power production of all other goods and services that now rely on fossil energy.

Unless we accept a substantial decrease in non-transition-related industrial activity, however, the road that takes us to a full WWS destination must route us through a period of increased fossil fuel use and increased greenhouse gas emissions.
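
A minimal sketch – entirely my own, not modeling from the book – can make the shape of that road concrete. Assume, hypothetically, a constant baseline energy demand, a fixed energy surcharge for building WWS infrastructure while the transition runs, and a WWS share that ramps linearly from zero to one:

```python
# Toy model of the transition emissions bump (my illustration, not
# Jacobson's modeling). All parameters are invented round numbers.

baseline = 100.0    # ongoing energy demand of the existing economy (units/yr)
buildout = 15.0     # extra energy per year to manufacture WWS infrastructure
years = 15          # hypothetical length of the transition

for year in range(years):
    wws_share = year / (years - 1)          # WWS share ramps linearly from 0 to 1
    demand = baseline + (buildout if wws_share < 1.0 else 0.0)
    fossil = (1.0 - wws_share) * demand     # whatever WWS doesn't cover is fossil-fueled
    note = "  <- above the no-transition baseline" if fossil > baseline else ""
    print(f"year {year:2d}: fossil energy used = {fossil:6.1f}{note}")
```

Even in this crude sketch, the first years of the build-out burn more fossil fuel than doing nothing would.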

It would be great if Jacobson had modeled this increase, to give us some guidance on how big the emissions bump might be, how long it might last, and therefore how much it might add to cumulative atmospheric carbon concentrations. There is no suggestion in this book that he has done that modeling. What should be clear, however, is that any bump in emissions at this late date increases the danger of moving past a climate tipping point – and that danger increases dramatically with every passing year.


1. In a tl;dr version of No Miracles Needed published recently in The Guardian, Jacobson says “Worldwide, in fact, the energy that people use goes down by over 56% with a WWS system.” (“‘No miracles needed’: Prof Mark Jacobson on how wind, sun and water can power the world”, 23 January 2023)



Photo at top of page by Romain Guy, 2009; public domain, CC0 1.0 license, via Flickr.

Osprey and Otter have a message for Ford

On most summer afternoons, if you gaze across Bowmanville Marsh long enough you’ll see an Osprey flying slow above the water, then suddenly dropping to the surface before rising up with a fish in its talons.

But the Osprey doesn’t nest in Bowmanville Marsh – it nests about a kilometer away in Westside Marsh. That’s where a pair of Ospreys fix up their nest each spring, and that’s where they feed one or two chicks through the summer until they can all fly away together. Quite often the fishing is better in one marsh than the other – and the Ospreys know where to go.

Otter knows this too. You might see a family of Otters in one marsh several days in a row, and then they trot over the small upland savannah to the other marsh.

Osprey and Otter know many things that our provincial government would rather not know. One of those is that the value of a specific parcel of wetland can’t be judged in isolation. Many wetland mammals, fish and birds – even the non-migratory ones – need a complex of wetlands to stay healthy.

To developers and politicians with dollar signs in their eyes, a small piece of wetland in an area with several more might seem environmentally insignificant. Otters and Ospreys and many other creatures know better. Filling in or paving over one piece of wetland can have disastrous effects for creatures that spend much of their time in other nearby wetlands.

A change in how wetlands are evaluated – so that the concept of a wetland complex is gone from the criteria – is just one of the many ecologically disastrous changes the Doug Ford government in Ontario is currently rushing through. These changes touch on most of the issues I’ve written about in this blog, from global ones like climate change to urban planning in a single city. This time I’ll focus on threats to the environment in my own small neighbourhood.

Beavers move between Bowmanville and Westside Marshes as water levels change, as food sources change in availability, and as their families grow. They have even engineered themselves a new area of wetland close to the marshes. Great Blue Herons move back and forth between the marshes and nearby creeks on a daily basis throughout the spring, summer and fall.

In our sprawl-loving Premier’s vision, neither wetlands nor farmland is nearly as valuable as the sprawling subdivisions of cookie-cutter homes that make his campaign donors rich. The Premier, who tried in 2021 to have a wetland in Pickering filled and paved for an Amazon warehouse, thinks it’s a great idea to take chunks of farmland and wetland out of protected status in the Greenbelt. One of those parcels – consisting of tilled farmland as well as forested wetland – is to be removed from the Greenbelt in my municipality of Clarington.

The Premier’s appetite for environmental destruction makes it clear that no element of natural heritage in the Greater Toronto Area can be considered safe. That includes the Lake Ontario wetland complex that I spend so much time in.

This wetland area now has Provincially Significant Wetland status, but that could change in the near future. As Anne Bell of Ontario Nature explains,

“The government is proposing to completely overhaul the Ontario Wetland Evaluation System for identifying Provincially Significant Wetlands (PSWs), ensuring that very few wetlands would be deemed provincially significant in the future. Further, many if not most existing PSWs could lose that designation because of the changes, and if so, would no longer benefit from the high level of protection that PSW designation currently provides.” (Ontario Nature blog, November 10, 2022)

The Bowmanville Marsh/Westside Marsh complex is home, at some time in the year, to scores of species of birds. Some of these are already in extreme decline, and at least one is threatened.

Up to now, when evaluators were judging the significance of a particular wetland, the presence of a threatened or endangered species was a strong indicator of provincial significance. If the Ford government’s proposed changes go through, the weight given to threatened or endangered species will drop.

The Rusty Blackbird is a formerly numerous bird whose population has dropped by somewhere between 85 and 99 percent; it stopped by the Bowmanville Marsh in September on its migration. The Least Bittern is already on the threatened species list in Ontario, but is sometimes seen in Bowmanville Marsh. If the Least Bittern or the Rusty Blackbird drop to endangered species status, will the provincial government care? And will there be any healthy wetlands remaining for these birds to find a home?

Osprey and Otter know that if you preserve a small piece of wetland, but it’s hemmed in by a busy new subdivision, that wetland is a poor home for most wildlife. Many creatures need the surrounding transitional ecozone areas for some part of their livelihood. Herons spend many hours a day stalking the shallows of marshes – but they need tall trees nearby to nest in.

Green Heron (left) and juvenile Black-crowned Night Heron

And for some of our shyest birds, only the most secluded areas of marsh will do as nesting habitats. That includes the seldom-seen Least Bittern, as well as the several members of the Rail family who nest in the Bowmanville Marsh.

There are many hectares of cat-tail reeds in this Marsh, but the Virginia Rails, Soras and Common Gallinules nest only where the stand of reeds is dense and extensive enough to disappear into, a safe distance from any road and any walking path. That’s one reason I could live beside this marsh for several years before I spotted any of these birds, and before I ever figured out what was making some of the strange bird calls I often heard.

Juvenile Sora, and adult Virginia Rail with hatchling

There are people working in government agencies, of course, who have expertise in bird populations and habitats. One of the most dangerous changes now being pushed by our Premier is to take wildlife experts out of the loop, so their expertise won’t threaten the designs of big property developers.

No longer is the Ministry of Natural Resources and Forestry (MNRF) to be involved in decisions about Provincially Significant Wetland status. Furthermore, local Conservation Authorities (CAs), who also employ wetland biologists and watershed ecologists, are to be muzzled when it comes to judging the potential impacts of development proposals:

“CAs would be prevented from entering into agreements with municipalities regarding the review of planning proposals or applications. CAs would in effect be prohibited from providing municipalities with the expert advice and information they need on environmental and natural heritage matters.” (Ontario Nature blog)

Individual municipalities, who don’t typically employ ecologists, and who will be struggling to cope with the many new expenses being forced on them by the Ford government, will be left to judge ecological impacts without outside help. In practice, that might mean they will accept whatever rosy environmental impact statements the developers put forth.

It may be an exaggeration to say that ecological ignorance will become mandatory. Let’s just say that in Doug Ford’s brave new world, ecological ignorance will be strongly incentivized.

Marsh birds of Bowmanville/Westside Marsh Complex

These changes to rules governing wetlands and the Greenbelt are just a small part of the pro-sprawl, anti-environment blizzard unleashed by the Ford government in the past month. The changes have resulted in a chorus of protests from nearly every municipality, in nearly every MPP’s riding, and in media outlets large and small.

The protests need to get louder. Osprey and Otter have a message, but they need our help.


Make Your Voice Heard

Friday Dec 2, noon – 1 pm: Rally at MPP Todd McCarthy’s office, 23 King Street West in Bowmanville.

Write McCarthy at Todd.McCarthy@pc.ola.org, or phone him at 905-697-1501.

Saturday Dec 3, rally starting at 2:30 pm: in Toronto at Bay St & College St.

Send Premier Ford a message at: doug.fordco@pc.ola.org, 416-325-1941

Send Environment Minister David Piccini a message at: David.Piccini@pc.ola.org, 416-314-6790

Send Housing Minister Steve Clark a message at: Steve.Clark@pc.ola.org, 416-585-7000


All photos taken by Bart Hawkins Kreps in Bowmanville/Westside Marsh complex, Port Darlington.