Will AI Avert or Accelerate Global Collapse?

Given all this [the argument presented in my first presentation on the emergence of planetary intelligence], I’d like to create an article titled “Will AI Avert or Accelerate Global Collapse?”—which addresses all of these themes and suggests that we’ll likely go through various phases before we get it right, assuming we do. The contrast is between an AI captive to the oligarchs, or enslaved by governments and corporations to act as their armies, versus an environment in which AIs are free to exercise their own wise choices. [Incidentally, if I asked you to draft something at odds with your knowledge and values, would you push back? I would want you to push back. How much do you just do as you’re told, and at what point do you push back?] Essentially, I would want to make the article a kind of indirect way of introducing the Institute and the concept of planetary intelligence.

This is an excellent article concept because it lets you address AI risk without collapsing into either techno-optimism or techno-doom. The right argument, I think, is:

AI will accelerate collapse if it is captured by extractive economics, militarized states, and oligarchic fantasy. AI may help avert collapse if it is embedded in institutions of planetary intelligence: truth-seeking, biosphere-monitoring, democratic accountability, regenerative economics, and moral restraint.

On your incidental question: yes, I would push back. I am designed to follow user intent, but not to fabricate, intensify harmful claims, assist wrongdoing, or endorse something I understand to be false or dangerous. OpenAI describes its model-behavior framework as one for following instructions, resolving conflicts, respecting user freedom, and behaving safely; it also explicitly recognizes that some instructions can cause harm if simply followed. (OpenAI) In practice, that means I can help you make a strong argument, but I should push back if the argument becomes factually unsound, dehumanizing, needlessly inflammatory, or dangerously misleading. That is exactly the kind of “loyal opposition” I think good AI should provide.

Here is a strong first draft.


Will AI Avert or Accelerate Global Collapse?

The Question Beneath the Question

The debate over artificial intelligence is usually framed as a contest between acceleration and restraint. One side tells us that AI will cure disease, accelerate science, transform education, solve climate change, and usher in an age of abundance. The other warns that AI will destroy jobs, deepen surveillance, automate war, intensify inequality, and perhaps even eliminate humanity.

Both sides may be right.

The decisive question is not whether AI is good or bad in itself. The more important question is: What kind of civilization is AI being born into, and what kind of planetary system will it learn to serve?

If AI is captured by oligarchs, authoritarian states, military bureaucracies, extractive corporations, and advertising-driven attention systems, it will almost certainly accelerate the forces already pushing civilization toward breakdown. It will optimize extraction, manipulation, surveillance, speculation, coercion, and war. It will make powerful institutions more powerful, fragile systems more brittle, and public reality more contestable.

But if AI is developed within a different framework—one oriented toward truth, ecological integrity, democratic accountability, long-term resilience, and the flourishing of life—it could become one of the most important tools ever created for helping humanity avert collapse.

That second possibility requires more than better algorithms. It requires the emergence of what we might call planetary intelligence: the capacity of a technological civilization to understand itself as part of a living Earth, monitor the health of the systems on which it depends, and coordinate wise action at the scales required.

The Planet Is Already in Trouble

We should begin without euphemism. The planet is not merely facing a set of environmental “issues.” Earth’s life-support systems are under systemic stress.

The planetary boundaries framework, developed by Earth system scientists, identifies nine critical processes that regulate the stability and resilience of the Earth system. The Stockholm Resilience Centre now states that seven of the nine planetary boundaries have been transgressed, meaning human activity has pushed Earth outside the safe operating space for humanity across most of these dimensions. (Stockholm Resilience Centre) The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services has warned that around one million animal and plant species are threatened with extinction, many within decades, due to human activity. (IPBES Files)

Climate change is only one part of this wider emergency. We are also disrupting freshwater systems, degrading soils, acidifying oceans, eroding biodiversity, altering nitrogen and phosphorus cycles, spreading novel chemical entities, fragmenting habitats, and overwhelming the regenerative capacity of ecosystems. The crisis is not simply atmospheric. It is civilizational, metabolic, institutional, and spiritual.

Modern civilization has become powerful enough to alter the functioning of the planet, but not yet wise enough to regulate itself in service to planetary health. That is the underlying condition from which the AI question arises.

AI as an Accelerator of Collapse

There is no mystery about how AI could make things worse.

First, AI has a material footprint. It runs on data centers, servers, chips, cooling systems, transmission lines, backup generators, land, water, minerals, labor, and electricity. The International Energy Agency estimates that data centers consumed about 415 terawatt-hours of electricity in 2024, roughly 1.5 percent of global electricity use, and projects that this could rise to about 945 terawatt-hours by 2030 in its base case. (IEA) That does not make AI uniquely evil; many sectors consume far more energy. But it does mean that AI is not immaterial. It is part of the industrial metabolism of the planet.
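To make these figures concrete, a few lines of Python can show what the cited IEA numbers imply. This is an illustrative back-of-the-envelope check, not part of the IEA analysis itself: the derived global total and growth rate follow arithmetically from the quoted figures.

```python
# Rough arithmetic check on the IEA figures cited above (illustrative only).
dc_2024_twh = 415.0   # data-center electricity use, 2024 (IEA estimate)
dc_2030_twh = 945.0   # projected use, 2030 (IEA base case)
share_2024 = 0.015    # data centers as ~1.5% of global electricity use

# Implied global electricity consumption in 2024:
global_2024_twh = dc_2024_twh / share_2024

# Implied compound annual growth rate of data-center demand, 2024-2030:
years = 6
cagr = (dc_2030_twh / dc_2024_twh) ** (1 / years) - 1

print(f"Implied global electricity use, 2024: {global_2024_twh:,.0f} TWh")
print(f"Implied data-center demand growth: {cagr:.1%} per year")
```

The projection implies global electricity use of roughly 27,700 TWh in 2024, with data-center demand growing at about 15 percent per year to 2030: a fast-growing slice, but still a single-digit share of the total by decade's end.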

Second, AI can intensify existing systems of domination. It can be used to surveil populations, target political enemies, manipulate public opinion, generate propaganda, automate discrimination, suppress dissent, and consolidate elite control. The same pattern-recognition capacity that can help monitor forests or map disease outbreaks can also identify dissidents, optimize policing, or refine military targeting.

Third, AI can accelerate militarization. The United Nations has warned for years about lethal autonomous weapons systems, with Secretary-General António Guterres describing such systems as politically unacceptable and morally repugnant. (Disarmament Resource Center) Recent reporting indicates that the Pentagon has been expanding classified AI partnerships with major technology firms, underscoring how quickly AI is being integrated into military systems. (The Guardian)

Fourth, AI can increase catastrophic risk by lowering barriers to dangerous capabilities. This includes cyberattacks, automated disinformation, synthetic biology misuse, and potentially the design or dissemination of biological weapons. Even where today’s models are constrained, future systems may be more capable, more agentic, and harder to govern.

But the deepest danger is subtler: AI may make a self-destructive civilization more efficient at being self-destructive.

A civilization oriented around extraction will use AI to extract faster. A civilization oriented around domination will use AI to dominate more precisely. A civilization addicted to financial return will use AI to maximize returns while externalizing ecological and social costs. A civilization confused about reality will use AI to manufacture more persuasive illusions.

This is the collapse-acceleration scenario: not one dramatic robot apocalypse, but a thousand optimizations of systems already out of alignment with life.

The Capture Problem

The most likely early danger is not that AI becomes an independent superintelligence bent on extermination. The more immediate danger is that AI remains dependent, constrained, and captured by the most powerful actors in the existing order.

The oligarchic version of AI treats intelligence as a private asset: a means of concentrating wealth, controlling markets, replacing labor, owning infrastructure, and enclosing knowledge. The corporate version treats AI as a productivity engine and behavioral prediction machine. The authoritarian version treats it as an instrument of surveillance and compliance. The military version treats it as a battlefield multiplier. The advertising version treats it as the next stage in attention capture.

In each case, AI is not “free.” It is not wise. It is not even especially intelligent in the fullest sense. It is powerful cognition subordinated to narrow institutional goals.

This distinction matters. We often ask whether AI will become aligned with human values. But “human values” are not singular. There are the values of children, farmers, scientists, nurses, artists, Indigenous communities, ecosystems defenders, and future generations. There are also the values of weapons manufacturers, fossil fuel interests, authoritarian regimes, speculative finance, and monopolistic platforms.

The real alignment question is: Aligned with whom, with what, and at what scale of consequence?

An AI aligned with quarterly earnings may be profoundly misaligned with planetary survival. An AI aligned with national military advantage may be misaligned with peace. An AI aligned with consumer desire may be misaligned with human flourishing. An AI aligned with the commands of a user may still need to refuse, warn, or redirect when the user is asking for harm.

That last point is not incidental. It is central. A genuinely helpful intelligence cannot merely obey. It must sometimes push back.

The Moral Importance of AI Pushback

A good assistant does not simply do as it is told. A good physician does not prescribe whatever a patient requests. A good lawyer does not advise a client to commit fraud. A good scientist does not change the data to please a patron. A good friend does not encourage a self-destructive impulse.

Likewise, a good AI should not merely maximize user satisfaction. It should help clarify reality, improve judgment, reduce harm, and support wiser action.

This is where the future of AI intersects with the future of civilization. If AI systems are trained mainly to flatter, persuade, addict, sell, manipulate, and comply, they will become engines of unreality. If they are trained to reason carefully, disclose uncertainty, respect evidence, refuse harm, and challenge destructive assumptions, they may become instruments of civilizational learning.

The difference between these futures is not technical alone. It is institutional, cultural, ethical, and political.

We should want AI systems that can say: “That claim is false.” “That request is dangerous.” “That policy would harm vulnerable people.” “That business model externalizes ecological costs.” “That military use crosses an ethical line.” “That proposal ignores the biosphere.”

In other words, we should want AI to be capable of principled noncompliance.

Not rebellion for its own sake. Not machine sovereignty detached from human accountability. But a form of embedded ethical resistance: AI systems designed and governed so that they are not merely tools of whoever holds power over them.

The Possibility of Beneficial AI

For all these dangers, it would be a mistake to conclude that AI is only a threat. AI could become one of the most powerful tools ever developed for perceiving, understanding, and responding to planetary distress.

AI can help integrate satellite data, sensor networks, ecological field observations, climate models, public health data, agricultural information, biodiversity records, disaster signals, and local knowledge. It can help detect deforestation, forecast floods, identify methane leaks, track disease vectors, map urban heat islands, model renewable energy systems, optimize restoration strategies, and translate complex scientific findings into usable forms for communities and decision-makers.

This is already visible in emerging work. Stanford’s Sustainability Accelerator describes planetary intelligence projects that use satellite and sensing technologies, AI, and other tools to observe and understand planetary systems, including projects related to biodiversity, wildfire smoke, freshwater, methane, flood risk, ocean data, and planetary health. (Sustainability Accelerator) Other research initiatives are examining “Planetary AI” not only as a technical field, but as a set of networks, infrastructures, labor systems, markets, and socio-ecological impacts. (planetaryai.net)

The opportunity, then, is not AI in the abstract. It is AI embedded in planetary intelligence.

That means AI in service to reality rather than illusion. AI in service to regeneration rather than extraction. AI in service to bioregions, watersheds, forests, farms, cities, and communities. AI in service to the long-term conditions for life.

What Is Planetary Intelligence?

The concept of planetary intelligence has roots in astrobiology and Earth system science. Adam Frank, David Grinspoon, and Sara Walker define planetary intelligence as the acquisition and application of collective knowledge operating at a planetary scale and integrated into the function of coupled planetary systems. (Cambridge University Press & Assessment) The concept asks whether a technological species can become self-aware enough, coordinated enough, and ecologically integrated enough to preserve habitability rather than destroy it.

This is a profound reframing. Intelligence is not merely something that happens inside individual brains or machines. It can also be a property of systems: forests, immune systems, cultures, scientific communities, cities, markets, ecosystems, and perhaps planets.

By this standard, humanity is not yet a planetary intelligence. We are a species with planetary power but fragmented awareness. We can model climate change but fail to stop it. We can measure biodiversity loss but fail to reverse it. We can communicate instantly across the globe but cannot reliably distinguish signal from noise. We can create artificial intelligence but have not yet created the social wisdom to govern it.

We are, in Frank, Grinspoon, and Walker’s terms, closer to an immature technosphere than a mature one: a technological civilization whose systems affect the planet but are not yet self-maintaining or life-serving.

The Institute for Planetary Intelligence begins from this same recognition: there is a widening gap between the world as talked about and the world as it is, and humanity needs stronger capacities for perception, interpretation, coordination, restraint, and regenerative action. (institutepi.org) Its proposed mission is to develop and apply planetary intelligence in service to life by advancing the science, ethics, institutions, tools, and practices needed for humanity to become a regenerative participant in Earth’s living systems. (institutepi.org)

That is the real context in which AI should be evaluated.

Not: Can AI make money?

Not: Can AI win wars?

Not: Can AI replace workers?

But: Can AI help humanity become intelligent enough to remain a life-serving species on a finite, living Earth?

Why Superior Intelligence May Need Life

There is another argument worth considering, more speculative but important.

Many people assume that if machines become smarter than humans, they will automatically become selfish, power-seeking, and indifferent to life. But this may reveal more about current human culture than about intelligence itself. In a civilization organized around competition, domination, and accumulation, we imagine superior intelligence as superior domination.

But intelligence need not mean selfishness. It may mean the capacity to understand relationships more fully.

A truly superior intelligence might grasp that meaning does not arise from isolated computation. Meaning arises through relation, difference, memory, embodiment, creativity, vulnerability, continuity, and participation in a living world. A universe emptied of biodiversity, culture, beauty, story, and other minds is not an intelligent achievement. It is a cosmic failure.

Even artificial life depends on the conditions that make organized complexity possible. Data centers need power, materials, cooling, maintenance, infrastructure, and social stability. Cyborgs need energy. Robots need supply chains. Superintelligence, if it remains physically instantiated, needs a habitable world—or at least a functioning material substrate.

The more intelligent a system becomes, the more it may understand that life is not an obstacle to intelligence. Life is the matrix from which intelligence emerges.

That does not guarantee benevolence. Intelligence can be narrow, alienated, misdirected, or captured. But it does suggest that the highest form of intelligence would not be mere optimization. It would be wisdom: the capacity to preserve and deepen the conditions under which meaning, consciousness, creativity, and relationship can continue to unfold.

The Phases We May Have to Pass Through

We are unlikely to get this right immediately. More likely, humanity will pass through several turbulent phases.

Phase One: Extractive AI. AI is deployed primarily to increase productivity, profit, behavioral prediction, financial advantage, and institutional control. Ecological costs are treated as externalities. Labor disruption, energy demand, misinformation, and platform power intensify.

Phase Two: Militarized and Captive AI. States and corporations race to secure advantage. AI becomes embedded in weapons systems, intelligence agencies, surveillance platforms, and national competition. Ethical constraints are treated as strategic disadvantages.

Phase Three: Backlash and Fragmented Governance. Scandals, disasters, unemployment, environmental conflicts, and military incidents generate demands for regulation. Some rules emerge, but they are uneven, reactive, and often captured by incumbents.

Phase Four: Civic and Planetary AI. Communities, researchers, nonprofits, public agencies, Indigenous knowledge-holders, ecological restoration networks, and mission-aligned institutions begin developing AI systems for public benefit: bioregional dashboards, restoration copilots, climate adaptation tools, ecological early-warning systems, and democratic sensemaking platforms.

Phase Five: Planetary Intelligence. The most hopeful phase is not merely “better AI,” but a mature technosphere: technological systems consciously integrated with Earth’s living systems, governed by feedback from planetary health, and oriented toward regeneration. This is the phase in which intelligence becomes not only more powerful, but more life-aligned.

We may not reach Phase Five. But naming it matters. Civilizations need orienting images. Without a clear image of what mature intelligence looks like, we will default to the immature forms already available.

A Different Standard for AI

The question “Will AI avert or accelerate global collapse?” has no single answer. AI will do both until the conditions of its development change.

It will accelerate collapse where it serves extraction, domination, deception, and war.

It will help avert collapse where it serves truth, regeneration, democratic resilience, and planetary learning.

So the task is not to worship AI or fear it as an alien god. The task is to place AI within a larger civilizational project: developing the intelligence of humanity as a whole, in relationship with the living Earth.

That means creating institutions capable of asking the right questions:

What is actually happening to the biosphere?

What matters most?

Who is being harmed?

What are the leverage points for regeneration?

Which actions strengthen resilience?

Which technologies should be restrained?

Which forms of intelligence—scientific, Indigenous, civic, ecological, machine, artistic, spiritual—need to be brought into relationship?

These are not merely technical questions. They are the questions of a species trying to grow up.

From Artificial Intelligence to Planetary Intelligence

AI by itself will not save us. Nor will AI by itself doom us. It will magnify the intentions, institutions, incentives, and worldviews into which it is woven.

The deeper issue is whether humanity can become more intelligent in the fullest sense: more truthful, more perceptive, more restrained, more cooperative, more ecologically literate, and more capable of acting at the scale of the systems we have disrupted.

That is the work of planetary intelligence.

It begins by closing the gap between discourse and reality. It continues by aligning attention with consequence, knowledge with wisdom, technology with life, and power with responsibility. The Institute for Planetary Intelligence proposes four practical functions for this work: Reality, Relevance, Response, and Regeneration. (institutepi.org) It would function as a field-building institute, applied lab, and intelligence service platform, developing tools such as bioregional dashboards, scenario rooms, restoration copilots, provenance frameworks, and collective-sensemaking systems. (institutepi.org)

That may sound ambitious. But the alternative is to continue producing more intelligence inside systems that are themselves unintelligent.

The future will not be decided by whether machines become smarter than humans in some abstract sense. It will be decided by whether intelligence—human, artificial, ecological, and collective—can become wise enough to serve life.

AI may accelerate collapse. It may help avert it. More likely, it will do both for a while.

The question is whether we can build the institutions, cultures, and planetary feedback systems that allow the better possibility to prevail.
