Unapocalyptic

Better than Brains

Is more intelligence automatically better? If not, then how exactly does Artificial Superintelligence threaten us?

Karl Schroeder
May 02, 2024

In Dragged into the AI Hype Cycle I promised a deeper dive into my views on AI, so here it is:

It’s been clear to me for many years that debates around Artificial Intelligence are typically driven by people who have, shall we say, a particular worldview. They tend to be more familiar with the American tradition of analytic philosophy than with thinkers like Haraway and Varela. As a result, the conversations I get into about AI and the potential threat of AGI are nearly always couched in terms of game theory and utility functions, rather than extended-mind theory or considerations of the boundary-of-self an AGI might have. This is consequential because, in the wake of ChatGPT’s ascendance, this unexamined framing is distorting society’s debate about AI’s role in our lives.

Let’s take the example of ‘superintelligence.’ Because computers can now beat human beings at many tasks that can be formalized, the assumption is that we are just a step away from them beating us at everything. But the problems we’d want to ask a superintelligence to solve are not like playing chess. They’re more like ‘provide a viable plan for peace in the Middle East that everyone can agree on.’ Could even a superintelligence figure that one out? Does a problem like this even require superintelligence? Or would something else suffice?

I do know that there are ways of addressing intractable complexity in problem-solving. For a really good one that has become a star technique of foresight practice, check out morphological analysis.
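To make the mechanics concrete, here’s a minimal sketch of general morphological analysis (a “Zwicky box”) in Python. The parameters, values, and incompatibility pairs are invented purely for illustration; in a real foresight exercise they would come out of a structured workshop, not a code file.

```python
# A minimal sketch of general morphological analysis (a "Zwicky box").
# All parameters, values, and incompatibilities below are hypothetical,
# invented only to show the mechanics of the method.
from itertools import product

# 1. Decompose the problem into parameters, each with a set of possible values.
parameters = {
    "governance": ["centralized", "federated", "local"],
    "economy":    ["extractive", "circular", "mixed"],
    "technology": ["high-automation", "appropriate-tech"],
}

# 2. Cross-consistency assessment: pairs of values judged unable to coexist.
incompatible = {
    ("centralized", "appropriate-tech"),
    ("local", "high-automation"),
}

def is_consistent(config):
    """A configuration survives if none of its value pairs is marked incompatible."""
    values = list(config.values())
    return not any(
        (a, b) in incompatible or (b, a) in incompatible
        for i, a in enumerate(values)
        for b in values[i + 1:]
    )

# 3. The full morphological field is the Cartesian product of all values;
#    pruning it leaves only the internally consistent scenarios worth studying.
field = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]
scenarios = [c for c in field if is_consistent(c)]

print(f"{len(scenarios)} consistent configurations out of {len(field)}")
for s in scenarios[:3]:
    print(s)
```

The point of the exercise isn’t the code; it’s that a problem too tangled to ‘solve’ can still be mapped, and the map pruned to a handful of internally consistent configurations worth arguing about.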

I’ve mentioned Stephen Jay Gould’s Full House before. In this book, he points out that evolution doesn’t have a preferred direction. Life on Earth is not evolving toward greater complexity; that is an illusion. The only reason complexity has increased is that there has only ever been a lower limit to how complex a life form can be, not an upper one. With this single constraint, a random walk of evolutionary change will drift slowly away from the limiting “wall” of lesser complexity. On average, life on Earth remains simple; it’s just that over time, an accumulation of more complex forms builds up in a sparse ‘long tail’ of rare species (like ours), simply because there’s nothing to stop it.
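Gould’s argument is easy to see in a toy simulation. The sketch below is my own illustration, not anything from Full House: each of a few thousand lineages takes an unbiased random walk in ‘complexity,’ with a reflecting wall at the minimum. Most lineages stay piled up against the wall; only a rare few drift far to the right.

```python
# A toy version of Gould's "left wall" argument: each lineage's complexity
# takes an unbiased random walk, but a reflecting wall keeps it above a
# viable minimum. Most lineages stay simple; a sparse tail wanders far out.
import random
from collections import Counter

MIN_COMPLEXITY = 1.0   # the wall: nothing viable below this
LINEAGES = 2_000
GENERATIONS = 1_000
STEP = 0.05            # size of each random change; no preferred direction

complexity = [MIN_COMPLEXITY] * LINEAGES
for _ in range(GENERATIONS):
    for i in range(LINEAGES):
        complexity[i] = max(MIN_COMPLEXITY, complexity[i] + random.gauss(0, STEP))

bins = Counter(int(c) for c in complexity)        # coarse histogram, bin width 1
commonest, count = bins.most_common(1)[0]
print(f"most populated bin: [{commonest}, {commonest + 1})  "
      f"({count} of {LINEAGES} lineages)")
print(f"maximum complexity reached: {max(complexity):.1f}")
```

Run it and the most populated bin is always the one touching the wall, while the single most complex lineage ends up several times further out: a long tail produced by nothing more than a floor and a coin flip.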

One answer to the Fermi Paradox is that there is a limit to how intelligent life can be, and humanity is at that limit. Once groups or even individual members of a species are able to figure out how to destroy their entire world using trivially simple means, they eventually do. Intelligence is therefore bounded between two walls: a minimum of complexity and a maximum. This rule will apply in any environment as long as intelligence is high enough to find a ‘nuclear option’ it can use when pushed to the brink; the rule will therefore apply to posthuman minds, interstellar machine empires, and so on. It might not apply to an AI singleton (a superintelligent loner), but for all the reasons I mentioned in my last post, I doubt the possibility of such a loner. —And for such a loner, suicide is always an option.

The idea that intelligence has limits directly opposes the usual framing of intelligence as having no upper bound. I contend that this framing of unlimited growth dominates the debates and headlines because the technologists driving the debate share a set of biases. One is that they view intelligence through an analytic-philosophy lens: ultimately mathematical, oriented toward game theory and utility functions, and existing in a world of discrete ‘concepts’ and ‘problems’ that can be isolated and ‘solved.’ Maxima can be found. In such a universe intelligence is a matter of degree; it has a dial with a zero setting but no maximum volume. You can crank it way past 11, and when you do, our real-world issues become trivial to fix.

Wicked problems such as ‘peace in the Middle East,’ however, are not amenable to being ‘solved’ by analytical means alone. So while superintelligence might be able to help, it could never on its own achieve such an end. To me, the typical framing of AI and superintelligence has always seemed to be a case of, ‘if all you’ve got is a hammer, everything looks like a nail.’ (Full disclosure: my perspective on this was heavily influenced by the work of Brian Cantwell Smith, particularly his 1996 book On the Origin of Objects [free version—or you can buy the hardcover on Amazon].)

I highlighted these issues twenty-five years ago in my first novel, Ventus, which features an ironic ‘mirror world’ vision of AI. Instead of AI developing sentience, it evolves into what I call thalience. Thalience is an alternative theory of Artificial Intelligence based on Cantwell Smith’s ideas and on Enactivism.


Thalia, Muse of Nature

What if you could separate the activity of science from the human researchers who do it? Automate science? Imagine creating a bot that does physics experiments and builds its internal model of the world based on those experiments rather than from concepts it inherited from its human creators. It could start as something simple that stacks blocks and knocks them over again. Later models could get quite sophisticated; and let’s say we combine this ability with the technology of self-reproducing machines (von Neumann machines). Seed the moon with our inquisitive AIs and let them go nuts. Let them share their findings and refine their models.
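As a caricature of what that first block-stacking bot might do, here is a toy sketch: an agent that runs drop-the-block experiments in a simulated world and ranks candidate functional forms purely by how well they predict, without ever being handed the Newtonian formula. Everything in it, the hidden physics and the candidate models alike, is invented for illustration.

```python
# A caricature of the "automated science" bot: it runs drop-a-block
# experiments in a simulated world, then ranks candidate functional forms
# purely by predictive accuracy. The hidden physics and the candidate
# models are all invented for illustration.
import random

G = 9.81  # the "ground truth" the bot never sees directly

def run_experiment(height):
    """Drop a block from `height` metres; return a noisy measured fall time."""
    true_time = (2 * height / G) ** 0.5
    return true_time + random.gauss(0, 0.01)

# Candidate internal models: fall time as some one-parameter function of height.
candidates = {
    "linear":      lambda h, k: k * h,
    "square root": lambda h, k: k * h ** 0.5,
    "quadratic":   lambda h, k: k * h ** 2,
}

# Gather data across a range of heights, repeating each drop to average out noise.
data = [(h, run_experiment(h)) for h in (0.5, 1, 2, 4, 8) for _ in range(20)]

def score(model):
    """Fit the single constant k crudely, then return total squared error."""
    ks = [t / model(h, 1) for h, t in data]
    k = sum(ks) / len(ks)
    return sum((t - model(h, k)) ** 2 for h, t in data), k

for name, model in candidates.items():
    err, k = score(model)
    print(f"{name:12s} k = {k:.3f}   squared error = {err:.4f}")
# The square-root form wins: the bot ends up with t proportional to sqrt(h)
# as its working model of falling blocks, never having been told about gravity.
```

The interesting part isn’t that it converges on the right curve; it’s that nothing in its internal model needs to look like our concept of ‘gravity’ at all, which is exactly the question the thought experiment is after.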

So far so good. Here's the question that leads me to ask whether sentience, on the human model, applies to AI: if these AIs were allowed to freely invent their own semantics, would their physical model of the universe end up resembling ours? —I don't mean would they produce the same results given the same inputs, because they would. But would they do so using a humanly comprehensible theory?

This is a better question than it might at first appear, because even we can produce mutually irreconcilable theories that successfully describe the same things: Quantum Mechanics and Relativity, for instance. Their worldviews are incompatible, even though together they appear to accurately describe the real world. So it's at least possible that non-human intelligences would come to different conclusions about the nature of reality even if their theories produced results that agreed with our models. Their minds might not be copies of ours but something entirely different.

This little thought experiment asks whether we can turn metaphysics into a hard science; and this is why we need a new word, other than sentience, to describe what’s going on here:

Thalient AI gives the physical world itself a voice, so that rather than us asking what reality is, reality itself can tell us.

In Ventus I called this the Pinocchio change: it’s that ineffable line past which we are no longer speaking to a human construct that’s repeating our own words back to us (a ‘stochastic parrot,’ as GPT is now) but to something truly different. A nonhuman part of the world suddenly awakened, seeing us; the nonhuman world, recognizing us.

In Ventus, of course, the thalient system has lost the ability to communicate with humans; but the end of the novel holds out the hope that some sort of bridge can be constructed. This bridge is politics, rather than a meeting of minds through Reason or Mathematics, because we’re no longer talking about the mind as a disembodied specter living in some Pythagorean ether where any other mind is accessible. This is not the universal reason of Analytic philosophy but the embodied and embedded one of enactivism.

Among other things, this means there can be no such thing as an ‘explainable’ AGI. Intelligence is always intelligent for some way of existing in the world, and if that way is not human then we will remain mutually incomprehensible and may not even recognize one another as thinking beings. Our commonalities enable us to live with dogs and horses; our differences make us separate islands in the ocean of cognitive possibilities. The same will be true for AI.

So then, where does enactivist AI fit in the debate over AI governance?
