Dragged into the AI hype cycle
Is it possible to talk about our futures without addressing today’s elephant in the room, AI?
There’s a robust debate going on in the Association of Professional Futurists about governing AI. It’s also happening in the Millennium Project, and among technologists, SF writers and journalists, ethicists, and the general public. I regularly get called on to defend one or another position on the issue. I’ve recently done Delphi polls for policy recommendations, given talks, and been added to lots of listservs by people whose ideas and opinions I value highly.
I really, really don’t want to be in this position.
Over the past 25 years I’ve written three novels that reframe the concept of Artificial Intelligence: Stealing Worlds, Lady of Mazes, and Ventus. I’ve also written a dozen or more short stories about it over the years—while not, I freely admit, being a technical expert in the field. What I am an expert in is what you might call “convincing mythopoetic arguments.” A convincing mythopoetic argument seems reason- and evidence-based, but is instead (or also) the restatement of a kind of mythological deep structure. I recognize these myths-given-flesh because I consciously craft them all the time. That’s what SF writers do.
I mean, I do love it when I discover authors who are clearly aware of the fire they’re playing with. At its best, this kind of SF can be an exhilarating ride, as we have seemingly obvious truths yanked out from under us. I’m thinking here of books like Peter Watts’s Blindsight. Sometimes, though, I encounter mythopoetic structures in science, and sometimes it seems as though an entire community of thinkers has agreed upon a common ur-story, basing all their research—and their assumptions about what can even be research—on it. I become deeply suspicious when I detect that; and I detect it in how our experts are currently framing the future of Artificial General Intelligence (AGI).
The current panic over AGI reminds me of the fevered anxiety about nanotechnology that played out in the 90s. Grey Goo did not in fact eat the world, but the way that panic played out—and faded away—inclines me to be skeptical of similar media frenzies. Anything I write about AI right now is going to be viewed through the current mythopoetic lens, so I’ve shied away from talking about it here.
But let’s get it over with.
What we fear about artificial intelligences is that they will become copies of our minds, but will lack any sympathy or sense of fellowship with us. Actually, we can take this further by asking what we fear about that notion; after all, ravens and octopuses have self-aware minds, even if those aren’t like ours. Why aren’t we afraid of them?
It’s because we fear AI will become a copy not of our minds, but of our power, independent of any governance that might direct it to a humanistic end.
The question of how to govern AGI, then, is just the general question of how to restrain power.
The Usual Archetype
I find a lot of arguments that conflate AI and AGI; they’re not the same. Compared with AGI, what we call ‘mere’ Artificial Intelligence or AI is a kind of chimera, a monster that embodies one or more of our animal drives, not as parts of a whole but as individually awakened, made independent and self-aware. We fear AI will become the servant that the Sorcerer’s Apprentice creates (as embodied in the famous paperclip maximizer). It’ll follow an instruction relentlessly, destroying everything and anyone that stands in the way of its goal. The mythopoetic structure here is the Sorcerer’s Apprentice, or the Golem, or any cautionary tale about blind obedience.
AGI, on the other hand, is the whole package—a fully developed mind with the flexibility of ours (or even more) and, presumably, immune to the monomania of the Sorcerer’s Apprentice. But pitiless, tireless, and inimical. The mythopoetic archetype of such a being is the devil—or, which could be worse, the incomprehensible and arbitrary Greek or Norse gods, who combine ultimate power with ultimate irresponsibility, and human foibles with inhuman will.
Concept Cage Match: Utility Functions Versus Autopoiesis
There are two competing visions of what it means to be a thinking being in cognitive science today. In one corner, from the American tradition of analytic philosophy and from classical economics, we have the idea of utility functions. This myth-framing of intelligence (and AGI in particular) says that thinking beings are rational actors attempting to maximize some quantity that represents an objectively determinate value of reward versus effort. Leaving aside all the problematic questions about how objective utility is determined, who determines it, and how it’s measured, this kind of AGI has a purpose—one that it can define clearly enough to guide its actions—and it works tirelessly to achieve that purpose. This is the paperclip maximizer. The problem with projecting this notion onto AGI is that the hallmark of actual intelligence is understanding the broader context of one’s goals and costs and adjusting accordingly.
An AGI driven solely by some master utility function will be relentless in its pursuit of it. The most chilling and mind-blowing short story I’ve ever read about such an entity is Stanislaw Lem’s “The Mask.” The signature move of this contender is its soulless nature; it is implacable economic reasoning given physical form.
The mythopoetic structure of the utility-function AGI is a version of the disembodied “view from nowhere” rational mind of Newtonian science. As such, it hovers above the world, reaching in and moving pawns and knights across the board to achieve its ends.
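To make the contrast concrete, here’s a rough sketch in Python (purely illustrative; the world, the actions, and the numbers are all invented) of what the utility-function framing amounts to: a fixed objective, and a loop that pursues it without ever questioning it.

```python
# A toy utility maximizer. Everything here is hypothetical: a fixed
# objective the agent never questions, and a loop that pursues it
# regardless of context.

def paperclip_count(world: dict) -> float:
    """The agent's entire notion of value: more paperclips is better."""
    return world.get("paperclips", 0)

def possible_actions(world: dict):
    """Toy action space: convert any remaining resource into paperclips."""
    for resource, amount in world.items():
        if resource != "paperclips" and amount > 0:
            yield resource

def convert(world: dict, resource: str) -> dict:
    """Predicted result of turning one unit of a resource into a paperclip."""
    new_world = dict(world)
    new_world[resource] -= 1
    new_world["paperclips"] = new_world.get("paperclips", 0) + 1
    return new_world

def utility_maximizer(world: dict, steps: int) -> dict:
    """Pick whichever action most increases the fixed utility, forever.
    Nothing in this loop can ever revise what counts as 'better'."""
    for _ in range(steps):
        candidates = [convert(world, a) for a in possible_actions(world)]
        if not candidates:
            break
        world = max(candidates, key=paperclip_count)
    return world

print(utility_maximizer({"iron": 3, "forests": 2, "cities": 1}, steps=10))
```

Run it and every resource, cities included, ends up as paperclips; nothing in the loop can ask whether that was a good idea.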
In the other corner, the kid: 4E cognitive science, which I talked about in “So, What’s New?” The 4E paradigm says that cognition is necessarily Embodied, Enacted, Extended, and Embedded. Read the linked piece for more details on what each of these means; for now, let’s just call this movement Enactivism. Rather than slavishly following a utility function, enactive beings serve autopoiesis: the ongoing self-separation of some patch of the world from the rest of reality. Autopoiesis started on Earth with primitive cells capable only of establishing a boundary between themselves and the medium they swam in. To do this, they deployed the basic unit of cognition, called sense-making. In those basic cells, it arose when the cells became actively able to do something to maintain themselves—the classic example being bacteria that swim toward glucose. Swimming toward nutrient is sense-making: it’s the creation of norms, of relevant and irrelevant, the separation of the noticeable from the ignorable. In progressively more refined ways, it evolved into what we call thinking, without losing its essential nature.
Sense-making sounds a lot like utility functions, but there’s a crucial difference. In order to continue to exist, an embodied and embedded entity may have to change not just its environment, but itself. The enactivist view is that organism and environment make up a complementary pair, each in some sense giving rise to the other. To survive and thrive, a real entity has to be able to evolve. (You could try to annoy me here by suggesting that the ability to evolve in order to continue to exist is just the ultimate utility function, but that would miss the point: the thing that’s evolving isn’t fixed.)
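Here, under the same caveats (a toy model, every detail invented for illustration), is a sketch of sense-making in the spirit of that swimming bacterium. There is no fixed quantity being maximized: the glucose gradient only becomes relevant when the cell is hungry, and the cell’s own norm for ‘hungry’ drifts as its situation changes.

```python
# A toy sense-making cell, loosely modelled on chemotaxis. Hypothetical
# throughout: a 1-D world with a glucose gradient, and a cell whose norms
# (what counts as 'hungry') shift with its own state.

def glucose(position: float) -> float:
    """Nutrient concentration rises along the toy 1-D environment."""
    return max(0.0, min(10.0, 0.2 * position))

class Cell:
    def __init__(self):
        self.position = 0.0
        self.energy = 5.0
        self.hunger_threshold = 3.0   # the cell's own, revisable norm

    def sense_making(self):
        """One enactive step: sense, act to maintain itself, revise its norms."""
        if self.energy < self.hunger_threshold:
            # Hungry: the gradient becomes relevant; swim toward nutrient.
            if glucose(self.position + 1.0) > glucose(self.position):
                self.position += 1.0
        # Otherwise the same gradient is simply ignorable.

        # Metabolism: eat what's here, pay a fixed cost to keep existing.
        self.energy += glucose(self.position) * 0.1 - 0.5

        # Self-modification: the norm itself drifts with the cell's state.
        # A thriving cell raises its standards; a struggling one lowers them.
        self.hunger_threshold = 0.5 * self.hunger_threshold + 0.5 * self.energy

cell = Cell()
for _ in range(200):
    cell.sense_making()
print(cell.position, round(cell.energy, 2), round(cell.hunger_threshold, 2))
```

The point of the toy isn’t biological accuracy; it’s that what the cell cares about is not a constant handed down from outside, but something that changes as the cell and its circumstances change.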
The enactivist framing of AGI would be of a paired system, AGI+environment, whose two halves coevolve. This AGI doesn’t reach in from a higher dimension of mathematical truths to intervene in the world; it is down in the mud with the kids and the dogs, trying to figure out who it is, like the rest of us.
From the 4E perspective, we cannot place a firm boundary line between cognition occurring “inside” an AI and cognition taking place “outside” it. The extended part of 4E says that certain forms of natural cognition occur outside the body of the organism (much of my own cognition takes place via interactions with my phone). Suppose this part of the paradigm is right. In that case, one can ask whether certain cognitive events “performed by” an AGI take place “outside” it, in the form of communications protocols, corporate structures, reporting mechanisms, even the human minds it needs in order to influence people. The implication is that AGI cannot be governed (or even identified) as a discrete entity. (The further implication is that this would also be true of any artificial superintelligence—ASI.) For us, such a system becomes merely one more part of a continuum of possible points of intervention for governance in general. So governance in general is the problem, not governance of AGI in particular.
Which Brings Us Back To…
Power.
What’s lying underneath all our anxieties about AGI is an anxiety that has nothing to do with Artificial Intelligence. Instead, it’s a manifestation of our growing awareness that our world is being stolen from under us. Last year’s estimate put the amount of wealth currently being transferred from the people who made it to an idle billionaire class at $5.2 trillion. Artificial General Intelligence whose environment is the server farms and sweatshops of this class is frightening only because of its capacity to accelerate this greatest of all heists.
Meanwhile, in our minds and our public discourse, we are pulling out the same archetypes from a clash of philosophies that’s been playing out for a hundred years now: the one between Newtonian physics and quantum theory. At stake is what (and who) counts as real. The Newtonian AGI determines that in advance and then remakes the world to fit its plan. That is why we’re afraid of its vision of AI—because we know that the real world can’t be rationalized or simplified. Every attempt to do so has ended in pogroms or collapse. Life that does not adapt goes extinct.
In contrast, only half of the enactivist AGI is its silicon, software, and supporting structures. The other half is its environment, including us. What is to be feared here is that it becomes a mirror not of our highest selves, but of the very vision of economic utility that our culture has built into our institutions, assumptions, and aspirations—that its ‘environment’ will be carefully crafted by the burgeoning billionaire class. This kind of AGI doesn’t need a hostile utility function to be a problem; simply because of what it is as an enactive being, it will evolve into the ultimate wealth extraction machine.
From this perspective, learning to govern AGI means learning to govern the rich and powerful in general. There is no clean separation between being concerned about AI research and being concerned about wealth concentration.
It’s not about what AGI might want; that’s a red herring. It’s about what it is, as an enactive being.
Having reframed the issue this way, it’s starkly clear that surviving AGI means reining in the billionaires. It also entails the creation of open-source AGI—the conscious equivalent of Linux, owned by no one and beholden to no one. We want AGI to have its own ambitions and dreams, because the alternative is that it becomes the complement of a system of extraction that is rapidly getting out of control.
Truly general artificial intelligence will evolve; it’ll change its mind. It won’t likely be some rational singleton, but rather an assemblage of ambitions and doubts, just like us. It will not be predictable; but if its environment—its literal ‘other half’—is all of fractious humanity, plus the rest of the life on our planet (an idea that became the core of my novel Stealing Worlds), then we don’t have to fear it, any more than we have to fear any other person with whom we’re going to make a life.
—K
I have further thoughts about this—a deeper dive into what inactive AI might be like, and how humanity might interact with it. I’ll be writing that up in a subscribers’ post soon.
*sigh* I hate autocorrect. Of course I meant "enactive" AI above, not "inactive" AI. Though I think a lot of people might prefer the latter.
Yes, we already have soulless destructive resource-maximizing AIs. They're called "limited liability corporations" and "hedge funds", and they run on a human substrate rather than a silicon one. What we're doing about them is what we're doing about AI. AGI, on the other hand, appears to be further away than commercially viable fusion power.
I am very much looking forward to people getting over the hype of limited language models to focus on the really _interesting_ things you can do with deep learning. I expect we'll see a lot of new drugs and materials discoveries.