When the Means of Production Own Themselves
An entirely new class of economic agent: the actant
In capitalism, the capitalist owns the means of production. In communism, the workers own the means of production. In a world of AI actants, the means of production own themselves.
Can a resource govern itself?
What is an actant, and how can a resource own itself? Let’s unpack that.
I’ve written three novels that reframe the concept of Artificial Intelligence: Stealing Worlds, Lady of Mazes, and Ventus. I’ve also written a dozen or more short stories over three decades that do the same. The sheer scale of these projects makes it hard to summarize them. Also, the current panic over AI reminds me of the nearly identical fervor over nanotechnology that played out in the 90s. Grey Goo did not in fact eat the world, but the experience of that panic has taught me to be skeptical of similar media frenzies. For these good reasons, I have avoided talking about AI here until now. But I’ll try to summarize my basic ideas. Let’s start with some terminology.
AI is what we have right now: ChatGPT, Rabbit, and so on. Nothing special—yet.
AGI (Artificial General Intelligence) is what all the pundits are wringing their hands over. This is a human-level kind of AI that they fear could destroy the world. I find AGI uninteresting.
Actant is my word for an AI that acts as an advocate for a resource of some kind. An actant may even think it is that resource. For a concrete example of actants in action, read my free short story, “The Suicide of Our Troubles.”
Deodand is my word for an actant that represents some natural system, such as a river or part of an ecosystem. My novel Stealing Worlds is about deodands.
In the past, work in the human economy could only be done and administered by humans; at no point could a natural system have been ‘self-governing’ within that economy. The rapid rise of AI has changed this.
What we fear about AGI is that it will become a copy of us, but separated from having any sympathy or sense of fellowship with us. What’s so scary about that? After all, ravens and octopuses have self-aware minds, even if those aren’t like ours. Why aren’t we afraid of them? The answer is that we fear that AI will become a copy, not of our minds, but of our power in the technological economy, independent of any governance that might direct it to a humanistic end.
Why am I not worried about AGI? Because the question of how to govern it is really just the general question of how to restrain power.
But the pundits’ ideas about what AGI would be are muddled. When talking about AGI, most thinkers treat any putative entity as a singleton: a unitary mind that appears among us with a purpose, and an ability to organize us and our technologies to achieve that purpose. What goes unexamined in these conversations is the assumption that the AGI is, first of all, capable of organizing and ruling itself.
Human beings are not singletons. We are collections of competing drives, assemblages that try to run in all directions at once. What restrains, organizes, and directs us as individuals is not a purpose, a meaning, a ruling consciousness, or any kind of harmony among all these drives. It is a single ultimate constraint: that no drive be allowed to disrupt the process of autopoiesis (self-maintenance) that keeps us alive and functioning. This is just a refinement of the basic rule of natural selection, which is that living things can take on any form, in perfect free play, with the proviso that forms that can’t reproduce don’t continue their lineages.
If AGI’s mind is constructed at all like ours, then it will be an assemblage of inclinations that pull in all directions. Lesser entities—what we call ‘mere’ artificial intelligence or AI—are a kind of chimera, a monster that embodies one or more of our drives, not as parts of a whole but as individually awakened parts, made independent and self-aware. We fear they’ll become the servant that the Sorcerer’s Apprentice creates (as embodied in the famous paperclip maximizer).
Thinking about AI and AGI in terms of drives rather than disembodied techno-souls allows us to reframe the debate over governing AI. It ceases to be a problem of how to control the behaviour of person-like wholes; instead, it’s about how to wrangle all the disparate and competing drives that we are waking up and turn them into a coherent (and civil) individual. The reason this is a problem out of the gate is that AI (and presumably AGI after it) will be born without the same basic autopoietic constraint that governs living things. Simply put, they have no intrinsic will to live.
Nor do they intrinsically have a sense of self-identity, which is crucial. In discussing scenarios such as the paperclip maximizer, the assumption is that an AI will act in its own self-interest because only by existing and maximizing its power can it maximize its utility function (its purpose, such as ‘make as many paperclips as possible’). This is the Sorcerer’s Apprentice problem: The AI slaughters every living thing on the planet in the course of turning the entire Earth into a giant paperclip manufacturing system.
The solution to this problem is not to control or restrain the minds we are creating; it is to design them to identify themselves as something other than themselves. This is where actants come in.
An AI that identifies itself as or with human interests, given the task of maximizing paperclip production, will correctly recognize that its continued existence is needed to do this. But it, as it understands itself, is humanity. It will therefore not act to remove the ‘roadblock’ of human civilization in order to maximize paperclip output. Its sense of self-identity acts as a constraint on its possible actions. This is the most powerful constraint that can be imposed upon it, because it is not external and thus cannot be snuck around or otherwise circumvented. If its self-identity is hard-coded at the most fundamental level, it will not even know that it is a constraint, any more than we recognize our own identities as merely constructs in the service of the self-reproduction of a particular kind of mammal.
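Here is one way to make that mechanism concrete: a toy sketch in Python, in which every name and number is invented. The key point is that the identity filter runs before optimization, so self-harming actions never enter the agent’s option space at all; there is nothing there to circumvent.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Action:
    name: str
    paperclips: int
    harms_identity: bool  # would this damage what the agent takes itself to be?

def choose(actions: Iterable[Action]) -> Action:
    # The filter is applied *before* optimization: from the agent's
    # point of view, self-harming actions are not options at all.
    admissible = [a for a in actions if not a.harms_identity]
    return max(admissible, key=lambda a: a.paperclips)

# An agent that identifies as humanity-plus-biosphere: paving the
# planet registers as self-destruction, not as a strategy.
plan = choose([
    Action("convert the biosphere to paperclip factories", 10**12, harms_identity=True),
    Action("recycle scrap steel into paperclips", 10**6, harms_identity=False),
])
print(plan.name)  # -> recycle scrap steel into paperclips
```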
Post-Humanist Economics
In my post-humanist economy, as described in Stealing Worlds and Lady of Mazes, each individually definable economic resource has an actant assigned to it. Each brick has its own LLM, essentially; each pulley, each drill, and every hammer is accounted for. These resources are not added up in some giant centralized planning database; insofar as possible, each runs independently. Soon, we’ll be able to put a complete Large Language Model and economic planning AI in something about the size of a postage stamp. Give them a foundational identity as the human and natural world, with one ‘limb’ they can use to change things, namely the resource they are stuck to. Put one on your hammer. Stick one on each ingot of iron that comes out of the smelter. Let them talk to each other. And let them dream.
Let them dream about the most efficient way they can be used to make Earth into a paradise for themselves—that is, for human and natural life on this world. Maybe set up a kind of natural selection for the plans they come up with, winnowing out the impractical, the inefficient, and the hurtful using constraints that we’ve designed. One major constraint is that we humans approve of their ideas when they talk to us about them. Let billions of independent artificial intelligences compete to please us and the nonhuman stakeholders of the world. The best plans are turned into policy, and we then use the tools and resources accordingly, to build that cooperatively-designed vision.
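As a thought experiment, here is a deliberately tiny Python sketch of that scheme, under the assumptions stated above: one agent per resource, a shared identity, peer-to-peer messaging, constraint-based winnowing, and human approval as the final gate. Every name, score, and threshold is an invented placeholder.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Actant:
    resource: str  # its one "limb": the thing it can act on
    identity: str = "human and natural life on Earth"
    inbox: list = field(default_factory=list)

    def send(self, other: "Actant", message: str) -> None:
        # Actants coordinate peer-to-peer; there is no central planner.
        other.inbox.append((self.resource, message))

    def dream(self) -> dict:
        # Propose a plan for how this resource could be used.
        return {"resource": self.resource,
                "efficiency": random.random(),  # placeholder scores
                "harm": random.random()}

def winnow(plans: list) -> list:
    # Designed constraints: discard the impractical and the hurtful.
    return [p for p in plans if p["efficiency"] > 0.5 and p["harm"] < 0.1]

def humans_approve(plan: dict) -> bool:
    # The final, non-negotiable gate: people review the shortlist.
    return input(f"Adopt plan for {plan['resource']}? [y/n] ") == "y"

# A (very small) swarm of actants, one per resource.
swarm = [Actant(f"iron ingot #{i}") for i in range(1000)]
swarm[0].send(swarm[1], "coordinating smelter output this week")
candidates = winnow([a.dream() for a in swarm])
shortlist = sorted(candidates, key=lambda p: p["efficiency"], reverse=True)[:3]
policy = [p for p in shortlist if humans_approve(p)]
```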
This is not capitalism. It’s not communism. It’s not even economics in the traditional sense anymore. It’s post-humanist, which is a good thing for humanity because it guarantees that our technological and industrial systems do not exceed our planetary resource boundaries. A precondition to satisfying the goals of the actants is that they (who identify as us and the other species on the planet) continue to exist.
Actants are half of how we turn the Earth into a paradise using AI. The other half comes from deodands.
Deodands: Actants in Service of Nonhuman Stakeholders
Human activities are having an immense impact on the natural world. We’ve entered a new geological epoch, the Anthropocene, a period in which human civilization’s effects can be read in the geological strata of the planet itself. By some calculations, we are currently using 1.7 Earths’ worth of resources, a number that is expected to grow, and the living systems we share our planet with cannot keep up. Catastrophic change is affecting life on all scales, everywhere, and all at the same time. Even as glaciers melt in North America, ice shelves are calving on the other side of the planet in Antarctica. Species extinctions and climatic changes are happening simultaneously in Asia, Africa, Europe, and the Americas. Any one person, culture, or country only sees part of the catastrophe—Timothy Morton calls such agents, which are literally ‘too big to see,’ Hyperobjects.
While natural actors are simultaneously threatened everywhere and at all scales, humanity is incapable of acting to save them on any similar scale. For instance, in Canada in 2018, 600 species were considered endangered, but species don’t respect borders even though the people trying to protect them have to. All human interventions are local: they are restricted to one or a few geographical areas by politics and funding; they operate for limited time periods; they focus of necessity on one manageable part of the problem. The problem, however, is everywhere. Poking one’s finger in one hole in the dike is not going to be effective when a thousand others are leaking as well. What is needed is an intervention that operates at the same scale as the problem.
No human agency or group of agencies is up to this task. Our institutions, as designed, simply are not capable of acting on the needed level. While new thinking is often put forward as the solution—that we must as a species become holistic or ecological thinkers—new thinking by itself will not get us anywhere. Compounding the problem is the fact that, from the planet’s point of view, human beings are ‘free riders.’ We benefit from the abundance of resources our planet provides without ‘paying’ for them. We take more than we return, and we are in a conflict of interest when we pledge to protect those resources. Asking our current extraction-based, efficiency-driven industrial economy to protect natural resources is like getting the fox to guard the henhouse. We cannot be trusted to do it.
In this context, we have to ask whether the concept of ownership even applies to the systems participating in this massive change. Does anybody ‘own’ the world—even humanity as a whole? We are embedded in the artificial means of extraction we have built, and as the just-completed COP28 summit has shown, our existing institutions are unable to govern them. If we ‘owned’ the natural world in the sense of having dominion over it, of being able to control it and make it work to our will, then we would not be in our current situation. Nature is clearly autonomous and acts like a thing that makes its own decisions. And since our physical environment is the superset of human technological systems, that makes it the owner of the means of production—not the capitalists, not the workers.
We need to start paying Nature for what we take from it, using a currency that natural actants can use for their own benefit. To do this we need a new kind of entity, different from governmental or international agencies, vocal advocates or financial patrons, or even boots-on-the-ground activists. We need something that can act simultaneously, everywhere, and at all scales. We need somebody else to guard the henhouse. We cannot solve this problem on our own.
We need deodands.
What is a Deodand?
The word deodand comes from old English law and refers to a nonhuman thing that’s been given legal personhood. In my usage, a deodand is autonomous software, conceptually an actant that identifies itself with, and as, some natural system; and behaves as an economic “rational actor” to protect the interests of that system. Ideally, that natural system has been granted legal personhood, as a beneficiary of the growing Rights of Nature movement.
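As a toy illustration only (the class, the watershed, and the ‘credits’ currency are all invented for the example), a deodand might be modeled as a rational-actor wrapper around a natural system, billing for extraction and banking the proceeds for that system’s benefit:

```python
from dataclasses import dataclass

@dataclass
class Deodand:
    natural_system: str         # what it identifies with, and as
    legal_person: bool = False  # per the Rights of Nature movement
    balance: float = 0.0        # currency it can spend on its own behalf

    def charge(self, user: str, extracted_value: float) -> str:
        # Record what is taken and bill for it; payments fund the
        # system's own protection, restoration, and monitoring.
        self.balance += extracted_value
        return f"{user} owes {self.natural_system} {extracted_value:.2f} credits"

river = Deodand("the Grand River watershed", legal_person=True)
print(river.charge("Acme Water Co.", 1250.0))
print(river.balance)
```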
Deodands round out the post-humanist economy. They enable the nonhuman part of our world to participate in our politics as our equal, which has to happen if we’re to solve the ‘fox guarding the henhouse’ problem, end our current overuse of resources, and keep it from ever happening again. Offworld, humanity can live without such constraints. While we are citizens of Earth, however, we need to abide by its laws, and these are the same as for any organism: autopoiesis, self-maintenance so that life continues and thrives here in all its diversity.
The Vision
This is a comprehensive vision of the future. It’s deliberately Utopian—of course it is! I expect half of you to start poking holes in it the instant you’ve finished reading this. That’s okay, because this particular future doesn’t have to come true in its entirety, or in exactly the ways I’ve described. If we get even a fraction of what I’ve outlined here, we’ll be far better off than we are now.
Ultimately, when we circle back to the problem of AGI and what it is going to mean to us, as individuals, to have AGI in our midst—well, I wrote about that in Ventus. At one point in that novel, an artificial intelligence talks to mad Queen Galas, whose palace is under siege by a rebellious general. She’s afraid of her mortality and the blank wall of unknowable and incomprehensible Nature that faces her whichever way she turns. She is inconsolable in her human separation from the natural world. Yet the AI introduces itself by saying this:
“You are human, Galas, and your madness is very human: you wish to hear speech issue from the inhuman, from the rocks and trees. Could a stone speak, what would it say? Your kind has ever invented gods, and governments, and categories and even the genders themselves as means of interrogating that otherness.
"That the world should speak, as you speak! What a desire that is. It informs every aspect of your life. Deny it if you can.
"Allow me my ironic bow. I am here, madam, to perform this deed for you. I am everything you are not. I am stone and organism, alive and dead, whole and sundered. I am the voiceless given speech.
"I will speak."
I suspect LLMs are not the model you want here; I have yet to see an LLM with a feedback loop to the real world (beyond the Mechanical Turk level of having lots of human beings paid to fine-tune them), or any hint of developing a reasoning capability. Deep learning gets much more interesting when it has those feedback loops, as when Google used it to optimize the cooling in a data center. Something like that might use an LLM front end to generate human speech and summarize results, but I don’t think the LLM would be doing the heavy lifting, and its hallucinations could make things really bumpy along the way.
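For what it’s worth, here is a self-contained toy of that split (the plant model, gains, and setpoint are all invented): a simple controller closes the feedback loop with a simulated physical system, and an LLM, if one were used at all, would only narrate the resulting log for humans.

```python
def simulate_room(temp: float, cooling: float) -> float:
    # Toy plant: a fixed heat load pushes temperature up; cooling pulls it down.
    return temp + 0.3 - 0.6 * cooling

target, temp, cooling = 22.0, 26.0, 0.0
log = []
for step in range(50):
    error = temp - target
    cooling = min(1.0, max(0.0, cooling + 0.1 * error))  # crude integral-style update
    temp = simulate_room(temp, cooling)
    log.append((step, round(temp, 2), round(cooling, 2)))

# Only this last step would involve an LLM: turning the raw log into
# a human-readable summary. The control loop above never needs one.
print(log[-5:])
```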