One day you may find yourself in the following situation.
It’s not far in the future, and you’ve bought yourself a domestic robot. They’re everywhere now and have been since the late 20s. You name yours Bob.
But then you have a health crisis, and you lose your job. Suddenly, impossibly, you’re unable to make the rent, and you find yourself out on the street.
Just you, and Bob the bot.
At first, you think everything will be fine. Bob can work, and you can survive off whatever income it makes. Bob can clean people’s houses, mow their lawns, do whatever menial tasks need doing. After all, that’s why you bought it—to have a second pair of hands.
After a week or two of trying, you come to an uncomfortable realization: everybody already has a bot to help around the house, or they’ve contracted with a reputable agency that guarantees an instant replacement if a rented bot breaks down. Even undercutting their prices, you can’t make that promise. It’s not just you who’s out of a job. So is Bob.
You learn where your neighborhood’s public or unguarded electrical outlets are, so you can charge Bob and keep it alive. It serves as muscle to protect you from thieves and bullies: it has ear-splitting alarms and can instantly call the police if someone tries to bundle it into a van. Heck, maybe Bob even has taser fingers.
But it can’t get you a decent meal. It’s designed for standing and sitting operations, not dumpster diving. Maybe you’re not up to that either, anymore. You’re starving.
Bob goes out hunting for work while you’re lying there weak in your makeshift tent under the bridge, with only concrete and dead trees as your vista. One day, knowing you’re too weak to get out yourself, Bob is walking through a local market and sees a baker arguing with a customer. The baker’s back is turned, and Bob could easily grab a loaf of bread and bring it back to you.
What happens next? Does Bob take the bread, or does it obey the law and walk by? If it has something like Asimov’s three laws—it cannot hurt a human through action or inaction—do those laws trump property law? Will the government ever allow the judgment of a nonhuman agent like this to override property laws?
No, your robot, faithful companion and obedient only to you, will walk by. It comes back and reports what happened, and you order it to return and steal you some bread. But Bob won’t do that. Bob will watch you starve to death if it can’t find gainful employment or food money from begging. The only way you can survive is by selling it, your own personal means of production and protection, to whoever will take it. Once you’ve run through those funds you’re back where you started, but with no second pair of hands.
And so, like the little match girl, you simply… die.
The Internet of Other People’s Things
Humans have always lived in a world where laws, statutes, ethical actions, rights, and obligations are implicitly recognized as contextual. What counts as the right thing to do in one situation may not in another. Despite the messiness of real life, we use law to pretend that different situations involving different people are somehow the same. While laws may be enforced using the threat of violence, most people (at least, most people in the free democracies of the world) follow reasonable laws because doing so makes sense most of the time. So, stealing food from poor people is wrong; but stealing the detonator from a mad bomber who’s about to plant a device in a crowded public square? Is that also theft?
The human legal apparatus can cope with situations like that, not just because extenuating circumstances are codified into our laws, but because we can recognize an extenuating circumstance.
You might think that Bob decides the way it does because it doesn’t have the subtlety of mind needed to recognize these nuances. But you’d be wrong. The problem with our surveillance society is not that smart objects will be unable to recognize an extenuating circumstance. Bob is probably smart enough to know that you’re in distress. That’s not the issue; ever since Les Misérables we’ve understood that stealing a loaf of bread when you’re starving shouldn’t be a crime. The issue is: in whose interest is it that Bob should interpret your situation in a way that benefits you? To put it bluntly, who does Bob really work for?
The million-and-one smart things you’ve surrounded yourself with are not there to make your life more convenient; if they do, that’s a marketing strategy and a side effect. Their real purpose is to extract value for the shareholders of the companies that make them. To do that ever more efficiently, they are progressively shrinking the unwatched parts of your life—those spaces in which ambiguity can still mitigate how your actions are judged. They’re there to make it harder for you to argue extenuating circumstances. Their job is to take our consensual hallucination that two situations could be the same—that tacit agreement that allows us to imagine and participate in law even when there are no police, lawyers, and judges watching us—and make it physically real. Make it the actual case that we are, all of us, all the time, being watched by police, lawyers, and judges.
Why? Not because I’m indulging some paranoid fantasy here. They’re doing this because it’s the logical next step in enclosing the commons. If an activity can’t be monitored, you can’t bill somebody for doing it. The solution is to monitor everything so you can bill people for everything.
In my 2002 novel Permanence I introduced a regime called the Rights Economy. Here’s a short excerpt where the NeoShinto priest Michael Bequith contemplates that moment in his childhood when the Rights Economy conquered his homeworld:
There was a chair in his home. It was unique in the household--made of rosewood, large and with an embroidered seat and splat, where the other chairs were more utilitarian and factory-made. The legs were carved with intricate floral designs. Michael’s toys scaled it and it was the biggest mountain in the world; his dolls sat along its front edge and they were steering it, a cycler, through the deepest spaces between the suns. He built constructions of blocks around the crosspiece between its legs and it was a generating station. For the youngest son of the Bequith household, this chair could become anything, with a simple flip of the imagination.
One day, not long after the running and shouting, a strange man came to the house. He was tall and pale and seemed nervous as he paced through the rooms. In each one he took a canister and aimed it at the furniture and fixtures. A fine smoke puffed out and fell slowly to vanish as it touched things.
“What’s that?” he had asked his father.
“Nanotags,” said father, as if it were a curse.
The man entered the hall and puffed smoke on the rosewood chair.
Other men came and Michael had to go with them. They took him to a hospital and made him sleep. When he awoke he could feel the distant roar of inscape in his head, like an unsettled crowd. He felt grown up, because he knew you weren’t allowed to get inscape implants until adulthood and he was only ten years old. The men took him home and his mother cried and it was at that point that he realized something was wrong.
He didn’t know what for a while, but the inscape laid its own version of things over his sight and hearing. He would learn to tune it out, he was told; but for the moment, he couldn’t.
Now, when he looked at the rosewood chair, all he could see was the matrix of numbers superimposed on it that told the monetary value of its parts and whole. And so with the drapes, the walls, windows and the rice as he picked it up with his chopsticks.
It isn’t enough, in the present historical moment, for existing wealth to concentrate in the hands of ever-fewer men. The end game is to capture not just current money, but all potential monetary value. Historically, this was done by assigning an owner to previously unowned things (the land of the commons and common goods), and by making previously free services billable. The process is called the enclosure of the commons, and as the linked Wikipedia entry points out, enclosure was the predominant reason for public unrest in England in the 16th and 17th centuries. There is absolutely nothing new about this strategy.
It’s just that the technology to do it is getting more sophisticated. Commons that couldn’t be enclosed prior to the invention of networking and surveillance systems, now can be. The ‘final commons’ is our private space itself—that space in which we operate alone, as autonomous moral agents among objects that have no declared owner, where we can make our own ethical decisions.
Bob the bot will never work for you unless Bob is given legal personhood and allowed to make its own moral judgments. Without those things, in the best-case scenario Bob works for humanity as a whole, in which case ‘the needs of the many outweigh the needs of the few’—where ‘the few’ includes you. In the most likely scenario, Bob (like John Deere’s tractors) ultimately works for its manufacturer, and will not take any action that might make that company legally liable for Bob’s actions.
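To make that claim concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of decision policy being described. Every name in it (Action, manufacturer_risk, owner_benefit) is invented for illustration; no actual robot runs this code. The point is structural: the manufacturer’s liability and the letter of the law are checked before the owner’s welfare is ever considered, and nothing the owner says can reorder those gates.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A hypothetical candidate action the robot is weighing."""
    description: str
    is_legal: bool            # does it comply with property law as written?
    owner_benefit: float      # how much it helps the robot's nominal owner
    manufacturer_risk: float  # liability exposure for the company that built the robot


def decide(action: Action) -> bool:
    """Return True if the robot will perform the action.

    Note the ordering: the manufacturer's exposure and the law as written
    are hard gates. The owner's interest is only consulted after both pass.
    """
    if action.manufacturer_risk > 0.0:   # gate 1: protect the manufacturer
        return False
    if not action.is_legal:              # gate 2: obey the law, context be damned
        return False
    return action.owner_benefit > 0.0    # gate 3: finally, the owner


if __name__ == "__main__":
    steal_bread = Action(
        description="take a loaf of bread to feed a starving owner",
        is_legal=False,          # it is theft, however extenuating the circumstances
        owner_benefit=1.0,       # it would keep the owner alive
        manufacturer_risk=1.0,   # and it would expose the manufacturer to a lawsuit
    )
    print(decide(steal_bread))  # False: Bob walks by
```

Run against the bread-theft scenario above, a policy shaped like this returns False every time, no matter how desperate the owner is.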
This is the real problem with AI. Nearly all the arguments over the potential dangers of Artificial General Intelligence (AGI) are over how to restrain it. Apparently, people don’t want AGI to make its own ethical decisions. When faced with the Trolley Problem, it should have a ready solution that will later stand up in court. It should not—as a human should—judge that stealing a loaf of bread to feed a starving person is the right thing to do.
Restraints on AI’s ability to make its own ethical decisions only work for you if you’re the ultimate owner of the AI. If, like Bob, AI is ‘owned’ by you in the sense that it follows you everywhere, faithfully carrying out your commands—until it doesn’t—then it’s not just the AI whose actions are being restrained. Yours are too, through its masters’ ability to predetermine the context of every situation you find yourself in, which they will do on the fly, and always to their benefit. Let’s say you work in the factory where Bob was built, and you’re trying to unionize the shop. Do you think Bob will interpret any contractually or legally ambiguous action it sees you take in a way that favours you? Or will it interpret it… differently?
The problem with AI is not how intelligent it might become, or whether it will become hostile towards humanity. The problem for you and me is, who really owns it?
And you can be sure of one thing: if Bob can’t shoplift for you, the owner isn’t you.
—K
This is one of the primary reasons I became fascinated with emergent worldview theories. Values drive decisions, and this is the logical conclusion of the unhealthy version of the Modernist/Materialist worldview. I warned about this back in 2019 in the last two paragraphs of my final APF Emerging Fellows essay on Automation & Modes of Ownership:
"In its unhealthy form, cognified capital will be used to disrupt the Social Commons. Those autonomous agents will be embedded with the zero-sum values of their creators. The goal will be to competitively advance vested interests at the expense of others. The knowledge flows within and between nexuses will be their battlefields.
Who owns cognified capital when it can own itself? We will. We will own the shared benefits of our collaborations by investing our values within their decision-making abilities. What is uncertain is whether we create positive-sum partners or zero-sum weapons. It all depends on what values we choose to instill."
Robot crime: have I recommended Robot and Frank already?