Technology is Legislation
Adopting a new technology is the equivalent of spinning a roulette wheel of unforeseen societal changes. Just look at how social media turned out...
It’s the first week of school here in Canada, and this year the kids have a new rule: no cell phones in class. About a quarter of the world’s countries already have some framework like this in place, and we’ve been discussing it for ages. Much of the news cycle in the past couple of days has been about this seemingly minor but apparently consequential change to school protocols.
Why is such a seemingly minor thing blowing up like this? Is it just a slow news week? To some extent, yes—apparently, most Canadian schools already have policies like this, so announcing it this grandiosely is political theatre. On the other hand, what makes this particular announcement good theatre? Reading between the lines, people seem to be impressed that such bans are possible at all. Don’t you think that’s a bit strange? Is this just like controversies over skirt length or whether helmets should be mandatory for certain sports? Or is this a different dynamic?
What Technology Demands
In my 2005 novel Lady of Mazes, one of the characters talks about how hard it can be to make changes like the cell phone ban:
“What we know is that you can’t have just one technology. Like you can’t have just one silverfish in your house. Technologies come in families, like people, and when you invite one into your home, the whole family will eventually move in and they won’t leave.
“And even if you don’t let the rest of the family into your house, they will camp out on your doorstep and pester you whenever you go by. The one inside your house will constantly remind you about the ones outside. And each family of technologies comes with a particular way of life. To invite that family in is to accept their way of life. To invite just one member in, is to be constantly reminded that you could be living another way. It brings doubt into your house...”
The cell phone ban is topical, I think, because our society generally assumes that we have no say in what technologies impact our lives. As a critique of our ability to make such choices, science fiction has let us down. The canon of SF is packed with cautionary tales about what could happen if one or another technology is unleashed, but conspicuously absent is the simple assertion that people should have a say in what technologies affect their lives—and importantly, other than a few Amish-style technology bans, the SF community has provided no models for what having that say would look like.
It was Langdon Winner’s 1977 book Autonomous Technology that started me thinking about SF’s incomplete critique of technology. Winner's topic was society and politics, but it became clear to me that, at least in the West, discourse about technology is chiefly led by science fiction. Any blind spots there would be instructive about our cultural attitudes. Bruno Latour was also a big influence; for him, technology—like everything else—has agency and needs to be seen as a partner, co-conspirator, or fifth columnist in whatever we do. There’s been a bit more examination of these ideas since Lady of Mazes (for example, Kevin Kelly’s What Technology Wants, 2010), but at the time I wanted to expose what I saw as a critical blind spot in science fiction’s vision of the future.
Lady of Mazes questions the assumption that we have to accept and adapt to new technologies—that we have no choice but to let our discoveries and inventions change us.
When accommodation to technology is talked about, it’s usually framed as yielding to the unseen hand of the market; in the movie Singin’ in the Rain (my favourite film about the effects of technology on culture) the characters face unemployment if they can’t adapt to the new industry of talking pictures. But the idea that it's all market-driven ignores the drivers behind the market, which are frequently irrational but in any case are not necessarily commercial. New gadgets are often less convenient than old gadgets, yet we adopt them anyway—usually because other people are adopting them, and we have to keep up.
Technology has a life of its own. And I think people assume (with little evidence) that new technologies are by definition better—they make our lives easier, right?
If that's true, then why is there a plague of stress and sleep deprivation across the Western world? That was already happening before Covid and before Trump. Shouldn't our generation be calmer and less stressed than our ancestors? Why hasn’t social media created the great sharing global culture that starry-eyed techno-optimists were dreaming about back when MySpace was brand new? Social media didn’t bring harmony; it brought division and suspicion. It’s almost as if somebody spun the wheel and a random effect popped up. You can invoke the “law of unforeseen consequences” and leave it at that, but I think there’s more going on. It seems that new technologies act like legislation—but incompletely thought-out legislation.
We have little choice whether to accept the social changes technology hands us; the electric light is here, and the telephone and the Internet are here. Theoretically, you don’t have to use them. In reality, the choice is rarely yours to make—hence our astonishment at the success of an initiative such as limiting cell phone use in schools.
Technology is legislation. But does it have to be that way?
Let's visit the book again:
“Knowing this, our ancestors drew the family trees of all the technologies. And then they made a... a meta-technology that was able to suppress any of the others. It is easier for me to call this Ometeotl, for that is the name I was told as a boy. This great spirit knows what way of life—what family—each technology belongs to. Like people’s families, technology’s families shift and overlap. So it is never easy for a person to know what family he is inviting in when he adopts a new tool. But the spirit knows. You tell it the way of life you want to have, and it evicts the families that go against that way.”
I called this meta-technology the tech locks.
“That, Mister Anderson, Is the Sound of Inevitability.”
We're hardly the masters of our devices—much less masters of our fate—if we have no choice but to be swept along by change. The technological singularity and post-humanism are celebrations of this very helplessness. In Charlie Stross's Singularity Sky an entire culture is wiped out by the casual introduction of post-scarcity technologies. In stories like that, people have infinite power as long as they swim with the current; but the current cannot be opposed.
What if you didn't have to change your way of life to accommodate your civilization's technological mix? What if you could decide how you wanted to live, and then pick and choose the technologies that would let you live that way? You can't do that in our world—we're only minimally aware of the problem and have no prospect of fixing it. But why assume that we never will be able to fix it? Could we, some day, take back control of our lives from technology?
One place we can start is with the assumption that new devices and systems always benefit us. To me, that idea seems to be a kind of Darwinian blindness, like saying "Natural selection would never hurt me!" It's related to the assumption that the market's invisible hand is ultimately benign. The knee-jerk reaction of technophiles and market evangelists is to assume that we can't control technology, only decide whether or not to adopt it. By such logic, questioning technology’s value will sound like "stop progress!" But I am not advocating some back-to-the-land tree-hugging green-powered romanticism. I'm talking about power and control, and the right to exercise them over your life.
In Lady of Mazes, the tech locks permit unbridled technological advancement for those who want that. They permit agrarian Utopianism for those who want to go back to the land—as well as any shade in between. There is no war between technocrats and Luddites in this world.
You can think of the locks as being a generic version of content filters. You can block adult content, particular people you don’t like, and advertising from your various data sources. So let’s say we get a future of fully integrated 24/7 augmented reality—the Metaverse, laid over your senses by your smart glasses or, if Neuralink gets its way, your brain implants. You’d probably go insane if you didn’t have sophisticated and reliable content blocking for those feeds.
The locks are the ultimate Filter Bubble. This might seem negative, but the theory of predictive processing says that our entire nervous system, from peripheral nerves to the deep structures of the brain, is itself a giant content filter. Each layer of neurology checks incoming signals from the next lower one and discards anything that matches what it predicts. It only passes exceptions on to the next level. We can function precisely because 99% of our perception of our environment is just our expectations written onto the input.
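That layered filtering can be sketched in a few lines of code: each layer compares its input against what it predicts and passes only the surprises upward. This is a purely illustrative toy, not a neuroscience model or anything from the novel; every function name and signal here is invented.

```python
# Toy sketch of predictive processing as a stack of filters.
# Each layer discards signals it already predicted and forwards
# only the "prediction errors" (surprises) to the layer above.

def filter_layer(signals, predictions):
    """Drop signals the layer expected; pass on the surprises."""
    return [s for s in signals if s not in predictions]

def perceive(signals, layers):
    """Run raw input through successive layers of expectation."""
    for predictions in layers:
        signals = filter_layer(signals, predictions)
    return signals  # only the unexpected reaches awareness

raw_input = ["birdsong", "traffic", "breeze", "fire alarm"]
expectations = [
    {"traffic"},              # low layer expects the usual city noise
    {"birdsong", "breeze"},   # higher layer expects the usual backdrop
]
print(perceive(raw_input, expectations))  # -> ['fire alarm']
```

The point of the sketch is only that most of the input never makes it through: what survives is the one thing the stack of expectations did not already account for.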
You could say that the tech locks’ purpose is to reinforce your prejudices; they build a simplified version of reality that conforms to what you expect. Troubling ideas, troubling ways of life, and troubling people are edited out of your sensorium. Buildings are turned into boulders, machines into bushes. The world quiets, becomes what you expect it to be.
On the other hand, if tech is legislation, then the locks can be said to enable one set of technologies to unfold its influence on you without the interference of other, ‘louder’ ones. As in predictive processing, the locks filter out noise and redundancy to allow one particular kind of signal through. In Lady of Mazes, there are regions of the meta-reality known as the manifolds, where the signals cohere into the kind of world that matches your values—the kind of place where you want to live.
From X to Manifold
Manifolds are simple in principle but complex in execution. They are sets of allowed and disallowed technologies that together support some particular way of life. Setting up a manifold isn’t easy, because of the ‘family members’ issue described earlier. Technologies don’t come in isolation. They predicate and require one another in complex ways. In Lady of Mazes, coordinating these relationships is the job of the tech locks. In our current reality, we have no comparable institution or system—not even, I contend, a clue that this coordination is desirable or even possible.
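The ‘family members’ problem has a natural computational analogue: a dependency graph, where adopting one technology pulls in the transitive closure of everything it requires. Here is a toy sketch of a tech lock along those lines; the dependency map and all the technology names are invented for illustration, not taken from the novel.

```python
# Toy sketch of the 'family members' problem: adopting one technology
# transitively pulls in everything it depends on. All names and
# dependencies here are hypothetical.

DEPENDS_ON = {
    "smartphone": {"cell network", "lithium battery", "app store"},
    "app store": {"internet", "payment system"},
    "cell network": {"electric grid"},
    "internet": {"electric grid"},
}

def family(tech, deps=DEPENDS_ON):
    """Return a technology plus the transitive closure of its dependencies."""
    seen, stack = set(), [tech]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(deps.get(t, ()))
    return seen

def manifold_allows(tech, disallowed):
    """A 'tech lock': admit a technology only if its whole family is allowed."""
    return not (family(tech) & set(disallowed))

# An agrarian manifold that disallows the electric grid thereby
# excludes the smartphone, because the grid is in its family.
print(manifold_allows("smartphone", {"electric grid"}))  # -> False
```

Even in this trivial form, the sketch shows why setting up a manifold is hard: you can’t evaluate a single technology in isolation, only its whole family at once.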
Cell phone bans in schools defer but do not solve the problem of how young people use technology to mediate socialization. Such bans also have to be complete; otherwise, individual kids are penalized by a form of ostracism when their peers can use their devices during class time but they can’t.
In another example from this week, Brazil is banning X (formerly Twitter), claiming that Elon Musk is using the platform for anti-democratic purposes. Here, we don’t care about the reasons so much as the outcome. It seems clear that banning X won’t have the intended effect, because the bad behaviour Brazil is complaining about could just as easily move to another platform; the Internet was deliberately designed to make censorship of messages nearly impossible. If Brazil wants to preserve some way of life that is threatened by X, it needs to separately address the actions of bad actors (apparently Musk in this case) and the technology itself—the technology here being the algorithms of engagement used by X, Facebook, Instagram, and other similar systems. The mechanism Brazil is using to enforce the ban is to punish users via an onerous fine, but to do that the state has to be able to catch people using X, so it has had to additionally criminalize the use of VPNs. This is the ‘family members’ effect, and this whack-a-mole approach to enforcement just enrages the public and plays into Musk’s fundamentalist free-speech narrative.
Let’s reframe this issue around the idea that technology is legislation and that Brazil is claiming that foreign nationals are essentially writing their own laws in the country by imposing a particular technology. Supreme Court justice Flávio Dino has complained about “private autocrats” laying down the laws for public networks. If technology is legislation, then he’s right—but in that case, is punishing the end user the right reaction? Dino can’t punish X itself because it’s a transnational entity. It seems that addressing the algorithm is a better response, but how does one do that? One approach might be to impose a tariff on transactions that use engagement-amplifying algorithms; say, one cent per transaction billed to the social media company.
I don’t know how you’d make that work; it’s its own technological conundrum. What would work? Because of the ‘family members’ effect, problems like this are not easy to solve. Brazil can’t ban the Internet (though Starlink has now blocked X there); neither can Russia or China. It seems the logical result of addressing the problem on a message-by-message or platform-by-platform basis is that you build a massive centralized censorship system in order to preserve freedom, but aside from being a self-contradictory exercise, such systems are inefficient and costly. I don’t think Brazil wants to be Russia or China. They’re not wrong that unelected foreign plutocrats are, in effect, drafting Brazilian legislation. But to really address the issue, it seems they need a legal framework that recognizes the independent agency of technologies and families of technologies. Such a framework would allow the interdiction of a technology in the name of particular social values, but would not punish individuals or companies for the actions of that technology. We’re very far from having laws that will do that. This civilization does not have tech locks.
Taking Charge of Our (Extended) Selves
Twenty years after its publication, Lady of Mazes’ questions are becoming more urgent. Aside from Brazil’s problems with X, Generative AI now exists and is everywhere; society seems helpless to stop it from erasing dozens of professions, from legal assistants to book cover artists. If the dozens of companies, from Hyundai to Tesla, that are working on humanoid robots release products this decade, the combination of AI and robotics will revolutionize our economy, and destroy entire ways of life.
This has happened before. In 1906 composer John Philip Sousa published an article in Appleton's Magazine titled “The Menace of Mechanical Music.” He wanted to make piano rolls and recordings illegal, because he claimed they would destroy the culture of the itinerant musician, the traveling fiddlers and ensembles, as well as amateur and hobbyist players, and composers. He wasn’t wrong.
Over the years, arguments have been made for both sides. Recording technology did wipe out the way of life Sousa loved. It also opened up new opportunities for disadvantaged people to experience symphonies and operas that they could never have heard before. The arguments go either way, but my point is that we should not have had to choose. It should be possible—it could be, in the future—to decide what kind of world you want to live in: one where recording is possible, or one where it is not.
In the case of generative AI, are we faced with a similarly binary choice: either allow it or ban it? Or can we follow through on a growing public awareness, signaled by events such as the cell phone ban, that we have the power to let our values lead and technologies follow?
Let Values Lead
In Lady of Mazes, the “interface” (if you can call it that) to the tech locks is literally people’s values. The world adapts itself to you, maximizing those elements of reality that are most important to you by selectively blocking or promoting the technologies that form your extended Self. Behind the scenes, the locks coordinate your umwelt with those of other people, so that everybody gets along.
This borders on fantasy—I say borders because such a system is conceivable, barely, but it’s not something our civilization could build. Someday, somewhere, maybe.
Meanwhile, what about you? Are you being affected by generative AI, or do you expect to be? Maybe you’ve already experienced disruption, for example by getting an education in one profession only to find the need for it disappear? (My father was an expert in vacuum-tube technology; his nemesis was the transistor.)
How would your life go if you had a greater say in the pace and direction of technological change? And are there ways that we can build on this moment of consciousness which (at least in this school season, here in Canada) suggests a glimmer of public awareness that maybe some control is possible? What next step could we take towards controlling the technologies in our lives?
Maybe we just take the win, and next time your teenager complains about not being able to browse social media during class, you can excitedly tell them how they’re the vanguard of a bold new social movement. I’m sure they’ll buy that.
It may even be true.
—K
I loved LoM, though I would rather live in Mordor, personally. I need as much grounding in that "real world" as possible, and have to touch-grass 6X a day to ward off depression.
The biggest tech changes are SO big, you kind of can't see them. Suddenly you're just in a forest; where did the trees come from?
Cars:
https://www.hamiltonnolan.com/p/cars-have-fucked-up-this-country
Antibiotics. Water treatment. Crop rotation and new crops, tripling English food production in the 1600s. (That comes from "The Day the Universe Changed" and the title is your thesis, writ even more forcefully.)
But not every technology makes much difference if it isn't that widely adopted! E-reading peaked ages ago at 20% of book sales - mostly to old(!) people who want the variable fonts.
Here's the biggie, though. I spent an hour trying to come up with a tech that really had been flatly forbidden because it somehow would change the world for the better but for the worse for those currently in charge: a real smoking gun. (Enshittification is controlling HOW a tech was used, not forbidding digital content transmission, so not an example.)
It was really hard: we do seem to have a free-enough society that the only examples I could think of were posited as direct threats to citizens, needing protection: drones above your house got illegal fast. Psychedelics and other drugs, sometimes banned entirely as "no medical purpose", but with at least a fig-leaf claim of danger.
The perfect example is the family of contraceptive and abortion technologies. Contraceptives have been illegal at times for ages, and we all know about abortion, even after the technology for it became safer than giving birth. It's today's headlines: a "family" of technologies that all allow women to control their own fertility is to be banned entirely.
And we call this issue of banning a technological family the #1 central issue in a, wait for it:
Culture War.
What culture will we have? The old one where women are controlled by their biology, or where they have their biology under control? Those are two radically different cultures.
I have to remind readers that Brazil didn't set out to ban Twitter as a platform, but rather certain accounts that the court found were spreading disinformation. It only ended up as a platform ban because Musk refused and systematically evaded enforcement. https://www.bbc.co.uk/news/articles/cvg41n533zno
In other words, they were attempting to moderate the effects of this technology, but those operating it refused, which they were able to do because they largely resided abroad. It's very much an issue specific to the Internet; most technologies are easier to control, if the political will is there.