Unapocalyptic

Alien Politics

We shouldn't be worried about AGI or superintelligent AI. We should be actively designing a political system that gives standing to the incomprehensible.

Karl Schroeder
Mar 27, 2026

First, an apology. I’m behind in posting for a variety of life-related reasons. For example, this week I’m writing from rural Western Australia, where we are currently sitting in the path of Cyclone Narelle, a category 3 storm that has ravaged three states and now has its sights set on scouring the west coast. There’s that. Then there’s the foresight contract I’ve just wrapped up, which took up most of my free time, and of course, preparations for the release of my new short story collection (it really is happening this spring!).

I promise to tell you about all of this in due course, and will soon be back on my normal schedule, with some bonuses to come (such as a pre-order window opening soon for the collection, Laika's Ghost).


Our Next Political Move

As you know, I’ve been thinking a lot about the future of politics, because if we’re going to bequeath a just and humanistic political system to our kids, we have to start building it now. There are a lot of moving parts to such a project, so I’ve been wondering how to boil them down to fundamentals. One thing that is clear is that the political frameworks of the 19th and 20th centuries are not up to the job. What is the most critical addition we need to make to our political systems right now?

Our future political freedom depends on us developing protocols that deliberately hold understanding at bay during deliberation.

If this sounds weird, it’s because we are in a weird situation, and that is the point. The kind of abeyance I’m talking about is not like working with statistical uncertainty; I am doing that right now as I’m watching the many possible paths Narelle could take, including some that pass directly over my head in the next 48 hours. That’s what you might call ‘normal’ uncertainty. What I’m talking about is more like Badiou’s Event, a concept I’ve written about before. But let’s try to avoid abstraction here.

Imagine a very near future (say, later this year) when people turn to AI systems such as Grok to help them decide how to vote. Hopefully, we all know by now that these systems are designed to be sycophantic, and therefore reinforce our biases rather than expand our worldview. ChatGPT, Gemini, Claude, and DeepSeek are all bias amplifiers, just like social media. But they can be tweaked to nudge our thinking in particular directions. They are going to have a big impact on voting patterns if their owners (who are all oligarchs, except for the Chinese, who are simply autocrats) have skewed their AIs' models to reflect some partisan position.

This is just like the capture of journalism by billionaires, so I won't repeat the arguments others have made about that. The danger should be obvious, and politically we need counterbalances, whether institutional or informal.

No, there’s a deeper issue here. It doesn’t have to do with AI’s sycophancy, but rather with its (and our) deep-seated drive to make things make sense.

The Bed of Procrustes

Large Language Model AIs aggregate humanity's current understanding of the world. And, as Brian Boyd has pointed out, "if the human mind can understand something in narrative terms, it automatically will." Whatever it is that is going on in the world today, we are frantically integrating it into a consensus-reality tale we've already written. The huge, under-examined problem is that if LLM AIs are bias amplifiers, they are also amplifiers of this integration process. They make things make sense to us, and they will try to do that even if the things in question do not make sense within any existing frame of thought.

We use them because they help us understand the world, and that is precisely why they are profoundly dangerous. See, there's a lag between their training and what's happening now; theorists and historians have not yet fully teased apart the phenomenon that is Trumpism, for instance, yet we're living through it and have questions. LLMs are more than happy to answer those questions, but they will of necessity do so using the paradigms they were trained on.

This makes LLM AIs like Procrustes from the ancient Greek story, who would invite travelers to stay with him. If they were too short for his bed, he would stretch them to fit, and if they were too tall, he’d cut them down to size. This is what Large Language Models do with any situation we describe to them, because they represent the interconnections already present in language, and cannot reason or imagine new ones.

If something unprecedented happens, not only can they not recognize it, they will actively and cleverly confabulate an explanation that makes complete sense to us within the categories of thought that they’ve been trained on.

The AI apocalypse we should be worried about is not, therefore, them taking over the world and wiping us out. The AI apocalypse we should be worried about is one in which everything is explainable. AI is the Procrustean Bed for human knowledge.

The Department of Abeyance

Maybe aliens will help this make more sense. Say aliens land tomorrow, and we can kind-of communicate with them. There'll be areas of overlap between our concepts and theirs. When they say something really strange, though, we have a couple of options. One is to take the weirdness seriously. Another is to treat what they've just said as nonsense and skip over it. Or, we can take what they've just said and shave off the uncomfortable parts until it fits the way we understand the world. We can Procrustize them.

To take the weirdness seriously means, firstly, to admit it is there, and secondly, to refrain from 'fixing' it the way Procrustes would. We'd have to learn to dwell with the incomprehensible. Judging by the history of colonial Europe's contact with the cultures of the New World, that 'dwelling-with' seems highly unlikely to happen on its own.

Neither does democracy, unless you have institutions that are designed to support it.

We’re rapidly institutionalizing Large Language Models and thus, their ability to explain the world to us. I propose that we create institutions designed to counterbalance the Procrustean problem by deliberately holding off—keeping in abeyance—understanding when we sense that, in some way we can’t yet describe, there is more to the story.

What would such a Department of Abeyance look like?
