AI for a New Democracy
Current Large Language Models are being trained to aid programmers. I propose we train one to aid collective decision-making
Brenda Cooper is both an accomplished and insightful futurist, and also writes excellent science fiction. I recommend her Project Earth books to anyone who would like a new, more optimistic vision of what the next half-century could be like. She recently suggested I read Plurality: The Future of Collaborative Technology and Democracy, by Tang and Weyl. It’s incredibly relevant to our current political situation—not just locally, but globally.
But who has time to read, these days? The book is 600 pages long. And shouldn’t we be acting instead of reading?
There’s been a lot of work on new democratic processes over the past quarter-century. Most such systems have been deliberately low-tech; for instance, Jason Diceman’s Feedback Frames can be deployed anywhere and can be used by people who are illiterate or undereducated. The operating system of sophisticated group decision-making methods such as Syntegrity and Structured Dialogic Design is face-to-face human interaction, and many of them work very well. Unfortunately, the consultancies that develop these techniques tend to hold onto the rights to use them, and there’s no general mechanism for educating the public about them. Most people don’t even know they exist.
But we’re all familiar with the OS. We just need an expert we can ask about where to start when we face a neighbourhood issue or a wider political situation that’s going to affect diverse stakeholders. Right now, it’s apparent that people in general are relying on Facebook, X, and Google to keep them informed about politics. These are… ahem… unreliable sources. Surely there’s something better.
If there isn’t, let’s build it. Here’s how:
Large Language Models like Microsoft’s Copilot are currently being optimized to assist computer programmers. These AIs do a pretty good job. One outlier, Claude, is touted as a “Constitutional AI,” a language model based upon a set of norms or standards, such as avoiding discriminatory or racist responses to queries. Constitutional AI is a great idea, but like all LLMs, it is sensitive to the material that it’s trained on. To date, most LLMs have been trained on data plundered from the Internet, but if you’re the trainer, you can add your own data. Microsoft has complemented Copilot’s general knowledge with a lot of specific examples of how to code well. This may make programming computers easier, but you know, on the scale of importance to the future of humanity, faster software development is pretty low.
It’s not an AI-driven Technological Singularity that’ll save us. It’s learning to get along better.
I propose that the unapocalyptic community combine our efforts to train a good, open-source LLM on the vast literature that exists on collective decision-making.
We could use DeepSeek’s R1 model; there are plenty of tutorials, like this one, on running it locally on your own hardware. The task would be to create a Democracy chain-of-thought dataset, similar to the Medical chain-of-thought dataset Abid Ali Awan talks about training. The dataset we’re after would comprehensively distill what we’ve learned about problem definition, framing and reframing, stakeholder engagement, commons governance, Cybersyn and Viable Systems, village-level alternative currencies, networking, decentralized decision-making for teams, etc.
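To make the proposal concrete, here is a minimal sketch of what one record in such a dataset might look like, written out as JSON Lines (a common format for fine-tuning data). The field names (`prompt`, `chain_of_thought`, `answer`) and the example scenario are purely illustrative assumptions on my part; any real pipeline would dictate its own schema.

```python
import json

# A hypothetical record for a "Democracy chain-of-thought" dataset.
# Field names and content are illustrative, not a standard; a real
# fine-tuning pipeline will expect its own schema.
records = [
    {
        "prompt": "Our neighbourhood is split over a proposed bike lane. "
                  "How do we structure the first public meeting?",
        "chain_of_thought": (
            "1. Identify stakeholders: residents, cyclists, shop owners, city staff. "
            "2. The issue is contested, so favour low-tech, low-literacy tools. "
            "3. Feedback Frames suit anonymous preference polling in mixed groups."
        ),
        "answer": "Start with a stakeholder-mapping session, then use "
                  "Feedback Frames to surface areas of agreement before debate.",
    },
]

def write_jsonl(path, rows):
    """Write dataset rows as JSON Lines: one JSON object per line."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

write_jsonl("democracy_cot.jsonl", records)
```

The point of the middle field is the same as in the medical dataset: the model learns not just the recommendation but the reasoning path from situation to technique.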
We want an expert resource to assist us in finding the right set of high- and low-tech tools to quickly and effectively make collective decisions. Stuck on where to start, or how to bring opposing stakeholders together? Ask the AI. It might hallucinate some bullshit answer, but they all do that and you never take an LLM at its word. What current AI is good at is providing you with threads you can pull; some go nowhere, but some lead you to primary sources that are just what you were looking for. Let’s do this, but as a public good, not to serve the coding camp that’s enthusiastically working to make their own jobs obsolete.
The main challenge I see is getting permission to include key texts that are under copyright, and in some cases do not exist in electronic form. I’m thinking of works such as Stafford Beer’s Platform for Change, or Aleco Christakis’s The Talking Point. (Btw, I’m not one for conspiracy theories, but I bought The Talking Point through Amazon a few years back, and now I can’t find it on the site. Why would Jeff Bezos’s company hide or take down a book designed to empower citizens to resolve their differences without requiring an outside authority? I’m sure I don’t know.)
As I said, many relevant techniques, such as Syntegrity, are the property of management consulting firms. These methodologies are important IP, and such firms sometimes rely on them to make a living. They should give their permission, and be compensated, for their methodologies being used to train an AI.
There’s already a legal structure in place for this. I sell various rights to my literary works—foreign and translation rights, film rights, non-exclusive reprint rights, etc. If you quote more than three lines of text from one of my books in one of your own, you’re supposed to pay me, and there’s contract law for that. It should be possible to adapt this legal language to compensate creators for the use of their work in training an AI. Personally, I’d be thrilled to be able to contribute my books to birthing a new AI; it would just be nice to get paid for it.
The LLM should also be able to provide direct resources, such as a link to the Feedback Frames website. The AI itself must be free and available to run on your home computer, and even, in a year or two, on your phone.
Decentralized, Fractal Democracy
Stafford Beer’s vision of future democracy, first trialed in Chile from 1971 to 1973, was of nested viable systems. A viable system is autonomous; it takes inputs and produces something, is managed locally, and has sensors to tell it whether the world is changing and how to react. It is guided by an overarching purpose. Each of these functions—input, output, management, principles, and foresight—could be a viable system of its own. As long as a viable system is doing its job, the systems it’s embedded in leave it alone. In Chile, this meant that regional factories and warehouses could manage themselves however they wanted as long as they met their quotas. But our democratic AI could explode this model into something much more ambitious.
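The nesting rule above—leave a subsystem alone while it meets its quota, look closer when it doesn’t—can be sketched in a few lines of code. This is my own toy illustration of the idea, not Beer’s formal Viable System Model; the names and quota numbers are invented.

```python
from dataclasses import dataclass, field
from typing import List

# Toy model of nested viable systems: a parent system only flags a
# subsystem for attention when that subsystem fails its own check.
@dataclass
class ViableSystem:
    name: str
    quota: float              # what the system is expected to produce
    output: float = 0.0       # what it actually produced this cycle
    subsystems: List["ViableSystem"] = field(default_factory=list)

    def healthy(self) -> bool:
        """A system is left alone while it meets its quota."""
        return self.output >= self.quota

    def needs_attention(self) -> List[str]:
        """Recursively list every system that is falling short."""
        flagged = [] if self.healthy() else [self.name]
        for sub in self.subsystems:
            flagged.extend(sub.needs_attention())
        return flagged

# Invented Chile-style example: the plan is met overall, but one
# warehouse is under quota, so only it gets flagged.
country = ViableSystem("national plan", quota=100, output=100, subsystems=[
    ViableSystem("regional factory", quota=40, output=45),
    ViableSystem("regional warehouse", quota=60, output=50),
])
print(country.needs_attention())  # → ['regional warehouse']
```

The recursion is the whole point: the same autonomy-with-oversight pattern applies at every scale, from a work team to a nation.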
A democratic LLM could enable every person to walk around with the whole armamentarium of democratic techniques in their pocket. Problem definition and resolution would not be monopolized by a professional managerial class. It would be available to anybody who could talk to their phone.
We can build this right now. All we have to do is get the rights to train the model on the best tools and techniques. We should figure out how to do this, and just do it.
Put it this way: would you like to be part of democracy’s “iPhone moment”?
—K
I like this idea a lot. Also I happen to be a member of a community of practice that uses a large group dialogue system that’s in the same category as SDD and Syntegrity. Some colleagues have been playing with using LLMs to enable the work (eg synthesizing summaries from group conversations). I’m going to share this piece with the community to see if I can spark some interest.
This is a fascinating and timely proposal. Decentralized, accessible decision-making tools could be transformative. Great read!