I Am the Very Model of a Singularitarian
...Or not. If the Technological Singularity is a ridiculous idea (it is) then what are we getting instead? Welcome to the Technological Maximum
Singularity, Shmingularity
There are plenty of reasons to be suspicious of the Technological Singularity, but it’s an important concept because of its disproportionate effect on the culture of Silicon Valley and, subsequently, on the “tech bros” who now effectively control the United States. But if we’re not barreling towards a Singularity, where is our technological development headed? I’ve proposed an alternative, and in this post I’ll lay it all out for you.
First of all, what is the Technological Singularity? Basically, it’s the idea that at a certain point (right about now, according to some people) computers will become ‘self-improving’ and progressively redesign themselves to become smarter and smarter, quickly outstripping human intelligence. At that point, predicting the path of technological development, and by extension civilization, becomes impossible. This is obviously a secular version of the Christian Apocalypse, so anybody with an ounce of reason should recognize it as a red flag. Also, if we can’t predict, we can’t design; if the Singularity is imminent, then we don’t have to make ethical decisions about which technologies we fund and promote. That’s bad. We don’t need to care what our new technologies do to us or the planet, because they’ll all be transcended in the Rapture of the Nerds anyway. So, anything goes.
There are also practical reasons for suspicion. Technological development generally follows S-curves rather than indefinite exponential growth. Consider air travel, which advanced rapidly from the Wright brothers to the moon landing and then plateaued (commercial aircraft still cruise at roughly the same speeds as in the 1970s). As technologies advance, it’s the problems that become exponentially more difficult: each increment of progress requires disproportionately more resources, creating natural ceilings. This has happened numerous times in AI research.
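To make the S-curve point concrete, here’s a minimal sketch in Python (the growth rate, ceiling, and midpoint are arbitrary illustrative numbers, not a model of any real technology). A logistic curve is nearly indistinguishable from an exponential one early on, then flattens against its ceiling:

```python
import math

def exponential(t, rate=0.5):
    # Unbounded growth: no ceiling, ever.
    return math.exp(rate * t)

def logistic(t, ceiling=100.0, rate=0.5, midpoint=10.0):
    # S-curve: tracks the exponential early on,
    # then saturates as it approaches the ceiling.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):6.1f}")
```

Run it and the divergence is obvious: by t=24 the exponential is more than three orders of magnitude beyond where the logistic curve has quietly settled.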
The gigantic electrical requirements of current AI highlight another limit on exponential growth. The laws of thermodynamics and finite resources impose hard limits on computation that no amount of clever engineering can overcome.
There are lots of other counter-arguments, but for me what’s decisive is that “superintelligence” is an incoherent idea. We don’t even have an accepted definition of intelligence; it’s certainly not just “problem solving” but even if it were, what objectively constitutes a problem? There are no “problems in general,” only problems for some mode of living or existing in the world. Once a being (any kind of being) has “solved” all the problems that prevent it from continuing to exist (to maintain homeostasis), intelligence has done its job and becomes a drain on the system (or, at best, a ‘peacock’s tail’). In other words, in the real world intelligence is only coherently understandable as a tool for maintaining some specific mode of existence. For any given entity, there can be no value in intellectual capabilities that exceed what it needs to continue to exist. It might give birth to a “superior” being with new needs, but that being can’t be objectively superior—it’s superior according to the standards of the current entity; by definition its superiority is being posited by something inferior to itself. This dilemma can be dramatized by a little parable I call “transdogism.” I’ll write about that sometime.
Meanwhile, let’s admit that the Technological Singularity is nonsense. If we do, we’re left with a problem: if the Singularity isn’t happening, then what is? We’re living through a very strange historical moment, and a lot of its weirdness is being driven by new technologies. What is this event (as Badiou would call it) that AI represents?
The Technological Maximum
The technological maximum is just what it sounds like: the highest possible level of technological advancement. Once it’s reached, no further progress is possible (or only sporadic progress, within certain bounds). This may sound as preposterous as the Singularity, but hear me out. There’s a case that we’re very close to it right now.
If we stop fantasizing about the exponential growth of something we hand-wavingly call “technology” and instead consider optimizing the design of specific technologies for specific purposes within known constraints, then our questions about future tech become very different. I mean really, we’re not going to ask ‘what is the ultimate eating implement,’ but we might ask ‘what is the best fork/chopstick/spork for person X living in circumstances Y?’ This is the reframing that makes sense of where we’re going:
Technological ‘advancement’ means better fitness for some local circumstance. The most advanced technology is the one that perfectly matches your own particular needs.
It’s easy to imagine how this works: if you can prompt ChatGPT or Midjourney to generate text or an image that matches the exact requirements of your prompt, why can’t you prompt a future generative AI to design the perfect refrigerator for your specific house and its context, matched to your budgetary and esthetic requirements? Rather than an improvement in “tech level” in general (as if that could be defined), the increasing power of AI permits hyper-contextualization. A refrigerator designed specifically for your home in Arizona would be fundamentally different from one designed for a home in Finland.
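To make hyper-contextualization concrete, here’s a hypothetical sketch of what the input to such a system might look like. Nothing below is a real API; the DesignBrief structure and every field in it are invented for illustration. The point is that local context becomes the specification:

```python
from dataclasses import dataclass

# Hypothetical design brief for a future generative design system.
# Invented for illustration; no such API exists today.
@dataclass
class DesignBrief:
    device: str
    climate: str           # ambient conditions the device must handle
    site_constraints: str  # dimensions, power supply, placement
    budget_usd: float
    aesthetic: str

arizona_fridge = DesignBrief(
    device="refrigerator",
    climate="hot and arid; kitchen peaks at 35 C in summer",
    site_constraints="180x60x60 cm alcove, 115 V outlet, tile floor",
    budget_usd=1200.0,
    aesthetic="brushed steel, minimal",
)

finland_fridge = DesignBrief(
    device="refrigerator",
    climate="cold winters; could dump waste heat into home heating",
    site_constraints="integrated cabinetry, 230 V supply",
    budget_usd=1200.0,
    aesthetic="birch panel front",
)
```

Two briefs for the “same” appliance, and almost nothing about the optimal designs would be shared.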
All we have to do to get to this point is extend generative AI to the design of physical devices. Forget about improving the intelligence of the system; intelligence as we generally define it isn’t what generative AI uses. These systems don’t think. They start with random noise and subtract everything that doesn’t look like what you prompted them to see, until they’re left with an image or a written article. The same process is already being used to design custom proteins, and it’s only a matter of time until it’s applied to the design of electronic and mechanical systems. We’ll soon experience a democratization of device design similar to what we’ve seen with text and image creation. Generative AI undermines the whole idea of Intellectual Property, threatening the livelihoods, and the attribution of styles and content, of a wide swath of visual and musical creatives, as well as lawyers and others who specialize in the use of language. Expect this crisis to extend into engineering as well.
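That “subtract what doesn’t fit” process can be caricatured in a few lines. This is a deliberately toy sketch: the fixed target vector below stands in for a trained model’s sense of what the prompt looks like, whereas real diffusion models predict and remove noise with a learned network over a scheduled sequence of steps:

```python
import random

# Stand-in for 'what the prompt looks like' to a trained model.
# (A real model scores millions of pixels, not four numbers.)
prompt_target = [0.9, -0.4, 0.2, 0.7]

def denoise_step(sample, target, strength=0.1):
    # One toy 'denoising' step: remove a little of whatever
    # doesn't match the prompt.
    return [x + strength * (t - x) for x, t in zip(sample, target)]

# Start from pure random noise...
sample = [random.gauss(0.0, 1.0) for _ in prompt_target]

# ...and repeatedly subtract what doesn't belong.
for _ in range(50):
    sample = denoise_step(sample, prompt_target)

print(sample)  # has converged toward the target; no 'thinking' occurred
```

No reasoning, no goals, no self-model: just iterated subtraction toward a prompt-conditioned target.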
One result is the dissolution of technological hierarchies. When everything becomes custom-designed for its exact purpose and context, traditional measures of "advancement" lose meaning. Is a hand-powered tool that perfectly suits its environment more or less advanced than an electric one that's less optimized? If the idea of “advancement” no longer makes sense, then the exponential growth of technological advancement doesn’t make sense either. The whole fabric of ideas that implies a Technological Singularity comes unraveled.
This is a future of mass customization rather than mass production. The capabilities of generative AI, coupled with the resource constraints of a finite Earth, inexorably push us from economies of scale to economies of precision. Each item becomes a bespoke creation rather than one-size-fits-all, wherever doing so reduces resource use. If we take Elon Musk’s dictum that “the best part is no part” seriously, then the end state of technological growth is a world with far fewer parts to any given machine, and fewer machines. The way I put this is that “any sufficiently advanced technology resembles Nature.”
In this near future, research shifts from knowledge creation to knowledge integration. The emphasis is on optimally combining and applying existing knowledge about physical laws and materials, rather than inventing new technologies.
The limiting factor of the Technological Maximum is the accuracy of the generative AIs’ underlying models: our understanding of physical reality, rather than our engineering capabilities, is what’s decisive.
Note that the Maximum sidesteps the common critique of the Singularity: that it assumes infinite resources and unbounded growth. The Maximum acknowledges natural limits while still providing vast growth in actual benefits to people.
The Devil in the Details
One objection to the Maximum might be that what technology allows us to do is to “climb Mount Improbable.” Technology pushes local conditions far away from what could occur naturally by introducing very particular constraints on the flow of energy and material in a system. These improbable physical states might in turn be used to create new higher-order constraints, and so on, until we reach physical states and actions that would seem impossible if we considered only the basic laws of physics and chemistry. A generative AI creating new systems would have to be trained on, and generate, physical objects that instantiate particular sets of constraints, rather than just using physics to model objects. Many processes are emergent, and according to Stuart Kauffman these cannot, even in principle, be derived from the entailing laws that make them possible. This suggests a blind spot in generative systems that could only be gotten around by empirical experiment. That’s not an insurmountable problem; an experimental arm of a Maximal generative AI could run these experiments to discover new emergent behaviors. It does, however, anchor the capabilities of the Maximum, and it suggests that the Maximum can grow, albeit not at the exponential rate implied by the Singularity.
All of this suggests that we reach the Technological Maximum when we have:
Generative AI that produces devices and systems rather than just text and images. This generative system is coupled to:
An empirical experimentation loop (sketched in code after this list). The system needs an experimental arm to discover emergent properties that cannot be derived from first principles. This creates a cycle where:
The generative AI designs based on known physics and emergent properties
Experimental systems test these designs and discover new emergent behaviors
These discoveries feed back into the generative AI's knowledge base
Incompleteness recognition. Perhaps most importantly, the system has to be able to recognize the boundaries of its own knowledge and identify when empirical testing is necessary (or its users have to be able to tell it). This is where the more traditional model of intelligence comes into play, and where actual thinking machines can be useful. It’s a limited but important role: asking “what else is possible?”
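Here’s the sketch promised above: a toy rendering of the three-part cycle. Everything in it, the behavior names and the coin-flip discoveries alike, is invented purely for illustration; a real system would run physical experiments, not set operations. The structural point is the feedback: designs come from current knowledge, experiments occasionally surprise, and surprises update the knowledge base:

```python
import random

# 'First-principles' knowledge the generative arm starts with.
KNOWN = {"conduction", "convection"}
# Emergent behaviors: discoverable only by actually testing something.
HIDDEN = {"resonance", "self-assembly"}

def generate_design(knowledge):
    # Generative arm: a 'design' is just a subset of known behaviors it exploits.
    return set(random.sample(sorted(knowledge), k=min(2, len(knowledge))))

def run_experiment(design):
    # Empirical arm: testing sometimes reveals an emergent behavior
    # that no derivation from the knowledge base could predict.
    observed = set(design)
    if HIDDEN and random.random() < 0.5:
        observed.add(HIDDEN.pop())
    return observed

knowledge = set(KNOWN)
for cycle in range(10):
    observed = run_experiment(generate_design(knowledge))
    surprises = observed - knowledge      # incompleteness recognition
    if surprises:
        print(f"cycle {cycle}: discovered {surprises}")
        knowledge |= surprises            # discoveries feed back in

print("final knowledge base:", knowledge)
```

Once HIDDEN is exhausted the loop stops surprising itself, which is exactly the dynamic equilibrium described next.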
So maybe we’re not reaching a static "maximum" here; it’s more about achieving a dynamic equilibrium where generative capabilities and empirical discovery are perfectly balanced—a system that optimally navigates the space of possible technologies given current understanding of physics and emergent properties.
This suggests a different kind of technological plateau: not one where progress stops, but one where the nature of progress fundamentally changes, from creating new technological paradigms to optimizing within a comprehensive understanding of possibility space. But from the human perspective, there’s no mysterious “singularity” here where prediction becomes impossible. There’s no Rapture of the Nerds where we all upload into an AI heaven. Things just work better and cost less. That’s it. That’s the end state.
A Different Kind of Social Revolution
The key societal implication is that, in the very near future, society will move from a paradigm of technological progress as "more/bigger/faster" to one of technological progress as "more perfectly adapted to context." The framework fundamentally challenges our current notions of technological advancement and hierarchies, even with the addition of empirical discovery loops. This shift could reshape what competition and cooperation between societies means. Would they compete on technological "advancement" or on how perfectly their technologies serve human needs?
Governing at the Max
How does ownership work in a system where artifacts are unique, integrated, and optimized for specific contexts? And where all IP, including patents, has been absorbed into the generative model in the same way that current AIs are being trained on proprietary documents and artworks? This is a fraught question—but there are some possible answers: