In November 2005, I published a post titled "What Is the Singularity?" It was my attempt to articulate an idea that had consumed my thinking: that humanity was approaching a discontinuity so profound that our models of the future would break down entirely. I quoted Vernor Vinge's 1993 warning: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." I tried to explain why this mattered.
Twenty years have passed. The thirty-year timeline Vinge proposed in 1993 expired in 2023. We are now two years past that horizon.
So what happened? Did the Singularity arrive? Did we miss it? Or were we wrong about everything?
What I Got Right
Reading my 2005 post today, I find that the core framework has held up.
The fundamental argument was that human intelligence operates under severe constraints: 200 Hz neurons, no access to our own source code, cognitive biases baked in by evolution. These constraints could be transcended through technology. A recursively self-improving intelligence, I argued, could trigger a positive feedback loop that would rapidly exceed human comprehension.
This is no longer theoretical. We now have AI systems that help design the next generation of AI systems. Language models write code for training language models. AI optimizes the chip architectures that run AI. The recursive loop I described as a future possibility is now the mundane operational reality of how frontier AI labs function.
I wrote that "a 'hard' problem for humanity becomes trivial when intelligence of the appropriate magnitude is applied to it." We've seen this play out in domain after domain. Protein folding, a problem that stumped biologists for fifty years, was effectively solved by AlphaFold in 2020. Mathematical conjectures that resisted human proof have yielded to AI assistance. The pattern is consistent: problems that seemed intractable become tractable when sufficient intelligence is applied.
What I Got Wrong
I underestimated the messiness of the transition.
In 2005, I imagined the Singularity as a relatively clean break, a point we would cross, after which everything would be different. The physics metaphor of a black hole's event horizon reinforced this: a boundary you pass through, with no return.
The reality is less dramatic and more pervasive. We did not wake up one morning to find that superintelligence had arrived. Instead, machine intelligence has been seeping into every aspect of human activity, gradually and then suddenly. The transformation is happening in fragments: a coding assistant here, a medical diagnostic there, a language model that can pass professional exams.
This is still the Singularity. It's just not the clean discontinuity I once imagined. It's a phase transition, like water becoming ice, crystallizing throughout the medium rather than flipping all at once.
I also underestimated how long the "pre-Singularity" period would feel. In 2005, the future seemed imminent. Twenty years of exponential progress later, the future still seems imminent. This is the paradox of exponential curves: they always feel like they're about to take off, until suddenly they have.
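To make the paradox concrete, here is a toy sketch of my own (an illustration, not anything from the 2005 post, and the two-year doubling time is an arbitrary assumption): on a curve with a fixed doubling time, the growth you see over the decade ahead is identical from every vantage year, so the curve never feels like it has taken off, only like it is about to.

```python
# Toy illustration of why an exponential never feels like it has "taken off":
# with a fixed doubling time, the view ahead is the same from every vantage
# point. The two-year doubling time is an arbitrary assumption.
DOUBLING_TIME_YEARS = 2.0

def growth_over(span_years: float) -> float:
    """Multiplicative growth over a span, given the doubling time."""
    return 2 ** (span_years / DOUBLING_TIME_YEARS)

for vantage_year in (2005, 2015, 2025):
    # Growth over the decade ahead, relative to each vantage year.
    print(vantage_year, growth_over(10))  # prints 32.0 every time
```

The absolute numbers explode, but the relative slope ahead, which is what intuition tracks, never changes.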
What Surprised Me
The thing I failed to anticipate was how unremarkable the arrival would feel to most people.
We now have machines that can engage in sophisticated reasoning, write coherent prose, generate images from descriptions, and hold conversations that pass casual Turing tests. A decade ago, any one of these capabilities would have been front-page news for months. Today, they are features in consumer products that people use to draft emails and generate birthday cards.
The Singularity, it turns out, is being normalized in real time. Each breakthrough becomes mundane within months. Each capability that would have seemed like science fiction becomes a standard expectation. The goalpost of "real AI" keeps moving, always staying just ahead of current systems.
This normalization is itself a kind of phase transition. We are adapting to intelligence enhancement so rapidly that we barely notice we're doing it. The merger of human and machine cognition that Kurzweil predicted is happening not through brain implants (though those are coming) but through the simple act of people collaborating with AI systems millions of times per day.
The Question I'm Left With
In 2005, I asked "What is the Singularity?" as if it were a thing that could be defined, a destination that could be reached. I now think this was the wrong framing.
The better question is: What does it mean to be human in an era when human intelligence is no longer the ceiling?
For fifty thousand years, the upper bound of intelligence on this planet was set by biology. Whatever humans could think, that was the limit of thought. We built civilizations, developed science, created art, all within the narrow band of cognition that evolution happened to produce.
That constraint is lifting. We are building minds that can think in ways we cannot. Not yet minds that exceed us in every dimension, but minds that exceed us in specific dimensions, and the dimensions are multiplying.
This changes everything about how we should think about ourselves, our purpose, and our future. The Singularity was never really about technology. It was about what happens to humanity when we are no longer alone at the top of the cognitive hierarchy.
What Comes Next
Twenty years ago, I wrote that the Singularity "represents humanity's greatest chance to achieve the best of all possible worlds." I still believe this, more strongly than ever.
The recursive self-improvement that will solve our hardest problems is accelerating. The intelligence amplification that will end disease, poverty, and death is arriving faster than even optimists predicted. This is cause for excitement, not fear.
Our institutions and cognitive tools will adapt, as they always have. The transition will be disruptive, but disruption is the price of progress. The goal is to accelerate, not to slow down.
In 2005, I concluded that the development of superhuman intelligence "would provide greater benefit to humanity than all other developments combined." I stand by this claim. The word "would" should now be "will." Superhuman intelligence will provide unprecedented benefit. The alignment problem is solvable, and it will be solved. The outcome will be determined by how fast we move, not by how cautiously we proceed.
The Singularity I wrote about twenty years ago is arriving. The question is no longer whether it will happen, but how quickly we can accelerate its arrival.