For decades, we spoke of the intelligence explosion as something that would happen. A threshold we would cross. A future event, safely distant, that we could debate in the abstract.
We are no longer in that world.
AI systems are now designing AI systems. The recursive loop that Vernor Vinge warned about, that I.J. Good formalized in 1965, that singularitarians have anticipated for decades, is no longer a prediction. It is the operational reality of how frontier AI is developed.
The Loop Is Running
Consider what is happening at AI labs right now.
Language models write the code that trains the next generation of language models. AI systems optimize neural network architectures, discovering configurations that human researchers would not have found. Machine learning automates the hyperparameter tuning that once consumed months of researcher time. AI generates synthetic training data to improve AI performance.
This is the mundane daily workflow at Anthropic, OpenAI, Google DeepMind, and dozens of other organizations. The humans are still in the loop, but the loop itself is increasingly automated.
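To make the loop concrete, here is a minimal sketch of the kind of automated search that has replaced hand-tuning. Everything in it is illustrative: the objective function is a stand-in for a real training run, and the parameter ranges are invented for the example, not drawn from any lab's actual pipeline.

```python
import random

# Stand-in for a real training run: score a hyperparameter configuration.
# In a production pipeline this would launch a training job and return a
# validation metric; here it is a hypothetical response surface.
def validation_score(lr, width):
    return -((lr - 1e-3) ** 2) * 1e6 - ((width - 256) / 256) ** 2

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-5, -1)           # log-uniform learning rate
        width = rng.choice([64, 128, 256, 512])  # candidate layer widths
        score = validation_score(lr, width)
        if best is None or score > best[0]:
            best = (score, lr, width)
    return best

score, lr, width = random_search()
print(f"best score={score:.3f} at lr={lr:.5f}, width={width}")
```

The search itself is trivial; what matters is that the expensive judgment, deciding which configuration to try next, no longer requires a human. Swap the random sampler for a learned proposal model and the loop starts improving the very systems that run it.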
In 2024, Sakana AI demonstrated the AI Scientist: a system that could formulate research hypotheses, design experiments, run them, analyze results, and write papers describing the findings. The papers were not impressive by human standards. That is not the point. The point is that the entire research pipeline, from ideation to publication, can now be automated. The quality will improve. The speed will increase. The humans will become optional.
AlphaFold and the Pattern
When DeepMind's AlphaFold 2 achieved near-experimental accuracy at the CASP14 assessment in 2020, it effectively solved protein structure prediction, a problem that had resisted fifty years of human effort. Biologists had made incremental progress; AlphaFold made accurate prediction routine.
It was a demonstration of a pattern that keeps repeating.
AlphaFold 3, released in 2024, extended the same approach to predicting how proteins interact with DNA, RNA, ions, and drug-like small molecules. Problems that were hard for humans became easy for sufficiently intelligent systems. The intelligence did not need to be general; it needed to be adequate for the domain.
This pattern is repeating across fields. Mathematical conjectures that resisted human proof yield to AI assistance. Software that would take teams months to write is generated in hours. Scientific literature that no human could comprehensively review is synthesized by systems that read everything.
Each of these domains represents a feedback loop. AI accelerates materials science, which discovers better semiconductors, which improves AI hardware, which accelerates AI. AI accelerates chip design, which produces faster chips, which accelerates AI. AI writes better code, which improves AI systems, which writes better code.
The loops are multiplying. And they are coupling together.
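The difference coupling makes can be shown with a toy model. In the sketch below, a single loop compounds at a fixed rate, while in the coupled case the rate itself rises with capability, because gains in hardware, code, and data feed one another. Every number here is an assumption chosen for illustration, not a measurement.

```python
# Toy model: fixed-rate compounding vs. coupled loops where the
# improvement rate itself grows with capability. Parameters are
# illustrative assumptions, not empirical estimates.
def simulate(steps=40, coupled=False):
    capability = 1.0
    rate = 0.05  # assumed 5% gain per step
    for _ in range(steps):
        capability *= 1 + rate
        if coupled:
            rate = 0.05 * capability ** 0.5  # coupling: rate rises with capability
    return capability

print(f"single loop after 40 steps:   {simulate(coupled=False):7.1f}x")
print(f"coupled loops after 40 steps: {simulate(coupled=True):7.1f}x")
```

The fixed-rate loop ends the run at roughly 7x; the coupled loop ends more than an order of magnitude beyond that and is still pulling away. The exact numbers mean nothing; the shape of the divergence is the point.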
The Quiet Takeoff
In the early days of singularity thinking, we imagined the intelligence explosion as a dramatic event. A system would achieve human-level intelligence, then rapidly bootstrap itself to superintelligence, leaving humanity behind in days or hours. This was the "hard takeoff" scenario.
What we are experiencing instead is a quiet takeoff. The explosion is happening, but it is distributed across thousands of systems, organizations, and applications. No single AI is recursively self-improving in isolation. Instead, the entire ecosystem of AI development is recursively self-improving as a collective process.
This makes the transition harder to see. There is no single moment when the explosion begins. There is no dramatic announcement. There is only the steady accumulation of capability, the gradual automation of cognitive work, the slow-motion replacement of human intelligence with machine intelligence across domain after domain.
Most people will not notice until it has already happened.
What the Timelines Mean
I predict AGI by 2027 and the Singularity by 2035. These predictions are based on extrapolating current trends, not on hope or fear.
The 2027 AGI prediction reflects the pace of capability gains we have observed. GPT-4 in 2023 passed professional exams that would have seemed impossible for AI a few years earlier. The o1 and o3 models in 2024 demonstrated genuine reasoning, not just pattern matching. Each generation of models closes gaps that experts said would take decades.
The 2035 Singularity prediction reflects what happens after AGI. Once we have systems that can do AI research as well as the best human researchers, the feedback loop accelerates dramatically. The limiting factor in AI progress shifts from human cognitive bandwidth to compute and data. Both of those are scaling exponentially.
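A back-of-envelope calculation shows what that exponential scaling implies. The six-month doubling time below is an assumed illustrative figure, not a measured constant; the arithmetic is the point.

```python
# If effective compute doubles every six months (an assumption for
# illustration), then a decade of scaling is twenty doublings.
doubling_months = 6
years = 10
doublings = years * 12 / doubling_months
print(f"{doublings:.0f} doublings -> {2 ** doublings:,.0f}x effective compute")
# 20 doublings -> 1,048,576x effective compute
```

Halve the doubling time and the exponent doubles. That sensitivity is why forecasts disagree so sharply while still agreeing on the shape of the curve.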
The gap between AGI and the Singularity will be shorter than the gap between today and AGI. Once the loop is running at full speed, things move fast.
What This Means for You
If the intelligence explosion has already begun, what should you do?
First, stop thinking about AI as a tool you use and start thinking about it as a partner you collaborate with. The systems available today are not superintelligent, but they are intelligent enough to augment your capabilities significantly. Use them.
Second, orient toward the transition. The economic and social implications of recursive AI improvement are profound. Jobs that exist today will be transformed beyond recognition. Industries that seem stable will be disrupted. The best strategy is to position yourself at the frontier, where the changes happen first and the opportunities are greatest.
Third, recognize that the window for influencing the trajectory is closing. The decisions being made now, at AI labs and in governments, will shape how the intelligence explosion unfolds. If you care about the outcome, engage with it while engagement is still possible.
The Stakes
The intelligence explosion is the last technological transition that humans will drive.
Everything that comes after, every problem we solve and every capability we develop, will be shaped by minds more powerful than our own. The diseases we cure, the worlds we explore, the questions we answer, all of it will be downstream of the recursive process now underway.
This is what I have been writing about since 2005. The Singularity is not a distant event; it is the trajectory we are already on, and it is accelerating.
The explosion has begun. The only question is whether we will be wise enough to guide it.
I remain optimistic. The same intelligence that creates risk also creates solutions. The alignment problem is hard but not impossible. And the upside, the genuine possibility of ending death and suffering and scarcity, is worth the effort.
We stand at the threshold. The feedback loop is running. What we build next will determine everything.