
Leopold Aschenbrenner was a researcher on OpenAI's Superalignment team before his departure in 2024. In June of that year, he published "Situational Awareness: The Decade Ahead," a 165-page analysis of AI trajectories that became one of the most widely read and debated documents in the AI community. The essay argues that AGI by 2027 is "strikingly plausible" based on trendlines in compute scaling, algorithmic efficiency, and model capabilities. Aschenbrenner traces how an intelligence explosion could compress a decade of AI research progress into less than a year once AI systems can automate their own improvement. He also outlines the national security implications, arguing that the US government will inevitably be drawn into an AGI race with China.

After publishing the essay, Aschenbrenner founded Situational Awareness LP, a hedge fund investing in companies positioned to benefit from the AI buildout. The fund grew from $225 million to over $5.5 billion in equity exposure within a year, reflecting investor conviction in his thesis. Whether or not his aggressive timelines prove correct, Aschenbrenner has shaped how a generation of policymakers and investors think about AGI.
“We are on the cusp of the most transformative event in human history. By 2027, we may have built systems that can do AI research better than we can.”
— paraphrased, 2024