
Irving John Good worked alongside Turing at Bletchley Park and went on to articulate one of the most important ideas in the Singularity canon: the intelligence explosion. In his 1965 paper "Speculations Concerning the First Ultraintelligent Machine," Good argued that a machine smarter than any human could design an even smarter machine, triggering a chain of recursive self-improvement that would leave human intelligence far behind. He called such a machine "the last invention that man need ever make." This single insight, that superintelligent AI would be self-amplifying, became the conceptual backbone of both Singularity optimism and AI safety concerns. Yudkowsky, Bostrom, and Kurzweil all build directly on Good's framework. The intelligence explosion may be the most consequential idea of the twentieth century that most people have never heard of.
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion.’”
Speculations Concerning the First Ultraintelligent Machine · 1965