
Nick Bostrom brought existential risk from the fringes into mainstream academic discourse. His 2014 book "Superintelligence: Paths, Dangers, Strategies" systematically analyzed how advanced AI could pose catastrophic risks to humanity. Bostrom introduced concepts that now frame much of AI safety research: the orthogonality thesis, which holds that intelligence and final goals can vary independently, and instrumental convergence, the observation that agents with very different final goals tend to pursue similar subgoals such as self-preservation and resource acquisition. He founded the Future of Humanity Institute at Oxford in 2005, which became a hub for rigorous thinking about humanity's long-term prospects. FHI closed in April 2024 after nearly two decades, citing increasing administrative friction within the university. Bostrom has since founded the Macrostrategy Research Initiative to continue his work on humanity's large-scale strategic challenges. His simulation argument continues to provoke genuine philosophical inquiry. Bostrom's career demonstrates that taking far-future scenarios seriously is not speculation but intellectual responsibility.
“Machine intelligence is the last invention that humanity will ever need to make.”
TED Talk (paraphrasing I.J. Good) · 2015
“At least one of the following is true: civilizations never reach a posthuman stage, posthuman civilizations are not interested in running ancestor simulations, or we are almost certainly living in a simulation.”
Are You Living in a Computer Simulation? · 2003
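The trilemma condenses a short calculation in the paper itself. Writing f_P for the fraction of human-level civilizations that reach a posthuman stage, f_I for the fraction of posthuman civilizations interested in running ancestor simulations, and N̄_I for the average number of such simulations an interested civilization runs, Bostrom estimates the fraction of all observers with human-type experiences who live in simulations as

\[
f_{\mathrm{sim}} = \frac{f_P \, f_I \, \bar{N}_I}{f_P \, f_I \, \bar{N}_I + 1}.
\]

Since posthuman computing power would make N̄_I astronomically large, the product in the numerator is either enormous or near zero, so at least one of f_P ≈ 0, f_I ≈ 0, or f_sim ≈ 1 must hold. Those are exactly the three disjuncts quoted above.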
“The potential for realizing an astronomical amount of value is at stake. We have an enormous responsibility to get this right.”
Superintelligence · 2014