Key Points
- Philosophical view that positively influencing the far future is a key moral priority
- If humanity survives, the vast majority of people who will ever live are yet to be born
- Existential risk reduction is extremely important from this perspective
- Associated with the effective altruism and AI safety movements
- Critics argue it can justify neglecting present-day suffering
The Moral Weight of the Future
Longtermism is the philosophical view that positively influencing the long-term future is among the most important things we can do. Because the future could contain vastly more people than exist today—potentially trillions living across billions of years—actions that affect humanity's long-term trajectory may have enormous moral significance.
The view is closely associated with effective altruism and has heavily influenced AI safety research and existential risk studies.
The Core Argument
1. Future people matter morally, just as present people do
2. There could be vastly more future people than present people
3. We can influence whether the future goes well or badly
4. Therefore, positively shaping the long-term future is extremely important
If humanity survives and expands, the number of people who will ever live could be astronomical—perhaps trillions over millions of years. From this perspective, ensuring humanity reaches that potential is enormously valuable, and existential risks that could cut off that future are enormously costly.
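To make the scale argument concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (the size of the potential future population, the extinction probability, the amount of risk an intervention removes) is an illustrative placeholder chosen for this example, not a figure from the longtermist literature. The point is only that when the number of potential future people is very large, even a tiny reduction in extinction risk carries an enormous expected value.

```python
# Back-of-envelope expected-value sketch of the longtermist scale argument.
# All parameters are illustrative placeholders, not empirical estimates.

present_people = 8e9            # roughly the number of people alive today
potential_future_people = 1e14  # hypothetical count of future lives if humanity survives
p_extinction = 0.10             # hypothetical probability of extinction this century
risk_reduction = 0.001          # hypothetical intervention: removes 0.1 percentage points of risk

# Expected future lives without and with the intervention.
expected_lives_baseline = (1 - p_extinction) * potential_future_people
expected_lives_with_intervention = (1 - (p_extinction - risk_reduction)) * potential_future_people

# Expected future lives gained, in expectation, from the intervention.
expected_gain = expected_lives_with_intervention - expected_lives_baseline

print(f"Expected gain from the intervention: {expected_gain:.2e} lives")
print(f"...about {expected_gain / present_people:.1f}x the present world population")
```

On these made-up numbers, a 0.1-percentage-point cut in extinction risk is worth about 10^11 expected future lives, more than ten times the present world population. This kind of arithmetic is what drives the longtermist emphasis on existential risk; critics respond that such expected-value reasoning is highly sensitive to speculative inputs.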
Implications for Priorities
Longtermism suggests prioritizing:
Existential risk reduction: Preventing human extinction or permanent civilizational collapse becomes the top priority, as these cut off all future value.
AI alignment: Ensuring superintelligent AI is beneficial is crucial since it could determine humanity's entire future trajectory.
Trajectory changes: Actions that alter the long-term path of civilization—not just near-term outcomes—deserve special attention.
Value lock-in: Preventing scenarios where bad values become permanent (like a stable global totalitarian regime).
Key Figures and Works
William MacAskill: Oxford philosopher, author of What We Owe the Future, co-founder of the effective altruism movement.
Toby Ord: Oxford philosopher, author of The Precipice, focused on existential risk.
Nick Bostrom: Philosopher who developed many foundational ideas about existential risk and the long-term future.
Derek Parfit: Philosopher whose work on personal identity and ethics influenced longtermist thinking.
The Compression of "Long-Term"
One of longtermism's most interesting tensions is that the timelines it considers are being compressed by AI progress. If AGI arrives by 2030, the "long-term future" becomes a near-term decision space. The pivotal choices about AI alignment, governance, and human-AI integration that will shape millennia may need to be made within the next few years.
This creates urgency: longtermist priorities—especially AI alignment—aren't about some distant century. They're about this decade.
Longtermism and the Singularity
For longtermists, a potential Singularity would be a pivotal moment: how AI development unfolds in the coming years could determine whether humanity reaches its cosmic potential or goes extinct. This makes AI alignment one of the most important longtermist priorities.
