Key Points
- Metaphor for systems that trap participants in destructive competition
- Individual rational choices leading to collective disaster
- Examples: arms races, pollution, social media engagement
- AI race dynamics: pressure to deploy before ensuring safety
- Solving coordination problems is key to navigating the AI transition safely
The God of Inadequate Equilibria
Moloch is a metaphor for coordination failures—situations where individual rational choices lead to collective disaster. Named after the ancient god to whom children were sacrificed, Moloch represents systems that trap participants in destructive competition they cannot escape.
The concept was popularized by Scott Alexander's influential essay "Meditations on Moloch" and has become central to discussions of AI risk and existential challenges.
How Moloch Works
Moloch emerges when:
1. Multiple actors compete for limited resources
2. Competitive pressure rewards those who sacrifice values for advantage
3. Anyone who refuses to sacrifice falls behind and loses
4. The result is everyone sacrificing values that everyone wishes they could keep
Each individual actor makes locally rational decisions, yet the collective outcome is worse for everyone.
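The four steps above have the structure of a prisoner's dilemma. A minimal sketch with made-up payoff numbers (purely illustrative, not from the essay): each actor chooses to "uphold" a shared value or "sacrifice" it for competitive advantage, and sacrificing is always the locally rational move even though mutual sacrifice is the worst collective outcome.

```python
from itertools import product

# payoff[(move1, move2)] = (payoff to actor 1, payoff to actor 2)
# Illustrative numbers: holdouts fall behind; mutual sacrifice is worst in total.
payoff = {
    ("uphold", "uphold"):       (3, 3),  # everyone keeps the value
    ("uphold", "sacrifice"):    (0, 5),  # the holdout loses
    ("sacrifice", "uphold"):    (5, 0),
    ("sacrifice", "sacrifice"): (1, 1),  # collective disaster
}

def best_response(opponent_move, player):
    """The locally rational choice against a fixed opponent move."""
    moves = ["uphold", "sacrifice"]
    if player == 1:
        return max(moves, key=lambda m: payoff[(m, opponent_move)][0])
    return max(moves, key=lambda m: payoff[(opponent_move, m)][1])

# A profile is a Nash equilibrium if each move is a best response to the other.
nash = [
    (a, b) for a, b in product(["uphold", "sacrifice"], repeat=2)
    if best_response(b, 1) == a and best_response(a, 2) == b
]
print(nash)  # only mutual sacrifice is stable
```

With these payoffs, "sacrifice" strictly dominates for both actors, so the only equilibrium is mutual sacrifice, even though both actors prefer the mutual-uphold outcome. That gap between the equilibrium and the preferred outcome is the trap.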
Classic Examples
Arms races: No country wants to spend heavily on weapons, but if rivals arm, you must too. Everyone ends up less secure with huge military budgets.
Environmental degradation: No company wants to pollute, but the one that doesn't is at a competitive disadvantage. The commons are destroyed.
Working hours: No one wants 80-hour weeks, but if competitors work more, you must match them. Everyone burns out.
Social media: Platforms don't want to maximize outrage, but engagement metrics reward it. Discourse degrades everywhere.
Academic publishing: Researchers don't want to publish questionable work, but "publish or perish" pressures select for quantity over quality.
Moloch and AI
The AI race is perhaps the most consequential Moloch trap:
Companies and nations compete to develop AI first. Moving carefully means falling behind. Safety research slows you down. Anyone who pauses while others race ahead loses the race.
This creates pressure to:
- Deploy systems before they're adequately tested
- Skip safety measures that slow development
- Race to capabilities while alignment lags behind
The result could be powerful AI developed without adequate safety—and everyone involved might agree this is suboptimal while feeling unable to change course.
Escaping Moloch
Coordination problems can sometimes be solved through:
Regulation: External constraints that prevent the race to the bottom
Agreements: Binding commitments between competitors
Technology: Tools that change the payoff structure
Superintelligent AI: Ironically, aligned AI might be the ultimate solution—a coordinator powerful enough to enforce global cooperation
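The "change the payoff structure" point can be made concrete. Extending the same toy prisoner's-dilemma payoffs used above (again, illustrative numbers), an enforced external penalty on sacrificing the shared value, such as a regulatory fine, can flip which strategy is a best response:

```python
# Illustrative sketch: an enforced fine f on "sacrifice" alters best responses.
def payoff_with_fine(a, b, f):
    base = {
        ("uphold", "uphold"):       (3, 3),
        ("uphold", "sacrifice"):    (0, 5),
        ("sacrifice", "uphold"):    (5, 0),
        ("sacrifice", "sacrifice"): (1, 1),
    }[(a, b)]
    return (base[0] - (f if a == "sacrifice" else 0),
            base[1] - (f if b == "sacrifice" else 0))

# Gain from defecting against a cooperator, with and without the fine.
for f in (0, 3):
    gain = (payoff_with_fine("sacrifice", "uphold", f)[0]
            - payoff_with_fine("uphold", "uphold", f)[0])
    print(f"fine={f}: defection pays? {gain > 0}")
```

With no fine, defecting against a cooperator pays (+2 here); with a large enough fine, it does not, so mutual upholding becomes stable. This is the shared logic behind regulation, binding agreements, and enforcement technology: none of them make actors more virtuous, they just change what is locally rational.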
The race for AI might be a Moloch trap that only AI can solve—if we can develop it safely enough to trust it with that role.
