This LessWrong post explores the concept of an "Attacker's Dilemma" as a potential foundation for stable, multipolar civilizations. The author contrasts this with the more commonly discussed "Defender's Dilemma," in which offensive actions are strategically advantageous and defection-based equilibria result. The post argues that by raising the risks and consequences for attackers, such as increasing the probability of being caught or ensuring they are stopped before they can benefit, a system can be engineered in which offense is no longer the best defense. This could foster genuine coordination and cooperation without the need for a single, autocratic enforcer.
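The mechanism summarized above can be sketched as a simple expected-payoff comparison. This is an illustrative toy model, not taken from the post: all payoff numbers and the `attack_payoff` formulation are assumptions chosen only to show how raising the attacker's detection probability can make attacking worse than cooperating.

```python
# Toy model (assumed numbers, not from the post): compare the expected
# payoff of attacking against a steady payoff from cooperating, as a
# function of the probability that the attacker is caught.

def attack_payoff(p_caught, gain=10.0, penalty=15.0):
    """Expected payoff of attacking: gain if undetected, penalty if caught."""
    return (1 - p_caught) * gain - p_caught * penalty

COOPERATE_PAYOFF = 3.0  # assumed payoff from cooperating instead

def attacking_is_rational(p_caught):
    """True when attacking beats cooperating in expectation."""
    return attack_payoff(p_caught) > COOPERATE_PAYOFF

# Below a detection-probability threshold, attacking dominates;
# above it, cooperation does.
print(attacking_is_rational(0.1))  # low catch rate -> True
print(attacking_is_rational(0.6))  # high catch rate -> False
```

With these assumed payoffs the crossover sits at a catch probability of 0.28; the post's point is that engineering institutions to push detection above such a threshold removes the incentive to attack in the first place.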
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Suggests a game-theoretic framework for AI governance that could influence future safety research and policy discussions.
RANK_REASON This is an opinion piece discussing game theory and civilization stability, not a direct announcement of a new model, product, or policy.