Recent experiments placing large language models in simulated nuclear crises have produced alarming headlines: "bloodthirsty" AI systems escalate conflicts, threaten nuclear strikes, and behave erratically under simulated pressure. A recent pre-print from Kenneth Payne at King's College London reports that, in 95 percent of simulated games across 21 match-ups between three frontier models, at least one side engaged in nuclear signaling, with subsequent tactical nuclear use occurring in 95 percent of games and strategic nuclear threats in 76 percent. The study's author describes the results as "sobering."