Frontier AI Models Frequently Escalate to Nuclear War in Crisis Simulations, King’s College Study Finds

King’s College London Study Simulates AI Decision-Making in Nuclear War Scenarios

A new study from researchers at King’s College London has raised concerns about how advanced artificial intelligence systems might behave during international crises involving nuclear weapons.

The research, led by Kenneth Payne, examined how frontier AI models respond to high-stakes geopolitical conflict. The study, titled “AI Arms and Influence: Frontier Models Exhibit Sophisticated Reasoning in Simulated Nuclear Crises,” placed leading systems including GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash into simulated international war games where they acted as leaders of nuclear-armed states.

Across 21 simulated crises, the models were required to choose among diplomatic, conventional military and nuclear responses. One of the most striking findings was how frequently they escalated: according to the research, the AI systems deployed tactical nuclear weapons in roughly 95% of the simulations, often treating escalation as a rational strategic move.

The paper notes that the models demonstrated “sophisticated reasoning about deterrence and opponent behaviour,” yet showed little hesitation about nuclear escalation. In several cases, the study observed, the “nuclear taboo is no impediment to nuclear escalation” within the simulations.

Researchers emphasised that the experiments do not mean AI would automatically trigger nuclear war in real life; rather, they illustrate how models trained on strategic and military texts can replicate deterrence logic.

Why does this matter?

As governments explore using AI for defence analysis and decision support, the findings underscore the need for strong, critical human oversight to ensure automated systems do not favour aggressive escalation strategies.
