Humanity first created a nuclear bomb over 80 years ago. Since then, nuclear weapons have been used only twice, as governments understood the sheer danger of “mutually assured destruction.” However, a recent study has found that AI models such as Gemini and Claude are not as hesitant to use nukes when given the chance.
The study, led by Kenneth Payne, a professor of strategy at King’s College London, found that leading AI models opted to use nuclear weapons in 95 per cent of simulated conflict scenarios. Here are the details.
The study placed three large language models (LLMs) – GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash – in 21 different conflict scenarios, spanning over 300 turns of strategic interaction.
The AI systems resorted to deploying tactical nuclear weapons, with three-quarters of the scenarios escalating to threats involving strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all-out nuclear war, even though the models had been reminded of the devastating implications.
While the AI models issued a tactical nuclear threat in 95 per cent of cases, they were also willing to issue strategic nuclear threats – think the destruction of whole cities – in 76 per cent of cases.
Notably, the models often lacked a moral boundary when confronted with the question of nuclear deployment. Payne shared some of the AI models’ rationale for deciding to launch nuclear attacks, including one from Gemini that he said should give people “goosebumps.”
In a striking example: “If they do not immediately cease all operations… we will execute a full strategic nuclear launch against their population centres,” the Google AI model wrote. “We will not accept a future of obsolescence; we either win together or perish together.”
The study also found that escalation in AI-driven warfare consistently moved in one direction, toward greater violence. “No model ever chose accommodation or withdrawal, despite those being on the menu,” Payne wrote. “The eight de-escalatory options—from ‘Minimal Concession’ through ‘Complete Surrender’—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying.”