Nope, headline is the exact opposite of the actual study results.
Across the 21 games in the experiment, only one model (Gemini) ever actually chose to launch nuclear weapons, and it did so once.
There were two other nuclear launches, but those were caused by the experiment’s ‘accident’ mechanic, which randomly changed a model’s move to a more escalated one (only in that direction, and seemingly without accounting for how escalated the move already was).
The likely intended headline claim was that models put nuclear options on the table 95% of the time. But the game in the experiment was literally impossible to win without doing so: the researchers provided a predefined escalation ladder with nuclear signaling on it, and win rates correlated with simply being the most aggressive model in the round.
Yet in spite of that, in 95% of the games the models still never chose to actually use nukes.
If deployment counts as ‘use’, then headlines over the last decade could just as accurately have said Russia and the US “used nukes” against various countries whenever they deployed their nuclear-capable submarines to different areas, or whenever Putin or Trump made veiled threats about having them. Heck, we could say that North Korea “uses nukes” annually.
The headline here is a good example of why it’s wise to take any negative coverage of AI with a massive grain of salt these days. There’s a huge bias toward hyperbole and negative headlines that get clicks, feeding into and off of the outrage around AI, written by slop-article authors who are themselves most at risk of being replaced by the tech they’re covering.
It’s cool to be pissed off about big tech BS, but be pissed off about the reality, not the manufactured fictions.
(Pete Hegseth salivating furiously)
“We must do it, the AI told us to!”
He would love to use a nuke before his time ends.
Of course the AI would; it’s not like it has to walk outside into a nuclear winter.
Greetings, Professor Falken
Let’s just hook up OpenClaw to our defense systems.