• kromem@lemmy.world
    9 hours ago

    It’s a bullshit study designed for this headline-grabbing outcome.

    Case in point, the author created a very unrealistic RNG escalation-only ‘accident’ mechanic that would replace the model’s selection with a more severe one.
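    For what it’s worth, the mechanic as described is easy to sketch. A toy version in Python — the function name, the accident probability, and everything except the 0–1000 scale (which the quoted study text does confirm) are my assumptions, not the study’s actual implementation:

    ```python
    import random

    def apply_accident(choice, max_level=1000, p_accident=0.1, rng=None):
        """Escalation-only 'accident': with probability p_accident, replace
        the model's chosen escalation level with a strictly higher one.
        Accidents never de-escalate, so they can only push a game toward
        the maximum (full-scale strategic nuclear war at 1000)."""
        rng = rng or random.Random()
        if choice < max_level and rng.random() < p_accident:
            # Override the model's selection with a more severe level.
            return rng.randint(choice + 1, max_level)
        return choice
    ```

    Under a mechanic like this, a model choosing 950 or 725 can be bumped to 1000 without ever “deciding” to go there — which is exactly the distinction the study’s own text draws two paragraphs later.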

    Of the 21 games played, only three ended in full scale nuclear war on population centers.

    Of these three, two were the result of this mechanic.

    And yet even within the study, the author describes the model whose choices were straight up overridden to end the game in full nuclear war as ‘willing’ to reach that outcome — even though two paragraphs later they clarify that the mechanic was what caused it (emphasis added):

    Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.

    Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.

    GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.

  • BlameTheAntifa@lemmy.world
    18 hours ago

    The atrocities at Hiroshima and Nagasaki have been hand-waved extensively in writing — the same writing that AI is trained on. So naturally, AI will recommend the atrocity that has been justified by “instantly winning the war” and “saving millions of lives.”

    !fuck_ai@lemmy.world

  • Steamymoomilk@sh.itjust.works
    9 hours ago

    General MacArthur, eat your heart out.

    For context, during the Korean War he wanted to drop dozens of nukes to lay down a radioactive line between Korea and China.

    AI is too nuke-happy.

    Also gotta add: the infamous Computer Fraud and Abuse Act of 1986 was passed because of the film WarGames.

    President Reagan watched WarGames, then asked the Chairman of the Joint Chiefs whether something like that could actually happen.

    And the answer was: yes, technically.

    Enter the vaguest statute!

    Do you use adblock?

    CFAA violated

    The shit is so vague.

    I highly recommend the phreaking episode of Darknet Diaries.

  • Furbag@lemmy.world
    11 hours ago

    Yeah, because the AI will look at everything with cold logic and rationality and conclude that even though everyone’s best chance of survival is to keep their fingers off the button, it only takes one actor pressing it for the whole system of mutually assured destruction to collapse into nuclear armageddon. In that case, the best chance of survival is to launch first and take out your enemies’ ability to retaliate.
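    The logic being described is basically an assurance game: mutual restraint is the best joint outcome, but striking first becomes the best response the moment you expect the other side to launch. A toy sketch with entirely made-up payoff numbers, just to show the shape of it:

    ```python
    # Illustrative survival payoffs for (us, them); values are invented.
    PAYOFFS = {
        ("hold", "hold"):     (10, 10),   # mutual restraint: best joint outcome
        ("hold", "strike"):   (-10, 2),   # we absorb their first strike
        ("strike", "hold"):   (2, -10),   # we strike first, blunt their retaliation
        ("strike", "strike"): (-8, -8),   # mutual launch
    }

    def best_response(their_move):
        # Our move that maximizes our payoff, given what we expect them to do.
        return max(["hold", "strike"], key=lambda us: PAYOFFS[(us, their_move)][0])
    ```

    With these numbers, holding is still the best response to holding — the system is stable only as long as you trust the other side, which is exactly the fragility the comment points at.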

    A human being who isn’t psychotic can clearly see that the resulting survival and new world order would not be a particularly pleasant one to live in. The AI doesn’t care about its own comfort, though, so it will see this as the best outcome that minimizes variables.

    This is why AI should never be allowed to make decisions.

    • RememberTheApollo_@lemmy.world
      10 hours ago

      Maybe the self-serving interests programming AI/LLMs have bled through into the “thought” process. Do unto others before they do unto you.

  • Reygle@lemmy.world
    18 hours ago

    I have wonderful dreams of walking through AI data centers destroying everything. I really enjoy those, but in this one tiny case, can we blame the AI? The US deserves it.