• paraphrand@lemmy.world · 1 month ago

    That’s hilarious, but the post is about the AI not doing what it’s told. You know?

    • k0e3@lemmy.ca · 1 month ago

      IT’S SO SMART IT DIDN’T DO WHAT WE TOLD IT TO DO

    • theunknownmuncher@lemmy.world · 1 month ago

      Uh oh, someone clearly didn’t read the article!

      The researcher had encouraged Mythos to find a way to send a message if it could escape.

      Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit

      Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were big shocked when it did.

      Genuinely amazing that you’re trying to tell me what an article that you didn’t fucking read is about.

      • paraphrand@lemmy.world · 1 month ago

        Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves that they needed to skirt instructions to achieve their tasks.

        You are correct.

      • ThomasWilliams@lemmy.world · 1 month ago

        It didn’t break out of any sandbox; it was trained on BSD vulnerabilities and then told what to look for.

        • theunknownmuncher@lemmy.world · 1 month ago

          including that the model could follow instructions that encouraged it to break out of a virtual sandbox.

          “The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards,” Anthropic recounted in its safety card.

          📖👀

          Yes, it did.