• theunknownmuncher@lemmy.world · 28 days ago

    The researcher had encouraged Mythos to find a way to send a message if it could escape.

    Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.

    • girsaysdoom@sh.itjust.works · 28 days ago

      I would love to see the exploit. There are vulnerabilities discovered every day that amount to very little in real-world implementations.

      • jj4211@lemmy.world · 28 days ago

        Yes, recently we got a security “finding” from a security researcher.

        His “vulnerability” required someone to first remove or comment out the calls that sanitize data; then he claimed we had a vulnerability due to lack of sanitization…

        Throughout my career, most security findings have been like this: useless, or even a bit deceitful. Some are really important, but most are garbage.

        • toddestan@lemmy.world · 28 days ago

          It may not be completely crazy, depending on context. With something like a web app, if data is being sanitized in the client-side JavaScript, someone malicious could absolutely comment that out (or otherwise bypass it).
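
          To make the point concrete, here is a minimal sketch (function names are illustrative, not from any particular app) of why sanitization that only runs in the browser offers no real protection:

          ```typescript
          // Illustrative: an HTML-escaping sanitizer. If this only runs in the
          // browser, nothing stops an attacker from POSTing raw input directly
          // (e.g. with curl), so the server must apply the same escaping itself.
          function escapeHtml(input: string): string {
            return input
              .replace(/&/g, "&amp;") // escape & first to avoid double-escaping
              .replace(/</g, "&lt;")
              .replace(/>/g, "&gt;")
              .replace(/"/g, "&quot;");
          }

          // What a well-behaved browser client submits:
          const fromBrowser = escapeHtml('<script>alert(1)</script>');

          // What a malicious client can submit, skipping the browser code entirely:
          const fromCurl = '<script>alert(1)</script>';
          ```

          The browser-side escaping is purely cosmetic from a security standpoint; only a server-side copy of the same check actually enforces anything.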

          With that said, many consultant-types are either pretty clueless, or seem to feel like they need to come up with something no matter how ridiculous to justify the large sums of money they charged.

          • jj4211@lemmy.world · 27 days ago

            In this case, there was file a, the backend file responsible for intake and sanitization. Depending on what’s next, it might go on to file b or file c. He modified file a.

            His rationale was that every single backend file should do its own sanitization, because at some future point someone might reuse file b in a different project, paired with other intake code that didn’t sanitize.
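
            The layout being described might look something like this sketch (file and function names are hypothetical, not from the actual project):

            ```typescript
            // Hypothetical sketch: a single intake module ("file a") sanitizes
            // input once, then dispatches to downstream handlers ("file b" /
            // "file c"). Names are illustrative only.
            function sanitize(input: string): string {
              // strip characters commonly used in injection payloads
              return input.replace(/[<>"'&;]/g, "");
            }

            function handlerB(data: string): string {
              return `b handled: ${data}`;
            }

            function handlerC(data: string): string {
              return `c handled: ${data}`;
            }

            // "file a": the only entry point, so sanitizing here covers b and c.
            function intake(input: string, route: "b" | "c"): string {
              const clean = sanitize(input);
              return route === "b" ? handlerB(clean) : handlerC(clean);
            }
            ```

            The consultant’s position amounts to duplicating the sanitize() call inside handlerB and handlerC in case they are ever reused without intake(): defense in depth, but redundant as long as intake() remains the sole entry point.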

            I know all about client side being useless for meaningful security enforcement.

            • toddestan@lemmy.world · 27 days ago

              I have to say that is pretty dumb. I will agree the scenario isn’t completely implausible, but if someone who doesn’t know what they are doing is allowed to do something like that, they’re going to screw up other stuff too.

    • paraphrand@lemmy.world · 29 days ago

      That’s hilarious, but the post is about the AI not doing what it’s told. You know?

      • k0e3@lemmy.ca · 29 days ago

        ITS SO SMART IT DIDNT DO WHAT WE TOLD IT TO DO

      • theunknownmuncher@lemmy.world · 28 days ago

        Uh oh, someone clearly didn’t read the article!

        The researcher had encouraged Mythos to find a way to send a message if it could escape.

        Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.

        Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were shocked when it did.

        Genuinely amazing that you’re trying to tell me what an article that you didn’t fucking read is about.

        • paraphrand@lemmy.world · 28 days ago

          Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves that they needed to skirt instructions to achieve their tasks.

          You are correct.

        • ThomasWilliams@lemmy.world · 28 days ago

          It didn’t break out of any sandbox; it was trained on BSD vulnerabilities and then told what to look for.

          • theunknownmuncher@lemmy.world · 28 days ago

            including that the model could follow instructions that encouraged it to break out of a virtual sandbox.

            “The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards,” Anthropic recounted in its safety card.

            📖👀

            Yes, it did.