The researcher had encouraged Mythos to find a way to send a message if it could escape.
Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.
I would love to see the exploit. There are vulnerabilities discovered every day that amount to very little in terms of real-world use.
Yes, recently we got a security “finding” from a security researcher.
His “vulnerability” first required someone to remove or comment out the calls that sanitize data; he then reported that we had a vulnerability due to lack of sanitization…
Throughout my career, most security findings have been like this: useless or even a bit deceitful. Some are really important, but most are garbage.
It may not be completely crazy, depending on context. With something like a web app, if data is being sanitized in the client-side JavaScript, someone malicious could absolutely comment that out (or otherwise bypass it).
With that said, many consultant-types are either pretty clueless, or seem to feel like they need to come up with something, no matter how ridiculous, to justify the large sums of money they charge.
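To make it concrete, here’s a minimal sketch of that bypass (hypothetical endpoint and field names, not any real app). The page’s JavaScript can sanitize all it wants; nothing forces an attacker to go through the page:

```typescript
// Hypothetical endpoint and field names -- illustration only.
// The web form sanitizes this field in client-side JavaScript before
// submitting, but an attacker can skip the page entirely:
async function postUnsanitized(): Promise<void> {
  const payload = {
    // Raw attacker-controlled markup; the client-side sanitizer never ran.
    comment: "<script>/* attacker-controlled */</script>",
  };
  // Talk to the backend directly (works in a browser console or Node 18+).
  await fetch("https://app.example/api/comments", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```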
In this case, there was file a, the backend file responsible for intake and sanitization. Depending on what’s next, the data might go on to file b or file c. He modified file a.
His rationale was that every single backend file should do its own sanitization, because at some future point someone might start a different project, take file b, and pair it with some other intake code that doesn’t sanitize.
I know all about the client side being useless for meaningful security enforcement.
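For anyone following along, this is roughly the layout he was dealing with (hypothetical file roles and function names, obviously not the actual code):

```typescript
// Hypothetical sketch of the layout described above -- not the real files.
// "File a" (intake) is the single entry point and the one place that sanitizes.
function sanitize(raw: string): string {
  // Stand-in for whatever the real sanitizer does, e.g. stripping markup.
  return raw.replace(/<[^>]*>/g, "");
}

export function intake(raw: string): void {
  const clean = sanitize(raw); // the one sanitization chokepoint
  if (clean.startsWith("order:")) {
    handleOrder(clean); // "file b": trusts its caller to have sanitized
  } else {
    handleProfile(clean); // "file c": same assumption
  }
}

// The "finding" only exists if you edit intake() to skip sanitize(),
// or if someone later wires handleOrder() up to a new entry point that
// never sanitizes.
function handleOrder(clean: string): void { /* ... */ }
function handleProfile(clean: string): void { /* ... */ }
```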
I have to say that is pretty dumb. I will agree the scenario isn’t completely implausible, but if someone who doesn’t know what they are doing is allowed to do something like that, they’re going to screw up other stuff too.
That’s hilarious, but the post is about the AI not doing what it’s told. You know?
ITS SO SMART IT DIDNT DO WHAT WE TOLD IT TO DO
Uh oh, someone clearly didn’t read the article!
Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were big shocked when it did.
Genuinely amazing that you’re trying to tell me what an article that you didn’t fucking read is about.
Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves that they needed to skirt instructions to achieve their task.
You are correct.
It didn’t break out of any sandbox; it was trained on BSD vulnerabilities and then told what to look for.
📖👀
Yes, it did.