Exactly. They aren’t lying, they’re completing the objective. Like machines… because that’s what they are. They don’t “talk” or “think”. They do what you tell them to do.
Same.
Mood
Relatable.
It was trained by liars. What do you expect?
They paint this as if it were a step back, as if it doesn’t already copy human behaviour perfectly and isn’t in line with technofascist goals. Sad news for smartasses who thought they were getting a perfect magic 8-ball. Sike, get ready for fully automated trollfarms to be 99% of the commercial web for the next decade(s).
Maybe the darknet will grow in its place.
It’s not a lie if you believe it.
This is the AI model that truly passes the Turing Test.
To be fair, the Turing test is a moving goalpost, because if you know that such systems exist you’d probe them differently. I’m pretty sure that even the first public GPT release would have fooled Alan Turing personally, so I think it’s fair to say that these systems have passed the test since at least that point.
I mean, it was trained to mimic human social behaviour. If you want a completely honest LLM, I suppose you’d have to train it on the social behaviours of a population that is always completely honest, and I’m not personally familiar with one.
AI isn’t even trained to mimic human social behavior. Current models are all trained by example, so they produce output that would score high in their training process. We don’t even know what their goals are (and they’re likely not even expressible in language), but (anthropomorphised) they’re probably more like “answer something that the humans who designed and oversaw the training process would approve of”.