I see it more as a step towards banning a ton of content they don’t like by claiming it’s porn, or porn-adjacent (for example, any LGBTQ+ content).


You couldn’t, because there’s no actual study on this, since it doesn’t work. That’s also why you won’t find one. Troll being a troll, block and move on.


I’d say it’s simply because most people on the internet (the dataset LLMs are trained on) say a lot of things with absolute confidence, whether or not they actually know what they’re talking about. So AIs talk confidently because most people do. It could also be something about how they are configured.
Again, they don’t know whether they know the answer; they just produce the statistically most probable response given your message and their prompt.
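For what it’s worth, here’s a minimal toy sketch of what “say the statistically most probable next word” means, a tiny bigram autocomplete in Python. It’s not how any real LLM is built (those are neural networks trained on billions of tokens), and the corpus is made up, but it shows how a pure next-word predictor produces fluent, confident-sounding output with no notion of whether it’s true:

```python
from collections import Counter, defaultdict

# Made-up mini corpus standing in for "the internet" the model was trained on.
CORPUS = (
    "the answer is definitely correct . "
    "the answer is obviously wrong . "
    "the answer is definitely correct . "
    "i am sure the answer is definitely correct ."
)

# Bigram table: for each word, count which words follow it in the corpus.
follows = defaultdict(Counter)
tokens = CORPUS.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt: str, max_words: int = 10) -> str:
    """Greedy next-word prediction: always emit the most frequent
    continuation, with no concept of truth, knowledge, or doubt."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])  # pick the most probable next word
        if words[-1] == ".":  # stop at the end-of-sentence marker
            break
    return " ".join(words)

print(autocomplete("the answer"))
# -> "the answer is definitely correct ."
# "definitely correct" wins simply because it was the most common continuation,
# not because the model checked anything.
```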


You’re giving way too much credit to LLMs. AIs don’t “know” things, like “humans lie”. They are basically a very complex autocomplete backed by a huge amount of computing power. They cannot “lie” because they don’t even understand what it is they are writing.
To follow up on this, I’ve never heard of someone switching to leftist ideals after brain damage; it’s always to the right.