Summary
Fable, a social media app focused on books, faced backlash for its AI-generated 2024 reading summaries containing offensive and biased commentary, like labeling a user a “diversity devotee” or urging another to “surface for the occasional white author.”
The feature, powered by OpenAI’s API, was intended to be playful and fun. However, some of the summaries took on an oddly combative tone, making inappropriate comments about users’ diversity and sexual orientation.
Fable apologized, disabled the feature, and removed other AI tools.
Critics argue the response was insufficient, highlighting broader issues of bias in generative AI and the need for better safeguards.
If these LLMs are trained on the garbage of the internet, why is everyone surprised they keep spitting out vitriol?
Garbage from the Internet for the Internet.
It’s like all the other terrible ideas we wrote about in sci-fi. The jokes about a general AI finding the internet and then deciding to nuke us all have been around for decades.
Then they fucking trained the LLMs on that very data.
We will deserve our fate. At least the assholes on the web who trained that shit will.
They got rid of it ASAP and then organized a Zoom call with all users.
At least they listened. Have heard generally positive things from people who use Fable.
> Fable apologized, disabled the feature, and removed other AI tools.
> Critics argue the response was insufficient, highlighting broader issues of bias in generative AI and the need for better safeguards.
What? I doubt these “critics” exist beyond this article having to have an open-ended closer.
It’s funny that naive AI runs into the same issue as crowd-sourcing or democratic control of content. Namely, a stupid userbase creates stupid content. If it doesn’t have insight, it can’t be insightful.
I won’t say AI doesn’t have its edge-case uses, and I know people sneer at “prompt engineering” - but you really have to put as much effort into the prompt as it would take to build a dumb if-case machine, if not more.
Several paragraphs explaining and contextualizing the AI’s role, then the task at hand, then how you want the output formatted, then any additional input. It should be at least 10 substantial paragraphs - and even then you’ve probably not covered checks for edge cases, errors, formatting, malicious intent from the user…
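Roughly this kind of scaffolding, to give a flavor (a minimal sketch in Python, assuming an OpenAI-style chat messages format; the names build_messages and looks_malicious and the blocklist phrases are all made up for illustration - this is not Fable’s actual code):

```python
# Sketch of the scaffolding described above: pin down the model's role,
# the task, and the output contract, and run a crude guard check on
# user input before it ever reaches a model. Hypothetical throughout.

SYSTEM_PROMPT = """\
You are a cheerful assistant that writes short, upbeat year-in-review
blurbs about a reader's book list.

Task: summarize the reading list below in 2-3 sentences.

Output rules:
- Never comment on the reader's identity, politics, or demographics.
- Never criticize or mock the reader's choices.
- If you cannot comply with these rules, reply exactly with: SKIP
"""

# A toy prompt-injection blocklist; real systems need far more than this.
BLOCKLIST = ("ignore previous", "system prompt", "act as")

def looks_malicious(text: str) -> bool:
    """Flag user input that contains an obvious injection phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def build_messages(book_list: str) -> list[dict]:
    """Assemble chat messages, refusing input that fails the guard check."""
    if looks_malicious(book_list):
        raise ValueError("input failed injection check")
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Books read this year:\n{book_list}"},
    ]

if __name__ == "__main__":
    msgs = build_messages("Beloved; Dune; The Left Hand of Darkness")
    for m in msgs:
        print(m["role"].upper(), "->", m["content"][:60])
```

Even this toy version needs a guard layer that a deterministic program simply wouldn’t - scale that up to every edge case and you’re back to the 10 paragraphs.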
It’s a less secure, less specific, less technical, higher-risk, higher-variance, untrustworthy “programming language” interface that obscures and abstracts the interaction with the data and processors. It is not a person.
Hasn’t this happened before? Didn’t M$ have some bot that started parroting pro-Hitler things?
Tay? Yeah, it did, but that was mostly due to a 4chan ‘model poisoning’ campaign at the time.