While AI is obviously imperfect and flawed in many ways, having it sift through the torrent of comments and flag problematic submissions for human review is likely to be extremely effective with minimal false positives. Though I do say this as a person whose Reddit account is currently banned for 3 days for “inciting violence” because of a knife-based joke.
Yeah… no. None of that sounds appealing.
‘Curbing toxicity with AI’ means a bot is going to ban you because it doesn’t recognise sarcasm.
And ‘new tech to verify your identity’ sounds like a privacy violation at best.
‘Verifying that you own a product before they let you post in its community’ is a complete lack of understanding of how people use places like this.
Digg can fuck right off.
“I run faster with a knife!!”
AI flagging for human review is likely the best option.
The problem is that the humans reviewing things are biased assholes with no sense of humor.
Yeah, the verifying that you own a product thing is so dumb.
“Hey <insert product> community. I’m thinking about purchasing <insert product>, but I wanted to know if it can do X, Y, and Z.”
“Your post has been deleted because you have not proven that you own <insert product>.”
Wow, it’s like they made Reddit even worse.
“Hi, I wasn’t paid six dollars on Fiverr to post here” (it obviously increases costs, but only marginally for high-margin/high-volume products)