- cross-posted to:
- news@lemmy.world
Worthless research.
That subreddit bans you for accusing others of speaking in bad faith or for using ChatGPT.
Even if a user called it out, they’d be censored.
Edit: you know what, it’s unlikely they didn’t read the sidebar. So, worse than worthless. Bad faith disinfo.
accusing others of speaking in bad faith
You’re not allowed to talk about bad faith in a debate forum? I don’t understand. How could that do anything besides shield the sealions, JAQoffs, and grifters?
And please don’t tell me it’s about “civility”. Bad faith is the civil accusation when the alternative is your debate partner is a fool.
I won’t tell you about civility, because
How could that do anything besides shield the sealions, JAQoffs, and grifters?
Not shield, but amplify.
That’s the point of the subreddit. I’m not defending them if that’s at all how I came across.
ChatGPT debate threads are plaguing /r/debateanatheist too. Mods are silent on the users asking to ban this disgusting behavior.
I didn’t think it’d be a problem so quickly, but the chuds and theists latched onto ChatGPT instantly for use in debate forums.
To be fair, LLMs are probably a good match for the gish-gallop style of bad-faith argument religious people like to use. If all you want is a high number of arguments, it’s probably easy to produce them with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It’s not as if they ever cared whether their arguments were any good anyway.
I agree, and recognized that. I’m more emotionally upset about it, tbh. The debates aren’t for the debaters; they’re there to hopefully disillusion and remove indoctrinated fears from those on the fence who are willing to read them. That point is oft repeated there when people ask “what’s the point, it’s been the same stupid debate for centuries.” Well, religions unfortunately persist and haven’t lost any ground globally. Gained, actually. Not our fault they have no new ideas.
This is deeply unethical. When doing research you need to respect the people who participate, and you have to respect their story. Using a regurgitative artificial idiot (RAI) to change their minds respects neither them nor their story.
The people who were experimented on were given no compensation for their time and the work they contributed. While that isn’t required, it is good practice in research not to actively burn bridges with people, so that they will want to participate in more studies.
These people were also not told they were participating in a study, nor were they given the choice to withdraw their contributions at will. That makes the study entirely unpublishable, since the data was not gathered with fucking consent.
This isn’t even taking into account the other ethical lines that were crossed. All the “researchers” involved should never be allowed to conduct or participate in a study of any kind again. Their university should be fined and heavily scrutinized for its role in enabling this shit. These assholes have done damage to researchers globally, who will now have a harder time pitching real studies to potential participants, because those participants may remember this story and how “researchers” took advantage of unknowing individuals. Shame on these people; I hope they face real consequences.
These researchers conducted research in a manner that was totally unethical and they deserve to be stripped of tenure and lose any research funding they have.
It already sounds like the university is preparing to just protect them and act like it’s no big deal, which is discouraging but I suppose not surprising.
I absolutely agree these “researchers” deserve to lose their tenure and their funding. In my mind they don’t even deserve to be called researchers anymore, as they view their job as an extractive one. They hold no regard for the people they impacted, or for how that impacts the entire field of research.
If the university does protect these people, then I can only hope that no one signs up to participate in any future studies they try to conduct.
Reddit: “Nobody gets to secretly experiment on Reddit users with AI-generated comments but us!”
They literally have some AI thing called “Answers”, which is Reddit’s own shitty practice of pushing AI.
Reddit? More like Deddit, amirite?
I haven’t seen these questions asked.
How can the results be trusted when we don’t know whether the bots were actually interacting with real humans?
What’s the percentage of bot-to-bot contamination?
This study looks more like a hacky farce meant only to draw attention to how easily we can be manipulated, and less like actual science.
Any professional who puts their name on this steaming pile should be ashamed of themselves.
With all the bots on the site, why complain about these ones?
Edit: auto$#&"$correct
Just in my own understanding of life: there are these political think tanks, staffed by your old professor’s professor’s professor. These guys make big bucks to sit around and do this stuff, then figure out attack points. I really think they had this research probably 20 years ago. I figure that’s what those guys do all day. Eventually the results end up at the firm that handles Steven Crowder, Ben Shapiro, and that guy living in the Philippines.
Wow, this is pretty concerning. As someone who spends a lot of time on Reddit, I find it really unsettling that researchers would experiment on users without their knowledge. It’s like walking into a coffee shop for a casual chat and unknowingly becoming part of a psychology experiment!
I never read comments nor do I read the responses to my comments.
How exactly would they deploy this if they’re losing money on ChatGPT responding to users saying “thank you”?