A biologist was shocked to find his name cited several times in a scientific paper whose reference list points to papers that simply don’t exist.
Brandolini’s law, aka the “bullshit asymmetry principle”: the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
Unfortunately, with the advent of large language models like ChatGPT, the rate at which bullshit is produced is accelerating and already outpaces our ability to refute it.
Stupid question: why can’t journals just mandate an actual URL for each cited study, or the exact issue it was printed in? Surely both would be easy to confirm, and both would be easy for a scientist using “real” sources to provide (since they must already have access to them).
Like, it feels silly to me that high school teachers require this sort of thing, yet scientific journals do not?
Many of the journals I’ve published in do require an identifier, usually a PMID or DOI, but validating them isn’t usually part of the review process. That is, one doesn’t expect academic content reviewers to chase down each citation, but it’s not unreasonable to imagine a journal running an automated validator. The review process really isn’t structured to detect fraud. It looks like the article in question was at the preprint stage - i.e., not even reviewed yet - and I didn’t notice any mention of where it was submitted.
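To illustrate, here’s a minimal sketch of what such a validator might look like, assuming the reference list has already been parsed into DOI strings (the hard part in practice). It queries the public CrossRef REST API at api.crossref.org, where a registered DOI returns HTTP 200 and an unregistered one returns 404; the DOIs below are illustrative placeholders, not citations from the paper in question:

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered with CrossRef (HTTP 200)."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # CrossRef asks clients to identify themselves with a contact address
        headers={"User-Agent": "citation-checker/0.1 (mailto:you@example.org)"},
        timeout=10,
    )
    return resp.status_code == 200

# Illustrative reference list; in practice these would be extracted
# from the manuscript's bibliography.
references = ["10.1038/s41586-020-2649-2", "10.9999/fabricated.2023.001"]
for doi in references:
    verdict = "found" if doi_exists(doi) else "NOT FOUND - possible fabrication"
    print(f"{doi}: {verdict}")
```

One caveat: a DOI that resolves only proves some paper exists at that identifier, not that its title and authors match the citation text. LLMs sometimes attach real DOIs to invented papers, so a fuller check would also compare the metadata CrossRef returns against the citation itself.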
The message here should be that the process worked and the fake article never got published. That’s very different from the periodic stories about someone submitting a blatantly fake, but human-written, article to a bullshit journal and getting it published.
Aren’t papers peer reviewed? Or are they getting ChatGPT to do that too?
Impose harsher consequences for falsified information?
Assuming this is carelessness, it just goes to show that working in academia isn’t an indicator of critical thinking skills, IMO.
Honestly, I bet he has the skills; he just didn’t use them because he didn’t care, was overworked, or whatever the reason.
You make a valid point, and there are certainly more considerations than my original reply would lead one to believe. Cheers.
A lot of people don’t understand the limitations and weaknesses of AI. The carelessness was probably more in not actually learning about the tool he was relying on (and just assuming its output was reliable).
It’s like the airline lawyer case a while back, where lawyers filed a brief full of citations ChatGPT had invented. People treat the computer as an arbiter of truth, and/or think checking is just asking the chatbot, “Did you use a real citation for this?”