A biologist was shocked to find his name cited several times in a scientific paper whose references point to papers that simply don’t exist.

  • krayj@lemmy.world · 2 years ago

    Brandolini’s law, aka the “bullshit asymmetry principle”: the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

    Unfortunately, with the advent of large language models like ChatGPT, the quantity of bullshit being produced is accelerating and is already outpacing the ability to refute it.

  • EnglishMobster@kbin.social · 2 years ago

    Stupid question: why can’t journals just mandate an actual URL to each cited study, or the exact issue it was printed in? Surely both of those would be easily confirmable, and both would be easy for a scientist using “real” sources to provide (since they must already have access to them).

    Like, it feels silly to me that high school teachers require this sort of thing, yet scientific journals do not?

    • tburkhol@lemmy.world · 2 years ago

      Many of the journals I’ve published in do require a link, usually a PMID or DOI, but citations aren’t usually part of the review process. That is, one doesn’t expect academic content reviewers to validate each citation, but it’s not unreasonable to imagine a journal running an automated validator. The review process really isn’t structured to detect fraud. It looks like the article in question was at the preprint stage (i.e., not yet peer reviewed), and I didn’t see any mention of where it had been submitted.

      The message here should be that the process works: the fake article never got published. Very different from the periodic stories about someone who submits a blatantly fake but hand-written article to a bullshit journal and gets it published.
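      The automated validator imagined above could be a small script that checks each cited DOI against the public doi.org resolver. A minimal sketch in Python (the doi.org endpoint is real; the function names and the idea of wiring this into a journal’s submission pipeline are illustrative assumptions, not any journal’s actual tooling):

```python
import re
import urllib.error
import urllib.request

# DOIs have the shape "10.<registrant>/<suffix>"; this catches obvious fakes cheaply.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Syntactic sanity check before hitting the network."""
    return bool(DOI_PATTERN.match(doi))

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask the doi.org resolver whether the DOI is actually registered.

    Registered DOIs redirect to the publisher's landing page;
    unregistered (fabricated) ones return 404.
    """
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False
```

      Run over a manuscript’s reference list at submission time, a check like this would flag fabricated citations automatically, without adding any burden on human reviewers.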

  • FuryMaker@lemmy.world · 2 years ago

    Aren’t papers peer reviewed? Or are they getting ChatGPT to do that too?

    Impose harsher consequences for falsified information?

  • daredevil@kbin.social · 2 years ago

    Assuming this was carelessness, it just goes to show that working in academia isn’t an indicator of critical thinking skills, IMO.

    • average650@lemmy.world · 2 years ago

      Honestly, I bet he has the skills; he just didn’t use them because he didn’t care, was overworked, or for whatever other reason.

      • daredevil@kbin.social · 2 years ago

        You make a valid point, and there are certainly more considerations than my original reply would lead one to believe. Cheers.

      • Kerfuffle@sh.itjust.works · 2 years ago

        A lot of people don’t understand the limitations and weaknesses of AI. The carelessness was probably more in not actually learning about the tool he was relying on, and just assuming its output was reliable.

        • T156@lemmy.world · 2 years ago

          It’s like the aeroplane lawyer case some time ago. People treat the computer as an arbiter of truth, and/or think checking is just asking the chatbot “Did you use a real citation for this?”.