Is it possible to understand this somehow, for example, with the help of drafts? Or should a person post a draft and edit it in front of the moderators?

  • BlameTheAntifa@lemmy.world · 10 points · 5 days ago

    You can’t. The best indicators of human authorship at this point are mistakes like misspellings and poor grammar. The common advice — like looking for em-dashes — was garbage to begin with and has only become worse as LLMs evolve.

    • FinjaminPoach@lemmy.world · 1 point · 5 days ago

      True, but I still expect this will be ‘patched’ in 10 years or less, and I’m worried about what will happen to literature by then.

  • Bahnd Rollard@lemmy.world · 4 points · 5 days ago

    I don’t; I just assume you are all robots. Nice robots, who usually say smart or nice things, but unless I can drive to a place and punch you without breaking my hand… best to just assume y’all are, to avoid the disappointment later.

    As for proving to the rest of the robots that I’m not a robot… for Lemmy, I completely turn off spellcheck and any grammar assistance. The frequency of errors, poorly constructed sentences, and bad use of frequent AI tells is hopefully enough to validate that I’m beep boop not a robot.

    • Apytele@sh.itjust.works · 5 points · 5 days ago

      Completely unrelated but “beep boop!” is what I say to patients (along with appropriate gestures) to request that they show me their wristband so that I can scan it to verify dosage etc. when administering medications.

    • spittingimage@lemmy.world · 2 points · 5 days ago

      unless I can drive to a place and punch you without breaking my hand…

      I feel like there are steps you could take before that one. Maybe jiggle my squishy belly for a second or two.

  • CombatWombatEsq@lemmy.world · 4 points · 5 days ago

    Everyone in this thread saying it’s impossible is basically correct, but that doesn’t mean you never have to try: sometimes you still have to do your best to identify AI-generated text.

    If you’re focused on identifying whether a given text is AI-generated, here’s how Wikipedia does it: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

    If you’re focused on proving that something you’ve written was written without AI assistance, I think your best bet is a screencast of the writing process.

  • Lembot_0006@programming.dev · 5 up / 1 down · 5 days ago

    It isn’t possible in the general case. There are bots that check text for specific wordage rarely used by humans, but those bots are unreliable, especially if the text was written by a non-native speaker.

    • canihasaccount@lemmy.world · 1 point · 5 days ago

      Recently, a company called Pangram appears to have finally made a breakthrough in this area. Some studies by unaffiliated faculty (e.g., at U Chicago) have replicated its claimed false-positive and false-negative rates. Anecdotally, it’s the only AI detector I’ve ever run my papers through that hasn’t said they were written by AI.

  • Randomgal@lemmy.ca · 3 up / 1 down · 5 days ago

    They’d have to write it in front of you. All the qualities people are describing here are just good writing, or something you can change by simply telling the LLM ‘don’t write like an LLM’.

  • clean_anion@programming.dev · 2 points · 5 days ago

    There are some generic observations you can use to judge whether a story was AI-generated or written by a human. However, there are no definitive criteria for identifying AI-generated text, except for text directed at the LLM user, such as “certainly, here is a story that fits your criteria” or “as a large language model, I cannot…”

    Some signs can help, though they are not always accurate. For instance, AI-generated text tends to be superficial: it often puts undue emphasis on emotions that most humans would not focus on, and it tends to be more ambiguous and abstract than human writing.

    A large language model often uses poetic language instead of factual language (e.g., saying that something insignificant has “profound beauty”). It also tends to focus too much on overarching themes in the background even when not required (e.g., “this highlights the significance of xyz in revolutionizing the field of …”).

    There are some grammatical traits that can be used to identify AI, but they are even less reliable than judging content quality, especially because the author might not be a native English speaker, or might be a native speaker whose natural grammar happens to sound like AI.

    The only good methods of judging whether text was AI-generated are evaluating the quality of the content (which one should do regardless of whether the goal is detecting AI) and looking for text directed at the AI user.
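
The one near-definitive sign described above, leftover text addressed to the LLM user, can be sketched as a simple filter. This is a minimal illustration under my own assumptions: the function name is hypothetical and the phrase list is illustrative, far from exhaustive.

```python
import re

# Hypothetical, illustrative phrase list: snippets of text directed at the
# LLM user that sometimes leak into pasted output. A real list would be
# much longer and still miss plenty.
LEAKED_PHRASES = [
    r"\bcertainly[,!]? here is\b",
    r"\bas a large language model\b",
    r"\bas an ai language model\b",
    r"\bi hope this (story|draft|text) (fits|meets) your\b",
]

def find_leaked_llm_phrases(text: str) -> list[str]:
    """Return every leaked LLM-directed phrase pattern found in the text."""
    lowered = text.lower()
    return [p for p in LEAKED_PHRASES if re.search(p, lowered)]

sample = "Certainly, here is a story that fits your criteria: Once upon a time..."
print(find_leaked_llm_phrases(sample))  # one pattern matched
```

Anything more ambitious than this, such as scoring prose for being “poetic” or “superficial,” is exactly the unreliable territory described in the comments above.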