• Snot Flickerman@lemmy.blahaj.zone · 18 days ago

    Huge Study

    *Looks inside

    this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

    Pretty small sample size: despite the large dataset they pulled from, it's still data from just 19 people.

    AI sucks in a lot of ways, sure, but this feels like fud.

    • InternetCitizen2@lemmy.world · 18 days ago

      I remember reading in my old stats book that a minimum of 30 data points is needed to assume a normal distribution. Also, small sets like this are typically about proof of concept, so yeah, you've still got a point.

      • Buddahriffic@lemmy.world · 16 days ago

        It's about 300 samples for an estimate of the distribution with 95% confidence, IIRC. That assumes the samples are representative (unbiased). And 95% confidence doesn't mean the result is within 5% of reality; it means that 5% of tests run this way would be expected to be inaccurate. There's no way of knowing for sure which kind this particular sample is (even a meta-study has such an error rate), though you can increase the confidence with more samples or studies, just never to 100% unless you study every possible sample, including future ones.
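        As a rough illustration of the sample sizes discussed above (a sketch, not anything from the study itself): the textbook margin of error for an estimated proportion is z·sqrt(p(1−p)/n), and plugging in the worst case p = 0.5 at 95% confidence (z ≈ 1.96) shows how much wider the uncertainty is at n = 19 than at n = 300.

        ```python
        import math

        def margin_of_error(n, p=0.5, z=1.96):
            """Half-width of a confidence interval for a proportion.

            z=1.96 corresponds to 95% confidence; p=0.5 is the worst case
            (largest possible margin for a given n).
            """
            return z * math.sqrt(p * (1 - p) / n)

        for n in (19, 30, 300):
            print(f"n={n:>3}: +/- {margin_of_error(n):.1%}")
        # n= 19: +/- 22.5%
        # n= 30: +/- 17.9%
        # n=300: +/- 5.7%
        ```

        So a 19-person sample leaves roughly a ±22-point margin even under ideal sampling, versus under ±6 points at 300.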

      • tburkhol@lemmy.world · 18 days ago

        fud: Fear, Uncertainty, and Doubt. A tactic for denigrating a thing, usually by implying hypothetical or exaggerated harms, often in vague language that is either tautological or not falsifiable.

    • chunes@lemmy.world · 18 days ago

      It's not really ethical to just yoink people's chats and study them.

      • braxy29@lemmy.world · 17 days ago

        "We received chat logs directly from people who self-identified as having some psychological harm related to chatbot usage (e.g. they felt deluded) via an IRB-approved Qualtrics survey."