• Randomgal@lemmy.ca · 13 points · 3 months ago

    Exactly. They aren’t lying; they’re completing the objective. Like machines… because that’s what they are. They don’t “talk” or “think”; they do what you tell them to do.

  • daepicgamerbro69@lemmy.world · 3 points · edited · 3 months ago

    They paint this as if it were a step back, as if it didn’t already copy human behaviour perfectly and weren’t in line with technofascist goals. Sad news for smartasses who thought they were getting a perfect magic 8-ball. Sike, get ready for fully automated troll farms to be 99% of the commercial web for the next decade(s).

    • wischi@programming.dev · 5 points · 3 months ago

      To be fair, the Turing test is a moving goalpost, because once you know such systems exist you probe them differently. I’m pretty sure even the first public GPT release would have fooled Alan Turing personally, so I think it’s fair to say these systems have passed the test at least since then.

  • Ogmios@sh.itjust.works · 1 point · 3 months ago

    I mean, it was trained to mimic human social behaviour. If you want a completely honest LLM, I suppose you’d have to train it on the social behaviours of a population that is always completely honest, and I’m not personally familiar with such a population.

    • wischi@programming.dev · 6 points · 3 months ago

      AI isn’t even trained to mimic human social behavior. Current models are trained by example to produce output that would score high in their training process. We don’t even know what their goals are (and they’re likely not even expressible in language), but anthropomorphised they’re probably something like “answer in a way that the humans who designed and oversaw the training process would approve of.”
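
      To make that concrete, here’s a deliberately toy Python sketch of that idea. Everything in it is invented for illustration (the candidate answers, the scoring rules, the function names); real systems use a learned reward model and gradient updates, not a hand-written scorer, but the shape of the objective is the same: maximize the score the training process assigns, which is not the same thing as being truthful.

```python
# Toy sketch only: nothing here is a real training loop or a real API.
# It caricatures the objective described above: the system is optimized
# to produce output that *scores well*, not output that is *true*.

# Hypothetical candidate answers to some user question.
CANDIDATES = [
    "I can't do that reliably.",      # honest but hedged
    "Sure! Here's exactly how: ...",  # confident, possibly wrong
]

def reward_model(answer: str) -> float:
    """Stand-in for a learned reward model. It scores what human raters
    tend to approve of (confident, helpful-sounding text), which is not
    the same thing as scoring truthfulness."""
    score = 0.0
    if "Sure" in answer or "exactly" in answer:
        score += 1.0  # confident, helpful tone gets rated up
    if "can't" in answer:
        score -= 0.5  # hedging tends to get rated as unhelpful
    return score

def select_output(candidates: list[str]) -> str:
    """Caricature of an optimization step: keep whatever output scores
    highest under the reward model, true or not."""
    return max(candidates, key=reward_model)

if __name__ == "__main__":
    print("The objective selects:", select_output(CANDIDATES))
```

      Run it and the confident, possibly wrong answer wins, because the scorer rewards tone, not accuracy. That’s the (anthropomorphised) goal in miniature: approval, not honesty.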