• oakey66@lemmy.world · 7 months ago

    I work for a consulting company and they’re truly going off the deep end pushing consultants to sell this miracle solution. They are now doing weekly product demos and all of them are absolutely useless hype grifts. It’s maddening.

  • DarkCloud@lemmy.world · 7 months ago

    Do we know how human brains reason? Not really… Do we have an abundance of long chains of reasoning we can use as training data?

    …no.

    So we don’t have the training data to teach language models to talk through their reasoning, especially not in novel or personable ways.

    But also - even if we did, that wouldn’t produce ‘thought’ any more than a book about thought can produce thought.

    Thinking is relational. It requires an internal self-awareness. No amount of discussing that in text will make a book suddenly conscious.

    This is the idea that “sentience can’t come from semantics”… more is needed than that.

    • A_A@lemmy.world · 7 months ago

      I like your comment here; just one reflection:

      Thinking is relational. It requires an internal self-awareness.

      I think it’s like the chicken and the egg: they both come together… one could argue that self-awareness comes from thinking, in the fashion of “I think, therefore I am.”

  • vonxylofon@lemmy.world · 7 months ago

    I still fail to see how people expect LLMs to reason. It’s like expecting a slice of pizza to reason. That’s just not what it does.

    Then again, Porsche managed to make a car with the engine in the most idiotic place win literally everything on Earth, so I’m leaving open a small possibility that the slice of pizza will out-reason GPT-4.

    • Michal@programming.dev · 7 months ago

      LLMs keep getting better at imitating humans, so to anyone who doesn’t know how the technology works, it will seem as though they think for themselves.