• VintageGenious@sh.itjust.works

    Because you’re using it wrong. It’s good for generative text and chains of thought, not symbolic calculations such as math or linguistics.

    • Grandwolf319@sh.itjust.works

      Because you’re using it wrong.

      No, I think you mean to say it’s because you’re using it for the wrong use case.

      Well this tool has been marketed as if it would handle such use cases.

      I don’t think I’ve actually seen any AI marketing that was honest about what it can do.

      I personally think image recognition is the best use case as it pretty much does what it promises.

      • scarabic@lemmy.world

        Really? AI has been marketed as being able to count the r’s in “strawberry”? Please link to this ad.

      • L3s@lemmy.world

        Writing customer/company-wide emails is a good example. “Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online”

        Dumbing down technical information: “word this so a non-technical person can understand: our DHCP scope filled up and there were no more addresses available for Site A, which caused the temporary outage for some users”

        Another is feeding it an article and asking for a summary; https://hackingne.ws does that for its Bsky posts.

        Coding is another good example, “write me a Python script that moves all files in /mydir to /newdir”
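
        For a prompt like that, a minimal sketch of the kind of script it might hand back (using the placeholder /mydir and /newdir paths from the example, and assuming you still sanity-check it before running):

            #!/usr/bin/env python3
            # Move every regular file from /mydir into /newdir.
            import shutil
            from pathlib import Path

            src = Path("/mydir")
            dst = Path("/newdir")
            dst.mkdir(parents=True, exist_ok=True)  # create the target folder if it doesn't exist

            for item in src.iterdir():
                if item.is_file():  # skip subdirectories
                    shutil.move(str(item), str(dst / item.name))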

        Asking it to summarize a theory or protocol: “explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2”

        • lurch (he/him)@sh.itjust.works

          It’s not good for summaries. It often gets important bits wrong, like embedded instructions that can’t be summarized.

          • L3s@lemmy.world

            My experience has been very different, though I do sometimes have to add to what it summarized. The Bsky account mentioned is a good example: most of the posts are very well summarized, but every now and then there will be one that isn’t as accurate.

        • snooggums@lemmy.world

          The dumbed-down text is basically as long as the prompt. Plus you have to double-check it to make sure it didn’t write “outrage” instead of “outage”, just like if you wrote it yourself.

          How do you know the answer on why RIP was replaced with RIPv2 is accurate and not just a load of bullshit like putting glue on pizza?

          Are you really saving time?

          • L3s@lemmy.world

            Yes, I’m saving time. As I mentioned in my other comment:

            Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify; I was trying to keep my examples short.

            And

            and helps correct my shitty grammar at times.

            And

            Hallucinations are a thing, so validating what it spits out is definitely needed.

            • snooggums@lemmy.world

              How do you validate the accuracy of what it spits out?

              Why don’t you skip the AI and just use the thing you use to validate the AI output?

              • L3s@lemmy.world

                Most of what I’m asking it are things I have a general idea of, and AI has the capability of making short explanations of complex things. So typically it’s easy to spot a hallucination, but the pieces that I don’t already know are easy to Google to verify.

                Basically I can get a shorter response with the same outcome and validate those small pieces, which saves a lot of time (I no longer have to read a 100-page white paper; instead I read a few paragraphs and then verify the small bits).

            • snooggums@lemmy.world

              If the amount of time it takes to create the prompt is the same as it would have taken to write the dumbed-down text, then the only time you saved was not learning how to write dumbed-down text. Plus you need to know what dumbed-down text should look like to know whether the output is dumbed down but still accurate.

      • slaacaa@lemmy.world

        I have it write emails for me in German. I moved there not too long ago, and it works wonders for getting doctor’s appointments, car service, etc. I also have it explain the text, so I’m learning the language.

        I also use it as an alternative to internet search, which is now terrible. It’s not going to help you find something super location-specific, but I can ask it to tell me something about a game/movie without spoilers, or to list Metacritic scores in a table, etc.

        It also works great in summarizing long texts.

        An LLM is a tool; what matters is how you use it. It is stupid, it doesn’t think, and it’s mostly hype to call it AI. But it definitely has its benefits.

      • scarabic@lemmy.world

        We have one that indexes all the wikis and GDocs and such at my work and it’s incredibly useful for answering questions like “who’s in charge of project 123?” or “what’s the latest update from team XYZ?”

        I even asked it to write my weekly update for MY team once and it did a fairly good job. The one thing I thought it had hallucinated turned out to be something I just hadn’t heard yet. So it was literally ahead of me at my own job.

        I get really tired of all the automatic hate over stupid bullshit like this OP. These tools have their uses. It’s very popular to shit on them. So congratulations for whatever agreeable comments your post gets. Anyway.

      • chaosCruiser 🚫@futurology.today

        Here’s a bit of code that’s supposed to do stuff. I got this error message. Any ideas what could cause this error and how to fix it? Also, add this new feature to the code.

        Works reasonably well as long as you have some idea how to write the code yourself. GPT can produce it in a few seconds, and debugging it takes maybe 5–10 minutes, but that’s still faster than my best. Besides, GPT is also fairly fluent in many functions I have never used before. My approach would be clunky and convoluted, while the code generated by GPT is a lot shorter.

        If you’re well familiar with the code you’re working on, GPT’s code will be convoluted by comparison. In that case, you can ask GPT for a rough alpha version and do the debugging and refining yourself in a few minutes.

        • Windex007@lemmy.world

          That makes sense as long as you’re not writing code that needs to know how to do something as complex as …checks original post… count.

  • whotookkarl@lemmy.world

    I’ve already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I show them how it can sometimes lie and explain that it’s not a primary source for anything, I just get that blank stare, like I have two heads.

  • eggymachus@sh.itjust.works

    A guy is driving around the back woods of Montana and he sees a sign in front of a broken down shanty-style house: ‘Talking Dog For Sale.’

    He rings the bell and the owner appears and tells him the dog is in the backyard.

    The guy goes into the backyard and sees a nice looking Labrador Retriever sitting there.

    “You talk?” he asks.

    “Yep” the Lab replies.

    After the guy recovers from the shock of hearing a dog talk, he says, “So, what’s your story?”

    The Lab looks up and says, “Well, I discovered that I could talk when I was pretty young. I wanted to help the government, so I told the CIA. In no time at all they had me jetting from country to country, sitting in rooms with spies and world leaders, because no one figured a dog would be eavesdropping, I was one of their most valuable spies for eight years running… but the jetting around really tired me out, and I knew I wasn’t getting any younger so I decided to settle down. I signed up for a job at the airport to do some undercover security, wandering near suspicious characters and listening in. I uncovered some incredible dealings and was awarded a batch of medals. I got married, had a mess of puppies, and now I’m just retired.”

    The guy is amazed. He goes back in and asks the owner what he wants for the dog.

    “Ten dollars,” the owner says.

    “Ten dollars? This dog is amazing! Why on Earth are you selling him so cheap?”

    “Because he’s a liar. He’s never been out of the yard.”

  • whynot_1@lemmy.world

    I think I have seen this exact post word for word fifty times in the last year.

  • Grandwolf319@sh.itjust.works

    There is an alternate reality out there where LLMs were never marketed as AI and were instead marketed as random text generators.

    In that world, tech-savvy people would embrace this tech instead of having to constantly educate people that it is, in fact, not intelligence.

    • Static_Rocket@lemmy.world

      That was this reality. Very briefly. Remember AI Dungeon and the other clones that were popular prior to the mass ML marketing campaigns of the last two years?

  • ClusterBomb@lemmy.blahaj.zone

    “My hammer is not well suited to cut vegetables” 🤷

    There is so much to say about AI; can we move on from “it can’t count letters and do math”?

    • Strykker@programming.dev

      But the problem is more “my do-it-all tool randomly fails at arbitrary tasks in an unpredictable fashion,” which makes it hard to trust as a tool in any circumstances.

  • Grabthar@lemmy.world

    Doc: That’s an interesting name, Mr…

    Fletch: Babar.

    Doc: Is that with one B or two?

    Fletch: One. B-A-B-A-R.

    Doc: That’s two.

    Fletch: Yeah, but not right next to each other, that’s what I thought you meant.

    Doc: Isn’t there a children’s book about an elephant named Babar?

    Fletch: Ha, ha, ha. I wouldn’t know. I don’t have any.

    Doc: No children?

    Fletch: No elephant books.

  • humorlessrepost@lemmy.world

    Works fine for me in o3-mini-high:

    Counting letters in “strawberry”

    Alright, I’m checking: the word “strawberry” is spelled S T R A W B E R R Y. Let me count the letters: S (1), T (2), R (3), A (4), W (5), B (6), E (7), R (8), R (9), Y (10). There are three R’s: in positions 3, 8, and 9. So, the answer is 3. Even if we ignore case, the count still holds. Therefore, there are 3 r’s in “strawberry.”
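
    For what it’s worth, the count itself is trivial to verify outside the model, e.g. with one line of Python:

        # Independent check: count the letter "r" in "strawberry"
        print("strawberry".count("r"))  # prints 3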

    • sheogorath@lemmy.world

      Finally! With a household’s energy consumption for one day, we can count how many Rs are in strawberry.

  • Zess@lemmy.world

    You asked a stupid question and got a stupid response, seems fine to me.

  • winkly@lemmy.world

    How many strawberries could a strawberry bury if a strawberry could bury strawberries 🍓

  • LovableSidekick@lemmy.world

    What would have been different about this if it had impressed you? It answered the literal question and also the question the user was actually trying to ask.