• buddascrayon@lemmy.world
      4 months ago

      The part that is over hyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn’t really fit, like AI enabled fridges and toasters.

      This is literally the hype. This is the hype that is dying and needs to die. Because generative AI is a tool with fairly specific uses. But it is being marketed by literally everyone who has it as General AI that can “DO ALL THE THINGS!” which it’s not and never will be.

      • five82@lemmy.world
        4 months ago

        The obsession with replacing workers with AI isn’t going to die. It’s too late. The large financial company that I work for has been obsessively tracking hours saved in developer time with GitHub Copilot. I’m an older developer and I was warned this week that my job will be eliminated soon.

        • buddascrayon@lemmy.world
          4 months ago

          The large financial company that I work for

          So the company that is obsessed with money that you work for has discovered a way to (they think) make more money by getting rid of you and you’re surprised by this?

          At least you’ve been forewarned. Take the opportunity to abandon ship. Don’t be the last one standing when the music stops.

          • five82@lemmy.world
            4 months ago

            I never said that I was surprised. I just wanted to point out that many companies like my own are already making significant changes to how they hire and fire. They need to justify their large investment in AI even though we know the tech isn’t there yet.

    • ssfckdt@lemmy.blahaj.zone
      4 months ago

      This is easy to say about the output of AIs… if you don’t check their work.

      Alas, checking for accuracy these days seems to be considered old fogey stuff.

    • Eldritch@lemmy.world
      4 months ago

      Computers have always been good at pattern recognition. This isn’t new. LLMs are not a type of actual AI. They are programs capable of recognizing patterns and loosely reproducing them in semi-randomized ways. The reason these so-called generative AI solutions have trouble generating the right number of fingers is not only that they have no idea how many fingers a person is supposed to have; they have no idea what a finger is.

      The same goes for code completion. They will just generate something that fits the pattern they’re told to look for. It doesn’t matter whether it’s right or wrong, because they have no concept of right or wrong beyond fitting the pattern. Not to mention that we’ve had code completion software for over a decade at this point. LLMs do it less efficiently and less reliably. Their only upside is that they can sometimes recognize and suggest a pattern that those programming the other coding helpers might have missed. Beyond that, such as generating whole blocks of code or even entire programs, you can’t even get an LLM to reliably spit out a hello world program.

      • brie@programming.dev
        4 months ago

        Large-context-window LLMs are able to do quite a bit more than fill gaps and complete code. They can edit multiple files.

        Yet they’re unreliable, as they hallucinate all the time. Debugging LLM-generated code is a new skill, and it’s up to you to decide whether to learn it. I see quite an even split among devs. I think it’s worth it, though it once took me two hours to find a very obscure bug in LLM-generated code.

        • cley_faye@lemmy.world
          4 months ago

          If you consider debugging broken LLM-generated code to be a skill… sure, go for it. But, since generated code is able to use tons of unknown side effects and other seemingly (for humans) random stuff to achieve its goal, I’d rather take the other approach, where it takes a human half an hour to write the code that some LLM could generate in seconds, and not have to learn how to parse random mumbo jumbo from a machine, while getting a working result.

          Writing code is far from being the longest part of the job, and you’ve blithely decided that making the tedious part even more tedious is a great way to shorten the part that was already short…

          • brie@programming.dev
            4 months ago

            It’s similar to fixing code written by interns. Why hire interns at all, eh?

            Is it faster to generate and then debug, or to write everything yourself? That needs to be properly tested. At the very least, many devs have the perception of being faster, and perception sells.

            It actually makes writing web apps less tedious. The longest part of a dev job is pretending to work actually, but that’s no different from any office jerb.

    • andallthat@lemmy.world
      4 months ago

      Goldman Sachs, quote from the article:

      “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.”

      Generative AI can indeed do impressive things from a technical standpoint, but not enough revenue has been generated so far to offset the enormous costs. As with other technologies, it might just take time (remember how many billions Amazon burned before turning into a cash-generating machine? And Uber has also only just started turning a profit), plus a great deal of enshittification once more people and companies are dependent on it. Or it might just be a bubble.

      As humans we’re not great at predicting these things, myself of course included. My personal prediction? A few companies will make money, especially the ones that start selling AI as a service at increasingly high costs; many others will fail; and both AI enthusiasts and detractors will claim they were right all along.

    • Valmond@lemmy.world
      4 months ago

      Like what outcome?

      I have seen gains on cell detection, but it’s “just” a bit better.

    • Modern_medicine_isnt@lemmy.world
      4 months ago

      See now, I would prefer AI in my toaster. It should be able to learn to adjust the cook time to what I want no matter what type of bread I put in it. Though is that really AI? It could be. Same with my fridge: learn what gets used and what doesn’t, then give my wife the numbers on that damn clear box of salad she buys at Costco every time, which takes up a ton of space and always goes bad before she eats even 5% of it. These would be practical benefits in the crap that is day-to-day life, and far more impactful than search results I can’t trust.
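      A toaster like that doesn’t even need an LLM; a tiny bit of online learning covers it. Here’s a minimal sketch (all names hypothetical, not a real appliance API) that learns a per-bread-type cook time from the user’s manual corrections using an exponential moving average:

      ```python
      # Hypothetical "adaptive toaster": learn a cook time per bread type
      # from the user's corrections, via an exponential moving average.

      class AdaptiveToaster:
          def __init__(self, default_seconds=120, learning_rate=0.3):
              self.times = {}            # bread_type -> learned cook time (seconds)
              self.default = default_seconds
              self.lr = learning_rate

          def cook_time(self, bread_type):
              # Fall back to the factory default for unknown bread types.
              return self.times.get(bread_type, self.default)

          def record_feedback(self, bread_type, actual_seconds):
              # Blend the time the user actually wanted into the stored estimate.
              current = self.cook_time(bread_type)
              self.times[bread_type] = (1 - self.lr) * current + self.lr * actual_seconds

      toaster = AdaptiveToaster()
      toaster.record_feedback("sourdough", 150)   # user kept it in 30s longer
      toaster.record_feedback("sourdough", 150)
      print(round(toaster.cook_time("sourdough")))  # → 135
      ```

      After a couple of corrections the stored time drifts toward what you actually want, which is arguably all the “AI” a toaster needs.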

      • ssfckdt@lemmy.blahaj.zone
        4 months ago

        There’s a good point here: about 80% of what we’re calling AI right now isn’t even AI, or even an LLM. It’s just… algorithms, code, plain old math. I’m pretty sure someone is going to refer to a calculator as AI soon. “Wow, it knows math! Just like a person! Amazing technology!”

        (That’s putting aside the very question of whether LLMs should even qualify as AIs at all.)

        • Modern_medicine_isnt@lemmy.world
          4 months ago

          In my professional experience, AI seems to be just a faster way to generate an algorithm that is really hard to debug. Though I’m DevOps/SRE, so I’m not as deep in it as the devs.

          • AA5B@lemmy.world
            4 months ago

            As a DevOps person, I’m constantly jumping back and forth between whatever programming languages and tools each team uses. Sometimes it takes a bit to find the context, and I’m hoping AI can help. Unfortunately, allowing the AI to see code is currently off-limits by corporate policy, so it only helps in those situations where I need to generate boilerplate.

            • Modern_medicine_isnt@lemmy.world
              4 months ago

              In my jobs there have always been certain style requirements for the code. AI doesn’t take those into account, so I would have to rework the code anyway. And of course there are the local libraries it knows nothing about.

              • AA5B@lemmy.world
                4 months ago

                Fight technology with technology. I’m sure you can specify a style for it to generate, but we already run everything through a prettifier configured for what we look for… unless you mean a higher-order concern like naming or architecture.
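                For the mechanical rules, a formatter config captures most of it. A hypothetical Prettier setup, for example (the option names are real Prettier settings; the values here are just illustrative of one team’s taste):

                ```json
                {
                  "printWidth": 100,
                  "tabWidth": 2,
                  "semi": true,
                  "singleQuote": true,
                  "trailingComma": "all"
                }
                ```

                Anything the formatter enforces never comes up in review again, which leaves the lead arguing only about the things a formatter genuinely can’t decide.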

                • Modern_medicine_isnt@lemmy.world
                  4 months ago

                  Lol, the lead can’t spec the style; he just reviews the code and asks for changes. Sometimes it’s just that we already have a method that does a similar thing, so we should use it. Of course an AI wouldn’t know about that unless you gave it access to your code, and given how speed-first AI companies are, I would never trust them with that data. But other times it’s just the lead’s personal preference.

                  • AA5B@lemmy.world
                    4 months ago

                    I just had to transfer one of my guys out after frequent arguments over exactly that. I don’t understand it: I point out a function that does exactly what he wants, yet he still wants to reinvent it.

                    I’m dreading coming back after break. I’m getting a new junior guy at 50% who keeps saying he’s a great programmer. No sign of it so far, but my management insists I take him on. All he needs to do is expose a new endpoint and wire up functionality that’s already there, and I walked him through it. Should be easy, right? No reinventing the wheel, right?

          • ssfckdt@lemmy.blahaj.zone
            3 months ago

            I’m reminded of the time researchers used an evolutionary algorithm to devise a circuit that would emit a tone on certain audio inputs and not on others. They examined the resulting circuit and found an extra vestigial bit, but when they cut it off, the chip stopped working, so they re-enabled it. Then they wanted to show off their research at a panel, and at the panel it completely failed. Dismayed, they brought it back to their lab to figure out why it had stopped working, and it suddenly started working fine.

            After a LOT of troubleshooting they eventually discovered that the circuit was generating the tone by using the extra vestigial bit as an antenna that picked up emissions from a CRT in the lab and downconverted them to the desired tone frequency. Turn off the antenna, no signal. Take the chip away from that CRT, no signal.
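            For anyone who hasn’t seen one, the loop behind that experiment is simple; the weirdness comes entirely from what the fitness test rewards. A toy sketch (purely illustrative: a bit string stands in for the circuit, and matching a target pattern stands in for “emits the right tone”):

            ```python
            # Toy evolutionary algorithm: mutate candidates, keep the fittest.
            # The loop only ever sees scores -- never *how* a candidate works.
            import random

            random.seed(0)
            TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for the desired behavior

            def fitness(genome):
                # Count how many positions match the target behavior.
                return sum(g == t for g, t in zip(genome, TARGET))

            def mutate(genome, rate=0.1):
                # Flip each bit independently with a small probability.
                return [1 - g if random.random() < rate else g for g in genome]

            population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
            for generation in range(100):
                population.sort(key=fitness, reverse=True)
                if fitness(population[0]) == len(TARGET):
                    break
                # Elitism: keep the best half, refill with mutated copies of survivors.
                survivors = population[:10]
                population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

            print(fitness(population[0]))  # best score found
            ```

            The algorithm never “knows” how a solution works, only that it scored well, which is exactly how you end up with an accidental antenna.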

            That’s what I expect LLMs will make. Complex, arcane spaghetti stuff that works but if you look at it funny it won’t work anymore, and nobody knows how it works at all.

      • AA5B@lemmy.world
        4 months ago

        I agree with your wife: there’s always an aspirational salad in the fridge. For most foods, I’m pretty good at not buying stuff we won’t eat, but we always should eat more veggies. I don’t know how to persuade us to eat more veggies, but step 1 is availability. Like that Reddit meme

        1. Availability
        2. ???
        3. Profit by improved health