• BombOmOm@lemmy.world · 116 points · 2 years ago

    The difficult part of software development has always been the continuing support. Did the chatbot set up a versioning system, a build system, a backup system, a ticketing system, unit tests, and help docs for users? Did it get a conflicting request from two different customers and intelligently resolve them? Was it given a vague problem description that it then had to get on a call with the customer to figure out, hunting down what the customer actually wanted before devising and implementing a solution?

    This is the expensive part of software development. Hiring an outsourced, low-tier programmer for almost nothing has always been possible; the low-tier programmer being slightly cheaper doesn’t change the game in any meaningful way.

    • Puzzle_Sluts_4Ever@lemmy.world · +9/-2 · edited · 2 years ago

      While I do agree that management is genuinely important in software dev:

      If you can rewrite the codebase quickly enough, versioning matters a lot less. It’s the idea of “is it faster to just rewrite this function/package than to debug it?” but at a much larger scale. And while I would be concerned about regressions from full rewrites of the code… have you ever used software? Regressions happen near-constantly even with proper version control and testing…

      As for testing and documentation: This is actually what AI-enhanced tools are good for today. These are the simple tasks you give to junior staff.

      Conflicting requests and iterating on descriptions: Have you ever futzed around with ChatGPT? That is what it lives off of. Ask a question, then ask a follow-up question, and so forth.
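
      A minimal sketch of that follow-up loop, using OpenAI’s Python client (assumes the openai package v1+ and an API key in the environment; the model name and prompts are placeholders):

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # Start with a deliberately vague request, then iterate with follow-ups,
      # feeding the full conversation history back in each time.
      history = [{"role": "user", "content": "Our checkout page is slow. Ideas?"}]
      follow_ups = [
          "Which of those causes is most likely if the database is small?",
          "How would you verify that before changing any code?",
      ]

      for follow_up in follow_ups:
          reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
          history.append({"role": "assistant", "content": reply.choices[0].message.content})
          history.append({"role": "user", "content": follow_up})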

      I am still skeptical of having no humans in the loop. But all of this is very plausible even with today’s technology and training sets.


      Just to add a bit more to that: I don’t think having an AI-operated company is a good idea. Even ignoring the legal aspects, there is a lot of value in having a human who can make seemingly irrational decisions, like favoring the customer who will pay more in the long run, and so forth.

      But I can definitely see entire departments being a node in a rack: customers talk to humans (or a different LLM), which then talk to the “Network Stack” node and the “UI/UX” node, and so forth.
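
      A toy sketch of that wiring, with hypothetical department names and a stubbed-out LLM call (none of this is a real product's API):

      ROLES = {
          "network_stack": "You are the networking department. Answer only networking questions.",
          "ui_ux": "You are the UI/UX department. Answer only interface-design questions.",
      }

      def ask_llm(system_prompt: str, message: str) -> str:
          # Stub standing in for a real chat-model call, like the sketch above.
          return f"[{system_prompt.split('.')[0]}] handling: {message!r}"

      def route(department: str, customer_request: str) -> str:
          # A human (or a front-line LLM) picks the department; the request
          # is then handled by that department's "node in a rack".
          return ask_llm(ROLES[department], customer_request)

      print(route("ui_ux", "The settings screen is confusing"))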

    • doublejay1999@lemmy.world · 4 points · 2 years ago

      Which is why plenty of companies merely pay lip service to it, or don’t do it at all and outsource it to ‘communities’.

    • akrot@lemmy.world · +5/-1 · 2 years ago

      Absolutely true, but many are heading in the direction of implementing those solutions with AIs.

  • Melco@lemmy.world · +76/-1 · edited · 2 years ago

    …and it didn’t work.

    Then a human had to fix it, and the fix took 3x as long as it would have taken a human to write it originally.

    This is the state of “AI” at the moment. It’s a giant waste of time.

    • Nougat@kbin.social · +36/-2 · 2 years ago

      I’ve tried to have ChatGPT help me out with some PowerShell, and it consistently wanted me to use cmdlets which do not exist for on-premises Exchange. I told it as much; it apologized, then wanted me to use cmdlets that don’t exist at all.

      Large Language Models are not Artificial Intelligence.

      • Melco@lemmy.world · 12 points · 2 years ago

        Yes, if two code snippets from two different languages are on the same webpage, it will inject the syntax of one language into the other and output total garbage, confidently of course.

      • Dojan@lemmy.world · 6 points · 2 years ago

        I had a weird XAML error I didn’t quite get, and the LLM gave me BS solutions before giving me back my original code.

        • Melco@lemmy.world · 5 points · edited · 2 years ago

          Yes, at some point, after many wrong and completely broken answers, it will finally just keep spitting out your original code over and over. I’ve seen this happen several times.

    • thorbot@lemmy.world · +5/-1 · 2 years ago

      This also completely glosses over the fact that the AIs capable of writing this had huge R&D costs to get to that point and have ongoing costs associated with running them. This whole article is a fucking joke, probably written by AI.

    • dustyData@lemmy.world · 2 points · 2 years ago

      Please ignore the hundreds of thousands of dollars, and the corresponding electricity, required to run the servers and infrastructure that train and serve these models, please. Or the master cracks the whip again. Please, just say you’ll invest in our startup, please!

    • Melco@lemmy.world · +25/-1 · edited · 2 years ago

      Except it cuts and pastes the broken solutions.

      If there are two languages on the same page, it will mix and match the code together, creating… garbage.

  • scarabic@lemmy.world · 38 points · 2 years ago

    A test that doesn’t include a real commercial trial or A/B test with real human customers means nothing. Put their game in the App Store and tell us how it performs. We don’t care that it shat out code that compiled successfully. Did it produce something real and usable or just gibberish that passed 86% of its own internal unit tests, which were also gibberish?
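
    To make the “gibberish tests” point concrete, here is the kind of generated test that can pass while verifying nothing a user would care about (a hypothetical Python example, not taken from the study):

    def place_piece(board, row, col, player):
        # Generated "game logic": no bounds checks, no turn order, no win detection.
        board[row][col] = player
        return board

    def test_place_piece():
        board = [[None] * 15 for _ in range(15)]
        result = place_piece(board, 7, 7, "X")
        # Tautological assertion: the function is compared against itself,
        # so the test passes no matter how broken place_piece is.
        assert result == place_piece(board, 7, 7, "X")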

    • mrginger@lemmy.world · 4 points · edited · 2 years ago

      This is who will get replaced first, and they don’t want to see it. In their own minds they’re the most important, most valuable part of the company, yet management was the one thing the AI got right. It still needed the creative mind of a human programmer to write the code properly, or to think outside the box.

  • m_r_butts@kbin.social · 10 points · 2 years ago

    Every company I’ve been at follows this cycle: offshore to Cognizant for pennies, C-suite gets a bonus for saving money. In about two years, fire Cognizant because they suck and your code is a disaster, onshore, get a bonus for solving a huge problem. In about two years, offshore to Cognizant and get a bonus for saving money. Repeat forever.

    This will follow the same rhythm but with different actors: the cheap labor is always there, and sometimes senior devs come in to replace the chatbots because the bots are failing in ways offshore can’t make up for, whether fundamental design problems that should never have been used as a roadmap or incompetently generated code that offshore assumes is correct because it compiles. This will all get built up and built around until it’s both a broken design AND deeply embedded in your stack. The new role of a senior dev will be contract work slicing these Gordian knots.

    • BombOmOm@lemmy.world · 18 points · 2 years ago

      “The new role of a senior dev will be contract work slicing these Gordian knots.”

      The amount of money wasted building and destroying these knots is immeasurable. Getting things right the first time takes experienced individuals who know the product well and can anticipate future pain points. Nothing is as expensive as cheap code.

  • kitonthenet@kbin.social · 8 points · edited · 2 years ago

    At the designing stage, the CEO asked the CTO to “propose a concrete programming language” that would “satisfy the new user’s demand,” to which the CTO responded with Python. In turn, the CEO said, “Great!” and explained that the programming language’s “simplicity and readability make it a popular choice for beginners and experienced developers alike.”

    I find it extremely funny that project managers are the ones chatbots have learned to imitate perfectly. They were already doing the robot’s work: saying impressive-sounding things that are actually borderline gibberish.

  • blazera@kbin.social · 5 points · 2 years ago

    Researchers, for example, tasked ChatDev to “design a basic Gomoku game,” an abstract strategy board game also known as “Five in a Row.”

    What tech company is making Connect Four as their business model?
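
    For scale, the heart of that Gomoku task is a single function: check whether the last move completed five in a row. A bare sketch (board as a 15×15 list of lists):

    def wins(board, row, col, player, size=15):
        # Check the four line directions that pass through the last move.
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            count = 1
            for sign in (1, -1):  # walk both ways from the placed stone
                r, c = row + sign * dr, col + sign * dc
                while 0 <= r < size and 0 <= c < size and board[r][c] == player:
                    count += 1
                    r, c = r + sign * dr, c + sign * dc
            if count >= 5:
                return True
        return False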

  • atzanteol@sh.itjust.works · 5 points · 2 years ago

    This research seems to be focused more on whether the bots can interoperate in different roles to coordinate on a task than on creating the actual software. The idea is to reduce “hallucinations” by giving each bot a more specific task.

    The paper goes into more detail about this:

    Similar to hallucinations encountered when using LLMs for natural language querying, directly generating entire software systems using LLMs can result in severe code hallucinations, such as incomplete implementation, missing dependencies, and undiscovered bugs. These hallucinations may stem from the lack of specificity in the task and the absence of cross-examination in decision-making. To address these limitations, as Figure 1 shows, we establish a virtual chat-powered software technology company – CHATDEV, which comprises of recruited agents from diverse social identities, such as chief officers, professional programmers, test engineers, and art designers. When presented with a task, the diverse agents at CHATDEV collaborate to develop a required software, including an executable system, environmental guidelines, and user manuals. This paradigm revolves around leveraging large language models as the core thinking component, enabling the agents to simulate the entire software development process, circumventing the need for additional model training and mitigating undesirable code hallucinations to some extent.
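
    A toy sketch of that paradigm (not ChatDev’s actual code; ask_llm is a stub and the role breakdown is illustrative): each agent gets one narrow task, and the next agent cross-examines the previous output, which is the mechanism the paper credits with mitigating code hallucinations.

    def ask_llm(role: str, task: str, context: str = "") -> str:
        # Stub standing in for a real chat-model call.
        return f"<{role}: {task} | {len(context)} chars of context>"

    def chatdev_style_pipeline(task: str) -> str:
        design = ask_llm("CTO", f"Pick a language and architecture for: {task}")
        code = ask_llm("Programmer", "Implement the design", context=design)
        review = ask_llm("Tester", "Report incomplete code, missing deps, bugs", context=code)
        code = ask_llm("Programmer", "Fix every reported issue", context=code + review)
        manual = ask_llm("Docs", "Write the user manual", context=code)
        return code + "\n" + manual

    print(chatdev_style_pipeline("design a basic Gomoku game"))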