• radix@lemmy.world · 37 points · 2 years ago

      It was “Blockchain” in 2017. “NFT” in 2020. “AI” in 2023. In a few years, there will be a new buzzword that companies throw a bunch of money at in hopes of being on the leading edge of the ‘next big thing.’

      • DreamButt@lemmy.world · 15 points · 2 years ago

        While I appreciate the sentiment, I think the key difference here is that ML is actually helping people do their work better or more easily, while blockchain and NFTs mostly amounted to autofellatio: those technologies are only helpful if you’re already interested in using those technologies. ML, on the other hand, has clearly been helpful for all kinds of professions, not just brogrammers.

        • astropenguin5@lemmy.world · 2 points · 2 years ago

          I’m not sure if AR has had its big spike yet or not. It already basically exists, it just isn’t as good or useful as VR yet. As it gets better we may see it become a buzzword again, but AR/VR has already kinda had its big moment and is becoming more and more mainstream.

  • borkcorkedforks@kbin.social · 62 points · 2 years ago

    To me it looks like an overestimation of the tech’s capabilities. It’s the same kind of thinking that led to lawyers submitting fake cases as supporting citations in court. The current tech can be useful, but its output has to be verified and generally tweaked a bit to be good enough. It certainly has room for improvement, both in quality and in simply not lying. Real-world use also raises copyright questions about what the training data was. Applying it to something creative is questionable and more or less feels like uninspired remixes.

    Also the whole graphic is kinda suspect to me when “Blockchain engineers” is a job category and it’s produced by an org working on AI.

  • Calcharger@kbin.social · 60 points · 2 years ago

    Ah yes, because programming and critical thinking do not go hand in hand. We are going to have so many software vulnerabilities in the coming years. Better learn to hack, ladies and lads.

  • Candelestine@lemmy.world · 35 points · 2 years ago

    It’s not bad.

    There’s one thing that people tend to neglect that I like to remember: it’s going to be a while yet before an AI can walk up to your door, knock, come in and find the specific nature of a plumbing/electrical/HVAC or whatever problem, and then diagnose and fix it. And then get safely home without getting hit by a truck or vandalized by bored teenagers, or both.

    That’s such a complex suite of different “problems” that we’re going to need nothing less than a general AI to navigate them all. Thus, among the last jobs to be replaced will be the various kinds of repair professionals who do house calls. The constant novelty of the career, where every call is its own unique situation, is a nightmare for a current-method AI.

      • Candelestine@lemmy.world · 10 points · 2 years ago

        You’re going to have an unacceptably high failure rate as you attempt to trial-and-error your way through all the lower-probability problems. Meanwhile, independent research paths aiming at general AI, which absolutely could handle all these problems, are racing you.

    • Brkdncr@kbin.social · 2 points · 2 years ago

      An HVAC company that is able to adopt AI for first-call processing and scheduling will be able to eliminate a number of jobs and remain open 24/7. They will undercut their local competitors, and the HVAC techs will find themselves out of a job or working for that competitor soon.

      Small companies won’t be able to compete.

      I’m all for this, but we need to offset these immense productivity gains with economic safety nets. I don’t know how the next 100 years will look if we don’t adopt UBI, universal healthcare, and some amount of subsidized housing.

      • effingjoe@kbin.social · 3 points · 2 years ago

        if we don’t adopt UBI, universal healthcare, and some amount of subsidized housing

        This has been my stance for years. Automation is coming for all of us. The only reason LLMs are so controversial is that everyone in power assumed automation was coming for the blue collar jobs first, and now that it looks like white collar and creative jobs are on the chopping block, suddenly it’s important to protect people’s jobs from automation, put in safety nets, etc, etc.

        Forgive my cynicism. haha

  • ShittyBeatlesFCPres@lemmy.world · 18 points · 2 years ago

    I think, realistically, it’ll be decades before people comfortable with GPT enter the workforce and actually make most of those jobs redundant. It’s like when the internet blew up and older managers had no clue what to do with it. They hired web developers, and eventually the web developers wrote things like WordPress so the staff could edit the website themselves.

    And guess what happened? The web devs didn’t get laid off. Staff kept sending the web developers changes in Word documents for like a decade before a generation of young people comfortable with posting text on the web entered the workforce and actually wanted to do it themselves. (Even then, the web developers didn’t disappear but, instead, were freed to build more complicated things.)

    So, basically, I think the concepts there are fine but that it’ll take a generation for businesses to fully take advantage of the new tech. Some firms will embrace it quickly but these things almost always take longer than technology enthusiasts assume.

  • kenbw2@lemmy.world · 13 points · 2 years ago

    Seems like OpenAI lobbying the government to pave the way for their technology to become indispensable.

    No opinion on the accuracy of it, but this is lobbying, not independent opinion.