• JiveTurkey@lemmy.world · 72 up, 1 down · edited 22 days ago

    I think we can all agree: put people like this in the wood chipper, and undoubtedly the world would be a better place.

    • D_C@sh.itjust.works · 1 up · 22 days ago

      Agree. AGREE?!
      Well, firstly, I want to know: are you a namby-pamby “make it quick, put them head first into the woodchipper” person,
      or a steely-eyed “make the parasites suffer and go feet first and make them watch it happen” type?
      If we are going to agree on anything then we need to agree on these fundamentals!!

  • aesthelete@lemmy.world · 41 up, 1 down · edited 22 days ago

    What do you call a private jet full of billionaires crashing into a mountain?

    A good start.

  • Formfiller@lemmy.world · 34 up · 22 days ago

    That’s actually a pretty good idea. What if we started putting tech billionaires through the wood chipper? It could be like the American guillotine.

    • krooklochurm@lemmy.ca · 6 up · edited 22 days ago

      There is nothing too awful that could happen to Sundar Pichai.

      No horrible fate could befall that man that would not cause me delight.

  • jj4211@lemmy.world · 25 up · 22 days ago

    The thing to remember is that these CEOs have made a whole living out of not knowing what they are doing, but being insufferably confident in whatever vomit of words they spew, whether they know anything or not, while ultimately just saying the most milquetoast, blatantly obvious stuff and pretending it’s very insightful. All this while they believe, and the money proves to them, that they are the most important people in the world.

    So naturally it’s easy for them to believe an LLM can take all the jobs, because it can easily take theirs.

  • The_Blinding_Eyes@lemmy.world · 23 up · 22 days ago

    This type of stuff is exactly why I am moving all of my accounts away from Google. Google is now as bad as, or even worse than, Microsoft.

    • BlameThePeacock@lemmy.ca · 2 up, 7 down · 22 days ago

      It actually ascribes too much agency to individuals.

      The technology is out there, there’s no shoving this back in the box. No matter what individuals decide to do, only laws will regulate this, and only laws that can be enforced.

      Even if the US decides “no AI” a bunch of other countries are going to laugh at them, and then use AI to beat America at pretty much anything that can take advantage of AI to improve it.

      It would be no different than a country refusing to use electricity or automobiles.

    • UnderpantsWeevil@lemmy.world · 2 up · 21 days ago

      AI isn’t going to take your jobs, though. AI is going to take over management of the economy, which is a very different thing.

      The jobs will still exist, because manual labor continues to be far cheaper to produce and deploy than machine labor. The conditions of employment will get worse over time, as computer management tools prioritize “efficiency” (aka margin of profit) over quality of life and ecological sustainability.

  • bluegreenpurplepink@lemmy.world · 12 up · 21 days ago

    Doesn’t this situation call for companies that decide to block AI and double down on the human workforce? Those companies would be rewarded by all of us who hate AI, and they would succeed by the supposed rules of the free market. Why isn’t any company stepping up to compete against AI-run companies? Wouldn’t it be an amazing opening to compete and win?

    Also, it wasn’t talked about in the article, but one of the big arguments for why this AI thing has to be so inevitable is that we have to compete with China. They think we have to start this race with China, to try to win AI.

    First of all, I think we might have already lost the race. Second of all, even if you don’t agree that we’ve already lost: what if, by embracing AI, China and all the other countries are destroyed by it? What if it makes so many mistakes and errors that it destroys their economies and their countries? Then the countries that were cautious about AI would be fine. We’d be the winners, not having succumbed to this ridiculous urge to use everything AI.

    People always forget that anything and everything hooked up to a network is hackable. I’ll say it again: everything hooked up to a network is hackable. Including this shitty AI stuff. If we put everything into AI, even if we win, another country could just hack us and screw everything up. The bottom line is: there is a space to say no to AI and succeed.

    I know I’m not that super articulate about this, but I would love to see somebody else write about these ideas with more finesse than I have, so that we could all start talking about this more and stop letting this inevitable push to AI just keep going without pushing back.

  • nonentity@sh.itjust.works · 12 up · edited 22 days ago

    For me, this is the most depressing part.

    So many of the technically competent individuals I know are just gleefully throwing their competency and credibility into this ‘AI’ grift chipper with utter abandon.

    I can’t see a viable path through it, and whenever they’ve articulated what they see on the other side, it is beyond repugnant and I truly don’t see any benefit in existing in that world if it ever manifests.

    • jj4211@lemmy.world · 4 up, 2 down · 22 days ago

      I just don’t get how so many people swear by it. Every time I lower my expectations for what it can be useful at, it proceeds to fail at even that when I actually have a use case I think one of the LLMs could tackle. Every step of the way. I keep being told the LLMs are amazing, and that I only had a bad experience because I hadn’t used the very specific model and version they love; yet every time I try to verify that feedback (my work is so die-hard they pay for access to every popular model and tool), it does roughly the same stuff, ever so slightly shuffling what it gets right and wrong.

      I feel gaslit, as it keeps on being uselessly unreliable for any task I could conceivably find it theoretically useful for.

      • kshade@lemmy.world · 2 up · 22 days ago

        I’ve had similar experiences. Try to do something semi-difficult and it fails, sometimes in an entertainingly shit way at least. Try something simple where I already know the answer? Good chance there’s at least one fundamental issue with the output.

        So what are people who use this tech actually getting out of it? Do they just make it regurgitate things from StackOverflow? Do they have a larger tolerance for cleaning up trash? Or do they just not check the output?