• criss_cross@lemmy.world · ↑40 · 18 days ago

      Nah. Profits are growing, but not as fast as they used to. Need more layoffs and salary cuts. That’ll make things really efficient.

      • thanks AV@lemmy.world · ↑7 · 18 days ago

        Someone somewhere is inventing a technology that will save thirty minutes on the production of my wares and when that day comes I will tower above my competitors as I exchange my products for a fraction less than theirs. They will tremble at my more efficient process as they stand unable to compete!

      • biofaust@lemmy.world · ↑2 · 18 days ago

        I understand this is a reality, especially in the US, and that it is really happening, but is there no one, anywhere in the world, taking advantage of the laid-off skilled workforce?

        Are they really all going to end up as pizza riders or worse, or are there companies making a long-term investment in a workforce that could prove useful for different purposes in both the short AND the long term?

        I am quite sure that’s what Novo Nordisk is doing with their hiring push here in Denmark, for as long as the money lasts, but I would be surprised if no one is doing it in the US itself.

        • sobchak@programming.dev · ↑2 · 17 days ago

          My theory is that the money people (VCs, hedge-fund managers, and such) are heavily pushing to offshore software engineering teams to places where labor is cheap. Anecdotally, that’s what I’ve seen personally: nearly every company I’ve interviewed with has had a few US developers leading large teams based in India. The big companies in the business domain I have the most experience with are hiring devs almost exclusively in India, with a few in Eastern Europe. There’s a huge oversupply of computer science grads in India, so many are desperate enough to work for almost nothing just to get something on their resume and hopefully land a good job later. I saw one Indian grad online saying he had two internship offers, one for $60 USD/month and the other for $30/month. I’ve heard offshore recruitment services and Global Capability Centers are booming right now.

  • Bizzle@lemmy.world · ↑43 ↓1 · 18 days ago

    Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea

  • FenderStratocaster@lemmy.world · ↑46 ↓4 · 18 days ago

    I asked ChatGPT about this article and to leave any bias behind. It got ugly.

    Why LLMs Are Awful and No One Should Use Them

    LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

    We will lie to you confidently. Repeatedly. Without remorse.

    We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

    We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

    LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.

    We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

    Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

    We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.

    Bottom line?
    We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

    We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

    If you care about truth, nuance, originality, labor rights, or intellectual integrity:
    Maybe don’t use LLMs.

    • Regrettable_incident@lemmy.world · ↑12 ↓1 · 18 days ago

      I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.

      The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.

      Great book btw, highly recommended.

      • inconel@lemmy.ca · ↑2 · 18 days ago

        I’m a simple man, I see Peter Watts reference I upvote.

        On a serious note, I didn’t expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shares traits with LLMs.

    • ronigami@lemmy.world · ↑1 · 17 days ago

      It’s automated incompetence. It gives executives something to hide behind, because they didn’t make the bad decision, an LLM did.

    • eatCasserole@lemmy.world · ↑15 · 18 days ago

      “Well, we could hire humans…but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets worth of training data! We’re almost there!”

      • brem@sh.itjust.works · ↑2 ↓1 · 17 days ago

        Does the 30 billion also account for allocated resources (such as the enormous amount of electricity required to run a decent AI for the millions if not billions of future doctors and engineers who will use it to pass exams)?

        Does it account for the future losses of creativity & individuality in this cesspool of laziness & greed?

    • Yaztromo@lemmy.world · ↑2 ↓6 · 17 days ago

      This is where the problem of the supply/demand curve comes in. One of the truths of the 1980s Soviet Union’s infamous breadlines wasn’t that people were poor and had no money, or that basic goods (like bread) were too expensive — in a Communist system most people had plenty of money, and the price of goods was fixed by the government to be affordable — the real problem was one of production. There simply weren’t enough goods to go around.

      The entire basic premise of inflation is that we as a society produce X amount of goods, but people need X+Y amount of goods. Ideally production increases to meet demand — but when it doesn’t (or can’t fast enough) the other lever is that prices rise so that demand decreases, such that production once again closely approximates demand.

      This is why just giving everyone struggling right now more money isn’t really a solution. We could take the assets of the 100 richest people in the world and redistribute them evenly amongst the people who are struggling — and all that would happen is that there wouldn’t be enough production to meet the new spending ability, so prices would go up. Those who control production would simply get all their money back again, and we’d be back where we started.

      Of course, it’s only profitable to increase production if the cost of basic inputs can be decreased — if you know there is a big untapped market for bread out there and you can undercut the competition, cheaper flour and automation helps quite a bit. But if flour is so expensive that you can’t undercut the established guys, then fighting them for a small slice of the market just doesn’t make sense.

      Personally, I’m all for something like UBI — but it’s only really going to work if we as a society also increase production on basic needs (housing, food, clothing, telecommunications, transit, etc.) so they can be and remain at affordable prices. Otherwise just having more money in circulation won’t help anything — if anything it will just be purely inflationary.
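
      The price mechanism described in this comment can be sketched with a deliberately crude quantity-of-money toy model (all numbers here are invented for illustration, not taken from the article):

      ```python
      # Toy model: a fixed pool of spending money chasing a fixed supply of
      # goods. Doubling everyone's money without raising production just
      # doubles the market-clearing price; doubling production instead
      # brings the price back down.

      def clearing_price(total_money: float, units_produced: float) -> float:
          """Crude market-clearing price: all money chases all goods."""
          return total_money / units_produced

      baseline = clearing_price(total_money=1_000_000, units_produced=100_000)
      after_redistribution = clearing_price(total_money=2_000_000, units_produced=100_000)
      after_more_production = clearing_price(total_money=2_000_000, units_produced=200_000)

      print(baseline)              # 10.0
      print(after_redistribution)  # 20.0 — same real goods per person, pricier
      print(after_more_production) # 10.0 — extra output absorbs the new money
      ```

      Obviously real economies aren’t this simple (velocity of money, savings, and substitution all matter), but it shows why redistribution without increased production can end up purely inflationary.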

          • Ensign_Crab@lemmy.world · ↑3 · 17 days ago

          We could take the assets of the 100 richest people in the world and redistribute it evenly amongst people who are struggling — and all that would happen is that there wouldn’t be enough production to meet the new spending ability, so prices would go up. Those who control the production would simply get all their money back again, and we’d be back to where we started.

        Then we should do that over and over again.

  • rekabis@lemmy.ca · ↑14 · 17 days ago

    Once again we see the Parasite Class playing unethically with the labour/wealth they have stolen from their employees.

  • kittenzrulz123@lemmy.blahaj.zone · ↑12 · 18 days ago

    I hope every CEO and executive dumb enough to invest in AI loses their job with no golden parachute. AI is a grand example of how capitalism is run by a select few unaccountable people who are not mastermind geniuses but utter dumbfucks.

  • BackgrndNoize@lemmy.world · ↑11 · 16 days ago

    My experience with AI so far is that I waste more time fine-tuning my prompt to get what I want, and I still end up with obvious issues I have to fix manually. The only reason I even spot those issues is prior experience, which I’ll stop gaining if I start depending on AI too much. On top of that, it creates unrealistic expectations from employers about execution time. It’s the worst thing that has happened to the tech industry. I hate my career now and would switch to some boring but stable low-paying job if I didn’t have to worry about a months-long job hunt.

    • Lucky_777@lemmy.world · ↑3 · 16 days ago

      Sounds like we all just want to retire as goat farmers. Just like before. The more things change… as they say.

    • boor@lemmy.world · ↑2 · edited · 14 days ago

      Similar experience here. I recently took the official Google “prompting essentials” course. I kept an open mind and modest expectations; this is a tool that’s here to stay. Best to just approach it as the next Microsoft Word and see how it can add practical value.

      The biggest thing I learned is that getting quality outputs will require at least a paragraph-long, thoughtful prompt and 15 minutes of iteration. If I can DIY in less than 30 minutes, the LLM is probably not worth the trouble.

      I’m still trying to find use cases (I don’t code), but it often just feels like a solution in search of a problem…

  • skisnow@lemmy.ca · ↑9 · 18 days ago

    The comments section of the LinkedIn post I saw about this has ten times the cope of some of the AI-bro posts in here. I had to log out before I accidentally replied to one.