I’ve noticed an uptick in the number of pro-AI posts on this platform.

Various posts with titles similar to “When will people stop being afraid of AI?” or “Can we please acknowledge AI was very needed for X.”

Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

  • DJKJuicy@sh.itjust.works · 3 days ago

    AI (LLMs) is/are a fantastic tool.

    But that’s what it is, a tool that can make some tasks easier.

    It’s not world-changing like some tech bros and CEOs think it is because they don’t actually understand the technology.

    It’s also not the apocalypse or The Matrix or Skynet coming to end civilization. It’s just a tool.

    After the AI bubble bursts, AI will still be there, as a tool for humans to use.

    I think it’s possible that some of the people you see on Lemmy may have started using AI a little more in their lives and see it for what it is.

    • III@lemmy.world · 3 days ago

      To be fair, given the power consumption it requires, it definitely leans towards civilization ending.

      • DJKJuicy@sh.itjust.works · 3 days ago

        We also have “the Internet” slurping up massive amounts of energy.

        Current Global Electricity Breakdown:

        • Total Data Center/Infrastructure Demand: Approximately 2.0% of global electricity.
        • AI-Specific Share: Roughly 0.5% of global electricity.
        • “Traditional” Internet/Cloud: Roughly 1.5% of global electricity.
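        A quick arithmetic check of these shares (the figures are the commenter’s rough estimates, not authoritative data):

```python
# Global electricity shares quoted above (the commenter's rough estimates,
# treated here as assumptions, not authoritative figures).
total_datacenter = 2.0   # % of global electricity: all data centers/infrastructure
ai_share = 0.5           # % attributed to AI-specific workloads
traditional = 1.5        # % attributed to "traditional" internet/cloud

# The two sub-shares should add up to the data-center total.
assert abs((ai_share + traditional) - total_datacenter) < 1e-9

print(ai_share / total_datacenter)  # 0.25 -> a quarter of all data-center demand
print(ai_share / traditional)       # 0.3333... -> one third of the "traditional" share
```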

        The Internet is also a tool that humanity uses. Should we shut that down too? (I would argue yes considering how the “Information Superhighway” somehow made the average person dumber, but that’s a different discussion.)

        • SpacetimeMachine@lemmy.world · 3 days ago

          Except the Internet is actually useful. AI has not shown that it deserves that insane amount of energy. It’s actually insane that you think AI isn’t an issue when it’s using a third as much energy as the ENTIRE INTERNET.

    • petrol_sniff_king@lemmy.blahaj.zone · 3 days ago

      It’s also not the apocalypse […] It’s just a tool.

      So, the problem with tools is that their existence still affects the systems they’re a part of.

      For instance, war between the US and Russia is much more dangerous now (yes, it used to be dangerous before as well) because now we have nuclear bombs. We did a whole cold war thing about it. Nuclear bombs change the world even when they’re not being used.

      Similarly, meth is just a tool. It is entirely possible to smoke meth, not become addicted, have a great time, vacuum your entire house I guess, come down, chill, and move on with the rest of your life. But, that’s not what we would say meth’s effect on society is, is it?

      I am so happy that you are capable of using AI without becoming a psychopath. I am concerned about the psychopaths.

  • zeroConnection@programming.dev · 3 days ago

    Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

    They’re both “annoying teenage tech-bros who are detached from reality” and they are spreading propaganda they picked up elsewhere.

  • Tiral@lemmy.world · 3 days ago

    I think AI has positives that can help people. That being said, I think it’s out of control currently. I hope the bubble bursts soon and we can actually get to a reasonable balance.

    • deadymouse@lemmy.world · 3 days ago

      I hope the bubble bursts soon and we can actually get to a reasonable balance.

      In fictional stories, yes; in reality, no. The only application AI will find is replacing all employees, and people will be thrown out into the street.

  • Lasherz@lemmy.world · 4 days ago

    It’s usually bots. Unfortunately they’re not easy to moderate, but if a bot is reported, doesn’t have a bot flag, and says a bunch of pro-AI stuff in addition to the reported activity, that’s usually enough evidence to ban. It’s just one of their current tells; I wouldn’t base a ban on that alone, but do report them when you suspect them.

  • RoddyStiggs@lemmy.blahaj.zone · 4 days ago

    If people weren’t fucking stupid, these scams would eventually stop working.

    What’s it been, 4 years since NFTs? And AI morons are already falling for this shit.

    • bbb@sh.itjust.works · 3 days ago

      I lean anti-AI, but comparing generative AI to NFTs is very strange to me. Even if you didn’t intend to imply any similarity beyond both being scams, surely generative AI is at least a much more compelling scam.

      LLMs can now understand, to some extent, almost any text humans can. They might not be able to reason about it well, but they can at least translate it, summarize it, etc. If you had asked me 10 years ago, I’d have told you there was a near-zero chance of that happening within our lifetimes. NFTs were just “if we put baseball cards on the blockchain, people might buy them because of that same quirk of psychology.”

      • GamingChairModel@lemmy.world · 3 days ago

        Transformers are like blockchain: an interesting use of mathematical principles to solve certain problems in a novel way, where the hype around that core attracts charlatans, scammers, and combinations of the two, who claim it will go on to solve totally different problems in such a way as to revolutionize the world we live in.

        NFTs were the end of that line for blockchain where the machine started to eat itself. I can see a future, stable use of blockchain in some limited contexts, but cryptobros have always overstated the contexts in which that particular type of digital ledger can be more useful than other types of digital ledgers.

        We’ll see where the end of the road is for transformers, and what’s left at the end. I believe that computer inference will always be useful in some contexts, and that the advances in huge models with absurdly large numbers of parameters have unlocked some previously impractical tasks, but I could also see that settling into a general background existence as just another technological tool for doing things in a world that still looks pretty similar to the world today.

  • Bazoogle@lemmy.world · 4 days ago

    Honestly, the problem when talking about “AI” is how many different things it can mean.

    • General AI chats
    • Coding agents
    • Automated pentesting/vulnerability discovery
    • Image/video/music generation
    • Grammar checking
    • Automated support agents (phone or chat)
    • Autonomous weaponry

    and so many more. Being pro-AI could mean you like one or two applications of AI but are against the others. I know very few people who like it for media generation. However, a lot of long-standing vulnerabilities in very popular open-source projects were only just discovered with it. That seems like a pretty undeniable use case demonstrating its usefulness.

    Then of course there are governments that want to get their greedy, bloodthirsty hands on it to create autonomous weaponry. So now if you try to defend AI for a use case like defensively finding program vulnerabilities, you somehow also have to defend AI weaponry?

    A generic AI model is very powerful and can either be used to grow yourself or abused so your brain doesn’t have to work at all. You can use AI to do the hard work for you, or use it as a personal tutor to guide you toward what to learn. People will of course mention hallucinations as a reason it can’t be used to learn, but you don’t have to take AI at its word. If you ask it to create a lesson plan for a subject (what to study, in what order, and what resources are available), you can do all of the actual learning using content the AI has no control over. What you do with that is up to the person, and opinions on it will vary wildly.

    Some people argue no use case is okay, given the concerns about energy and water usage and where those models sourced their training data. Not to mention that if you support AI, you must be supporting the AI companies. I agree there are environmental concerns, and the training-data discussion is a long one on its own. However, I do think you can support AI as a technology without being okay with how it is currently being done with regard to environmental impact. And given that AI can run on a local machine, I don’t think it has to be tied to big tech at all.

    “AI” is such a wide and immense topic, and what we talk about with AI today will not be relevant come next year, given how quickly it is developing. We shall see if some form of Moore’s law applies to the growth of AI as far as efficiency and quality go.

    • clif@lemmy.world · 4 days ago

      One of the first things I say when non-tech people ask me about ““AI”” is:

      “The term AI here is just marketing wank”

  • mlg@lemmy.world · 4 days ago

    This is nothing new, actually; the same thing happened during the crypto boom.

    There’s slop users (autoclankers) and then there’s researchers or developers actually doing the same stuff they’ve been doing for 5+ years.

    I think it just seems that way because there’s always a clash on practically every post.

    Some people don’t see the inherent flaw in outsourcing their thoughts to a cloud model, or the massive economic bubble they are helping to create.

    But some people are doing some genuinely interesting things that would have otherwise been impossible several years ago just because AI and model training research got a huge boost for everyone the past few years.

    My personal favorite is a drone that rapidly identifies and counts produce plant quality, output, issues, etc. for large farms with some brand-new image models, and it costs about as much as maybe a new toolbox. No one wants to manually weed through hundreds of acres to count buds and try to catch problems before it’s too late. It’s a great upgrade from doing random samples that miss a lot of data.

    On the other hand, those opposed to AI also have a subgroup that wants anything and everything with AI in the name dead, without any regard to what it is or what it does.

    It’s like when you throw world and ml users into one post. They both think the other is louder, and also the big dumb lol.

  • GarboDog@lemmy.world · 3 days ago

    Humans are social animals, and in the United States especially, where people are severely isolated, they’ll look for and find any kind of easy access to social interaction, including but not limited to chatbots. It’s a sad reality that they dismiss the negative effects it has on our social brains, dismiss the environmental effects it has on our planet, and dismiss the warning signs, because they’re too involved with LLM “AI”.

    That’s right, it’s not even AI; it’s only large language models or some agentic systems. Way smaller ones existed in the past; think Dr. Sbaitso (1992) or A.L.I.C.E. (1995). It’s actually not hard to make a chat bot: just have it echo back what the user says with some key phrases. That’s the whole existence of chat bots, and today’s current “AI” is the same, only with a LOT more variables generated from huge data sets (both free open sources and stolen data). That’s what causes it to hallucinate: randomness that humans don’t have the ability to change or update, simply because it’s such a huge list of variables. It’s so massive people think it’s real intelligence! PEOPLE WERE FOOLED BY 1990s CHAT BOTS TOO! 😭 😂
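    The “echo back key phrases” trick really is that simple. Here is a toy sketch in the spirit of those 1990s bots (a hypothetical illustration, not the code of any specific historical program):

```python
import random
import re

# Toy ELIZA-style chatbot: match a few key phrases and reflect the
# user's own words back; otherwise fall through to a canned prompt.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"(.*)\?",      ["Why do you ask that?"]),
]
FALLBACK = ["Tell me more.", "Go on."]

def reply(text: str) -> str:
    text = text.lower().strip()
    for pattern, responses in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            # Reflect the captured words back inside a templated response.
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACK)

print(reply("I feel tired"))  # e.g. "Why do you feel tired?"
print(reply("hello"))         # e.g. "Tell me more."
```

    This is roughly the key-phrase echo mechanism the comment describes; the output sounds responsive precisely because it reuses the user’s own words.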

    Anywho, we recommend the movies Desk Set, Space Odyssey, Pi, and even Alphaville. They’re related to the subject and they’re pretty good at pointing out the bruhs.

  • mnemonicmonkeys@sh.itjust.works · 4 days ago

    AI has already been demonstrated to be a tool that largely benefits fascists and oligarchs. It is not a question at this point. At this point, all of the AI evangelists are either extremely stupid or fascists themselves.

    • Bazoogle@lemmy.world · 4 days ago

      AI has already been demonstrated to be a tool that largely benefits fascists and oligarchs.

      Lmfao, what? The internet is also a tool that largely benefits fascists and oligarchs. Does that make every user of the internet a fascist, or just stupid?

      Of course bad actors are going to take advantage of a tool that is very useful in an absurd amount of contexts…

      • lechekaflan@lemmy.world · 3 days ago

        Palantir wants not only a total surveillance world; it also needs AI to process and use the information it has on almost anyone, to decide whom to send to jail or murder.

      • Doomsider@lemmy.world · 3 days ago

        Yeah, that is why the worst fascists and oligarchs block access to the Internet. I think you are being pretty disingenuous here, whether you realize it or not. The Internet is like a utility at this point, and that is not even remotely comparable with how AI is currently being deployed and used by major corporations.

        I am glad to hear you accept that bad actors will misuse it. I don’t think anyone is actually ready for the level of deception AI will be able to accomplish once it has access to your and your family’s and friends’ information while being used by a nefarious party: for example, impersonating loved ones to trick you, epic catfishing with a persona custom-tailored to you, complex financial fraud schemes, etc.

  • OriginEnergySux@lemmy.world · 4 days ago

    It’d be great to see more centrist views. AI can be a useful assistant with certain things, but I don’t get needing to be fully against or fully for it.

  • tensorpudding@lemmy.world · 4 days ago

    I have been expecting some softening, and some people here who use AI for coding on the DL. It really has become significantly more common to at least try out tools like Claude Code. But those people aren’t writing articles like that, and I’m not seeing them.

    • moseschrute@lemmy.world · 4 days ago

      I’ve been encouraged to use Claude Code for work, and by a lot of genuinely very talented engineers. It’s absolutely overhyped if you look at Twitter tech bros, and absolutely underhyped if you only read Lemmy.

  • TootSweet@lemmy.world · 4 days ago

    Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

    Does it matter? Either way, it’s just a symptom of the hype/bubble around “AI” lately.