• CaptPretentious@lemmy.world
    link
    fedilink
    arrow-up
    58
    arrow-down
    1
    ·
    3 months ago

    I hated it when everything became ‘smart’.

    Now everything has ‘AI’.

    Nothing was smart. And that’s not AI.

    Everything costs more, everything has a stupid app that gets abandoned, and an IoT backend that’s on life support from the moment it’s turned on. Subscriptions everywhere! Everything is built to lower quality, lower standards.

  • pyre@lemmy.world
    link
    fedilink
    arrow-up
    35
    arrow-down
    5
    ·
    3 months ago

    they don’t care. you’re not the audience. the tech industry lives on hype. now it’s ai because before that they did it with nft and that failed. and crypto failed. tech needs a grift going to keep investors investing. when the bubble bursts again they’ll come up with some other bullshit grift because making useful things is hard work.

      • chiliedogg@lemmy.world
        link
        fedilink
        arrow-up
        4
        ·
        edit-2
        3 months ago

        Not as bad as the IR touch screens. They had an IR field projected just above the surface of the screen that would be broken by your finger, which would register a touch at that location.

        Or a fly landing on your screen and walking a few steps could drag a file into the recycle bin.

  • Diplomjodler@lemmy.world
    link
    fedilink
    arrow-up
    23
    arrow-down
    3
    ·
    3 months ago

    The problem isn’t AI. The problem is greedy, clueless management pushing half-baked products of dubious value on consumers.

  • Dagnet@lemmy.world
    link
    fedilink
    arrow-up
    19
    ·
    3 months ago

    Was shopping for a laundry machine for my parents and LG, I shit you not, has an AI laundry machine now. I just can’t even

    • ricecake@sh.itjust.works
      link
      fedilink
      arrow-up
      7
      arrow-down
      5
      ·
      3 months ago

      The reassuring thing is that AI actually makes sense in a washing machine. Generative AI doesn’t, but that’s not what they use. AI includes learning models of different sorts. Rolling the drum a few times to get a feel for weight, and using a light sensor to check water clarity after the first time water is added lets it go “that’s a decent amount of not super dirty clothes, so I need to add more water, a little less soap, and a longer spin cycle”.

      They’re definitely jumping on the marketing train, but problems like that do fall under AI.
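      A rough sketch of that kind of adjustment, written as a simple rule-based controller. The sensor names, units, and coefficients here are all made up for illustration, not LG’s actual logic:

```python
def plan_cycle(load_kg: float, turbidity: float) -> dict:
    """Pick wash parameters from two sensor readings.

    load_kg:   estimated load weight from rolling the drum (hypothetical units)
    turbidity: 0.0 (clear) to 1.0 (very murky), from a light sensor
               reading the water after the first fill
    """
    # More clothes -> more water, scaled up from a base amount.
    water_liters = 10 + 4 * load_kg
    # Clearer water -> the clothes aren't very dirty, so less soap.
    soap_ml = 20 + 60 * turbidity
    # Heavier loads hold more water, so spin longer to wring them out.
    spin_seconds = round(300 + 60 * load_kg)
    return {
        "water_liters": water_liters,
        "soap_ml": soap_ml,
        "spin_seconds": spin_seconds,
    }
```

      The point is just that the wash parameters are derived from measurements instead of being fixed; whether that deserves the “AI” label is exactly the argument.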

      • Kvoth@lemmy.world
        link
        fedilink
        arrow-up
        14
        arrow-down
        1
        ·
        3 months ago

        The thing is, we’ve had that sort of capability for a long time now; we called them algorithms. Rebranding it as AI is pure marketing bullshit

        • ricecake@sh.itjust.works
          link
          fedilink
          arrow-up
          1
          arrow-down
          1
          ·
          3 months ago

          Well that’s sort of my point. It’s an algorithm, or set of techniques for making one, that’s been around since the 50s. Being around for a long time doesn’t make it not part of the field of AI.

          The field of AI has a long history of the fruits of their research being called “not AI” as soon as it finds practical applications.

          The system is taking measurements of its problem area. It’s then altering its behavior to produce a more optimal result given those measurements. That’s what intelligence is. It’s far from the most clever intelligence, and it doesn’t engage in reason or have the ability to learn.

          In the last iteration of the AI marketing cycle companies explicitly stopped calling things AI even when it was. Much like how in the next 5-10 years or so we won’t label anything from this generation “AI”, even if something is explicitly using the techniques in a manner that makes sense.

      • Carnelian@lemmy.world
        link
        fedilink
        arrow-up
        9
        arrow-down
        6
        ·
        3 months ago

        Respectfully, there’s no universe in which any type of AI could possibly benefit a load of laundry in any way. I genuinely pity anyone who falls for such a ridiculous and obvious scam

        • ricecake@sh.itjust.works
          link
          fedilink
          arrow-up
          1
          arrow-down
          2
          ·
          3 months ago

          You can’t see a benefit to a washing machine that can wash clothes without you needing to figure out how much soap to add or how many rinse cycles it needs?

          I genuinely pity anyone so influenced by marketing that they can’t look at what a feature actually does before deciding they hate it.

          • Carnelian@lemmy.world
            link
            fedilink
            arrow-up
            3
            arrow-down
            1
            ·
            3 months ago

            Those features are literally unrelated to AI, just so you know. It’s comparing sensor outputs to a table. Like all modern laundry machines. The inclusion of “AI” on the label is purely to take advantage of people like you who instantly believe whatever they’re told, even if it’s as outlandish as “your laundry has been optimized” lol

            • ricecake@sh.itjust.works
              link
              fedilink
              arrow-up
              1
              arrow-down
              2
              ·
              3 months ago

              Yeah, I know how it works, and I also know how different types of AI work.

              It’s a field from the 50s concerned with making systems that perceive their environment and change how they execute their tasks based on those perceptions to maximize the fulfillment of their task.

              Yes, all modern laundry machines utilize AI techniques involving interpolation of sensor readings into a lookup table to pick wash parameters more intelligently.
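              Concretely, that technique is just linear interpolation into a calibration table; a toy sketch with made-up numbers:

```python
# Hypothetical calibration table mapping load weight (kg) -> wash minutes.
TABLE = [(0.0, 30.0), (4.0, 45.0), (8.0, 70.0)]

def wash_minutes(load_kg: float) -> float:
    """Interpolate a sensor reading into the lookup table."""
    # Readings outside the table are clamped to its endpoints.
    if load_kg <= TABLE[0][0]:
        return TABLE[0][1]
    if load_kg >= TABLE[-1][0]:
        return TABLE[-1][1]
    # Find the two bracketing rows and interpolate linearly between them.
    for (x0, y0), (x1, y1) in zip(TABLE, TABLE[1:]):
        if x0 <= load_kg <= x1:
            t = (load_kg - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```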

              You’ve let sci-fi notions of what AI is get you mad at a marketing department for realizing that we’re back to being able to label AI stuff correctly.

              • Carnelian@lemmy.world
                link
                fedilink
                arrow-up
                3
                arrow-down
                1
                ·
                3 months ago

                The fact that you’ve been reduced to blabbering about such mundane things in the style of “the ghosts in pac-man technically had AI” tells us everything we need to know here. Have fun arguing with me in the shower about whether or not current trends are just a result of marketing executives finally being liberated to appropriately label the AI they’ve been using for 70 years

    • AutistoMephisto@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      edit-2
      3 months ago

      I mean, they have Alexa-connected refrigerators with a camera inside that sees what you put in and how much, so they can let you know when you’re running low on something, offer to order more before you run out, tell you when something is about to spoil, or flag when the fridge needs cleaning. So I imagine a washer would do something similar?

  • hungryphrog@lemmy.blahaj.zone
    link
    fedilink
    arrow-up
    18
    arrow-down
    1
    ·
    3 months ago

    But then we wouldn’t have to pay real artists for real art anymore, and we could finally just let them starve to death!

    • Hackworth@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      3 months ago

      In film school (25 years ago), there was a lot of discussion around whether or not commerce was antithetical to art. I think it’s pretty clear now that it is. As commercial media leans more on AI, I hope the silver lining will be a modern Renaissance of art as (meaningful but unprofitable) creative expression.

      • ZILtoid1991@lemmy.world
        link
        fedilink
        arrow-up
        6
        ·
        3 months ago

        The issue is that the 8 hours people spend at “real” jobs are a big hindrance and could be spent making art instead, and most of those ghouls now want us to do overtime just for the very basics. Worst case scenario, it’ll be a creativity drought, with idea guys taking the place of real artists by using generative AI. Best case scenario, the AI boom collapses entirely and all the commercial models become expensive to use. Seeing where the next Trump administration will take us, it’s a second Gilded Age + heavy censorship + potential deregulation around AI.

  • WoodScientist@lemmy.world
    link
    fedilink
    arrow-up
    17
    arrow-down
    1
    ·
    3 months ago

    At this point, I’m full on ready to make “thou shalt not make a machine in the likeness of a human mind” international law and a religious commandment. At least that way, we can burn all the AI grifters as witches!

  • Bamboodpanda@lemmy.world
    link
    fedilink
    arrow-up
    14
    arrow-down
    1
    ·
    3 months ago

    AI is one of the most powerful tools available today, and as a heavy user, I’ve seen firsthand how transformative it can be. However, there’s a trend right now where companies are trying to force AI into everything, assuming they know the best way for you to use it. They’re focused on marketing to those who either aren’t using AI at all or are using it ineffectively, promising solutions that often fall short in practice.

    Here’s the truth: the real magic of AI doesn’t come from adopting prepackaged solutions. It comes when you take the time to develop your own use cases, tailored to the unique problems you want to solve. AI isn’t a one-size-fits-all tool; its strength lies in its adaptability. When you shift your mindset from waiting for a product to deliver results to creatively using AI to tackle your specific challenges, it stops being just another tool and becomes genuinely life-changing.

    So, don’t get caught up in the hype or promises of marketing tags. Start experimenting, learning, and building solutions that work for you. That’s when AI truly reaches its full potential.

    • stringere@sh.itjust.works
      link
      fedilink
      arrow-up
      3
      arrow-down
      1
      ·
      3 months ago

      I think of AI like I do apps: every company thinks they need an app now instead of just a website. They don’t, but they’ll sure as hell pay someone to develop an app that serves as a walled garden front end for their website. Most companies don’t need AI for anything, and as you said: they are shoehorning it in anywhere they can without regard to whether it is effective or not.

  • rational_lib@lemmy.world
    link
    fedilink
    arrow-up
    10
    ·
    edit-2
    3 months ago

    I think we’re running out of advancements that make life better, now all technology does is make production cheaper/increase shareholder value.

  • DragonsInARoom@lemmy.world
    link
    fedilink
    arrow-up
    10
    ·
    3 months ago

    But the companies must posture that they’re on the cutting edge! Even if all they do is put the letters “AI” on the box of a rice cooker without changing the rice cooker

    • Zink@programming.dev
      link
      fedilink
      arrow-up
      5
      ·
      3 months ago

      When it comes to the marketing teams in such companies, I wonder what the ratio is between true believers and “this is stupid, but if it spikes the numbers next quarter, that will benefit me.”

      • BigDanishGuy@sh.itjust.works
        link
        fedilink
        arrow-up
        3
        ·
        3 months ago

        You forgot the phase immediately preceding AI: 3D printing.

        I mean, just this decade I’ve heard of cars and airplanes being marketed as having 3D-printed parts.

        • IMALlama@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          3 months ago

          Cars and airplanes do have 3D printed parts. They’re much more common in the prototyping phase, but they are used in production and are making their way to space.

          I completely agree with your general sentiment, though. Any time a new piece of technology shows promise, there are a ton of people who will loudly proclaim that it will completely replace everything, while turning a blind eye to things like scaling and/or practical limitations.

          See also: low/no code, which has roots going back to the 1980s at least.

  • ozoned@lemmy.world
    link
    fedilink
    arrow-up
    9
    arrow-down
    1
    ·
    3 months ago

    Containerize everything!

    Crypto everything!

    NFT everything!

    Metaverse everything!

    This too shall pass.

  • Lila_Uraraka@lemmy.blahaj.zone
    link
    fedilink
    arrow-up
    7
    arrow-down
    1
    ·
    3 months ago

    I hate what AI has become and what it’s being used for. I strongly believe it could have been used far more ethically. A solid example is Perplexity: it shows you the sources being used right at the top, the first thing you see when it gives a response. Everything else is the opposite. Even Gemini, despite it being rather useful in day-to-day life when I need a quick answer and can’t hold my phone, like when I’m driving, doing dishes, or doing yard work with my earbuds in

    • mm_maybe@sh.itjust.works
      link
      fedilink
      arrow-up
      2
      arrow-down
      1
      ·
      3 months ago

      Yes, you’re absolutely right. The first StarCoder model demonstrated that it is in fact possible to train a useful LLM exclusively on permissively licensed material, contrary to OpenAI’s claims.

      Unfortunately, the main concerns of the leading voices in AI ethics at the time this stuff began to really heat up were a) “alignment” with human values / takeover of super-intelligent AI and b) bias against certain groups of humans (which I characterize as differential alignment, i.e. with some humans but not others). The latter group has since published some work criticizing genAI from a copyright and data dignity standpoint, but their absolute position against the technology in general leaves no room for re-visiting the premise that use of non-permissively licensed work is inevitable.

      (Incidentally, they also hate classification AI as a whole, thus smearing AI detection technology, which could help on all fronts of this battle. Here again it’s obviously a matter of responsible deployment; the kind of classification AI that UHC deployed to reject valid health insurance claims, or the target selection AI that the IDF has used, are examples of obviously unethical applications in which copyright infringement would be irrelevant.)