You can take “justifiable” to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.

  • ace_garp@lemmy.world
    30 days ago

    Scientific use on your own massive data sets(think 100s of TB) - Sure

    Consumer chatbot uses - May give the illusion of positive results, whereas the long-term outcome is an overall negative effect on the user.

  • Dumhuvud@programming.dev
    29 days ago

    GenAI is a plagiarism machine. If you use it, you’re complicit.

    Ethics aside, LLMs in particular tend to “hallucinate”. If you blindly trust their output, you’re a dumbass. I honestly feel bad for young people who should be studying but are instead relying on ChatGPT and the likes.

  • Canopyflyer@lemmy.world
    29 days ago

    LLMs have their uses, there is no doubt about that. I’m in the middle of creating a homebrew campaign for my D&D group, and unfortunately I’m a lousy artist but wanted a few things visualized. So I used an image-generating AI to create something with the visuals I wanted. I’ll use it for my campaign, and afterwards it will probably just sit on my hard drive.

    My employer is rolling out AI and is asking us to find places to insert it into our workflows. I am doing that with my team, but none of us are really sure if it will be of any benefit.

    The problem right now is that we’re at the stage where idiots are convinced it is something that it is not, and they have literally thrown tens of billions of dollars at it. Now they are staring into the wide abyss between the amount of money they invested and the amount of money people are willing to pay for it.

    I’ve seen arguments for and against the existence of an AI bubble. Personally, I think it’s a bubble so large that it will take down several long-established computer industry manufacturers when it pops. Those arguing there is no bubble probably have large investments that they do not want to see fail.

  • Strider@lemmy.world
    30 days ago

    No, never.

    Mostly because it’s illegally trained, a fact that is very often just overlooked because, you know, there are no other easy options. Don’t let them keep playing by a different set of rules.

  • venusaur@lemmy.world
    30 days ago

    For sure. You could absolutely create and train a model ethically. It wouldn’t be nearly as useful in many respects, but it would still be gen AI. From an environmental perspective, you could ask the same question of CPU-intensive gaming: people play games for hours, using similar (often more) electricity than a small locally run LLM.

  • GreenKnight23@lemmy.world
    29 days ago

    no, never.

    AI is fascism. it all supports fascist companies that wish for nothing more than to enslave you.

    why anyone would want to support the thing that wants you enslaved or dead is beyond me.

  • MerryJaneDoe@lemmy.world
    29 days ago

    It’s not ready for commercial use by the general public.

    We see this ALL the time in America - a new disruptive technology emerges. We jump all over the benefits and the profits without regard to consequences or expense. We suffer.

    New cheap pesticide? Hell yeah, spray that DDT everywhere, it’s super effective! (Insert other endless examples here, from microplastics to asbestos.)

    AI (and information technology in general) has shown itself to be a danger to human beings. Its effects are not felt so much in the short term (5 or 10 years) but generationally. We’ve seen that information technology has already impacted quality of life. It’s used as spyware, as a tool to collect and correlate massive amounts of data. It’s used to shape our media experience, our purchasing, our social circles. There are great things, like online banking. But they seem more and more to be outweighed by a loss of humanity. So much misinformation that I question my own reality some days.

    What we call “AI” is the evolution of these obtrusive, coercive practices. It exists purely to replace human thinking skills. I’ve spent a bit of time in r/teachers over the last 15 years, and the stories keep getting worse. The rise of AI means that detecting plagiarism/cheating is exponentially more difficult. But, more importantly, the kids don’t have any stress when it comes to cheating. They don’t have to find a friend or know the bare minimum. They can just…cheat. And they never learn to problem solve or overcome adversity.

    None of this matters, though. Ready or not, here we are. A new kind of slavery for a new world order.

    • ImmersiveMatthew@sh.itjust.works
      29 days ago

      You raise many good points, but social media also has benefits and is not all just negative. Same with AI and all tech. We are better off overall with tech despite the downsides which we should be doing a better job of mitigating.

      • MerryJaneDoe@lemmy.world
        29 days ago

        despite the downsides which we should be doing a better job of mitigating.

        This is the part where I lose faith. We have failed to mitigate the downsides. In fact, we have encouraged the monetization of the downsides.

  • CoffeeTails@lemmy.world
    29 days ago

    If it truly helps you, I think that might be enough for me. I say truly because you need to use AI responsibly so you don’t ruin yourself. Like, don’t let it think for you. Don’t trust everything it says.

    I use it a lot when applying for jobs, something I’ve struggled with on and off for 12 years. I suck at writing cover letters and CVs. It used to take me 2-3 days to tailor a cover letter for a job because it takes so much energy. With AI that is down to 1-2 days.

    It’s also great for explaining things in other words, or if you’re trying to look up something that’s hard to search for. I don’t have any examples, though.

    I used to use it to help me formulate sentences, since English isn’t my first language. Now I use Kagi Translate instead.

    • Apytele@sh.itjust.works
      29 days ago

      Yeah I use it to break up my ADHD monosentence paragraphs. I’ll tell it to avoid changing my wording (it can add definitions if it thinks the word is super niche or archaic) but mostly break things up into more readable sentences and group / reorder sentences as needed for better conceptual flow. It’s actually a pretty good low level editor.

    • Q'z@programming.dev
      29 days ago

      re: applying for jobs

      Not criticizing your use to write your CV specifically.

      But in general, I wonder where this arms race is going? Companies using AI to pre-filter applications, because they get too many. Applicants then using AI to write their CVs, because they have to apply so many times, because they automatically get rejected.

      Basically in the end the entire process will be automated, and there won’t be any human interaction anymore… just LLMs generating and choosing CVs. Maybe I’m too pessimistic, but that’s the direction we’re headed in imo.

      • CoffeeTails@lemmy.world
        29 days ago

        It does feel like that sometimes! It’s very sad that recruiting has lost the human touch. Recruiters seem blinded by years-of-experience requirements and box-checking when they should recruit on personality, because a person can always learn. But you can’t really do much about a shitty personality, except if you see that spark underneath it all. Some people just need a real chance and to be believed in.

        A lot of recruiters don’t even want the cover letter anymore, some have a few questions and some only go by the CV.

      • AA5B@lemmy.world
        28 days ago

        We’re already there. You already read about people applying to hundreds of companies to get a single offer.

        Even worse than the rejections are the fake jobs - typically a recruiter trying to build up a file of applicants by scamming you into applying for something that doesn’t exist.

        The only part left to automate is the actual finding and applying. I’ve been lucky not to have to apply for a number of years, so maybe it has changed, but there never seemed to be a good way to automate finding the hundreds of openings and sending the applications. Job application sites are determined to be middlemen but don’t actually seem to make the process more efficient.

  • goat@sh.itjust.works
    29 days ago

    It’s as useful as a rubber duck. Decent at bouncing ideas off it when no one is available, or you can’t be bothered to bother people about dumb ideas.

    But at the moment, no, it’s not justifiable as it directly fuels oligarchies, fascism in the US, and tech bros. Perhaps when the bubble pops.

      • AA5B@lemmy.world
        28 days ago

        To do what? I’m fairly optimistic about narrower LLMs embedded into tools. They don’t need to be as comprehensive, so they’re more easily self-hosted. In more complex tools, they can tie together search, database queries, and reporting, or make it easier to find a setting you don’t know the terminology for.

        I’ve had some luck self-hosting a small AI to interpret natural-language voice commands for home automation.
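
        For anyone curious, a minimal sketch of the kind of glue involved (assumes a local Ollama instance; the model name, prompt, and `{action, device}` intent schema are placeholders I made up, not anything Home Assistant-specific):

```python
import json
import urllib.request

# Illustrative prompt: ask the model to reply with a structured intent.
PROMPT = (
    "Convert this home-automation command to JSON with keys "
    '"action" and "device". Reply with JSON only.\n'
    "Command: {command}"
)

def parse_intent(raw_reply: str) -> dict:
    """Pull the first {...} JSON object out of the model's reply."""
    start = raw_reply.index("{")
    end = raw_reply.rindex("}") + 1
    return json.loads(raw_reply[start:end])

def interpret(command: str, host: str = "http://localhost:11434") -> dict:
    """Send one non-streaming generate request to a local Ollama server."""
    body = json.dumps({
        "model": "llama3.2",  # any small local model you have pulled
        "prompt": PROMPT.format(command=command),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_intent(json.loads(resp.read())["response"])
```

        The parse_intent helper is defensive on purpose: small models tend to wrap the JSON in extra chatter, and you still want to check the parsed intent against a whitelist of known devices before acting on it.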

        • epicshepich@programming.dev
          28 days ago

          Yeah, all of your use-cases are what I see as positive use cases for LLMs. I’ve got an Ollama instance hooked up to Home Assistant, but it does not work very well haha. Haven’t had the time to troubleshoot it.

        • epicshepich@programming.dev
          29 days ago

          Can the rubber ducky use case really be considered plagiarism? I think it’s unequivocal that the models were trained on copyrighted data in a way that, if not illegal, is at the very least unethical. Letting AI write stuff for you seems a lot more problematic than using it to bounce ideas off of or talk things through.

          • goat@sh.itjust.works
            29 days ago

            Plagiarism if it uses art, yeah.

            For LLMs, not so much, since you can’t really own Reddit comments.

  • CodenameDarlen@lemmy.world
    30 days ago

    Ask programmer bros working in corporate hell… It’s almost mandatory today if you want to earn money programming.

    If you’re in a dev company that doesn’t require AI, it’s just a matter of time.

    I think programmers are responsible for something like 90% of AI’s environmental impact. I have a friend who works at a big company; they use AI literally everywhere you can imagine, even on Slack to answer colleagues’ messages. They need to feed huge codebases into it for context, so in the end it’s more resource-hungry than generating video or images a few times a day.

  • Quazatron@lemmy.world
    30 days ago

    It’s not going away. The cat is out of the bag.

    As with any tool it has its use cases. It’s not a good fit for everything. You can drive a screw with a hammer but a screwdriver works best.

    We’re experiencing the capitalist euphoria that happens when something new comes along. This needs to get regulated into submission like all the previous bubbles.

  • Atomic@sh.itjust.works
    28 days ago

    I’ve always said I think it’s fine for filler content. It can let small teams quickly populate their world with background stuff that you never notice, except when it’s not there.

    But with great power comes great responsibility, and I don’t necessarily think most can handle that.