Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
Long, long before this AI craze began, I was warning people as a young 20-something political activist that we needed to push for Universal Basic Income, because the inevitable march of technology would mean that labor itself would become irrelevant in time, and that we needed to hash out a system to maintain the dignity of every person now rather than wait until the system is stressed beyond its ability to cope with massive layoffs and entire industries taken over by automation/AI. When the ability of the average person to sell their ability to work becomes fundamentally compromised, capitalism will collapse in on itself - I’m neither pro- nor anti-capitalist, but people have to acknowledge that nearly all of western society is based on capitalism, and if capitalism collapses then society itself is in jeopardy.
I was called alarmist, that such a thing was a long way away and we didn’t need “socialism” in this country, that it was more important to maintain the senseless drudgery of the 40-hour work week for the sake of keeping people occupied with work but not necessarily fulfilled because the alternative would not make the line go up.
Now, over a decade later, generative AI has completely infiltrated almost all creative spaces, nobody except tech bros and C-suite executives is excited about that, and we still don’t have a safety net in place.
Understand this - I do not hate the idea of AI. I was a huge advocate of AI, as a matter of fact. I was confident that the gradual progression and improvement of technology would be the catalyst that could free us from the shackles of the concept of a 9-to-5 career. When I was a teenager, there was this little program you could run on your computer called Folding At Home. It was basically a number-crunching engine that used your GPU to fold proteins, and the data was sent to researchers studying various diseases. It was a way for my online friends and me to flex how good our PC specs were with the number of folds we could complete in a given time frame, and we got to contribute to a good cause at the same time. These days, they use AI for that sort of thing, and that’s fucking awesome. That’s what I hope to see AI do more of - take the rote, laborious, time-consuming tasks that would take one or more human beings a lifetime to accomplish using conventional tools, and have the machine assist in compiling and sifting through the data to find all the most important aspects. I want to see more of that.
I think there’s a meme floating around that really sums it up for me. Paraphrasing, but it goes: “I thought that AI would do the dishes and fold my laundry so I could have more time for art and writing, but instead AI is doing all my art and writing so I have time to fold clothes and wash dishes.”
I think generative AI is both flawed and damaging, and it gives AI as a whole a bad reputation because generative AI is what the consumer gets to see, and not the AI that is being used as a tool to help people make their lives easier.
Speaking of that, I also take issue with the fact that we are more productive than ever before, and AI will only continue to improve that productivity margin, but workers and laborers across the country will never see a dime of compensation for that. People might be able to do the work of two or even three people with the help of AI assistants, but they certainly will never get the salary of three people, and it means that two out of those three people probably don’t have a job anymore if demand doesn’t increase proportionally.
I want to see regulations on AI. Will this slow down the development and advancement of AI? Almost certainly, but we’ve already seen the chaos that unfettered AI can cause to entire industries. It’s a small price to pay to ask that AI companies prove that they are being ethical, that their work will not damage the livelihood of other people, and that their success will not be built on the backs of other people’s creative endeavors.
Fwiw, I’ve been getting called an alarmist for talking about Trump’s and Republicans’ fascist tendencies since at least 2016, if not earlier. I’m now comfortably living in another country.
My point being that people will call you an alarmist for suggesting anything that requires them to go out of their comfort zone. It doesn’t necessarily mean you’re wrong; it just shows how stupid people are.
Did you move overseas? And if you did, was it expensive to move your things?
It wasn’t overseas but moving my stuff was expensive, yes. Even with my company paying a portion of it. It’s just me and my partner in a 2br apartment so it’s honestly not a ton of stuff either.
Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I’d love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.
I don’t love the environmental effects but I think the carbon output of OpenAI is probably less than TikTok, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I’m not sure it’s enough to really matter. I think we need bigger “what if” scenarios to handle that.
TBH, it’s mostly the corporate control and misinformation/hype that’s the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people’s capacity for critical thinking.
ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights…
So yeah, uh… Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where it only becomes viable for non-trivial use cases.
I’m perfectly ok with AI, I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!
It needs extensive regulation, but doing so requires tech literate politicians who actually care about their constituents. I’d say that’ll happen when pigs fly, but police choppers exist so idk
Regulate its energy consumption and emissions - the entire AI industry, as a whole. Any energy or emissions expended to develop, train, or operate AI should be limited.
If AI is here to stay, we must regulate what slice of the planet we’re willing to give it. I mean, AI is cool and all, and it’s been really fascinating watching how quickly these algorithms have progressed. Not to oversimplify it, but a complex Markov chain isn’t really worth the energy consumption that it currently requires.
Strict regulation now would be a leg up in preventing any rogue AI, or runaway algorithms that would just consume energy to the detriment of life. We need a hand on the plug. Capitalism can’t be trusted to self-regulate. Just look at the energy grabs all the big AI companies have been doing already (xAI’s datacenter, Amazon and Google’s investments into nuclear). It’s going to get worse. They’ll just keep feeding it more and more energy. Gutting the planet to feed the machine, so people can generate sexy cat girlfriends and cheat on their essays.
We should be funding efforts to utilize AI more for medical research: protein folding, developing new medicines, predicting weather, communicating with nature, exploring space. We’re thinking too small. AI needs to make us better. With how much energy we throw at it, we should be seeing something positive out of that investment.
I want disclosure. I want a tag or watermark to let people know that AI was used. I want to see these companies pay dues for the content they used, in a similar vein to how we have to pay for higher learning. And we need to stop calling it AI as well.
Serious investigation into copyright breaches done by AI creators. They ripped off images and texts, even whole books, without the copyright owners’ permission.
If any normal person broke the laws like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.
I just ask for the law to be applied to all equally. What a surprising concept…
I do not need AI and I do not want AI; I want to see it regulated to the point that it becomes severely unprofitable. The world is burning and we are heading face first towards a climate catastrophe (if we’re not already there). We DON’T need machines to mass-produce slop.
I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.
I dunno. It’s better than their old, non-AI slop 🤷
Before, I didn’t really understand what they were trying to communicate. Now—thanks to AI—I know they weren’t really trying to communicate anything at all. They were just checking off a box 👍
Part of what makes me so annoyed is that there’s no realistic scenario I can think of that would feel like a good outcome.
Emphasis on realistic, before anyone describes some insane turn of events.
Some jobs are automated and prices go down. That’s realistic enough. To be fair there’s good and bad likely in that scenario. So tack on some level of UBI. Still realistic? That’d be pretty good.
I’m afraid I can only give partial credit, my grading rubric required a mention of “purchasing power”.
I think somehow incentivizing companies to use solar power to power their data centers would be a step in the right direction
That would be a win. I think they’re currently angling for more nuclear energy. Because of course.
I’d prefer solar to nuclear and oil, but I’d also take nuclear over oil any day
People have negative sentiments towards AI under a capitalist system, where the most successful is equal to the most profitable, and that does not translate into the most useful for humanity.
We have the technology to feed everyone, and yet we don’t. We have the technology to house everyone, and yet we don’t. We have the technology to teach everyone, and yet we don’t.
Capitalist democracy is not real democracy.
This is it. People don’t have feelings for a machine. People have feelings for the system and the oligarchs running things, but said oligarchs keep telling you to hate the inanimate machine.
For it to go away just like Web 3.0 and NFTs did. Stop cramming it up our asses in every website and application. Make it opt in instead of maybe if you’re lucky, opt out. And also, stop burning down the planet with data center power and water usage. That’s all.
Edit: Oh yeah, and get sued into oblivion for stealing every copyrighted work known to man. That too.
Edit 2: And the tech press should be ashamed for how much they’ve been fawning over these slop generators. They gladly parrot press releases, claim it’s the next big thing, and generally just suckle at the teat of AI companies.
I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.
Then I want all AI that is offered commercially or in commercial products to be required to verify their training data, and to be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts.
AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.
So, a lot of our AI customers have no real use for LLMs. They’re pharmaceutical and genetics companies looking for the treatments and cures for things like pancreatic cancer and Parkinson’s.
It is a big problem to paint all generative AI with the “stealing IP” brush.
It seems likely to me that an AI may be the only controller that can handle all of the rapidly changing parameters needed to maintain a safe fusion process. Yes it needs safeties. But it needs research, too.
I urge much more consideration of the specific uses of this new technology. I agree that IP theft is bad. Let’s target the bad parts carefully.
I’m generally pro-AI, but I agree with the argument that having big tech hoard this technology is the real problem.
The solution is easy and right there in front of everyone’s eyes. Force open source on everything. All datasets, models, model weights, and so on have to be fully transparent. Maybe even hardware firmware should be open source.
This will literally solve every single problem people have other than energy use which is a fake problem to begin with.
I’d like there to be a web-wide expectation by everyone that any AI-generated text, comment, story, or image be clearly marked as AI. That people would feel incensed and angry when it wasn’t labeled so, rather than wondering whether there was a person with a soul producing the content, or losing faith that real info could be found online.