Original question by @BalakeKarbon@lemmy.ml
It seems like a lot of professionals think we will reach AGI within my lifetime. Some credible sources say within 5 years, but who knows.
Either way, I suspect it is inevitable. Who knows what may follow: infinite wealth-gap growth, mass job loss, post-work reforms. I'm not sure.
A bunch of questions bounce around in my head, for example:
- Will private property rights be honored in said future?
- Could Amish communities still exist?
- Is it something we can prepare for as individuals?
I figured it is important to talk about, seeing as it will likely occur in my lifetime and in many of yours.
I don’t think we will be able to achieve AGI through anything other than an absolute accident. We don’t understand our own brains well enough to create one from scratch.
It won’t happen while I’m alive. Current LLMs are basically parrots with a lot of experience and will never get close to AGI. We’re no closer today than when ELIZA first convinced people it was human back in the 1960s.
Experienced parrots that are constantly wrong.
I think it is inevitable. The main flaw I see from a lay perspective in current methodology is trying to make one neural network that does everything. Our own brains are composed of multiple neural networks with different jobs interacting with each other, so I assume that AGI will require this approach.
For example: we are currently struggling with LLM hallucinations. What could reduce this? A separate fact-checking neural network.
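As a toy illustration of what I mean (everything below is a made-up stub, not a real model or API), the pipeline could be shaped like this:

```python
# Toy sketch of a generator/verifier split: one "network" proposes
# claims, and a separate one scores them before anything reaches the
# user. Both are stand-in stubs; a real system would use actual models.
from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    confidence: float  # verifier's score in [0, 1]


def generate_answer(prompt: str) -> list[str]:
    # Stand-in for the generator network (the LLM).
    return [f"claim derived from: {prompt!r}"]


def verify_claim(claim: str) -> float:
    # Stand-in for the fact-checking network; a real one might
    # consult retrieval, citations, or consistency checks.
    return 0.5


def answer(prompt: str, threshold: float = 0.5) -> list[Claim]:
    candidates = [Claim(c, verify_claim(c)) for c in generate_answer(prompt)]
    # Claims scoring below the threshold get dropped (or could be
    # flagged for retry instead).
    return [c for c in candidates if c.confidence >= threshold]


if __name__ == "__main__":
    print(answer("When did a computer first pass the Turing test?"))
```

The point is just the division of labour; how you train the verifier so it doesn’t share the generator’s blind spots is the hard part.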
Please keep in mind that my opinion is almost worthless, but you asked.
Marketing tool. LLMs are not magic, no matter what people think.
I’m more worried about jobs getting nuked no matter what AGI turns out to be. It can be vapourware and the capitalist cult will still sacrifice labour on that altar.
I don’t see any reason to believe anything currently being done is a direct path to AGI. Sam Altman and Dario Amodei are straight-up liars, and the fact that so many people lap up their shameless hype marketing is just sad.
The computer doesn’t even understand things, nor does it ask questions unprompted. I don’t think people understand that it doesn’t understand, lol. Intelligence seems to be non-computational!
Anyone telling you it’s five years away? Check their investments.
Why would AGI threaten the existence of the Amish and/or change laws regarding property rights?
I have no doubt software will achieve general intelligence, but I think the point where it does will be hard to define. Software can already outdo humans at lots of specific reasoning tasks where the problems are well defined. But how do you measure the generality of problems, so you can say “last week our AI wasn’t general enough to call it AGI, but now it is”?
Not without a major breakthrough in knowledge representation.
LLMs aren’t it.
Not happening, IMO. Though it’s important to note that the general public and business sentiment already act as if LLMs are some kinda legitimate intelligence. So I think a pretty ugly acceptance of and hard dependence on these technologies, in the form of altering our public infrastructure and destroying the planet, will lead to some hellscape of a future for sure… All the stuff you mentioned and more, all without even reaching AGI as it is currently understood.
Who knows if AGI is even possible. Maybe it wouldn’t cause the future you described in your post, but would instead help us avoid this nonsense road we are on now.