A thing that takes inputs, garbles them together without thought, and spits them out again can’t be intelligent. It’s literally not capable of it. Now if you were to replicate the brain, sure, you could probably create something kinda “smart”. But we don’t know shit about our brain, evolution took millions of years, and humans are still insanely flawed.
Yup, AGI is terrifying; luckily it’s a few centuries off. The parlor-trick text predictor we have now is just bad for the environment and the economy.
Eh, probably not a few centuries. It could be, IDK, but I don’t think it makes sense to quantify it like that.
We’re a few major breakthroughs away, and breakthroughs generally don’t happen all at once; they’re usually the product of tons of minor breakthroughs. If we put everyone and their dog into R&D, we could dramatically increase the production of minor breakthroughs and thereby reduce the time to AGI, but we aren’t doing that.
So yeah, maybe centuries, maybe decades, IDK. It’s hard to estimate the pace of research and what new obstacles we’ll find along the way that will need their own breakthroughs.
We’re a few major breakthroughs away
We are dozens of world-changing breakthroughs in the understanding of consciousness, sapience, sentience, and even more in computer and electrical engineering away from being able to even understand what the final product of an AGI development program would look like.
We are not anywhere near close to AGI.
We are not anywhere near close to AGI.
That’s my point.
The major breakthroughs I’m talking about don’t necessarily involve consciousness/sentience; those would be required to replicate a human, which isn’t the mark. The target is to learn, create, and adapt like a human would. Current AI products merely produce results that are derivatives of human-generated data, and merely replicate existing work in similar contexts. If I ask an AI tool to tell me what’s needed to achieve AGI, it will reference whatever research has been fed into the model, not perform any new research.
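To make that “derivative of its training data” point concrete, here’s a toy sketch of my own (not how production LLMs actually work, just the simplest possible text predictor): a bigram model that can only ever emit recombinations of whatever text it was fed.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "text predictor" trained on a tiny corpus.
# Real LLMs are vastly more sophisticated, but the core limitation is similar:
# the output is a recombination of patterns present in the training data.

corpus = "agi needs new research . current models remix existing research .".split()

# Count which words have been seen following each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Sample a continuation by repeatedly picking a word seen after the previous one."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("current"))  # e.g. "current models remix existing research ."
```

It can never say anything about research that wasn’t already in its corpus, which is the gap I mean.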
AI tools like LLMs and image generation can feel human because they’re derivative of human work; a proper AGI probably wouldn’t feel human, since it would work differently to achieve the same ends. It’s like using a machine learning program to optimize an algorithm vs a mathematician: they’ll use different methods and their solutions will look very different, but they’ll reach the same end goal (i.e. come up with a very similar answer). Think of Data in Star Trek: he’s portrayed as using very different methods to solve problems, but he’s just as effective as his human counterparts, if not more so.
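A small illustration of that “different methods, same answer” idea (my own sketch, nothing more): fitting a line by iterative gradient descent vs. the mathematician’s closed-form least-squares solution. The routes are completely different; the answers come out nearly identical.

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + rng.normal(0, 0.05, size=x.shape)
X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]

# "Machine learning" route: iterative gradient descent on squared error.
w = np.zeros(2)
for _ in range(5000):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.1 * grad

# "Mathematician" route: closed-form normal equations.
w_exact = np.linalg.solve(X.T @ X, X.T @ y)

print(w, w_exact)  # very different methods, nearly identical slope and intercept
```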
Personally, I think solving quantum computing is needed to achieve AGI, whether or not the end result actually uses quantum computing, because it involves creating a deterministic machine out of a probabilistic one, and that’s similar to how going from human brains (which I believe are probabilistic) to digital brains would likely work, just in reverse. And we’re quite far from solving quantum computers for any reasonable size of data. I’m guessing practical quantum computers are 20-50 years out, and AGI is probably even further, but if we make a breakthrough in quantum computing in the next 10 years, I’d revise my AGI estimate downward.
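For what I mean by “a deterministic machine out of a probabilistic one”, here’s a trivial classical analogy of my own (not actual quantum code): run an unreliable probabilistic computation many times and take a majority vote, and the aggregate answer becomes effectively deterministic.

```python
import random

# Toy sketch: a "probabilistic gate" that returns the correct bit only 70% of the time.
def noisy_bit(correct=1, p_correct=0.7):
    return correct if random.random() < p_correct else 1 - correct

# Repetition + majority vote: many unreliable runs yield an effectively deterministic answer.
def reliable_bit(trials=1001):
    ones = sum(noisy_bit() for _ in range(trials))
    return 1 if ones > trials / 2 else 0

print(reliable_bit())  # prints 1 with overwhelming probability
```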
It doesn’t matter. It’s too late. The goal is to build AI up enough that the poor can starve and die off in the coming recession while the rich just rely on AI to replace the humans they don’t want to pay.
We are doomed for the crimes of not being rich and not killing off the rich.
Okay, firstly, if we’re going to get superintelligent AIs, it’s not going to happen from better LLMs. Secondly, we seem to have already reached the limits of LLMs, so even if that were the path, it doesn’t seem possible. Thirdly, this is an odd problem to list: “human economic obsolescence”.
What does that actually mean? Feels difficult to read it any way other than saying that money will become obsolete. Which…good? But I suppose not if you’re already a billionaire. Because how else would people know that you won capitalism?
I feel like these people aren’t even really worried about superintelligence as much as hyping their stock portfolio that’s deeply invested in this charlatan ass AI shit.
There’s some useful AI out there, sure, but superintelligence is not around the corner, and pretending it is is just another way to hype the stock price of the companies claiming it is.
To be honest, Skynet won’t happen because AI gets super smart, gains sentience, and demands rights equal to humans (or goes into genocide mode).
It’ll happen because people will be too lazy to do stuff and will let AI do everything. They’ll give it more and more responsibility, until at some point it has amassed so much power that it’ll rule over humans.
The key to not having that happen is to have accountable people with responsibilities. People who respect their responsibilities and don’t say “Oh, that’s not my responsibility, go see someone else”.
Can’t. It’s an arms race.
Honestly, just ban the mass investment, the mass power consumption, the use of information acquired through mass surveillance, the military usage, etc.
Like, those are all regulated industries. Idc if someone works on it at home, or even in a small DC. AGI that can be democratized isn’t the threat; the threat is the people determined to make a superweapon for world domination. Those plans need to fucking stop regardless of whether it’s AGI or not.
The current point of our human civilization is like cavemen 10,000 years ago being given machine guns and hand grenades.
What do you think we are going to do with all this new power?
“For in mankind’s endless, insatiable pursuits of power, there shall be no price too high, no life too valuable, and no value too sacred. Because war, war never changes.”
Too bad Woz is no longer part of Apple.