LLMs are a dead end, and the massive amounts of money being wasted on them will make people too scared to invest in other forms of AI.
So we are currently at a local maximum that we won’t overcome for 10 years. It will take much longer before we try a different approach to create “AGI,” and the money wasted on LLMs will slow other forms of AI research, leaving us stagnating for more than a decade.
I think that all depends on what else there is to invest in. As terrible as AI is, it’s carrying the stock market right now. Investors need something else to turn to before they divert away from AI.
I’m not convinced that investors would know the difference between a company trying to improve LLMs and one taking a new approach. So I don’t think it will stifle investment in other forms of AI research.
I also don’t think they are a dead end overall. They sure aren’t likely to get to AGI, but you don’t need AGI to be useful.
You have to convince investors why your AI research won’t hit a wall like LLMs are now - they’ve poisoned the term “AI”
They’re a dead end insofar as they already do all they’ll ever be able to do; if you can find a use for them at their current level, great, but it does not look likely they will be able to do much more than they currently can.
I dunno. Investors are still lined up to invest in AI startups from what I hear. But that isn’t much evidence.
That said, individually, LLMs may have hit a wall. But there is plenty of room to optimize them, and lots of ways to combine them. Their uses are still in their infancy. Take Grafana: it doesn’t support personal API keys, so I can’t give the AI access to test and iterate on solutions yet. Lots of software is like that. The LLM doesn’t need to change; the software we use needs to support it, first with access, then with guardrails like fine-grained access controls so we can trust that the AI can’t do things we don’t want it to. Then we can really experiment to find out what it can do.
And really, the answer to getting more out of AI is parallelism. So as they optimize it to make it less expensive, we will be able to use parallelism to get more out of it, without fundamentally changing it.
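To make the parallelism point concrete, here is a minimal sketch of fanning independent prompts out to an LLM concurrently. `query_llm` is a hypothetical stand-in stub, not a real API; a real version would make a network call to whatever model endpoint you use.

```python
# Fan-out parallelism sketch: independent prompts run concurrently,
# so throughput grows with worker count (up to rate limits) without
# changing the model at all.
from concurrent.futures import ThreadPoolExecutor

def query_llm(prompt: str) -> str:
    # Hypothetical stub; a real implementation would call an LLM API here.
    return f"answer to: {prompt}"

def fan_out(prompts: list[str], workers: int = 8) -> list[str]:
    # Each request is independent, so we can dispatch them all at once
    # and collect the results in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(query_llm, prompts))

results = fan_out(["summarize log A", "summarize log B", "summarize log C"])
```

Threads work here because the real workload would be network-bound waiting on API responses, not CPU-bound.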
There is a lot of room to grow the uses of the current AIs while we wait for some totally new approach to come along and get us to AGI. We aren’t ready for that now anyway. In 15 or 20 years, maybe we will be.
Narrow AI will get better, even faster than it otherwise would because of the research that big AI companies are doing now, but attempts at more general AI will stop being profitable.
- machine learning models will continue to improve their output somewhat, but gains will be incremental and the intrinsic problems with ML-derived content (e.g. hallucinations, context window limitations, long-term coherency) will remain
- open source models will catch up with commercial ones
- the smaller ML companies (like OpenAI and Anthropic) will be absorbed, probably by Microsoft and Amazon
- The increasing cost of hardware and energy will force companies to raise prices for ML subscriptions and eventually lock ML features behind paywalls
- Computer parts will remain expensive for a long time
- Programmers will collectively spend the next decade wrestling with the consequences of filling their codebases with millions of lines of AI-generated code
- Google images will never fully recover
The only AI companies that will exist in 10 years will be those started by a large company that has other unrelated profit streams. Such as Microsoft, Google, Amazon etc. All others will fail. Some will be bought by the big players if they develop a unique technology. Otherwise they all go broke.
If I had to guess, there will be only two major AI/LLM companies in existence. The nature of LLMs means small companies and organizations can’t scale one to be profitable.
Micron comes back to the consumer market, but has to rebrand due to consumer ire over them being assholes. Same with Western Digital, although they haven’t “technically” left the consumer market.
The next 5 years will be spent by people trying to find SOMETHING for AI to do. Some very high-end uses in research or academics will be found. However, those will cost massive amounts of money and only be available to governments, large corporations, and academic institutions. Consumers will be left with creating images, music, and a few other parlor tricks, but there will be nothing of any true value offered. In the meantime, AI images and videos will be used to exacerbate the societal/cultural issues across the globe, until the population becomes so jaded and cynical that this media loses efficacy. By that time enormous damage will have been done.
Consumers will also be left paying for the electricity, water, and other resources that the remaining data centers will consume.
I’m currently looking heavily into installing solar on my home, with a battery backup, just because of these stupid data centers. It’s only a matter of time before these things start causing issues on the grid.
Bubble go burst
People hate LLMs because of their unreliability, and they are right. But AI is a much more vast field.
As soon as we have more reliable, causal and general intelligence, the opinions will change.
I personally believe that humans have no clue how limited our brain power is. So much so that there will be no AGIs. Only ASIs. Same thing that happened with chess bots.
Well, assuming some AGI breakthrough doesn’t happen (which would in my opinion require a vastly different approach than LLMs), we will see more of this AI swarm type stuff. Essentially you end up with a bunch of specialized AIs, and then some AI coordinators. The AI that we talk to will just farm out the work to other AIs, including ones specialized in verifying the work that the AI does.
Most people pre-AI did work that was, say, 60% implementation, 30% figuring out what needs to be done, and 10% verifying what was done. That will shift to 15% implementation, 50% requirements gathering, and 35% verification.
Obviously those numbers are just to show the shift, not intended to be an accurate representation of how our work is currently divided. Overall, if you give AI a way to verify what it is doing, and let it iterate, it is far more useful than just telling it to do a thing or asking it a question.
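The coordinator/specialist/verifier idea above can be sketched as a simple dispatch-and-check loop. Everything here is a hypothetical stub (the "specialist" and "verifier" would be real model calls in practice); the point is only the control flow: farm out, verify, iterate.

```python
# Sketch of the "AI swarm" pattern: a coordinator farms a task out to a
# specialist agent, has a separate verifier check the result, and retries
# until the work passes verification or the budget runs out.
def specialist(task: str) -> str:
    # Stub "implementation" agent; a real one would be a specialized model.
    return task.upper()

def verifier(task: str, result: str) -> bool:
    # Stub "verification" agent; a real one would check the work independently.
    return result == task.upper()

def coordinator(task: str, max_iters: int = 3) -> str:
    # The verify-and-iterate loop: don't trust one shot, loop until verified.
    for _ in range(max_iters):
        result = specialist(task)
        if verifier(task, result):
            return result
    raise RuntimeError("no verified result within iteration budget")
```

The design choice the thread is pointing at is that verification is a separate agent from implementation, so the coordinator never accepts unchecked work.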
Me: first robotic AI, but probably > 10yrs



