To be honest, I think we’re losing credibility. I don’t know what else to put in the description.
It’s making hobbyist computing expensive, it’s potentially eliminating some of the few actually enjoyable jobs (art, creative works), it’s making websites and applications less secure with vibe coding, and it’s allowing for even more convincing propaganda/bad faith actors to manipulate entire populations…
But hey, at least Elon Musk gets to make naked pictures of kids and still be a billionaire. So there’s that.
Solving a problem we shouldn't have with a tool that works only some of the time.
My thoughts are that the USA is in a far worse position to shoulder and recover from the coming bubble pop, crash, and financial crisis that the mass implementation of AI is about to cause than it was in 2008-2009 when the last crash hit.
Could be wrong though. Maybe all the datacenters will get built on time, and be powered by a sudden breakthrough in nuclear fission, and maybe ~44% of people on the earth will sign up to paid plans with OpenAI so that they can become profitable.
GIGO in its purest form.
AI don’t like it.
AI is a tool, it can be used for good, and it can be used for bad. Right now, the business world is trying to find ways to make it work for the business world - I expect 95 percent of these efforts to die off.
My preference and interest is in local models - smaller, more specialized models that can be run on normal computers. They can do a lot of what the big ones do, without being a cloud service that harvests data.
I have a lot of thoughts on this because this is a complicated topic.
TL;DR: it’s breakthrough tech, made possible by GPUs left over from the crypto hype, but TechBros and Billionaires are dead set on ruining it for everyone.
It’s clearly overhyped as a solution in a lot of contexts. I object to the mass scraping of data to train it, the lack of transparency around what data exactly went into it, and the inability to request that one’s art be excluded from any/all models.
Neural nets as a technology have a lot of legitimate uses for connecting disparate elements in large datasets, finding patterns where people struggle, and more. There is ample room for legitimately curated (vegan? we’re talking consent after all) training data, getting results that matter, and not pissing anyone off. Sadly, this has been obscured by everything else encircling the technology.
At the same time, AI is flawed in practice, as its single greatest strength is also its greatest weakness. “Hallucinations” are really all this thing does; we just call obviously wrong output that because “wrong” is in the eye of the beholder. In the end, these things don’t really think, so they’re not capable of producing right or wrong answers. They just compile stuff out of their dataset by playing the odds on what tokens come next. It’s very fancy autocomplete.
To put the above into focus, it’s possible to use a trained model to implement lossy text compression. You ship a model trained on a boatload of text, prose, and poetry, ahead of time. Then you can send compressed payloads as a prompt. The receiver uses the prompt to “decompress” your message by running it through the model, and they get a facsimile of what you wrote. It won’t be a 1:1 copy, but the gist will be in there. It works even better if it’s trained on the sender’s written work.
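The “plays the odds on what tokens come next” idea can be sketched with a toy bigram model (this is a deliberately crude stand-in for a real LLM; the corpus and function names are made up). A short seed word acts like the compressed payload: anyone holding the same trained model can expand it into a gist of the original text.

```python
# Toy "fancy autocomplete": count which word most often follows each
# word, then greedily generate the most likely continuation of a seed.
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count which words follow it in the corpus."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def autocomplete(follows, seed, length=8):
    """Repeatedly pick the most likely next word; stop when the model
    has never seen the current word (i.e. we fall off the data)."""
    out = [seed]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Tiny stand-in corpus; a real model would be trained on far more text.
corpus = ("the model predicts the next token and the model predicts "
          "the next token again and again")
model = train_bigrams(corpus)
print(autocomplete(model, "the"))
```

Every “generated” bigram is just the highest-probability pair from the training data, which is also why the reconstruction is lossy: you get a plausible continuation in the corpus’s style, not the exact original.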
The hype surrounding AI is both a product of securing investment, and the staggeringly huge levels of investment that generated. I think it’s all caught up in a self-sustaining hype cycle now that will eventually run out of energy. We may as well be talking about Stanley Cups or limited edition Crocs… the actual product doesn’t even matter at this point.
The resource impact brought on by record investment is nothing short of tragic. Considering the steep competition in the AI space, I wager we have somewhere between 3-8x as much AI-capable hardware deployed as we could ever possibly use at the current level of demand. While I’m sure everyone is projecting for future use, and “building a market” (see hype above), I think the flaws and limitations in the tech will temper those numbers substantially. As much as I’d love some second-hand AI datacenter tech after this all pops, something tells me that’s not going to be possible.
Meanwhile, the resource drain on adjacent tech markets has punched down even harder on anyone who might compete, let alone just use their own hardware; I can’t help but feel that’s by design.
We are still figuring out what the current crop of LLMs are useful for, and we have many more innovations to look forward to.
Name an AI company that isn’t supporting the rise of fascism.
AI is fascism. If you don’t reject AI, you accept fascism.



