LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn’t be used for most things that are not serious either.
It’s a shame that, because the same “AI” label gets applied to a whole host of different technologies, LLMs being limited in usability - yet hyped to the moon - ends up hurting other, more impressive advancements.
For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.
Being able to recognise speech in loud environments, or remove background noise from recordings, is improving loads too.
My friend is involved in making a mod for Fallout 4, and there was an outreach for people to record voice lines - she says there are some recordings of dubious quality that would’ve been unusable before, but can now be used without issue thanks to AI denoising algorithms. That is genuinely useful!
The same goes for things like pattern/image analysis, which looks very promising in medical diagnostics.
All of these get branded as “AI”. A layperson might not realise that they are completely different branches of technology, and may therefore reject useful applications of “AI” tech because they’ve learned, after being let down by LLMs, not to trust anything branded as AI.
LLMs are like a multitool: they can do lots of easy things mostly fine, as long as the task isn’t complicated and doesn’t need to be exactly right. But they’re being promoted as a whole toolkit, as if one tool could do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.
Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they’re great at:
- writer’s block - get something relevant on the page to get ideas flowing
- narrowing down keywords for an unfamiliar topic
- getting a quick intro to an unfamiliar topic
- looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)
Some things they’re terrible at:
- deep research - verify everything an LLM generates if accuracy is at all important
- creating important documents/code
- anything else where correctness is paramount
I use LLMs a handful of times a week, and pretty much only when I’m stuck and need a kick in a new (hopefully right) direction.
- narrowing down keywords for an unfamiliar topic
- getting a quick intro to an unfamiliar topic
- looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)
I used to be able to use Google and other search engines to do these things before they went to shit in the pursuit of AI integration.
Google search was pretty bad at each of those, even when it was good. Finding new keywords to use is especially difficult the more niche your area of search is, and I’ve spent hours trying different combinations until I found a handful of specific keywords that worked.
Likewise, search is bad for getting a broad summary, unless someone has bothered to write one up on a blog. Most sources go way too deep, and you still need several of them to piece an overview together.
Fact lookup is one of the better uses for search, but again, I usually need to remember which source had what I wanted, whereas an LLM can usually pull it out for me.
I use traditional search most of the time (usually DuckDuckGo), and LLMs if I think it’ll be more effective. We have some local models at work that I use, and they’re pretty helpful most of the time.
No search engine or AI will be great with vague descriptions of niche subjects because by definition niche subjects are too uncommon to have a common pattern of ‘close enough’.
Which is why I use LLMs to generate keywords for niche subjects. LLMs are pretty good at throwing out a lot of related terminology, which I can use to find the actually relevant, niche information.
I wouldn’t use one to learn about a niche subject, but I would use one to help me get familiar w/ the domain to find better resources to learn about it.
I will say I’ve found LLMs useful for code writing, but I’m not coding anything real at work. Just bullshit like SQL queries or Excel macro scripts or Power Automate crap.
They still fuck up, but if you can read code and have a feel for it, you can walk them to where they need to be (and see where they screwed up).
Exactly. Vibe coding is bad, but generating code for something you don’t touch often but can absolutely understand is totally fine. I’ve used it to generate SQL queries for relatively odd cases, such as CTEs for improving performance for large queries with common sub-queries. I always forget the syntax since I only do it like once/year, and LLMs are great at generating something reasonable that I can tweak for my tables.
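For anyone else who forgets the syntax, here’s a minimal sketch of the CTE shape I mean. Table and column names are made up, and it runs against an in-memory SQLite DB just so the example is self-contained:

```python
# A CTE ("WITH" clause) names a common sub-query once so the main
# query can reuse it instead of repeating the sub-query inline.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway DB just for the demo
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 25.0), (3, 2, 5.0);
""")

query = """
    WITH customer_totals AS (      -- the common sub-query, defined once
        SELECT customer_id, SUM(total) AS spend
        FROM orders
        GROUP BY customer_id
    )
    SELECT customer_id, spend
    FROM customer_totals           -- referenced like an ordinary table
    WHERE spend > 20
"""
for row in conn.execute(query):
    print(row)  # (1, 35.0)
```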
I always forget the syntax
Me with literally everything code I touch always and forever.
Because the tech industry hasn’t had a real hit of its favorite poison, “private equity”, in too long.
The industry has run the same playbook since at least 2006. Likely before, but that’s when I personally started seeing it. My take is that they got addicted to the dotcom bubble and decided they can and should recreate the magic every 3-5 years or so.
This time it’s AI, last it was crypto, and we’ve had web 2.0, 3.0, and a few others I’m likely missing.
But yeah, it’s sold like a panacea every time, when really it’s revolutionary for like a handful of tasks.
That’s because they look like “talking machines” from various sci-fi. Normies feel as if they’re touching the very edge of progress. The rest of our lives and the Internet kinda don’t give that feeling anymore.
It is truly terrible marketing. It’s been obvious to me for years that the value is in giving it to people and enabling them to do more with less, not in outright replacing humans, especially not expert humans.
I use AI/LLMs pretty much every day now. I write MCP servers and automate things with it and it’s mind blowing how productive it makes me.
Just today I used these tools in a highly supervised way to complete a task that would have been a full day of tedious work, all done in an hour. That is fucking fantastic; it means I get to spend that time on more important things.
It’s like giving an accountant excel. Excel isn’t replacing them, but it’s taking care of specific tasks so they can focus on better things.
On the reliability and accuracy front there is still a lot to be desired, sure. But for supervised chats where it’s calling my tools it’s pretty damn good.
Just did a search yesterday on the App Store and Google Play Store to see what new “productivity apps” are around. Pretty much every app now has AI somewhere in its name.
I tried to dictate some documents recently without paying the big bucks for specialized software, and was surprised just how bad Google and Microsoft’s speech recognition still is. Then I tried getting Word to transcribe some audio talks I had recorded, and that resulted in unreadable stuff with punctuation in all the wrong places. You could just about make out what it meant to say, so I tried asking various LLMs to tidy it up. That resulted in readable stuff that was largely made up and wrong, which also left out large chunks of the source material. In the end I just had to transcribe it all by hand.
It surprised me that these AI-ish products are still unable to transcribe speech coherently or tidy up a messy document without changing the meaning.
I don’t know of any basic solutions that are super good, but Whisper and the Whisper derivatives are decent for dictation these days, I hear.
I have no idea how to run them, though.
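In case it helps: a minimal sketch of running Whisper locally with the openai-whisper package, assuming it and ffmpeg are installed (pip install openai-whisper; the file name is a placeholder):

```python
import whisper

# "base" is a small, fast model; larger ones ("small", "medium") are
# slower but noticeably more accurate.
model = whisper.load_model("base")
result = model.transcribe("dictation.mp3")  # placeholder file name
print(result["text"])  # plain-text transcript
```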
I’d compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.
please bro just one hundred more GPU and one more billion dollars of research, we make it good please bro
We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.
And let it suck up 10% or so of all of the power in the region.
So no different than answers from middle management I guess?
At least AI won’t fire you.
Idk, the new iterations might just. Shit, Amazon already uses automated systems to fire people.
It kinda does when you ask it something it doesn’t like.
I’m in a workplace that has tried not to be overbearing about AI, but has encouraged us to use them for coding.
I’ve tried to give mine some very simple tasks like writing a unit test just for the constructor of a class to verify current behavior, and it generates output that’s both wrong and doesn’t verify anything.
I’m aware it sometimes gets better with more intricate, specific instructions, and that I can offer it further corrections, but at that point it’s not even saving time. I would put up with this from a human in the hopes that they would retain the knowledge, but I don’t even have hopes for AI to apply those lessons in new contexts. In a way, it’s been a relief to realize that, just like dotcom, just like 3D TVs, just like home smart assistants, it is a bubble.
I’ve found that as an ambient code completion facility it’s… interesting, but I don’t know if it’s useful or not…
So on average, it’s totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.
It’s exceedingly frustrating and annoying, but I’m not sure I can call it a net loss in time.
So reviewing the proposal for relevance, cutting it off, and editing it adds time to my workflow. Let’s say that on average, for a given suggestion, I spend 5% more time deciding whether to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If the 20% of useful suggestions make those scenarios 500% faster, then I come out ahead overall, even though I’m annoyed 80% of the time. My guess as to whether a suggestion is even worth looking at improves too: if I’m filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), it has a high chance of a substantial match. If I’m doing something even vaguely esoteric, I just ignore the suggestions popping up.
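As a back-of-envelope check on that trade-off (plugging in my rough guesses above, not real measurements):

```python
# 80% of suggestions: pure overhead, the task takes 1.05x as long.
# 20% of suggestions: same 1.05x review overhead, but the task itself
# runs "500% faster", i.e. in a fifth of the time.
baseline = 1.0
with_suggestions = 0.8 * 1.05 + 0.2 * (1.05 / 5)
print(with_suggestions)  # ~0.88, so roughly a 12% net time saving
```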
However, that 20% is still a problem, since I’m maybe too lazy and complacent: spending 100 milliseconds glancing at a word that looks right in review will sometimes fail me, compared to spending the 2-3 seconds it takes to type that same word out by hand.
That 20% success rate, where I can fix up the suggestion and throw away the rest, works for code completion, but prompt-driven tasks are so much worse for me that it’s hard to imagine them being worth the trouble they bring.
The ones being implemented into emergency call centers are better though? Right?
Yes! We’ve gotten them up to 94% wrong at the behest of insurance agencies.
They’ve done studies, you know. 30% of the time, it works every time.
“Gartner estimates only about 130 of the thousands of agentic AI vendors are real.”
This whole industry is so full of hype and scams, the bubble surely has to burst at some point soon.
Wrong 70% doing what?
I’ve used LLMs as a Stack Overflow / MSDN replacement for over a year and if they fucked up 7/10 questions I’d stop.
Same with code, any free model can easily generate simple scripts and utilities with maybe 10% error rate, definitely not 70%
I’m far more efficient with AI tools as a programmer. I love it! 🤷♂️
Definitely at image generation. Getting what you want with that is an exercise in patience for sure.
Wow. 30% accuracy was the high score!
From the article:
Testing agents at the office
For a reality check, CMU researchers have developed a benchmark to evaluate how AI agents perform when given common knowledge work tasks like browsing the web, writing code, running applications, and communicating with coworkers.
They call it TheAgentCompany. It’s a simulation environment designed to mimic a small software firm and its business operations. They did so to help clarify the debate between AI believers who argue that the majority of human labor can be automated and AI skeptics who see such claims as part of a gigantic AI grift.
The CMU boffins put the following models through their paces and evaluated them based on task success rates. The results were underwhelming:
⚫ Gemini-2.5-Pro (30.3 percent)
⚫ Claude-3.7-Sonnet (26.3 percent)
⚫ Claude-3.5-Sonnet (24 percent)
⚫ Gemini-2.0-Flash (11.4 percent)
⚫ GPT-4o (8.6 percent)
⚫ o3-mini (4.0 percent)
⚫ Gemini-1.5-Pro (3.4 percent)
⚫ Amazon-Nova-Pro-v1 (1.7 percent)
⚫ Llama-3.1-405b (7.4 percent)
⚫ Llama-3.3-70b (6.9 percent)
⚫ Qwen-2.5-72b (5.7 percent)
⚫ Llama-3.1-70b (1.7 percent)
⚫ Qwen-2-72b (1.1 percent)

“We find in experiments that the best-performing model, Gemini 2.5 Pro, was able to autonomously perform 30.3 percent of the provided tests to completion, and achieve a score of 39.3 percent on our metric that provides extra credit for partially completed tasks,” the authors state in their paper.
Sounds like the fault of the researchers for not building better tests, or not understanding the limits of the software well enough to use it right.
Are you arguing they should have built a test that makes AI perform better? How are you offended on behalf of AI?
I haven’t used AI agents yet, but my job is kinda pushing for them. But I have used the Google one that creates audio podcasts, just to play around, since my coworkers were using it to “learn” new things. I fed it some of my own writing and created a podcast. It was fun, an audio overview of what I wrote. About 80% was cool analysis, but 20% was straight-out-of-nowhere bullshit (which I know because I wrote the original texts the audio was talking about). I can’t believe people are using this for subjects they have no knowledge of. It is a fun toy for a few minutes (which is not worth the cost to the environment anyway).
I tried to order food at Taco Bell drive through the other day and they had an AI thing taking your order. I was so frustrated that I couldn’t order something that was on the menu I just drove to the window instead. The guy that worked there was more interested in lecturing me on how I need to order. I just said forget it and drove off.
If you want to use AI, I’m not going to use your services or products unless I’m forced to. Looking at you Xfinity.
I notice that the research didn’t include DeepSeek. It would have been nice to see how it compares.
70% seems pretty optimistic based on my experience…
For me as a software developer, the accuracy is more in the 95%+ range.
On one hand, the built-in Copilot chat widget in IntelliJ basically replaces a lot of my Google queries.
On the other hand, it is rather fucking good at executing some rewrites that are a fucking chore to do manually but can easily be done by Copilot.
Imagine you have a script that initializes your DB with some test data. You have an INSERT INTO statement with lots of columns and rows, something like:

INSERT INTO your_table (column1, …, column_n) VALUES (row_1), (row_2), …, (row_n);

Adding a new column with test data for each row is a PITA, but Copilot handles it without issue.
Similarly, when writing unit tests you do a lot of edge-case testing, which means a bunch of almost-identical tests with maybe one variable changing. At most you write one of those tests; after you name the next unit test, Copilot will auto-generate the rest, and it’s pretty good at guessing what you want to do in that test, at least with my naming scheme.
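To make that concrete, here’s the kind of near-identical edge-case test I mean; the function and test names are all made up:

```python
import pytest

def parse_port(value: str) -> int:
    """Tiny hypothetical function under test."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"invalid port: {value}")
    return port

# Write the first test by hand; after naming the next one, Copilot
# usually fills in the body with just the one variable changed.
def test_parse_port_accepts_minimum():
    assert parse_port("1") == 1

def test_parse_port_accepts_maximum():
    assert parse_port("65535") == 65535

def test_parse_port_rejects_zero():
    with pytest.raises(ValueError):
        parse_port("0")

def test_parse_port_rejects_too_large():
    with pytest.raises(ValueError):
        parse_port("65536")
```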
So yeah, it’s way overrated for many, many things, but for programming it’s a pretty awesome productivity tool.
For your database test data, I usually write a helper that defaults those columns to base values, so I can pass in lists of dictionaries; the test cases end up easier to modify and read.
It’s also nice because you’re only including the fields you use in your unit test; the rest get valid defaults you don’t need to care about.
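A rough sketch of the kind of helper I mean, with all names made up:

```python
# Every column gets a sane default; each test case only spells out the
# fields it actually cares about.
DEFAULT_ROW = {
    "name": "Test User",
    "email": "test@example.com",
    "active": True,
    "balance": 0,
}

def make_rows(overrides_list):
    """Merge each dict of per-test overrides onto the defaults."""
    return [{**DEFAULT_ROW, **overrides} for overrides in overrides_list]

# Only the field under test is visible at the call site.
rows = make_rows([
    {"balance": -5},
    {"active": False},
])
print(rows[0]["balance"], rows[1]["active"])  # -5 False
```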
Reading with CEO mindset. 3 out of 10 employees can be fired.