Artificial Generalized Incompetence
The United States of America. A nation ruled by word salad.
How about the outlet checks and finds out?
I did, and I couldn’t get low-temperature Gemini or a local LLM to replicate it, and not all the tariffs seem to be based on the trade deficit ratio, though some suspiciously are.
Sorry, but this is a button of mine: outlets that ask stupidly easy-to-verify questions but don't even try. No, just cite people on Reddit and Twitter…
though some suspiciously are.
Some? A huge portion are. Numerous others have replicated it with visual proof. I agree that the news sites should be verifying it, but NYT did and also documented their proof.
Appears to be that calculation, with a minimum of 10%.
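The rule being described above can be written down directly. This is a minimal sketch of the widely reported reverse-engineered formula (half the trade-deficit-to-imports ratio, floored at 10%); the function name and the dollar figures below are hypothetical, not actual trade data.

```python
def implied_tariff(trade_deficit: float, imports: float) -> float:
    """Tariff rate implied by the reported deficit-ratio rule, floored at 10%.

    Hypothetical helper for illustration: rate = max(10%, (deficit / imports) / 2).
    """
    if imports <= 0:
        return 0.10  # no imports to divide by: fall back to the baseline rate
    return max(0.10, (trade_deficit / imports) / 2)

# Hypothetical country we import $100B from, with a $67B deficit:
# ratio 0.67, halved to 0.335, above the floor.
print(implied_tariff(67, 100))

# A country we run a surplus with still gets the 10% floor:
print(implied_tariff(-5, 50))
```

The 10% floor is why even surplus countries show up with a nonzero rate in the posted tables.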
Thanks, much appreciated.
Are you annoyed that they didn’t try to replicate it, or that they’re disparaging LLMs?
That they didn’t try to replicate it.
I mean, I’m not going to spend time trying to duplicate their results, but it wouldn’t even slightly surprise me. Cops have been using ChatGPT to streamline their bullshit cop-lingo incident reports, to the extent that it’s caught the notice of lawyers and judges… 100% I believe that the dolts who shit out Trump’s tariff rates used it too.
Did ChatGPT come up with the color of the sky? AI chatbots ChatGPT, Gemini, Claude and Grok all return the same color for the sky, several X users claim.
Yeah, we can all agree on the color of the sky, but the numbers Trump posted are questionable at best.
The point is, ChatGPT is trained on ideas people have already had. It's not inventing Trump's economic theory out of thin air.
The sky color is part of the training data. How did the LLMs include the training data before it existed?
All the search engines search the same internet, find similar text, output it using similar formulas.
Except these AI systems aren’t search engines, and people treating them like they are is really dangerous
They are. They record the data, stealing it. They search it (or characteristics of it), and reprint it (in whole or in part) upon request.
Viewing it as something creative, or other than a glorified remixing machine is the problem. It’s a search engine for creative works they’ve stolen, and reproduce parts of.
They search the data-space of what they’re “trained” on (our content, the content of human beings), and reproduce statistically defined elements of it.
They’re search engines that have stolen what they’re “trained on”, and reproduce it as “results” (be that images or written text, it has to come from our collective data, data we created). It’s theft. It’s copyright fraud. Same as Google stealing books (which they had to be sued over digitizing, and enter into rights agreements over).
Searching and reproducing content they’ve already recorded (aka stolen without permission), is absolutely part of what they are. Part of what they do.
Don’t stan for them or pretend they’re creative, intelligent, or doing anything original.
The real lie is that it’s “training data”. It’s not. It’s the internet, and it’s not training - it’s theft, it’s stealing and copying (violating copyright). Digital stealing, and processing into a “data set”, a representation or repackaging of our original works.
The basic graph technology used by AI is the same as that pioneered by Alta Vista and optimized by Google years later. We’ve added a layer of abstraction through user I/O, such that you get a formalized text response encapsulating results rather than a series of links containing related search terms. But the methodology used to harvest, hash, and sort results is still all rooted in graph theory.
The difference between then and now is that back then you’d search “Horse” in Alta Vista and get a dozen links ranging from ranches and vet clinics to anime and porn. Now, you get a text blob that tries to synthesize all the information in those sources down to a few paragraphs of relevant text.
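To make the "back then" half of that comparison concrete, here's a toy sketch of the core data structure of classic keyword search: an inverted index mapping terms to the documents that contain them. The documents and contents here are invented for illustration, and real engines layer ranking (e.g. link-graph scores) on top of this lookup.

```python
from collections import defaultdict

# Invented toy corpus standing in for the web.
docs = {
    1: "horse ranch boarding and riding lessons",
    2: "equine vet clinic horse health care",
    3: "anime streaming site new episodes",
}

# Build the inverted index: term -> set of doc ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query: str) -> list[int]:
    """Return ids of docs containing every query term (simple AND search)."""
    terms = query.split()
    if not terms:
        return []
    hits = set.intersection(*(index.get(t, set()) for t in terms))
    return sorted(hits)

print(search("horse"))  # both the ranch and the vet clinic mention "horse"
```

The "Horse" query returns a list of matching documents rather than a synthesized paragraph, which is the behavioral difference the comment describes.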
That simply isn’t true. There’s nothing in common between an LLM and a search engine, except insofar as the people developing the LLM had access to search engines, and may have used them during their data gathering efforts for training data.
“Data gathering” and “training data” are just what they’ve tricked you into calling it (just like they tried to trick people into calling it an “intelligence”).
It’s not data gathering, it’s stealing. It’s not training data, it’s our original work.
It’s not creating anything, it’s searching and selectively remixing the human creative work of the internet.
You’re putting words in my mouth, and inventing arguments I never made.
I didn’t say anything about whether the training data is stolen or not. I also didn’t say a single word about intelligence, or originality.
I haven’t been tricked into using one piece of language over another, I’m a software engineer and know enough about how these systems actually work to reach my own conclusions.
There is not a database tucked away in the LLM anywhere which you could search through to find the phrases it was trained on; it simply doesn’t exist.
That isn’t to say it’s completely impossible for an LLM to spit out something which formed part of the training data, but it’s pretty rare. 99% of what it generates doesn’t come from anywhere in particular, and you wouldn’t find it in any of the sources which were fed to the model in training.
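The "statistics, not stored text" point can be illustrated with a toy word-bigram model. After training, all that survives is a table of transition counts, not the sentences themselves, and sampling from those counts can produce strings that appear verbatim in no training sentence. This is a drastic simplification of a real LLM (the training sentences are invented), but the structural point is the same.

```python
import random
from collections import defaultdict

# Invented three-sentence "corpus" for illustration.
training = ["the cat sat", "a dog ran", "the dog sat"]

# "Training": keep only word-pair counts, discarding the original sentences.
counts = defaultdict(lambda: defaultdict(int))
for sentence in training:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def generate(start: str, length: int, rng: random.Random) -> str:
    """Sample a word sequence from the stored bigram statistics."""
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # no recorded continuation for this word
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# "the dog ran" is a possible sample, though that exact sentence
# appears nowhere in the training list.
print(generate("the", 3, random.Random(0)))
```

Nothing in `counts` lets you look up the original sentences, yet the model can still emit fragments of them, which mirrors the "pretty rare, but not impossible" caveat above.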
It’s searched in training, tagged for use/topic, then that info is processed and filtered through layers. So it’s pre-searched, if you will. Like meta tags on the early internet.
Then the data is processed into cells which queries flow through during generation.
99% of what it generates doesn’t come from anywhere in particular, and you wouldn’t find it in any of the sources which were fed to the model in training.
Yes, it does. The fact that you in particular can’t recognize where it comes from doesn’t matter. It’s still using copyrighted works.
Anyways, you’re an AI stan, and you’re defending theft. You can deny it all day, but it’s what you’re doing. “It’s okay, I’m a software engineer, I’m allowed to defend it.”
…as if being a software engineer doesn’t stop you from also being a dumbass. Of course it doesn’t.
You’re still putting words in my mouth.
I never said they weren’t stealing the data.
I didn’t comment on that at all, because it’s not relevant to the point I was actually making: that treating the output of an LLM as if it were derived from any factual source is really problematic, because it isn’t.