Ok, you have a moderately complex math problem you need to solve. You give the problem to six LLMs, all paid versions. All six get the same numbers. Would you trust the answer?
Just use Wolfram Alpha instead
Would you trust six mathematicians who claimed to have solved a problem by intuition, but couldn’t prove it?
That’s not how mathematics works: if you have to “trust” the answer, it isn’t even math.
I wouldn’t bother. If I really had to ask a bot, Wolfram Alpha is there as long as I can ask it without an AI meddling with my question.
E: To clarify, just because one AI (or six) gets an answer that I can independently verify as correct for a simpler question does not mean I can trust it for an arbitrary math question, no matter how many AIs arrive at the same answer. There’s always the possibility the AI will stumble into a logical flaw, as in the “number of r’s in strawberry” example.
Here’s an interesting post that gives a pretty good quick summary of when an LLM may be a good tool.
Here’s one key:
Machine learning is amazing if:
- The problem is too hard to write a rule-based system for or the requirements change sufficiently quickly that it isn’t worth writing such a thing and,
- The value of a correct answer is much higher than the cost of an incorrect answer.
The second of these is really important.
So if your math problem is unsolvable by conventional tools, or sufficiently complex that designing an expression is more effort than the answer is worth… AND ALSO it’s more valuable to have any answer than to have a correct answer (there is no real cost to being wrong), THEN go ahead and trust it.
If it is important that the answer is correct, or if another tool can be used, then you’re better off without the LLM.
The bottom line is that the LLM is not making a calculation. It could end up with the right answer. Different models could end up with the same answer. It’s very unclear how much underlying technology is shared between models anyway.
For example, if the problem is something like, "Here is all of our sales data and market indicators for the past 5 years. Project how much of each product we should stock in the next quarter." Sure, an LLM may be acceptably close to a professional analysis.
If the problem is like “given these bridge schematics, what grade steel do we need in the central pylon?” Then, well, you are probably going to be testifying in front of congress one day.
Most LLMs now call functions in the background; most calculations are just simple Python expressions.
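As a rough sketch of what that looks like (the plumbing below is made up, not any particular vendor’s tool-calling API): the model emits an expression string and the host application evaluates it, so the arithmetic isn’t done by the model itself.

```python
# Hedged sketch of a "calculator" tool: the LLM produces an expression string,
# the host evaluates it. Only plain arithmetic nodes are allowed so we aren't
# eval()-ing arbitrary code.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def eval_expr(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# Imagine the model returned this string as a tool-call argument;
# the host runs it, not the LLM.
print(eval_expr("(1250 * 1.07**10) - 1250"))
```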
Yes. I was aware of that, but I was manipulated by an analog device
Yes, with absolute certainty.
For example: 2 + 2 = 5
It’s absolutely correct, and if you dispute it, big bro is gonna have to re-educate you on that.
I NEED TO consult every LLM VIA TELEKINESIS QUANTUM ELECTRIC GRAVITY A AND B WAVE.
Nope, language models by their inherent nature cannot be used to calculate. Sure, theoretically you could have the input parsed, with proper training, to find specific variables, feed those into a database, and have that data mathematically transformed back into language data.
No LLM does actual math; they only produce the most likely output for a given input based on trained data. If I input: What is 1 plus 1?
Then, given that the model has most likely seen repeated training data where the answer that follows is 1 + 1 = 2, that will be the output. If it had been trained on data saying 1 + 1 = 5, then that would be the output.
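A toy illustration of that point (this is nothing like how a real model is implemented; a frequency lookup just stands in for “most likely output”):

```python
# Toy illustration: the "answer" is whatever continuation was most frequent
# in the training text, not a computed result.
from collections import Counter

training_text = [
    ("What is 1 plus 1?", "1 + 1 = 2"),
    ("What is 1 plus 1?", "1 + 1 = 2"),
    ("What is 1 plus 1?", "1 + 1 = 5"),   # one bad example in the data
]

def most_likely_answer(prompt: str) -> str:
    answers = Counter(a for p, a in training_text if p == prompt)
    return answers.most_common(1)[0][0]

print(most_likely_answer("What is 1 plus 1?"))  # "1 + 1 = 2", purely by frequency
```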
Using a calculator or Wolfram Alpha or similar tools, I don’t trust the answer unless it passes a few sanity checks. Frequently I am the source of error, and no LLM can compensate for that.
It checked out. But is all six getting the same answer still likely to be incorrect?
Don’t know. I’ve never asked any of them a maths question.
How costly is it to be wrong? You seem to care enough to ask people on the Internet so it suggests that it’s fairly costly. I’d not trust them.
I’ve used LLMs quite a few times to find partial derivatives / gradient functions for me, and I know it’s correct because I plug them into a gradient descent algorithm and it works. I would never trust anything an LLM gives blindly no matter how advanced it is, but in this particular case I could actually test the output since it’s something I was implementing in an algorithm, so if it didn’t work I would know immediately.
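Something like this is what I mean by being able to test the output, roughly sketched; f and grad_f below are made-up stand-ins, and the check is just a standard finite-difference comparison:

```python
# Sanity-check an LLM-provided analytic gradient against a numeric estimate.
import math

def f(x, y):
    return x**2 * math.sin(y)          # the function I asked about

def grad_f(x, y):
    # what the LLM claimed: df/dx = 2x*sin(y), df/dy = x^2*cos(y)
    return (2 * x * math.sin(y), x**2 * math.cos(y))

def numeric_grad(fn, x, y, h=1e-6):
    # central finite differences in each variable
    return ((fn(x + h, y) - fn(x - h, y)) / (2 * h),
            (fn(x, y + h) - fn(x, y - h)) / (2 * h))

analytic = grad_f(1.3, 0.7)
numeric = numeric_grad(f, 1.3, 0.7)
assert all(abs(a - n) < 1e-4 for a, n in zip(analytic, numeric))
print("gradient checks out:", analytic)
```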
That’s rad, dude. I wish I knew how to do that. Hey, dude, I imagined a cosmological model that fits the data with two fewer parameters than the standard model, using Planck data. I’ve checked the numbers, but I don’t have the credentials. I need somebody to check it out. There’s a link and a verbal explanation of the model on Academia.edu; it’s way easier to listen first before looking. I don’t want recognition or anything, just for someone to review it. It’s a short paper. https://youtu.be/_l8SHVeua1Y
Well, I wanted to know the answer and formula for future value of a present amount. The AI answer that came up was clear, concise, and thorough. I was impressed and put the formula into my spreadsheet. My answer did not match the AI answer. So I kept looking for what I did wrong. Finally I just put the value into a regular online calculator and it matched the answer my spreadsheet was returning.
So AI gave me the right equation and the wrong answer. But it did it in a very impressive way. This is why I think it’s important for AI to only be used as a tool and not a replacement for knowledge. You have to be able to understand how to check the results.
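For reference, I assume the formula in question was the standard future value of a present amount, FV = PV * (1 + r)^n; a couple of lines make it easy to cross-check against a spreadsheet or an online calculator:

```python
# Assumed formula: future value of a present amount, FV = PV * (1 + r)**n.
def future_value(pv: float, rate: float, periods: int) -> float:
    return pv * (1 + rate) ** periods

# e.g. $1,000 at 5% per year for 10 years; compare with a spreadsheet
# formula like =1000*(1+0.05)^10 or any online calculator.
print(round(future_value(1000, 0.05, 10), 2))  # 1628.89
```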
No, LLMs are designed to drive up user engagement, nothing else; they’re programmed to present what you want to hear, not actual facts. Plus, they’re straight up not designed to do math.
No, once I tried to do binary calculations with ChatGPT and it kept giving me wrong answers. Good thing I had some unit tests around that part, so I realised quickly it was lying.
Just yesterday I was fiddling around with a logic test in Python. I wanted to see how well DeepSeek could analyze the intro line to a for loop. It properly identified what it did in the description, but when it moved on to giving examples it contradicted itself, and it took 3 or 4 replies before it realized that it had contradicted itself.
Yes, more people need to realize it’s just a search engine with natural language input and output. LLM output should at least include citations.
But if you gave the problem to all the top models and got the same answer, is it still likely an incorrect answer? I checked 6. I checked a bunch of times, on different accounts. I was testing it, seeing if it’s possible. With all that, in others’ opinions: I actually checked over a hundred times, and each got the same numbers.
My use case was, I expect, easier and simpler, so I was able to write automated tests to validate the logic of incrementing specific parts of a binary number, and found that the expected test values the LLM produced were wrong.
So if it’s possible to use some kind of automation to verify LLM results for your problem, you can be confident in your answer. But generally LLMs tend to make up shit and sound confident about it.
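Roughly the kind of automated check I mean (a hypothetical reconstruction, not my actual code): compute the expected values independently instead of taking them from the LLM.

```python
# Increment a specific bit field of an integer and verify the results with
# independently computed expected values.
def increment_field(value: int, lo_bit: int, width: int) -> int:
    mask = ((1 << width) - 1) << lo_bit
    field = (value & mask) >> lo_bit
    field = (field + 1) & ((1 << width) - 1)      # wrap within the field
    return (value & ~mask) | (field << lo_bit)

def test_increment_field():
    assert increment_field(0b0000_0000, 0, 4) == 0b0000_0001
    assert increment_field(0b0000_1111, 0, 4) == 0b0000_0000  # wraps, upper bits untouched
    assert increment_field(0b0011_0000, 4, 4) == 0b0100_0000

test_increment_field()
print("all bit-field increment tests passed")
```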
I mean, I don’t know why you wouldn’t just use something other than an LLM in that case
You cannot trust LLMs. Period.
They are literally hallucination machines that just happen to be correct sometimes.