Wouldn’t removing your ovaries and fallopian tubes make you not “fertile” by definition?
Yes, it contradicts itself within the next couple of sentences.
As per form for these “AIs”.
Hey, be fair, the “I” in “LLM” stands for “intelligent”. Please continue consuming the slop.
It also contradicts itself immediately, saying she’s fertile and then that she’s had her ovaries removed and that she’s reached menopause.
How can she be fertile if her ovaries are removed?
Because you’re not getting an answer to a question, you’re getting characters selected to appear like they statistically belong together given the context.
A sentence saying she had her ovaries removed and one saying she is fertile don’t statistically belong together, so you’re not even getting that.
You think that because you understand the meaning of words. An LLM doesn’t. It uses math, and math doesn’t care that it’s contradictory; it cares that the words individually usually came next in its training data.
It’s not even words, it “thinks” in “word parts” called tokens.
It has nothing to do with the meaning. If your training set consists of one subset of strings made of A’s and B’s together and another subset made of C’s and D’s together (i.e. [AB]+ and [CD]+ in regex) and the LLM outputs “ABBABBBDA”, then that’s statistically unlikely because D’s don’t appear with A’s and B’s. I have no idea what the meaning of these sequences is, nor do I need to know to see that the output is statistically unlikely. In the context of language and LLMs, “statistically likely” roughly means that some human somewhere out there is more likely to have written this than the alternatives, because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low because the probability that a human would’ve written it is low.
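The “statistically unlikely” point above can be sketched with a toy bigram model (the corpus, the probability floor, and all values here are invented for illustration; a real LLM is vastly more complex, but the principle of multiplying learned conditional probabilities is the same):

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count character-to-character transitions and normalize them
    into conditional probabilities P(next | current)."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in corpus:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def sequence_prob(model, s, floor=1e-6):
    """Multiply the conditional probability of each transition;
    transitions never seen in training get a tiny floor probability."""
    p = 1.0
    for a, b in zip(s, s[1:]):
        p *= model.get(a, {}).get(b, floor)
    return p

# Training data: A/B strings and C/D strings never mix.
corpus = ["ABAB", "ABBA", "BABB", "CDCD", "DCCD", "CDDC"]
model = train_bigrams(corpus)

# A pure A/B string is far more probable than one with a D mixed in,
# with no notion of "meaning" anywhere in the computation.
print(sequence_prob(model, "ABBA") > sequence_prob(model, "ABDA"))  # True
```

The model never “understands” A or D; mixing a D into an A/B string just hits a transition that never occurred in training, so the product of probabilities collapses.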
Honestly this isn’t really all that accurate. Like, a common example when introducing the Word2Vec mapping is that if you take the vector for “king,” subtract the vector for “man,” and add the vector for “woman,” the closest matching vector is “queen.” So there are elements of “meaning” being captured there. Deep learning networks can capture a lot more abstraction than that, and the attention mechanism introduced by the Transformer model greatly increased the ability of these models to interpret context clues.
You’re right that it’s easy to make the mistake of overestimating the level of understanding behind the writing. That’s absolutely something that happens. But saying “it has nothing to do with the meaning” is going a bit far. There is semantic processing happening, it’s just less sophisticated than the form of the writing could lead you to assume.
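The king/queen analogy above can be sketched with hand-made toy vectors (these 2-D embeddings and the two rough axes, “royalty” and “gender,” are invented for illustration; real Word2Vec vectors are learned from text and have hundreds of dimensions):

```python
import math

# Toy 2-D embeddings: axis 0 ~ "royalty", axis 1 ~ "gender" (illustrative only).
vectors = {
    "king":  [1.0,  1.0],
    "queen": [1.0, -1.0],
    "man":   [0.0,  1.0],
    "woman": [0.0, -1.0],
    "apple": [0.3,  0.3],
}

def cosine(u, v):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# king - man + woman = [1.0, -1.0]
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# Nearest remaining word (excluding the three inputs) by cosine similarity.
nearest = max((w for w in vectors if w not in {"king", "man", "woman"}),
              key=lambda w: cosine(vectors[w], target))
print(nearest)  # queen
```

The arithmetic works because the “gender” component flips sign while the “royalty” component is preserved, which is the sense in which embeddings capture some structure of meaning without any understanding behind them.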
Unless they grabbed discussion forums that happened to have examples from multiple people. It’s pretty common that when fertility is being discussed, problems in that area get brought up.
People can use context and meaning to avoid that mistake; LLMs have to be forced not to make it through much slower QC by real people (something Google hates to do).
NGL, I learned some things.
In short: BONK
It probably thought you were Elon Musk.
This is why no one can find anything on Google anymore, they don’t know how to google shit.
Everyone in this post is the annoying IT person who says “why don’t you just run Linux?” to people who don’t even fully understand what an OS is in the first place.
Installing a whole new OS is not a good comparison to switching browsers. We all downloaded Chrome using Internet Explorer at some point.
You are included in my initial assertion
It’s hilarious. I got the same results with Charlize Theron for the exact same movie, so I guess neither of us knows who actresses are, apparently.
Deepseek also gets this wrong.
So she is in heat …
ddg isn’t really any better with that exact search query. all ‘fashion’ related items on the first page.
you get the expected top result (the imdb page for the film ‘heat’, which you have to scroll through to find your ‘answer’) simply by searching: angelina jolie heat
It’s not helpful for OOP since they’re on iOS, but there’s a Firefox extension that works on desktop and Android that hides the AI overview in searches: https://addons.mozilla.org/en-US/android/addon/hide-google-ai-overviews/
I think Gemini is “in heat”