Meanwhile Google search results:
- AI summary
- 2x “sponsored” result
- AI copy of Stackoverflow
- AI copy of Geeks4Geeks
- Geeks4Geeks (with AI article)
- the thing you actually searched for
- AI copy of AI copy of stackoverflow
Even adding “Reddit” after a search only brings up posts from 7 years ago.
The irony is that Gemini Pro is actually better than ChatGPT (which is not saying a ton, as OpenAI have completely stagnated and even some small open models are better now), but whatever they use for search is beyond horrible.
Yeah, but at least we can vet that shit better than the unsourced and hallucinated drivel provided by ChatGPT.
Ugh. Don’t get me started.
Most people don’t understand that the only thing it does is ‘put words together that usually go together’. It doesn’t know if something is right or wrong, just if it ‘sounds right’.
Now, if you throw in enough data, it’ll kinda sorta make sense with what it writes. But as soon as you try to verify the things it writes, it falls apart.
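To make the “put words together that usually go together” point concrete, here’s a deliberately tiny toy sketch (a bigram model over a made-up corpus — real LLMs are enormous neural networks, but the training objective is the same idea: predict a plausible next word, with no notion of whether the result is true):

```python
import random
from collections import defaultdict

# Made-up miniature corpus purely for illustration.
corpus = ("the stadium hosted the games . the city built the stadium . "
          "the city hosted the games .").split()

# Count which word follows which -- "words that usually go together".
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(word, n=8):
    """Generate text by repeatedly picking a word that often follows the last one."""
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))  # picks what "sounds right", not what is true
    return " ".join(out)

print(babble("the"))
```

Every sentence it produces is locally plausible and globally unchecked — which is exactly why the output can sound confident while being wholly wrong.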
I once asked it to write a small article with a bit of history about my city and five interesting things to visit. In the history bit, it confused two people with similar names who lived 200 years apart. In the ‘things to visit’, it listed two museums by name that are hundreds of miles away. It invented another museum that does not exist. It also happily told us to visit our Olympic stadium. While we do have a stadium, I can assure you we never hosted the Olympics. I’d remember that, as I’m older than said stadium.
The scary bit is: what it wrote was lovely. If you read it, you’d want to visit for sure. You’d have no clue that it was wholly wrong, because it sounds so confident.
AI has its uses. I’ve used it to rewrite a text that I already had and it does fine with tasks like that. Because you give it the correct info to work with.
Use the tool appropriately and it’s handy. Use it inappropriately and it’s a fucking menace to society.
I know this is off topic, but every time I see you comment on a thread, all I can see is the Pepsi logo (I use the Sync app, for reference).
You know, just for you: I just changed it to the Coca-Cola Santa :D
Wait, when did you do this? I just tried this for my town and researched each aspect to confirm myself. It was all correct. It talked about the natives that once lived here, how the land was taken by Mexico, then granted to some dude in the 1800s. The local attractions were spot on and things I’ve never heard of. I’m…I’m actually shocked and I just learned a bunch of actual history I had no idea of in my town 🤯
I did that test late last year, and repeated it with another town this summer to see if it had improved. Granted, it made fewer mistakes - but still very annoying ones, like placing a tourist information office at a completely incorrect, non-existent address.
I assume your result also depends a bit on which town you try. I doubt it has really been trained on much information pertaining to a city of 160,000 inhabitants in the Netherlands. It should do better with the US, I’d imagine.
The problem is it doesn’t tell you it has knowledge gaps like that. Instead, it chooses to be confidently incorrect.
Only 85k pop here, but yeah. I imagine it’s half YMMV, half straight up luck that the model doesn’t hallucinate shit.
Last night, we tried to use ChatGPT to identify a book that my wife remembers from her childhood.
It didn’t find the book, but instead gave us a title for a hypothetical book that, if it existed, would match her description.
Both suck now.
I have to say, look it up online and verify your sources.
GPT’s natural language processing is extremely helpful for simple questions that have historically been difficult to Google because they aren’t a concise concept.
The type of thing that is easy to ask but hard to create a search query for, like tip-of-my-tongue questions.
And then Google to confirm the GPT answer isn’t total nonsense.
I’ve had people tell me “Of course, I’ll verify the info if it’s important”, which implies that if the question isn’t important, they’ll just accept whatever ChatGPT gives them. They don’t care whether the answer is correct or not; they just want an answer.
Well yeah. I’m not gonna verify how many butts it takes to swarm Mount Everest, because that’s not worth my time. The robot’s answer is close enough to satisfy my curiosity.
For the curious, I got two responses with different calculations and different answers as a result. So it could take anywhere from 1.5 to 7.5 billion butts to swarm Mount Everest. Again, I’m not checking the math because I got the answer I wanted.
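For anyone who does feel like sanity-checking the ballpark, here’s a deliberately silly back-of-envelope where every input is a guess (cone-shaped mountain, seated-butt footprint) — it lands in the same billions range the bot gave:

```python
import math

# All inputs are made-up guesses -- this is a plausibility check of the
# "billions of butts" ballpark, not a real survey of Everest.
base_radius_m = 5_000   # guess: radius of the mountain's flank at its base
height_m = 3_500        # guess: rise above the surrounding terrain
butt_area_m2 = 0.10     # guess: ground footprint of one seated butt

# Treat the mountain as a cone and tile its slanted surface with butts.
slant_m = math.hypot(base_radius_m, height_m)
surface_m2 = math.pi * base_radius_m * slant_m
butts = surface_m2 / butt_area_m2

print(f"{butts:.2e} butts")  # on the order of a billion
```

About a billion butts — roughly consistent with the low end of the bot’s 1.5–7.5 billion spread, for whatever a guess divided by a guess is worth.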
This is entirely Google’s fault.
How long until ChatGPT starts responding “It’s been generally agreed that the answer to your question is to just ask ChatGPT”?
deleted by creator
Have they? Don’t think I’ve heard that once and I work with people who use chat gpt themselves
Just duck it, bro. (Add !chat to your query, or use the AI assistant in the results.)
I wonder where people can go. Wikipedia, maybe. ChatGPT is better than Google for answering most questions where getting the answer wrong won’t have catastrophic consequences. It’s also a good place to get started when researching something.

Unfortunately, most people don’t know how to assess the potential problems. Those people will also have trouble if they try googling the answer: on a controversial topic they’ll pick some biased information source, usually one that matches their leaning.

There aren’t too many great sources of information on the internet anymore; it’s all tainted by partisans or locked behind paywalls. Even when studies are freely available, many are weighted to favor whatever result the researcher wanted. It’s a pretty bleak world out there for good information.
Google isn’t a search engine any more. It stopped being that some years ago.
Now it’s more accurately described as a shitty content feed that can be weakly filtered using key words.
Might as well. All the sites are just AI articles anyway
Just ask Elon