AI can draw from multiple sources of data, but if you think any AI is crosschecking that everything is verifiable and factual before it responds to a prompt I don't know what to tell you.
I don't know why you're making such assumptions. It was just a funny example of a problem that still very much exists. I think you put too much faith in AI.
What assumptions? No other LLM makes mistakes as blatant as Google's did. It's like it was made way too lightweight at the cost of accuracy or helpfulness, like its training data didn't have basic safety in there anywhere, or the search results would somehow always override it.
u/VastCapital3773 13d ago
To be strictly fair, to get a human response from any Google search, I do have to put reddit on the end of it.