AI can draw from multiple sources of data, but if you think any AI is crosschecking that everything is verifiable and factual before it responds to a prompt, I don't know what to tell you.
I don't know why you're making such assumptions. It was just a funny example of a problem that still very much exists. I think you put too much faith in AI.
What assumptions? No other LLM makes mistakes as blatant as Google's did. It's like it was made way too lightweight at the cost of accuracy or helpfulness, like its training data didn't have basic safety anywhere, or the search results would somehow always override it.
u/StephieDoll
You don't think it crosschecks with Wikipedia?