- cross-posted to:
- [email protected]
cross-posted from: https://infosec.pub/post/24994013
CJR study shows AI search services misinform users and ignore publisher exclusion requests.
The AI models can be hilariously bad even on their own terms.
Yesterday I asked Gemini for a population figure (because I was too lazy to look it up myself). First I asked it:
It answered:
On a whim, I asked it again as:
And then it gave me the answer sweet as a nut.
I was being too polite with it, I guess?
I slapped a picture of a chart into Gemini because I didn’t know what that type of chart was called, but I wanted to mention it in a uni report. I was too lazy to go looking through chart types and figured that would be quicker.
I just asked it “What kind of chart is this?” and it ignored that entirely, started analysing the chart instead, stating what it was about and giving insights into it. It never told me what kind of chart it was, even though that was the only thing I asked.
Bear in mind that I deliberately cropped out any context to stop it from trying to do exactly that, just in case, so all I got from it was pure hallucination. It was just making shit up that I didn’t ask for.
I switched to the reasoning model and asked again, and that time it gave me the info I wanted.
Gotta let it take the W on that first answer, honestly.