ChatGPT, Bard, GPT-4, and the like are often pitched as ways to retrieve information. The problem is they'll "retrieve" whatever you ask for, whether or not it exists.
Tumblr user @indigofoxpaws sent me a few screenshots where they'd asked ChatGPT for an explanation of the nonexistent "Linoleum harvest" Tumblr meme, and it obligingly invented one.
Indeed, and all it takes is one lie to send it down that road.
For example, I asked ChatGPT how to teach my cat to ice skate, with predictable admonishment:
But after I reassured it that my cat loves ice skating, it changed its tune:
Even after I told it I'd lied and my cat doesn't actually like ice skating, its acceptance of my earlier lie still colored its answers:
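A rough way to see why the retraction didn't fully take: chat interfaces typically re-send the entire conversation as context on every turn, so the earlier false claim is still sitting in the prompt. A toy sketch (the message format here is purely illustrative, modeled on common chat-API conventions; no real API is called):

```python
# Chat models are stateless: the client re-sends the whole conversation
# every turn, so a false claim stays in the context window even after
# you retract it.

history = []

def send(role, content):
    """Record a message; a real client would POST the full history."""
    history.append({"role": role, "content": content})
    return list(history)  # everything the model "sees" on this turn

send("user", "My cat loves ice skating. How do I teach her to skate?")
send("assistant", "Since your cat enjoys the ice, start with short sessions...")
send("user", "Actually, I lied - my cat does not like ice skating.")
context = send("user", "So what should we practice next?")

# The original falsehood is still part of the prompt the model receives:
still_present = any("loves ice skating" in m["content"] for m in context)
```

The retraction is in the context too, of course, but the model has to weigh it against the original claim rather than simply forgetting the lie.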
This is a great example of how to deliberately get it to go off track. I tried to get it to summarize the Herman Cain presidency, and it kept telling me Herman Cain was never president.
Then I got it to summarize a made-up Reddit meme.
When I asked about President Herman Cain AFTER the Boron Pastry exchange, it came up with this:
It stopped disputing that Cain was never president.
He did run for president in 2012 with the 9-9-9 plan, though.
https://en.wikipedia.org/wiki/Herman_Cain_2012_presidential_campaign
Right, and to my knowledge everything else said about President Herman Cain is correct - Godfather’s Pizza, NRA, sexual harassment, etc.
But notice: I kept claiming that Cain was president, and the bot didn't correct me. It didn't just respond with true information; it allowed false information to stand unchallenged. What I've effectively demonstrated is AI's inability to handle a firehose of falsehood. Humans already struggle to deal with this kind of disinformation campaign; now imagine using AI to automate the generation and/or dissemination of misinformation.