ChatGPT, Bard, GPT-4, and the like are often pitched as ways to retrieve information. The problem is they'll "retrieve" whatever you ask for, whether or not it exists.
Tumblr user @indigofoxpaws sent me a few screenshots where they'd asked ChatGPT for an explanation of the nonexistent "Linoleum harvest" Tumblr meme, and ChatGPT confidently explained it anyway.
The entire concept behind an LLM is that the machine is designed to make up stories, and occasionally those stories happen to be true. To use it for anything besides that is reckless.
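To make "designed to make up stories" concrete, here's a toy sketch of the mechanism, with invented numbers and a made-up vocabulary, not any real model's internals: generation is just repeated sampling from a distribution over plausible next tokens, and nothing in the loop ever consults reality.

```python
# A minimal sketch of why an LLM will "explain" a nonexistent meme:
# it samples whatever continuation is most plausible-sounding.
# The logits below are hypothetical, for illustration only.

import math
import random

# Imagined scores a model might assign to continuations of
# "The 'Linoleum harvest' meme is" -- the numbers are invented.
next_token_logits = {
    "a": 2.1,             # fluent continuation
    "popular": 1.7,       # also fluent
    "nonexistent": -3.0,  # truthful, but rarely the most fluent choice
}

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample(probs):
    """Draw one token by probability -- i.e., by plausibility, not truth."""
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding

probs = softmax(next_token_logits)
print(probs)          # "nonexistent" gets a tiny share of the probability mass
print(sample(probs))  # so the fluent-but-false continuation almost always wins
```

There's no truth-checking step to remove: the only quantity the loop optimizes is plausibility, so a confident explanation of a meme that never existed is the system working as designed.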
Even AI-generated fiction can be reckless if it contains themes that are false, harmful, or destructive. If it writes a story that depicts genocide positively and masks it through metaphor, allegory, parable, whatever, then yes, it's just "a made-up story," but it's no less dangerous than if it were an op-ed in a major news outlet.