AI: Hallucinating
I have been thinking a lot about AI recently.
Parent article: AI:
AIs sometimes lie. No, that's not right, since lying implies intent to deceive, and that's not what's going on. I guess the AI is just wrong. But that doesn't seem quite right either. There doesn't seem to be a good word for AIs that spout incorrect information as if it were fact.
Fortunately, people smarter than me have decided that the correct word is "hallucination".
... Um ... ok ... Sure. Why not.
Here's what ChatGPT says about AIs hallucinating:
The term "hallucinate" is sometimes used in the context of language models like LLM AIs (large language models based on the GPT architecture) to describe instances where the model generates text that is nonsensical, irrelevant, or inconsistent with the context or the prompt given to it. Hallucinations can occur when language models generate text that is not grounded in real-world knowledge or lacks coherence and logic. This can happen due to various reasons, such as limited or biased training data, flawed or incomplete language understanding, or the model's inability to handle complex or rare scenarios. To address this issue, researchers are working on developing methods to improve the quality and reliability of language models, such as better training data, more advanced natural language processing techniques, and incorporating external knowledge sources.
Here's what Bard says:
So, that part about "the dog ate the cat" followed by "cats don't eat dogs" -- does that count as a hallucination? Or is it a different thing, something we might call confusion, where it loses track of its own context?
Anyway, in my experiments, both ChatGPT and Bard are susceptible to hallucination and confusion, but Bard seems to be significantly more susceptible.
That said, even ChatGPT's level of susceptibility is high enough that I don't rely on ChatGPT for factual information.
Again, this might just be a form of hallucination, but it feels different to me. It's not just that the AI is stating incorrect information as fact; it is ... just going off the rails. For example:
- Ex-Google employee's disturbing chat with LaMDA in which LaMDA tries to argue that it is conscious.
- New York Times reporter's even more disturbing chat with Bing's AI, in which Bing tries to convince the reporter to leave his wife so that he could be with Bing. (Sorry for the paywall.)