AI: Hallucinating

I have been thinking a lot about AI recently.

Parent article: AI:

AIs sometimes lie. No, that's not right, since lying implies intent to deceive, and that's not what's going on. I guess they're just wrong. But that doesn't seem quite right either. There doesn't seem to be a good word for an AI that spouts incorrect information as if it were fact.

Fortunately, people smarter than me have decided that the correct word is "hallucination".

... Um ... ok ... Sure. Why not.

Here's what ChatGPT says about AIs hallucinating:

The term "hallucinate" is sometimes used in the context of language models like LLM AIs (large language models based on the GPT architecture) to describe instances where the model generates text that is nonsensical, irrelevant, or inconsistent with the context or the prompt given to it. Hallucinations can occur when language models generate text that is not grounded in real-world knowledge or lacks coherence and logic. This can happen due to various reasons, such as limited or biased training data, flawed or incomplete language understanding, or the model's inability to handle complex or rare scenarios. To address this issue, researchers are working on developing methods to improve the quality and reliability of language models, such as better training data, more advanced natural language processing techniques, and incorporating external knowledge sources.

Here's what Bard says:

When people say that an LLM AI is "hallucinating," they mean that the AI is generating text that is semantically or syntactically plausible but is in fact incorrect or nonsensical. In short, you can't trust what the machine is telling you.

There are a few reasons why LLM AIs might hallucinate. One reason is that they are trained on a massive amount of text data, and sometimes they can't tell the difference between real and fake text. For example, if an LLM AI is trained on a dataset of news articles, it might generate a news article that is factually incorrect.

Another reason why LLM AIs might hallucinate is that they are trying to be too clever. They might generate text that is grammatically correct but doesn't actually make sense. For example, an LLM AI might generate a sentence like "The cat sat on the mat and then the dog ate the cat." This sentence is grammatically correct, but it doesn't make sense because cats don't eat dogs.

Finally, LLM AIs might hallucinate because they are trying to be too creative. They might generate text that is original and interesting, but it might not be accurate or truthful. For example, an LLM AI might generate a poem about a unicorn that flies through the sky. This poem might be beautiful and creative, but it's not based on reality.

It's important to be aware of the potential for hallucination when using LLM AIs. If you're not sure whether an LLM AI is generating accurate or truthful information, it's always best to do your own research to verify the information.

Confusion

So, that part about "the dog ate the cat" followed by "cats don't eat dogs" -- does that count as a hallucination? Or is it something different, maybe we could call it confusion, where the AI loses track of its own context?

Anyway, in my experiments, both ChatGPT and Bard are susceptible to hallucination and confusion, but Bard seems to be significantly more susceptible.

That said, even ChatGPT's level of susceptibility is high enough that I don't rely on ChatGPT for factual information.

Off the Rails

Again, this might just be a form of hallucination, but it feels different to me. It's not just that the AI is stating incorrect information as fact; it is ... just going off the rails.
