Dr Wajdi Zaghouani, an associate professor at Northwestern University in Qatar, a Qatar Foundation partner university, defines AI ‘hallucination’ as the production of information that appears to be true but is, in fact, false or fabricated.
“Imagine it as someone confidently telling you a story that seems believable, but the events of that story are completely wrong. I’ve seen some fascinating cases in my research,” he explained.
One common example is when AI systems generate fake academic citations; they create paper titles that sound legitimate, with realistic author and journal names, but the papers don’t exist.
“During my work in Arabic Natural Language Processing, I came across systems that generate fake Arabic proverbs which sound authentic, but have no basis in the culture. They capture the linguistic style perfectly, yet produce entirely fictional cultural content,” he noted.
It’s a perspective that raises critical questions about AI systems. What if the machine is making mistakes? Should we avoid involving it in big decisions? Who holds responsibility for these mistakes? Does it feel pressure as humans do? Does it avoid saying “I don’t know”, or is it simply drowning in an endless flood of data?

Reflecting on what lies behind AI errors, and how they can be addressed, Dr Zaghouani says: “Unfortunately, large language models like ChatGPT or Claude are essentially very sophisticated pattern-matching machines. They learn from massive amounts of text and become good at predicting which word should come next in a sentence.

“But they don’t actually ‘know’ facts the way we do. When an AI system generates a fake name or an incorrect fact, it’s because the patterns in its training data suggest that’s what should come next.”
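Dr Zaghouani’s point about next-word prediction can be illustrated with a toy model. The sketch below is purely illustrative, not how ChatGPT or Claude are actually built: it counts which word follows which in a few invented sentences, then generates text by always picking the statistically most likely continuation, producing fluent-sounding output with no notion of whether it is true.

```python
# Toy illustration of next-word prediction (not a real large language model).
# It learns which word tends to follow which in a tiny corpus, then generates
# text by repeatedly choosing the most likely next word. The result can read
# fluently while being factually empty: only word patterns are tracked, not facts.
from collections import Counter, defaultdict

training_text = (
    "the study was published in a journal . "
    "the study was cited by many researchers . "
    "the paper was published in a famous journal ."
)

# Count how often each word follows another (bigram counts).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, max_words: int = 12) -> str:
    """Generate text by always choosing the most frequent next word."""
    output = [start]
    for _ in range(max_words):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("the"))  # fluent-looking text stitched purely from word patterns
```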
“AI is excellent for general knowledge questions, creative tasks, writing assistance, planning trips, and explaining concepts – basically, anything where you can easily verify the answer or where being slightly wrong isn’t catastrophic.”
So how can we benefit from AI without being deceived by its results? “Users should be wary of facts provided without cited sources, especially dates, numbers, or quotes,” explains Dr Zaghouani. “They should also watch out for information that seems too convenient or perfectly fits what they want to hear. If they are researching something controversial and AI provides exactly the evidence they were hoping for, they should double-check it.”
Essentially, users of AI systems should not take the answers they receive for granted or treat them as beyond question, and should take care when gathering information through AI, as the issue extends beyond ethics to the realm of more serious consequences.

But is there a way to stop these AI hallucinations? Or are they now a fact of life? “This is the million-dollar question in our field right now,” is Dr Zaghouani’s response. “Completely eliminating them is extremely challenging, because hallucination stems from the fundamental way these models work. They are probabilistic systems that generate the most likely next word, not knowledge databases that look up facts.
“However, we’re making significant progress. Techniques like retrieval-augmented generation, in which AI searches a database of verified information before answering, can help dramatically. We’re also developing better training methods and ways to teach models to say ‘I don’t know’ more often, rather than guessing confidently.”
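Retrieval-augmented generation, which Dr Zaghouani mentions, can be sketched in a few lines. The example below is a simplified illustration, with the passage store, the word-overlap scoring, and the prompt wording all invented for demonstration (real systems typically use vector embeddings and an actual language-model API): it retrieves the most relevant verified passage and builds a prompt that tells the model to answer only from that passage or to say “I don’t know”.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: a tiny in-memory list of verified passages, simple word-overlap
# scoring instead of vector embeddings, and a placeholder language-model call.

VERIFIED_PASSAGES = [
    "Northwestern University in Qatar is a Qatar Foundation partner university.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
]

def retrieve(question: str, passages: list[str]) -> str:
    """Return the passage that shares the most words with the question."""
    question_words = set(question.lower().split())
    return max(passages, key=lambda p: len(question_words & set(p.lower().split())))

def build_grounded_prompt(question: str) -> str:
    """Build a prompt that restricts the model to the retrieved passage."""
    context = retrieve(question, VERIFIED_PASSAGES)
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, reply 'I don't know.'\n\n"
        f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    )

# The grounded prompt would then be sent to whatever language model is in use,
# e.g. answer = some_llm(build_grounded_prompt("What kind of university is NU-Q?"))
print(build_grounded_prompt("Is Northwestern University in Qatar a partner university?"))
```

Forcing the model to answer from retrieved, verified text, and giving it an explicit way to decline, is what makes this kind of grounding useful against hallucination.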