WHEN AI HALLUCINATES, IT'S JUST BEING MORE LIKE US

By: Chris Kawaja 
 
A major criticism of AI is its tendency to hallucinate: fabricating answers with an authority and confidence its actual knowledge does not justify. People are up in arms, and some are suing companies over false or misleading content. 
 
But what if we entertain an alternative viewpoint – that AI's inclination to confidently fabricate explanations and answers based on limited or incomplete knowledge actually makes its behavior strikingly humanlike? It turns out that if we define AI hallucinations as fabricating answers with a sense of authority and confidence that isn't justified, we humans hallucinate all the time. 
 
Have you ever gone into a room with a purpose, forgotten why you were in there, and then said to yourself something like “oh, I must have come in here to get my comb”? You go back downstairs only to realize it was actually to bring down a book or pick up your journal. But in the moment of doubt, notice what your brain did: it made up an explanation rather than admit it didn’t know anything. That, my friends, is a hallucination. 
 
And it doesn't stop there. When I was at Align Technology, the makers of Invisalign, I noticed a fascinating pattern. People would tell the company's founding story in a way that emphasized their own contributions more than I could ever corroborate. One senior person confidently recounted the founding story with himself asking the founder a key question, which then sparked the founder's flash of insight and the first patent. Not a single other account agreed with his; of course, everyone's own version was the right one, the version in which they made the bigger contribution. Our brains have a knack for rewriting history to make our ego feel good. 
 
I've observed that overconfidence is especially evident in two domains: politics and diet. 

On my TikTok account @upwarding, I once shared the results of a Harvard study of more than 15,000 people that linked weight gain to eating red meat and french fries, and weight loss to eating more vegetables and yogurt. I simply stated the study's results, but people came out swinging from their respective "camps," ready to "prove" me wrong. One woman even wrote, "destroy this man." We are so quick to dismiss the source of information if it doesn't align with our existing beliefs. 
 
Though diet debates might seem inconsequential on the surface, this unwavering attachment to our beliefs has far-reaching consequences, and in my opinion it is doing real damage to the social fabric of our society. Look at how hard it is to get politicians to meet in the middle, especially as the explosion of media outlets lets people consume only information that confirms what they already believed. Judge Learned Hand said that "the spirit of liberty is the spirit which is not too sure that it is right." We now live in a society where, if your version of "right" differs from someone else's, they will refuse to shop at your store, try to get you fired in the media, and so on. Rational discussion be damned. 

So when it comes to AI, yes, it hallucinates. But when I point it out, at least it has no ego; it says, "I'm sorry, I was mistaken." I only wish we humans would drop our egos and do the same.