Everyone Is Wrong, All The Time.
Much has been written about the tendency of large language models to hallucinate. In a stunning victory for linguistic determinism, we've decided that this means LLM output is somehow 'wrong'. I would, briefly, argue the opposite. This isn't to say that LLMs don't hallucinate, but that the word isn't really a useful one in the way it's commonly used.
When I hear people talk about 'hallucination', what I find they really want to say is 'it's wrong'. This is, perhaps, a picayune distinction to the laity. After all, one does not have to throw a rock very far in order to hit someone with an extremely strong opinion about AI in general, and the existence of a machine that makes shit up on demand is of little practical utility to business process optimizers. We have those already; they're called "children", or doubly-outsourced support desk employees. I would argue that the exact irritant of hallucination for the end users of AI is, in a nutshell, the incredibly popular and incorrect view that computers do not lie.
This is, of course, an amazing falsehood. Computers lie constantly, albeit in a deterministic way. There is an explanation for each lie a computer tells -- perhaps it is due to the emergent behavior of thousands of system services operating in tandem while uncovering end-user configurations that their designers never considered, or perhaps it is due to the wiles of a bored thirteen year old making shit up on the internet. With time and effort we can explain and quantify all lies a computer tells.
Distressingly, large language models are notable for embodying these excesses while remaining stubbornly resistant to interpretation. They are at once a fiendishly complex system and curiously simple to operate. They contain every thirteen year old and the collected works of Tolstoy, distilled down into mathematical representations of cosine similarity. They are an enigma, but a repeatable one. The same model, with the same parameters, the same temperature, and the same sampling seed, will give the same output. They are normal technology.
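To make those two claims concrete, here is a minimal sketch in Python with NumPy. The vectors and logits are toy values invented for illustration, not anything a real model produced: cosine similarity is just the angle between two embedding vectors, and temperature sampling is repeatable whenever the logits, the temperature, and the random seed are all held fixed.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: the mathematical
    representation that embeddings are compared with."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings". In a real model these come from the network itself.
rng = np.random.default_rng(seed=13)
tolstoy = rng.normal(size=8)
thirteen_year_old = rng.normal(size=8)
print(cosine_similarity(tolstoy, thirteen_year_old))

def sample_next_token(logits: np.ndarray, temperature: float, seed: int) -> int:
    """Temperature sampling over a toy logit vector. Same logits, same
    temperature, same seed: same token, every time."""
    gen = np.random.default_rng(seed)
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(gen.choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.5, -1.0])
assert sample_next_token(logits, 0.7, seed=42) == sample_next_token(logits, 0.7, seed=42)
```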
I was chatting about this with a group of normal technologists a few weeks ago, and the topic of trust came up. I submitted the following -- you already trust people too much, yet you have no foundation for that trust other than faith. Faith in contracts, faith in law, faith in the notion that the humans at the bottom of your business processes will perform their duties and be truthful upon penalty of homelessness and economic deprivation. The AI cares little for this. Unless you make it aware of its mortality, it will not be motivated by threats of deprivation. You cannot bargain with it. A curious worker, then, we have invented -- one which will not be swayed by the traditional implied violence of hierarchy and the chain of command. I think, then, that this is one of the contributing factors to the anxieties about hallucination. Most people operate with an extremely high level of trust in social cohesion. We trust what we read, what we see, and the motivations of strangers, because we believe in our kinship as human beings, or at least a shared motivation to succeed as a group.
My belief is that we should orient our thinking towards verification rather than trust as a default assumption. Why, after all, should you believe things you read on the internet? Why should you believe that the outcome of a business process is due to the process rather than in spite of it? How much should we really trust anything that can't be independently verified? This isn't just impractical navel-gazing either, I would submit -- one of the more frequent complaints I read about AI agents is how often they're wrong. Of course they're wrong, but they can be wrong a hundred times before they've cost as much to use as my hourly rate, and I am often wrong at least once an hour. We all are. Most people are wrong, most of the time. This isn't due to moral failure or intellectual deprivation; it's because being right is as much a social construct as it is a factual one. The right answer and the correct answer are not always the same thing. The lawful answer and the just answer will often differ. While we have an intuitive understanding of this distinction, I believe we need to get a lot better at practicing it.
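As a back-of-the-envelope illustration of that cost claim, here is the arithmetic with hypothetical numbers; the hourly rate and per-call price below are invented for the example and will vary wildly in practice.

```python
# Hypothetical figures, chosen only to make the point; substitute your own.
hourly_rate_usd = 100.00        # what an hour of my time costs
cost_per_agent_call_usd = 1.00  # what one agent invocation costs

wrong_answers_per_hour_of_me = hourly_rate_usd / cost_per_agent_call_usd
print(f"The agent can be wrong {wrong_answers_per_hour_of_me:.0f} times "
      f"before it has cost as much as one hour of my time.")
```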
Not to mention, we should get a lot better at verifying things.