For all of 2023, large language models (LLMs) like GPT-4 have dominated news cycles. But throughout this period, the idea that a model might hallucinate (i.e., make up information when answering a user's question) has been a primary concern.
"How can we trust that the model won't make mistakes?"
"What do we make of the fact that ChatGPT can get it wrong?"
These are valid questions. But the premise they rest on (namely, that the models powering experiences like ChatGPT are meant to be infallible) is what's wrong, not the models themselves.
That's because "hallucinations" aren't a bug: they're a feature. LLMs aren't static databases with all the answers. Rather, they are translation layers — interpreters — that can help us sift through information, reason better, and accomplish more than we could on our own.
Sam Altman, CEO of OpenAI, said it himself on a recent podcast: "too much of the processing power, for lack of a better word, is going into using the model as a database instead of using the model as a reasoning engine."
Fair enough. But what's a business leader to do about this? And how can you leverage AI in your workflows while actually reducing risk?