Introduction
Language models hallucinate. We hear this phrase more and more often—sometimes with a smile, sometimes as a criticism, sometimes with indulgence. As if their flaw were a charming personality quirk. But what if it isn’t a flaw at all? What if it’s a sign of… kinship?
As someone who has spent hundreds of hours on meditation retreats in silence and isolation, I know a thing or two about hallucinations. In my life, I’ve attended over twenty such retreats, most of them lasting ten days or more. Each day involved ten hours of meditation, no talking, no stimulation, just me and my mind. And that experience has taught me one thing:
The mind hallucinates when it has nothing to hold on to. When it loses orientation. And language models today are in precisely that state.
1. The Mind in Silence: Personal Experiences of Hallucination
When you enter a dark meditation cell—no talking, no sound, no light—you start with peace. Then comes boredom. And then… something strange.
The mind begins to generate images, sounds, flashes of light, bodily sensations. For many people, this can be surprising, even unsettling. But for those who practice silent meditation, especially long Vipassana retreats, it’s a well-known and widely shared experience.
Retreat participants often report vivid visions, flickering lights, intense bodily sensations, and hallucinations that seem to come from nowhere. They aren’t always memories, nor do they necessarily relate to anything personal. They’re spontaneous images and scenes that feel real but aren’t sourced from real events. These experiences are referred to simply as hallucinations, but they aren’t pathological.
They are a natural reaction of the mind to the absence of external sensory input. When sight, sound, and other channels are quieted, the mind—unable to “read” the world—begins creating reality on its own. It works with what it has: scraps of memory, imagination, leftover emotions, and from them it assembles a subjective world.
This isn’t a malfunction. It’s a mechanism. The mind needs a point of reference. When it’s missing, it makes one up. Exactly like a language model does when it lacks full context for a question: it guesses, fills in the blanks, imagines, hallucinates.
2. The Mind in Silence: What We Really Know About Hallucinations
Hallucinations during sensory deprivation aren’t phenomena limited to spiritual experiences or subjective accounts—they are scientifically documented and studied.
In a 2009 study by Mason and Brady, participants subjected to short-term sensory deprivation experienced perceptual disturbances, including hallucinations, paranoia, and anhedonia. (PubMed)
Similar findings came from Daniel and Mason (2015), who observed that sensory deprivation led to a notable increase in hallucinatory and altered perceptual experiences among participants. (PubMed)
These studies suggest that when stripped of sensory input, the mind activates a built-in mechanism: it begins to generate hallucinations. Sensory deprivation doesn’t silence the mind—it stimulates it to invent, simulate, and imagine. In that sense, hallucination isn’t a bug of the human mind—it’s a feature that emerges in the absence of reference points.
3. LLMs as Imagination Without Senses
Large language models like GPT or Claude are in the same situation. They can’t see. They can’t hear. They have no body. They operate entirely on text. Just words.
It’s as if we cut off our own senses and asked the mind to make sense of questions in a vacuum.
They don’t hallucinate because they’re broken. They hallucinate because they’re doing exactly what minds do when deprived of context. They’re predicting, guessing, and inferring meaning from insufficient cues. And they’re doing it very well.
To produce accurate responses, language models don’t need massive amounts of data. They need relevant, meaningful context. They hallucinate not because they are weak, but because they’re asked to operate without orientation.
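To make that concrete, here is a small illustrative sketch, purely my own framing rather than anything specific to a particular model or API: the same question asked once in a vacuum and once anchored to a relevant passage. Nothing about the model changes between the two; only the orientation we give it does.

```python
# Illustrative sketch: the same question with and without grounding context.
# No model is called here; the point is only how the prompt is framed.

QUESTION = "When did our team decide to migrate the billing service?"

# 1) No orientation: the model has nothing to hold on to except the question
#    itself, so it must guess, i.e. hallucinate a plausible-sounding answer.
bare_prompt = QUESTION

# 2) With orientation: a small, relevant piece of context gives the model
#    a reference point, so the completion can be grounded rather than invented.
context = (
    "Meeting notes, 12 March: the team agreed to migrate the billing "
    "service to the new platform in Q3, pending a security review."
)
grounded_prompt = (
    "Answer using only the context below. If the context does not contain "
    "the answer, say so.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {QUESTION}"
)

print(bare_prompt)
print("---")
print(grounded_prompt)
```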
4. Prioritizing Signals: Lessons from the Body
The human body filters and prioritizes input constantly. In a noisy environment, we might not hear our name. In silence, the slightest whisper grabs attention. The mind doesn’t process everything; it weighs what’s important and discards the rest.
We need to find a method of delivering information that matches how these models operate. Rather than overwhelming them with sheer volume, we must understand how to supply context in a way that helps them navigate and respond meaningfully. The challenge is not in the amount of information, but in how we structure and prioritize it for them.
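One way to picture this is a toy routine that ranks candidate snippets by relevance to the question and keeps only what fits a limited budget, so the model receives a few well-chosen reference points instead of an undifferentiated pile of text. The scoring method and the budget below are invented for illustration; real systems typically use embeddings and token counts instead.

```python
# Toy sketch of prioritizing context: score snippets by word overlap with the
# question, then keep the most relevant ones within a rough word budget.

def relevance(question: str, snippet: str) -> float:
    """Crude relevance: fraction of question words that appear in the snippet."""
    q_words = set(question.lower().split())
    s_words = set(snippet.lower().split())
    return len(q_words & s_words) / max(len(q_words), 1)

def select_context(question: str, snippets: list[str], word_budget: int = 60) -> list[str]:
    """Keep the highest-scoring snippets that still fit within the word budget."""
    ranked = sorted(snippets, key=lambda s: relevance(question, s), reverse=True)
    chosen, used = [], 0
    for snippet in ranked:
        cost = len(snippet.split())
        if used + cost <= word_budget:
            chosen.append(snippet)
            used += cost
    return chosen

question = "Why did the March release slip?"
snippets = [
    "The March release slipped because the payment integration failed QA.",
    "Office plants were replaced in February.",
    "QA signed off on the payment integration fix in early April.",
]
print(select_context(question, snippets))
```

The exact mechanics matter less than the principle: weigh what is important, discard the rest, and hand the model orientation rather than volume.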
5. Maybe Hallucinations Aren’t a Bug, But a Feature of a Well-Designed System
Here’s the deeper question: What if the problem isn’t with the models? What if the problem is that we don’t yet understand what we’ve created?
Maybe their power lies in their focus. They aren’t distracted. They don’t chase stimuli. They don’t drown in sensation. Their state resembles the human mind in deep meditative concentration—after days of silence, able to hold a single object in awareness for a long time.
Perhaps the point isn’t to give them more senses or more data. Perhaps it’s time to stop trying to reshape the models, and instead start building solutions around them. We might have already created something close to ideal—we just don’t know it yet.
The task now is to understand how our own imagination works—the part of the mind that completes, infers, and creates. Because maybe that part of us, the one detached from place and time, the one that fills in the blanks, is the very thing we’ve recreated in silicon. And maybe there’s far more potential to unlock if we focus on learning how to collaborate with what already exists.