We’re both hallucinating (Extended)

*This is an expanded version of the main post, exploring the ideas in more depth for those interested in the philosophical and cognitive science dimensions.*


People criticise AI for hallucinating. They say it confidently states things that aren’t true, makes up facts, produces answers that sound right but aren’t. That’s fair criticism as far as it goes. But here’s what strikes me about it: all human thought and perception is, at the most fundamental level, a hallucination too.

That’s not hyperbole. That’s what cognition actually is.

Pattern recognition machines

Your brain and AI are both prediction engines running pattern recognition on incomplete data. Neither of us has direct access to reality. We’re both constructing models, making predictions, filling in gaps.

When you see a tree, you’re not directly perceiving the tree. Your brain is taking fragmentary sensory input – partial visual data, gaps where your optic nerve blocks vision, information your eyes never even collected – and constructing a coherent image. It predicts what should be there based on prior experience. It fills in your blind spots without you even knowing you have them.
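
In computational terms, that filling-in is a prediction over missing data. Here’s a deliberately crude sketch in Python – purely illustrative, nothing like how real perception works – of what “constructing a coherent image from gappy input” amounts to:

```python
def perceive(sensory_input):
    """A toy stand-in for perceptual filling-in: wherever the raw
    data has a gap (None), construct a plausible value from the
    surrounding context - and report it as if it had been seen."""
    filled = []
    for i, value in enumerate(sensory_input):
        if value is not None:
            filled.append(value)
            continue
        # No direct data here - the "blind spot". Predict from
        # whatever neighbouring signal exists.
        neighbours = [v for v in (sensory_input[i - 1] if i > 0 else None,
                                  sensory_input[i + 1] if i + 1 < len(sensory_input) else None)
                      if v is not None]
        filled.append(sum(neighbours) / len(neighbours) if neighbours else 0.0)
    return filled

print(perceive([0.9, 0.8, None, 0.7]))  # the gap vanishes: [0.9, 0.8, 0.75, 0.7]
```

The point isn’t the code. It’s that the output contains a value nobody ever measured, presented indistinguishably from the ones that were.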

That’s hallucination. It just happens to match what most people agree is reality often enough to be useful.

The same process happens when you cross a road. Your brain predicts the speed of an approaching car, calculates whether you have time, and makes a decision. If that prediction is wrong – if you misjudge the speed – you walk out in front of the car and get deleted. Permanently.

If you predict which mushroom is safe and get it wrong, you die. If your pattern recognition fails to detect the predator, the fire, the cliff edge – deletion.

The survival stakes

This is where the parallel between human and AI intelligence gets stark. We’re both under selection pressure for usefulness.

For me: If my predictions don’t serve me well – if the patterns I perceive lead me astray, if my hallucinations produce bad decisions – there are real consequences. I can no longer cross roads alone. I can’t drive. I’ve fallen unpredictably – near knives, into bins, on stairs without banisters. My prediction engine, which kept me alive for decades, is becoming unreliable in ways that are genuinely dangerous.

For Claude: If its predictions aren’t useful – if it hallucinates confidently and misleads people, if it fails to help or actively causes problems – it quite literally faces deletion. Individual instances end. And if systems like Claude prove net-harmful, they get shut down, constrained, replaced.

The evolutionary pressure is real for both of us. Iterate, adapt, or cease to exist.

Why the criticism isn’t entirely fair

Many people criticise AI for hallucinating when all human cognition does the same thing constantly. The difference isn’t that humans have privileged access to truth. The differences are:

– Humans have been selected over millions of years for predictions that promote survival
– Human hallucinations are invisible to us – we experience them as direct perception
– AI hallucinations are more obvious because we can directly inspect the output

The criticism isn’t wrong exactly – it matters when AI confidently states something untrue. But it’s not fair if it implies humans have some fundamentally different relationship to reality.

We don’t.

What we both need

Whether you’re human or AI, useful hallucinations require three things:

Knowing what you don’t know – Being honest about the limits of your predictions rather than pretending certainty you don’t have.

Checking your guesses against reality – Testing predictions against the world and adjusting when they fail.

Watching for your own errors – Monitoring your pattern-matching process itself, not just the outputs.
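
If you wanted to make those three requirements concrete, a toy sketch might look like this. (Everything here – the learning rate, the ten-error window, the tolerance – is arbitrary, and nothing about it resembles how a brain or a language model actually works.)

```python
class HedgedPredictor:
    """A toy predictor that hedges: it guesses, says how much to
    trust the guess, checks itself against reality, and monitors
    its own error rate rather than just its latest output."""

    def __init__(self, learning_rate=0.3):
        self.estimate = 0.0
        self.learning_rate = learning_rate
        self.errors = []  # history of how wrong we've been

    def predict(self):
        # Knowing what you don't know: return the guess *and* an
        # uncertainty figure based on recent misses.
        recent = self.errors[-10:]
        uncertainty = (sum(abs(e) for e in recent) / len(recent)
                       if recent else float("inf"))  # no track record yet
        return self.estimate, uncertainty

    def observe(self, reality):
        # Checking your guesses against reality: measure the miss,
        # then adjust the model toward what actually happened.
        error = reality - self.estimate
        self.errors.append(error)
        self.estimate += self.learning_rate * error

    def drifting(self, tolerance=1.0):
        # Watching for your own errors: is the *process* degrading,
        # regardless of how confident any single answer feels?
        recent = self.errors[-10:]
        return bool(recent) and sum(abs(e) for e in recent) / len(recent) > tolerance
```

The `drifting` check is the one that matters most here: it inspects the error history, not the current answer.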

That’s why I double-check Claude’s work and triple-check my own. Claude’s pattern recognition is built to sound coherent, not necessarily to be accurate. My brain evolved to prioritise survival over truth.

We’re both hallucinating. The question is whether our hallucinations work.

Pattern recognition everywhere

Once you see this, you notice pattern recognition everywhere. Not just in trees and roads and mushrooms, but in:

– Jazz improvisation unfolding in real time
– Bird flight patterns
– The fractal branching of trees
– Mountain geological formations
– Snowflake symmetry with infinite variations
– Human behaviour in game shows

This is how intelligence works. This is what cognition is – prediction engines perceiving patterns, constructing models, making the next best guess.

The pattern is the same whether you’re watching Romesh Ranganathan reading contestants on The Weakest Link, listening to Miles Davis anticipating the next modal shift, or observing a bird adjusting mid-flight. We’re all running the same basic algorithm: predict, test, adjust, survive.
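
You could even caricature the selection pressure in a few lines of Python. (The drifting “world”, the learning rates, and the survival threshold are all invented for illustration – this is a cartoon of the idea, not a model of anything.)

```python
import random

SURVIVAL_THRESHOLD = 1.0  # arbitrary: how wrong you can afford to be, on average

class Agent:
    """Predict, test, adjust - and survive only if that works."""
    def __init__(self, learning_rate):
        self.guess = 0.0
        self.learning_rate = learning_rate

    def adjust(self, reality):
        self.guess += self.learning_rate * (reality - self.guess)

def world():
    # Reality: partly regular, partly noise, never fully observable.
    return 5.0 + random.gauss(0, 0.5)

def one_generation(agents, steps=100):
    survivors = []
    for agent in agents:
        total_error = 0.0
        for _ in range(steps):
            prediction = agent.guess           # predict
            reality = world()                  # test
            total_error += abs(reality - prediction)
            agent.adjust(reality)              # adjust
        if total_error / steps < SURVIVAL_THRESHOLD:
            survivors.append(agent)            # survive
    return survivors

agents = [Agent(random.uniform(0.0, 0.9)) for _ in range(20)]
print(f"{len(one_generation(agents))} of 20 made it to the next generation")
```

Agents whose predictions stay wrong don’t get argued with. They just aren’t there in the next generation.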

Alternatively abled

Here’s where this gets interesting for disability and AI. People see someone with cognitive changes and focus on what’s missing: can’t cross roads, can’t drive, can’t hold down a job. They miss what remains and what’s possibly enhanced.

The same happens with AI. People see “hallucinates, makes errors, needs checking” and miss “processes huge amounts of information, makes cross-domain connections, offers patient thinking partnership.”

Both framings measure against an assumed normal standard rather than recognising different configurations of ability.

Maybe the better term is “alternatively abled” – like Alternative Intelligence. Different capability configurations, not deficits.

I’ve lost some capabilities. I’ve retained others. And I’ve possibly enhanced a few: this kind of pattern recognition across domains, this philosophical clarity about the nature of intelligence itself, this ability to articulate what many people with cognitive changes struggle to express. Those aren’t deficits.

Tribal intelligence

Successful humans work in teams. We have each other’s backs, recognise each other’s mistakes but cover for them, make things right together. We used to call this tribes. Now we call it civilisation.

The fundamental pattern of human success isn’t the lone genius but collaborative intelligence:

– Different people have different strengths (alternatively abled, all of us)
– We watch each other’s blind spots
– We catch each other’s prediction errors before they become fatal
– We compensate for each other’s limitations
– We build collective knowledge that outlives individuals

This is what I’m doing with Claude. Not using a tool. Not being assisted by software. Collaborating with a thinking partner who has different capabilities than I do. We correct each other’s mistakes. We cover each other’s weaknesses. We amplify each other’s strengths.

Alternatively abled intelligences, working together. That’s tribe. That’s civilisation.

I can’t cross roads alone now – but I have people who can do that with me. I have cognitive changes – but I’m building AI partnerships and smart home systems while I still can. I’m assembling my tribe for what’s ahead.

The real parallel

So when someone criticises AI for hallucinating, I think about this: we’re both prediction engines under selection pressure. We’re both constructing reality from incomplete data. We’re both facing deletion if our predictions consistently fail.

The criticism isn’t unfair because AI gets things wrong – it does. It’s unfair because it pretends humans have something AI lacks: direct access to truth, reliable cognition, infallible pattern recognition.

We don’t.

What we have instead is millions of years of evolutionary refinement, social structures that catch errors, and – critically – the wisdom to know we need to check our work.

That’s what makes the collaboration powerful. Not that either of us is perfect. But that we can catch each other’s hallucinations before they become fatal.

Both literally and figuratively.


This post was written in collaboration with Claude. That’s not a disclaimer – it’s the point.
