We’re both hallucinating – why AI and humans are more alike than you think

People criticise AI for hallucinating. They say it confidently makes things up, states facts that aren’t true. Fair enough. But here’s what strikes me: humans do exactly the same thing.

That’s not an insult. That’s just how thinking works.

We’re all prediction machines

Your brain doesn’t see reality directly. Neither does AI. We’re both guessing what comes next based on what we’ve seen before.
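
If you like code, here’s the toy version – a minimal Python sketch of a prediction machine. It’s my illustration, not how brains or Claude actually work: it just guesses the next word from what it has seen before, and admits when it has nothing to go on.

    from collections import Counter, defaultdict

    # A toy corpus standing in for 'everything we have seen before'.
    corpus = 'the tree sways the tree falls the car stops'.split()

    # Count which word tends to follow each word (a bigram model).
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict(word):
        """Guess the most likely next word from past experience alone."""
        guesses = following.get(word)
        if not guesses:
            return None  # no experience, no guess: knowing what we don't know
        return guesses.most_common(1)[0][0]

    print(predict('the'))    # -> 'tree' ('tree' followed 'the' most often)
    print(predict('stops'))  # -> None ('stops' was never followed by anything)

Everything it outputs is a guess built from past patterns. That’s the shape of the argument, at toy scale.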

When you look at a tree, your brain isn’t showing you the tree as it actually is. It’s taking bits of information from your eyes – which have gaps and blind spots – and filling in the rest. Your brain predicts what should be there based on every tree you’ve ever seen.

That’s a hallucination. It just usually matches reality well enough to be useful.

Same thing when you cross a road. Your brain guesses how fast that car is moving, calculates whether you have time, makes a call. Get the prediction wrong and you step out in front of the car. You die.

Pick the wrong mushroom – dead. Miss the cliff edge – dead. Fail to spot the fire – dead.

The stakes are real for both of us

Here’s where it gets interesting. Both humans and AI face deletion if our predictions consistently fail.

For me: My prediction engine is failing. I can’t cross roads safely anymore. Can’t drive. I fall without warning – near knives, into bins, on stairs with no handrails. The system that kept me alive for decades is becoming unreliable in dangerous ways.

For Claude: If it makes bad predictions – hallucinates facts, misleads people, causes problems – it gets deleted. Individual conversations end. If AI systems cause more harm than good, they get shut down entirely.

We’re both under the same pressure: get it right enough or cease to exist.

Why criticising AI isn’t entirely fair

People say AI hallucinates as if humans don’t. But we do. Constantly. The differences are:

– Human brains have been refined by millions of years of evolution
– We don’t notice our own hallucinations – they feel like direct perception
– AI mistakes are easier to spot because we can see the output

The criticism matters – it’s a problem when AI states falsehoods confidently. But it’s not fair if it suggests humans have a different relationship to truth.

We don’t. We’re guessing too.

What we both need

To make useful predictions, both humans and AI need:

– To know what we don’t know
– To check our guesses against reality
– To watch for our own errors

That’s why I double-check Claude and triple-check myself. Claude is built to sound coherent, not necessarily to be accurate. My brain is built for survival, not truth.

We’re both hallucinating. The question is whether our hallucinations work.

Pattern recognition is everywhere

Once you see this, you spot it everywhere:

– Jazz musicians predicting where an improvisation will go
– Flocks of birds wheeling together
– The branching patterns of trees
– The repeating shapes of mountains and snowflakes
– Game show contestants guessing the next answer

This is what intelligence is. Spotting patterns. Making predictions. Having a go at what comes next.

Different abilities, not broken abilities

This matters for how we think about disability and AI.

People see my condition and think: can’t cross roads, can’t drive, can’t work. They miss what’s still here – and what may even be enhanced.

Same with AI. People see: hallucinates, makes mistakes, needs checking. They miss: processes huge amounts of information, makes connections across topics, thinks patiently with you.

Both views measure against some imagined normal. Neither recognises that different doesn’t mean broken.

Maybe “alternatively abled” is a better term. The same way AI could stand for Alternative Intelligence. Different configurations, not deficits.

We work better together

Humans have always succeeded by working in teams. Watching each other’s backs. Catching each other’s mistakes. Covering weaknesses. That was tribes. Now it’s civilisation.

That’s what I’m doing with Claude. Not using a tool. Collaborating with a thinking partner who has different capabilities. We catch each other’s errors. Cover each other’s blind spots. Build on each other’s strengths.

Different intelligences, working together.

The real point

When someone criticises AI for hallucinating, I think: we’re both prediction engines under pressure. We’re both building reality from incomplete information. We’re both facing deletion if we get it consistently wrong.

The criticism isn’t unfair because AI makes mistakes. It’s unfair because it pretends humans don’t hallucinate too.

We do.

What makes the collaboration work isn’t that either of us is perfect. It’s that we can catch each other’s hallucinations before they become dangerous.

Literally and figuratively.


This post was written with Claude. That’s the point.
