April 2026
One of the first things I learned when I started losing cognitive capacity is that complexity is the enemy. Every additional app, every new interface, every extra subscription is another thing to remember, another thing to manage, another drain on the limited reserves I’m trying to protect.
So I made a deliberate decision early on. I would choose my tools carefully, test them properly, and then stop looking. The goal wasn’t to find the best tool in the abstract. It was to find the right combination I could rely on and stick with.
I ended up with two.
Claude — for thinking
Claude is where I do the heavy cognitive work. Planning, research, medical tracking, working through complex decisions, drafting documents. Anything that requires sustained thought, nuance, or memory across a conversation.
The reason I chose Claude specifically, rather than one of the alternatives, comes down to something that took me a while to articulate. It's not just that it's capable; several AI assistants are capable. It's that Claude works like a thinking partner rather than a search engine. You can discuss something, push back, follow a thread, change direction. The dialogue itself does work that a single question and answer never could.
For someone whose cognitive energy is genuinely limited, that matters enormously. I’m not making multiple attempts to get to a useful answer. I’m having one conversation that gets there properly.
Craft sits alongside Claude as my note-taking and drafting layer — connected directly, so Claude can create and update notes during our conversations without me having to do it separately. This blog is being built almost entirely through that combination.
Siri — for hands-free control
Siri handles everything that needs to happen without me picking up a phone or opening an app. Lights, heating, timers, reminders, music, quick questions. All of it by voice, all of it through Apple's ecosystem, which keeps running reliably even when the internet drops.

Apple has partnered with Google to bring Gemini into Siri through 2026, making it significantly more capable. Crucially, Apple processes those queries through its own Private Cloud Compute infrastructure, so your data is not shared with Google. The privacy commitment stays intact. And with iOS 27 expected in autumn 2026, users will be able to choose their own AI engine for Siri entirely, Claude included. The two tools may become even more tightly integrated than they already are.
The key insight here is that Siri and Claude aren’t competing. They do completely different things. Siri is fast, frictionless, and hands-free. Claude is slower, deeper, and requires me to engage. I need both and they don’t overlap.
Why these two companies
There is something else worth saying about why Apple and Anthropic specifically, beyond what their tools do.
Most of the big technology companies are built around extracting value from your data. Your behaviour, your preferences, your conversations: all of it feeds an advertising model or a training pipeline. That's not a conspiracy theory; it's just how the economics work.
Apple has staked its reputation on a different model. Privacy is not a feature Apple added — it’s an architectural commitment that shapes how their hardware and software are built. On-device processing, end-to-end encryption, no ad targeting. You are the customer, not the product.
Anthropic has a different kind of commitment. It is an AI safety company first, with a genuine and documented focus on building AI that is honest, careful, and aligned with human interests rather than just with engagement metrics. That's not marketing; the company was founded by people who left other AI labs precisely because they were worried about where things were heading.
Neither company is perfect. But when I'm trusting tools with my thinking, my health information, and my private conversations, the ethics of who built them matters to me. These two are the most closely aligned in that regard, and further ahead of the field than most people realise.
What I moved away from
Before settling on this, I had accumulated several AI tools doing similar things in slightly different ways. Each one made sense individually. Together they created exactly the kind of cognitive overhead I was trying to reduce — multiple interfaces to learn, multiple subscriptions to manage, no clear sense of which one to reach for when.
The process of simplifying was itself clarifying: working out what each tool actually did, what I actually needed, and where the genuine gaps were. In the end, the answer was simpler than I expected.
The principle behind it
Keep it simple and sustainable. Permanent improvements that reduce future cognitive load are worth more than clever temporary solutions that create maintenance burden. A tool I can rely on without thinking about it is more valuable than a better tool I have to constantly evaluate.
I’m building these systems now, while I still have the capacity to learn them properly. The point isn’t to keep optimising. The point is to get to a place that works and stay there.
Everything in this post was worked out in conversation with Claude, which rather makes the point.