There’s a lazy narrative forming about AI.
That it’s for shortcuts.
For automation.
For replacing thinking.
I’ve probably helped shape that narrative myself.
I’ve openly challenged shallow, copy-paste uses of AI — especially in community and place-based work where trust, judgement and lived experience actually matter.

So there was a slightly ironic moment a few weeks ago.
I was having coffee with some of our community research team, talking through the thinking behind our Principles of Community-Owned Learning, when someone pointed out:
“You critique AI a lot… but you use it all the time.”
Fair point.
Then this week a colleague asked how I actually use it.
Not theoretically.
Not philosophically.
Practically.
So this is an attempt to explain.
Because my experience hasn’t been superficial or lazy at all.
If anything, it’s the opposite.
I use AI to slow my thinking down.
To test it.
To challenge it.
To map it.
To make it better.
Not to outsource it.
Over the last year, AI has quietly become one of the most useful learning tools in my professional life — not because it gives me answers, but because it forces me to explain what I think I know.
And that’s where the learning happens.
AI as a thinking partner, not a content machine
Most people approach AI like a vending machine:
“Give me a plan.”
“Write me a strategy.”
“Summarise this.”
That’s not how I use it.
I use it more like a sparring partner.
I upload:
- our Theory of Change
- strategies
- funding bids
- policies
- governance papers
- insight logs
- meeting notes
- frameworks
Then I ask harder questions:
Where are the gaps?
What assumptions am I making?
What doesn’t line up?
What would Sport England challenge here?
What would a sceptical Board member pick holes in?
What have I missed?
It’s less “write this for me” and more:
“Stress test my thinking.”
That’s a very different relationship.
Mapping the whole system (not just documents)
In Hartlepool, we’re dealing with complex, place-based work:
- Pride in Place
- Place Expansion
- Hartlepool Sport
- HOP
- College partnerships
- VCSE infrastructure
- social value and community wealth building
- multiple funders
- multiple boards
- multiple accountabilities
Nothing exists in isolation.
The risk in systems like this is fragmentation:
everyone working hard… but slightly sideways to each other.
So I started using AI to map everything together.
Not just plans.
But relationships.
I use it to:
- map our Theory of Change against delivery
- align strategy with funding language
- cross-reference insight with investment decisions
- sense-check governance and role clarity
- translate local learning into “funder-safe” language
- turn messy notes into coherent narratives
- compare what we say we do with what we actually do
It’s basically become my systems whiteboard.
Something that can hold complexity without dropping threads.
Because humans forget.
AI, at least within a session, doesn’t.
Learning faster (and more honestly)
The biggest benefit hasn’t been productivity.
It’s been honesty.
AI has no ego.
It doesn’t protect your pet project.
It doesn’t avoid awkward truths.
It doesn’t get defensive.
It doesn’t sugar-coat gaps.
It just says:
“This doesn’t connect.”
“This outcome isn’t measurable.”
“This role is unclear.”
“This is probably over-claiming.”
If you’re serious about improvement, that’s gold.
It’s like having a brutally fair colleague who has read everything, remembers everything, and isn’t worried about your feelings (a role I have often found myself playing in real life).
Uncomfortable sometimes.
But incredibly useful.
Leadership, legitimacy and clarity
For me, this has become a leadership tool.
Not a tech toy.
Leadership in place-based work isn’t about having all the answers.
It’s about:
- clarity of thinking
- coherence of action
- legitimacy in the room
AI helps me tighten all three.
Before I walk into a Board or partner meeting, I’ve usually already:
- tested the argument
- checked the logic
- pressure-tested the risks
- simplified the language
- mapped it back to strategy
So when I speak, it isn’t off-the-cuff.
It’s thought through.
That creates legitimacy.
Not because “AI wrote it”.
But because the thinking has been sharpened.
It’s closer to using a calculator in engineering than to cheating on an exam.
The work is still yours.
You just make fewer avoidable mistakes.
“Ask Dave” – my alter ego
I also use AI in a slightly weirder way.
I gave it an alter ego.
Ask Dave.
Dave is:
- pragmatic
- slightly blunt
- allergic to jargon
- unimpressed by theory for theory’s sake
- obsessed with one question: does this actually work in real life?
When I’m overcomplicating something, I ask:
“What would Dave say?”
And the answer is usually:
“Stop being clever. What problem are you actually solving?”
It sounds trivial, but it’s grounding.
Especially in policy-heavy environments where it’s easy to disappear into language and forget people.
Dave pulls everything back to:
- residents
- volunteers
- delivery
- reality
If a plan wouldn’t make sense to Dave, it probably isn’t ready yet.
The real point
AI hasn’t replaced my judgement.
It’s improved it.
It hasn’t made me faster at producing documents.
It’s made me slower and better at thinking.
And in complex systems work — Pride in Place, neighbourhoods, partnerships, governance — thinking well is the job.
If you treat AI as a shortcut, you’ll get shallow outputs.
If you treat it like a mirror, a critic and a thinking partner, it becomes something else entirely:
A learning accelerator.
Not artificial intelligence.
Augmented judgement.
In short:
I don’t use AI to write for me.
I use it to think more clearly, lead more coherently, and earn legitimacy through better decisions.
And when in doubt…
I just Ask Dave.
