Beyond the Hype: Why Women in CX Must Lead the Next Era of Human-Centred AI
By Clare Muscutt, Founder and CEO of Women in CX
If Berlin was where the conversation began — questioning how CX teams could harness AI without losing the human heart of experience — Miami was where the truth surfaced.
The hype is everywhere: AI promising instant insights, automation promising frictionless journeys, and vendors promising transformation with a single signature.
From keynote stages to boardroom mandates, 2025 has become the year of “Do AI now, figure it out later.”
But as our final UnConference panel in Miami made crystal clear:
The gap between AI’s promise and AI’s practice is widening fast, and CX leaders — especially women — are being left to pick up the consequences.
To explore this tension, I was joined by four extraordinary voices at the sharp edge of data, insights, behaviour and automation:
Jill Donahue — VP Business Analytics, CallMiner
Sheila March — VP Healthcare Team Lead, Walker Information
Jennie Lewis — Senior Manager, Customer Insights, Airship
Valerie Peck — Independent Expert, Elevate Consulting
Together, they peeled back the hype to reveal where AI is truly adding value… and where it’s quietly eroding trust, empathy, and inclusion.
When AI Makes Sense — and When It Absolutely Doesn’t
We opened with the misconceptions — and Jill brought the receipts.
“People think AI can do everything. Or that it will replace people. Or that customers don’t want to talk to bots. These are all wrong.”
She laughed as the room snapped their fingers in recognition. But her final point hit hardest:
“Everyone thinks prompting is some mystical code. It’s not. You can just talk to it.”
Jennie grounded the conversation in real-world examples: the Ulta virtual try-on tool, the evolution of Sephora’s ColorIQ, the Duolingo “Lily” video calls improving language confidence.
All small but powerful examples of AI making people’s lives easier.
But the failures? They were lurking everywhere, too.
“We’re seeing a lot of AI slop,” Jennie added. “Tools generating wrong, weird, or biased results because nobody kept a human in the loop.”
She wasn’t exaggerating: her examples ranged from the infamous Taco Bell drive-thru meltdown to McDonald’s automated order chaos, and the “cancel my service” loops so hostile that they trap people until they rage-quit.
Valerie named the deeper problem:
“Trust is ephemeral. A well-designed AI builds trust. A poorly designed one destroys it instantly.”
And as others pointed out, that destruction doesn’t just damage customer experience; it damages agents, too.
“Autosummarisation saves time,” Sheila noted, “but agents told us it removed their breathing time between calls. And suddenly, burnout increases.”
AI wasn’t the issue; the implementation was: the lack of testing, the lack of governance, and the lack of thinking about the humans — customers and employees — in the system.
Organisations Aren’t Struggling With AI. They’re Struggling With Everything Around It
Sheila broke it down with brutal honesty:
“If your data is bad, your AI will be bad. If your governance is weak, your AI will be weak. If your people don’t understand why you’re doing it, your AI will fail.”
It echoed what we’d heard in our other panels, too:
Journey management fails without governance.
Listening fails without action.
AI fails without intention.
Jennie brought in an insight that made the room fall silent:
“There’s inherent bias in AI because it only knows what you’ve trained it on. If only developers train it… you only get developer bias.”
It was a reminder that inclusion is not a philosophical idea but a design principle: the more diverse the inputs, the more inclusive the outputs.
To support this, we shared the “Aequitas” tool – a free bias-detection engine built by Women in Tech to scan AI outputs for discriminatory patterns, and exactly the kind of practical, accessible solution this industry desperately needs.
Human-Centred AI Isn’t a Feature. It’s a Leadership Choice.
Jill cut through the noise with clarity:
“Keep the human in the loop. Always. And test AI with things you already know first. Build trust in the tool before you trust the tool.”
Valerie then dropped a line that instantly became a new UnConference mantra:
“Do the right thing. Do the thing right.”
It sounds simple, but it isn’t! Not when “the right thing” conflicts with quarterly cost targets, or when “doing it right” demands slowing down implementation, fixing the data, designing the change, and properly training people.
But if trust is the currency of experience (and it is!), then rushing AI without guardrails is the fastest way to bankrupt your brand.
So What’s the Real Risk?
That AI will replace us… Or that it will replace our humanity?
As automation accelerates but trust plummets, personalisation increases while inclusion erodes, and customer journeys get faster but also more frustrating, the panel returned, again and again, to the same truth:
AI is not inherently good or bad; it’s a mirror. And right now, too many organisations don’t like what it’s reflecting.
The real danger isn't the technology — it’s our willingness to hand it the keys without asking the hard questions:
What problem are we solving?
Who benefits?
Who gets harmed?
Who is missing from the room?
What would this feel like if I were the customer?
These are the questions women in CX are uniquely equipped to ask — and urgently need to.
Final Reflection: The Future of AI Will Be Decided by How We Show Up Now
If this UnConference made anything clear, it’s this:
AI is not the protagonist of this story. We are.
Algorithms are not consciously choosing whether to make experiences more empathetic, more inclusive, or more exploitative; people are. And right now, AI in CX sits at a crossroads.
One path is the easy one: optimise for speed, chase efficiency, remove humans and hope nobody notices the erosion of trust.
The other path is harder, but it’s the only one that leads anywhere worth going:
Start with the customer problem
Design for trust, not transaction
Keep humans in the loop
Govern the data before deploying the tool
Treat inclusion as a non-negotiable
Demand evidence, not promises
Test, test, and test again!
This path is a leadership choice, not a technical accident — and this is where women in CX matter more than ever. Across every panel in Miami, we saw the same truth: women ask different, better, and more human questions — questions that prevent harm, create equity, and build trust rather than break it.
AI will transform CX with or without us. So, the first question we need to ask is: who is in the room when those decisions are made?
Women in CX cannot afford to sit this one out. We understand the frontline, the emotion, the nuance, the reality of human need.
So the imperative is this: don’t just “use” AI… shape it, challenge it, humanise it, and most importantly, lead it.
Because in five years, customers won’t remember the model version or the vendor name; they’ll remember whether AI made them feel more valued or more excluded. That outcome is not written in the code — it will be written in our choices.
The future of AI in CX will not be human-centred by default. It will be human-centred because women like us insisted on nothing less.
Ready to Move Beyond? Register Your Interest for the 2026 WiCX UnConference Series.
The movement we sparked in Berlin and Miami in 2025 is expanding globally.
In 2026, Women in CX hopes to host UnConferences across:
🇬🇧 UK
🌍 EMEA
🇺🇸 USA
🌎 LATAM
If you want to be part of the only CX event designed by women, for women — where the agenda is co-created with our community — this is your moment.
✨ Register your interest today to be the first to access dates, locations, early-bird tickets, and community-led speaking opportunities.
