Beyond the Hype: Why 95% of AI Pilots Fail in CX — and What the 5% Do Differently

By Clare Muscutt, Founder and CEO, Women in CX

At a recent WiCX Talk Trends panel, we brought together CX, design, behavioural science, and AI leaders to confront a difficult reality: despite the momentum around AI, much of it is not translating into meaningful customer experience improvement.

According to MIT’s “GenAI Divide” research, around 95% of generative AI pilots deliver no measurable business impact. For many CX leaders, that figure doesn’t feel surprising — it feels familiar.

Across organisations, pilots stall. Proofs of concept linger. Technology launches, but experience doesn’t meaningfully change.

So during our discussion, we asked a simple but urgent question:

Why is so much AI failing to improve customer experience — and what does it take to build the 5% that actually transform it?


The Reality Check: Where AI in CX Starts to Go Wrong

One of the strongest themes from the panel was that most AI initiatives don’t fail because of bad intent. They fail because they begin in the wrong place.

Too often, organisations start with the technology rather than the customer problem.

As Lily McNally, Design Lead at Lloyds Banking Group, explained during the discussion:

“AI isn’t plug-and-play. If the problem isn’t clearly defined from a customer perspective upfront, the experience can end up being more annoying than helpful.”

That misalignment creates a familiar pattern. AI is implemented quickly, expectations are high, and success is measured through adoption, speed, or cost reduction — rather than whether the experience actually improves.

The result is technology that looks promising on paper but fails to land in practice.


AI Is Bigger Than Chatbots — and Always Has Been

Another misconception surfaced early in the conversation: when organisations say “AI in CX,” they often still mean chatbots.

But the practitioners on the panel were clear that the real opportunity lies elsewhere.

Elena Zhizhimontova, VP of Applied AI at ujet.cx, described the shift:

“When people say AI in customer experience, they think chatbots. But there are so many other AI agents that are just as powerful — sometimes more. The best interaction is the one that never needs to happen because the issue was already fixed.”

Preventative insight. Workflow support. Identifying product issues through customer signals. Removing friction before a customer ever needs to reach out.

In these scenarios, AI enhances experience by reshaping systems — not just automating conversations.

Why So Many AI Pilots Fail to Scale

As the discussion unfolded, several root causes emerged again and again.

1. Starting with capability, not need
Projects often begin with what the technology can do rather than what customers and employees actually need.

2. Measuring hype, not experience
Adoption and speed can look impressive, but they don’t necessarily correlate with improved outcomes.

Sandrea Morgan, Head of Customer Support at Adanola, shared how she approached this differently:

“Some parts of the business care about hype metrics. But my focus was CSAT and employee experience. I benchmarked both at the beginning — because that’s how you know if it’s actually working.”

3. Underestimating the role of design and integration
AI is not plug-and-play. It requires thoughtful integration, guardrails, and human oversight.

4. Ignoring emotional context
Customer journeys — particularly in financial, health, or support scenarios — are often deeply personal. Cold, transactional automation can erode trust quickly.

5. Treating AI as a product, not a change programme
Implementation isn’t just technical. It’s cultural. Employees need to shape it, test it, and trust it.

Behavioural science and emotional intelligence expert Sandra Thompson framed it through a human lens:

“We’re wired for survival and connection. When decisions are rushed, we don’t always think clearly about the human. And then we wonder why this shiny thing doesn’t work out as we thought it would.”


What the 5% Do Differently

While failure dominates the headlines, the conversation also revealed what success looks like in practice.

Organisations that scale AI effectively tend to share a different mindset.

They define use cases before choosing technology

Rather than chasing what’s new, they start with the highest-impact journeys.

Sandrea described her approach:

“I defined the use cases first — for customers and for employees. It’s easy to get distracted by what’s shiny, but if you centre decisions on what’s right for the customer and the team, you can’t get it wrong.”

They benchmark before they build

Baseline data clarifies whether AI is improving experience — or simply changing workflows.

They involve frontline teams early

Trust grows when employees help shape the solution.

“It was our advisors who tested the bot,” Sandrea explained. “They shaped the tone of voice and accuracy. They’re the ones handling the contacts — so they know what good looks like.”

They prioritise outcomes over novelty

Sophisticated AI doesn’t guarantee a meaningful experience.

Elena emphasised the importance of focus:

“Prioritise outcomes over novelty. Start with the biggest customer problems, fix those, and improve from there.”

The Human Factor: Culture, Trust, and Inclusion

As the panel moved into implementation realities, one message became clear: culture determines whether AI succeeds.

When employees feel technology is being imposed on them, resistance grows. When they help shape it, adoption follows.

Sandrea described bringing her team into the process:

“We treated it like bringing in a new teammate. You wouldn’t just give someone a process document and expect them to be perfect — so why would you expect that from technology?”

Trust also hinges on transparency.

Lily emphasised the importance of clear expectations:

“If AI can only support a specific use case, be upfront about it. Set expectations early and design for trust — not just functionality.”

Inclusion must be intentional, too.

Elena highlighted the risk of biased workflows:

“We need to test across accents, accessibility needs, and different contexts. Otherwise, some customers move smoothly through the experience while others get stuck.”

Responsible AI is not neutral. It is designed — or it isn’t.


Leadership: From Experimentation to Responsibility

As AI becomes embedded in CX, leadership expectations are shifting.

This is no longer about running pilots. It’s about building organisational capability — aligning data, design, governance, operations, and culture.

It also demands a different leadership mindset.

One that balances efficiency with empathy.
One that sees technology decisions as ethical decisions.
One that recognises experience is shaped by systems, but defined by people.

Sandra reflected on the role women play in this shift:

“Women often connect things differently — the human, the organisational, the emotional. If we led more in this space, there would be less failure.”

Beyond the Hype: What CX Leaders Must Do Next

As the conversation closed, the advice from the panel was strikingly consistent.

  • Define the problem before the tool.

  • Design with customers and employees — not just for them.

  • Measure what matters, not just what’s easy to track.

  • Treat implementation as change, not deployment.

  • And prioritise experiences that build trust, not just efficiency.

Lily captured the essence of the shift:

“AI should reduce the noise and the mental load so we can focus on the thinking and feeling — the human things we actually want to do more of.”

The next generation of customer experience will undoubtedly be shaped by AI.

But its success will not be determined by algorithms.

It will be determined by leadership. By whether organisations choose novelty or outcomes.
Speed or responsibility. Automation alone — or human-centred transformation.

Because AI can make experiences faster and smarter.

But only if it makes them better for people, too.


Thank You to Our Partner — and Join the Conversation

This conversation was hosted by Women in CX in collaboration with ujet.cx, whose support enables us to bring practitioners, technologists, and leaders together to challenge the status quo and share what’s really happening inside organisations.

Throughout the year, we host free-to-attend webinars and discussions for women and allies working in customer experience, insight, design, service, and technology — creating space for honest dialogue, shared learning, and practical progress.

To stay up to date with the latest thinking, community news, and upcoming events, join the Women in CX community.

👉 Join the free Women in CX community
