Will AI mean the end of call centres? A proof-led answer from EBI

Matthew Doel
Founder, EBI

TL;DR 

  • AI will not replace your call centre team, but it will change the work they do. 
  • With the right blend of flows, generative AI and human handover, even complex journeys such as mortgage pre-application, care pre-assessments, debt support and insurance claims can already run safely in production today. 
  • Our customers are averaging over 40% automation on common enquiries, with 80% and above in some property management use cases. 
  • The foundations that matter are structured flows, clear knowledge sources, human in the loop controls and a right to talk to a person. 
  • You can launch an AI assistant in about ten minutes using your website content, then deepen it with flows, integrations and live chat as you prove value. 

The question keeps coming up in boardrooms and planning meetings: will artificial intelligence (AI) replace human customer service, or will the two work side by side?

Behind this question is a real pressure. Contact volumes are rising, expectations are higher than ever, and call centre costs are one of the largest lines in the service budget. Leaders are rightly asking how far AI can go without damaging trust, brand or compliance.

Our view, grounded in over a decade of delivering AI solutions, is simple: AI will not replace your team, but it will change what they do. With the right implementation, including structured flows, guardrails and a clear path to a person, journeys such as mortgage pre-application, debt support, refunds, know your customer (KYC) checks and claims are already running in production today.

Analysts also point to rapid progress. Gartner forecasts that by 2029 agentic AI (systems that can plan and act across steps with limited supervision) will autonomously resolve around 80% of common service issues. In practice, EBI customers are already averaging over 40% automation today and reaching 80% and above in property management use cases. We therefore see 2029 as a conservative horizon when implementation quality is high. Many service leaders plan to explore or pilot customer facing conversational AI in 2025-26, which matches what we see on the ground. Treat these as forecasts, not certainties, but note the direction of travel.

You do not have to wait for a perfect future state. You can start now, in a controlled way.

Hybrid by design: Flows + NLP + LLM, not free roaming agents

The strongest results do not come from a single free roaming chatbot. They come from combining structured flows and generative AI within a clear operating model.

In AI Studio – our platform for building, hosting and managing AI assistants – a flow is an explicit, testable path for a journey such as refunds, KYC checks, amendments or claims. Natural language processing (NLP) triages the user's request by detecting intent and, when matched, runs the appropriate flow.

If an intent has not been trained yet, or a flow is not available, the assistant can, if required, fall back to a large language model (LLM) grounded with retrieval augmented generation (RAG). Flows can include LLM nodes and LLMs can call flows. This gives you levers to balance administration overhead, natural exchanges and control.

Flows ensure the right steps happen in the right order, variables are captured cleanly, rules are enforced and live chat is offered when confidence dips. This hybrid pattern avoids the “chatty but unreliable” trap and keeps brand, compliance and outcomes under your control.

A quick example. A mortgage pre-application journey can gather consent, perform identity checks, collect documents, run a basic affordability policy and then hand the conversation to a regulated adviser when appropriate. The assistant does the heavy lifting. The person handles advice. You get lower effort for customers, faster cycle time for the business and clear auditability for risk.
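As a minimal sketch of this triage pattern, the logic might look like the following. All names here are hypothetical illustrations, not AI Studio's actual API: intent detection is reduced to keyword matching and the RAG fallback is a stub.

```python
# Hypothetical sketch of the hybrid routing pattern: NLP triage first,
# structured flows when an intent matches, grounded LLM fallback otherwise.
# None of these names are AI Studio's real API.

def detect_intent(message):
    """Very rough stand-in for NLP intent detection."""
    intents = {
        "refund": ["refund", "money back"],
        "mortgage_preapp": ["mortgage", "pre-application"],
    }
    lowered = message.lower()
    for intent, keywords in intents.items():
        if any(k in lowered for k in keywords):
            return intent
    return None

# Each flow is an explicit, testable journey rather than free-form generation.
FLOWS = {
    "refund": lambda msg: "Running the refund flow: verify order, check policy, issue refund.",
    "mortgage_preapp": lambda msg: "Running mortgage pre-application: consent, ID checks, documents, adviser booking.",
}

def answer_with_rag(message):
    """Stand-in for an LLM call grounded with retrieval augmented generation."""
    return f"Grounded answer drawn from approved knowledge sources for: {message!r}"

def handle(message):
    intent = detect_intent(message)
    if intent in FLOWS:
        return FLOWS[intent](message)   # structured, auditable path
    return answer_with_rag(message)     # fallback when no flow is available

print(handle("I want a refund please"))
print(handle("What are your opening hours?"))
```

The point of the sketch is the ordering: the deterministic path is tried first, and the generative model only answers when no explicit journey applies.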

Key takeaway: you do not have to ask an LLM to make up journeys on the fly. Model the journeys as flows, and let AI handle the natural language around them.

Human-in-the-loop by default, with named levers

“Human in the loop” (HITL) should not be a slogan. It should be configured, measured and easy to explain to your risk, legal and operations teams.

On our platform, service teams can:

  • Set confidence thresholds per journey.
  • Detect sensitive topics such as bereavement or debt.
  • Redact personally identifiable information (PII) before model calls.
  • Enforce refusals for regulated advice.
  • Enable automatic handover to live chat or a phone agent.
  • Place low confidence answers in a review queue.

All of these controls are visible and testable, with real-time metrics that show what the assistant is doing, where confidence drops and when people are stepping in.
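The levers above can be sketched as plain decision logic. Everything here is an illustrative assumption, not AI Studio's real implementation: the sensitive-topic list, the threshold value and the crude email-only redaction are placeholders for richer production controls.

```python
# Hypothetical sketch of human-in-the-loop controls: PII redaction before
# model calls, sensitive-topic detection, and confidence-based routing.
import re
from dataclasses import dataclass

SENSITIVE_TOPICS = ("bereavement", "debt")   # illustrative, not exhaustive
CONFIDENCE_THRESHOLD = 0.75                  # illustrative per-journey threshold

@dataclass
class Draft:
    text: str
    confidence: float

def redact_pii(text):
    """Crude email redaction; real PII redaction covers far more than this."""
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[REDACTED]", text)

def route(message, draft):
    """Decide whether the draft answer ships, queues for review, or escalates."""
    if any(topic in message.lower() for topic in SENSITIVE_TOPICS):
        return "handover_to_human"
    if draft.confidence < CONFIDENCE_THRESHOLD:
        return "review_queue"
    return "send_to_customer"

print(redact_pii("Contact me at jane@example.com about my claim"))
print(route("I need help after a bereavement", Draft("...", 0.95)))
print(route("Change my address", Draft("Sure, I can help.", 0.60)))
```

Because each rule is an explicit, named check, risk and legal teams can review the routing table rather than trusting an opaque model.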

Accessibility and privacy are first class concerns too. AI Studio aligns to WCAG 2.1 AA for accessibility, GDPR (General Data Protection Regulation) for privacy, and offers United Kingdom or European Union data residency options.

This is how you ship AI that is useful and safe.

Regulators are moving in the same direction. Some analysts expect the European Union to mandate a customer’s right to talk to a human by 2028. If you design purposeful escalation now, you will be ready if and when this becomes law.

Key takeaway: make it very clear when and how humans step in. This reassures customers, agents and regulators.

Addressing three lines from the BBC article, constructively

A recent BBC article raised three concerns that come up in almost every leadership discussion. It is worth addressing them directly.

1. “Let us see what it looks like in five years’ time – whether an AI can do a mortgage application, or talk about a debt problem. Let us see whether the AI has got empathetic enough.”

You do not need five years.

With flows, policy grounded content, guardrails and HITL, these journeys can run safely today.

  • For mortgage pre-application, assistants can collect consent and evidence, verify identity, run basic affordability checks and book an appointment with a regulated adviser at the right moment.
  • For debt support, assistants can recognise sensitive language, slow the tone, acknowledge emotion, present options from approved guidance and offer a human immediately.

The difference is not science fiction. It is careful implementation rather than a free roaming agent.

2. “This is a very expensive technology.”

It does not have to be.

You can launch on AI Studio from £49 per month on Starter, with Standard at £99 and Pro at £149 for teams that need more headroom. Enterprise plans start from £2,000 per month when you need enterprise features like SSO and private instances.

Many small and medium sized enterprises that opt for a managed service alongside AI Studio pay well under £1,000 per month, depending on scope. For many organisations that is significantly lower than the cost of one additional full time agent, and the assistant is available every hour of every day.

3. “When Salesforce first put the platform through its paces it learned lessons about how to make the AI seem more humanlike.”

That experience is common. Many teams start by experimenting with a general assistant, then learn through pilots that they need stronger guardrails, tone control and a clearer escalation path.

We have shipped production assistants since 2014, so empathy prompts, tone control, refusal rules and safe escalation are built into our approach from the outset.

The goal is not to seem human. The goal is to resolve tasks in a human sounding, policy correct way, and to hand off to a person when the situation genuinely needs one.

Key takeaway: design for empathy, safety and escalation from day one, instead of hoping they will emerge from experimentation.

Proof in production, not promises

The most important question for leaders is not “What does the roadmap say?”. It is “Where is this working in production, and what results are customers seeing?”.

A few examples of outcomes from EBI built and managed assistants:

  • Legal & General Insurance reports that 83% of customers using its assistant prefer it to phone or email.
  • Mytime Active reached 97% routine resolution within six months, easing pressure on its contact teams.
  • Stena Line regularly resolves up to 99% of routine travel questions, with 99.88% success on the most popular queries, through its assistant Stina.
  • Cooper for Coop Sweden answered around 91% of common questions for more than three million members.

Our platform AI Studio supports 130+ languages, and to date over 20,000 organisations worldwide have signed up to build on it.

These are the kinds of proof points leaders can scrutinise. The exact numbers will evolve over time, but the pattern is clear: well designed assistants can handle the bulk of routine contacts, freeing human teams to focus on judgement calls and complex support.

Key takeaway: do not just compare vendors on features. Ask for production metrics that show automation, customer satisfaction and containment.

What matters most in implementation

Knowledge management matters more, not less, in the era of LLMs.

If your policies and “how to” steps are scattered or out of date, your assistant will echo that confusion. This is why we:

  • Ground answers in approved sources using RAG.
  • Model journeys as flows rather than loose conversations.
  • Escalate with purpose when confidence drops or when someone explicitly asks for a person.

It is also why we measure weekly, not quarterly.

In AI Studio, real-time dashboards and metrics surface a short list of key performance indicators (KPIs) that drive service cost and experience, for example:

  • Automation rate: the percentage of enquiries resolved without human intervention.
  • Transfer rate: how often the assistant passes a conversation to a person.
  • Average response time: how quickly customers receive a helpful answer or handover.
  • Confidence rate: how confident the assistant is in its answers, expressed as a percentage.

These indicators show whether AI is reducing recontact and freeing your team to handle exceptions.

Key takeaway: you do not need twenty metrics. You need a small set that connect clearly to cost, experience and risk.

Cost, speed and governance

One of the most common questions leaders ask is “How quickly can we start, and how much governance will we need?”.

You can be live in about ten minutes using just your website URL, then deepen the assistant with integrations and flows over time. Start small, prove value early and scale the journeys that move the needle.

If you prefer a hands off approach, our team can fully manage training, governance and reporting while you retain control of policy and voice. You stay compliant, you reduce operational noise and you create space for your human team to make the judgement calls only they can make.

For enterprise buyers, the options are there when you need them, including:

  • Data residency controls.
  • Single sign on (SSO) and Security Assertion Markup Language (SAML) based access.
  • Private instances for organisations that need dedicated infrastructure.

That mix of guardrails and speed is why leaders can start now rather than waiting for the perfect future state.

Practical next steps for leaders planning 2026

If you are responsible for a call centre or service operation, here is a simple way to turn this into a 2026 plan.

  1. List the ten tasks that drive the most volume and cost. 
Think about calls, emails and chat. Include both customer facing and internal support where volumes are high.
  2. Scope each as a flow with required inputs, validation and outcomes. 
For example, “change of address” might require customer identity, current address, new address and effective date.
  3. Connect the systems customers care about first. 
Integrations let the assistant perform actions, not just answer questions. Prioritise them based on request volumes.
  4. Set the confidence and compliance rules that trigger a handover to a person. 
Include sensitive topics such as vulnerability, complaints and regulated advice.
  5. Curate your knowledge sources. 
LLMs repeat what they are given. Make sure policies, guides and help articles are correct, clear and version controlled.
  6. Put live chat in the loop from day one, not as a last resort. 
This reassures customers and lets agents see where the assistant helps or struggles.
  7. Track your KPIs weekly and publish the results to your teams. 
Celebrate successes, highlight where human expertise is most valuable and involve agents in designing new flows.
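Step 2 above, scoping a journey as a flow with inputs, validation and an outcome, can be sketched as a simple data structure. The "change of address" definition below is purely illustrative; the field names and toy validators are assumptions, not AI Studio's flow format.

```python
# Hypothetical sketch of scoping a journey as a flow (step 2 above):
# required inputs, per-input validation, and a defined outcome.
from dataclasses import dataclass

@dataclass
class FlowInput:
    name: str
    validate: callable   # returns True when the captured value is acceptable

@dataclass
class Flow:
    name: str
    inputs: list
    outcome: str

change_of_address = Flow(
    name="change_of_address",
    inputs=[
        FlowInput("customer_identity", lambda v: bool(v)),
        FlowInput("current_address", lambda v: len(v) > 5),
        FlowInput("new_address", lambda v: len(v) > 5),
        FlowInput("effective_date", lambda v: len(v) == 10),  # e.g. "2026-01-15"
    ],
    outcome="address updated in CRM",
)

def missing_inputs(flow, captured):
    """Return the inputs still missing or invalid, i.e. what to ask for next."""
    missing = []
    for inp in flow.inputs:
        value = captured.get(inp.name)
        if value is None or not inp.validate(value):
            missing.append(inp.name)
    return missing

print(missing_inputs(change_of_address, {
    "customer_identity": "C123",
    "new_address": "1 High Street, Leeds",
}))
```

Modelling the journey this way makes the assistant's next question deterministic: it simply asks for whatever the flow still lacks, and the completed flow is what triggers the outcome.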

If regulation catches up, you will already be operating with a right to talk to a human in mind.

Where to go next

Ready to try this in your own contact centre?

  • Launch an AI assistant today using your website content, then add flows, integrations and live chat as you go.
  • If you prefer a guided path, book a discovery call with our team.
  • For deeper reading, explore our Features, Integrations, Savings calculator, Pricing and Case studies to see what good looks like in production.

AI is not the end of call centres. Done well, it is the end of long waits, repetitive questions and unnecessary effort – for customers and for your teams.

FAQs

Will AI reduce call centre headcount?

Over time, AI will reduce the need for people to handle routine contacts. In most organisations we work with, the first phase is about improving customer experience and optimising their team, not large scale redundancies. Human teams shift towards exceptions, complex support and higher value conversations.

What is the safest first journey to automate?

Start with high volume, low risk enquiries such as FAQs, live opening hours, simple status checks or booking changes. Use these as a proving ground for flows, guardrails and escalation.

How do we keep our AI assistant compliant?

Combine policy grounded content, refusal rules for regulated topics, confidence thresholds, human in the loop controls and regular reviews of real conversations. Involve risk and legal teams early so they help design the guardrails.

How fast can we go live with AI Studio?

You can launch an assistant based on your website content in around ten minutes. The deeper work is in designing flows, connecting systems and aligning guardrails with your policies, which you can then iterate over weeks and months.