
How to scale AI pilots correctly

We look at how to take AI projects from pilot to production and ensure the results scale too.

In our experience, only a quarter of AI proof-of-concept projects make it all the way to a full-scale production environment.

There are some formidable obstacles to taking a successful proof of concept all the way to business as usual, including technology integration, making the business case, securing funding, winning team and leadership buy-in, and a host of others.

Let’s take a look at the process step by step.

The route from PoC to production

From proof of concept to production is usually a three-step process, and that’s assuming every stage goes smoothly.

The proof of concept is very limited in scope, with the aim of proving one or two specific things about a technology. For example: Can the technology do what you need? Will customers use it? Can it integrate with this particular system? Could it deliver this business benefit or ROI?

It may take a series of small-scale PoC projects to answer your questions satisfactorily, particularly if the questions depend on one another.

PoCs are generally conducted in siloed IT environments, meaning they are not properly integrated with business-as-usual systems, and they run with a very limited subset of real users and customers.

At the end of the PoC phase the second step is a full pilot project. This involves taking the learning from one or several PoC projects and building a limited-scale version of the eventual production solution.

That means it does operate in the real world, with real users and customers, and integrates with the production IT environment. However, it is still limited to small samples of users and customers, and it doesn’t replace any live systems in the current production environment. Instead it runs alongside existing systems while it is tested and refined in the real world.

The goals of a pilot project are to demonstrate that the solution can integrate with the production environment, provide the required functionality, replace whatever systems it is supposed to supersede, and deliver the target return on investment.

At the conclusion of the pilot project, which may run in several iterations, the new solution is either abandoned, sent back to the drawing board for further testing, or given the green light for the full production environment.

Production scales the whole solution to an industrial level and it eventually becomes part of business as usual.

Two types of roll-out projects

Scaling and rolling out pilot projects requires a different mindset. During the PoC phase you can, and indeed should, try many different approaches to see how results are affected. Less so in the pilot phase, and when it comes to production you should be totally focussed on how to systematically get things done at scale.

That said, there are two types of roll-out programmes requiring different approaches:

1. Incremental improvement: In these programmes you continue essentially the same activities as in the pilot programme (but without the experimentation): incrementally improving performance and widening the scope of the project’s activities.

For example, if you were rolling out a chatbot project, you would continue training it on customers’ questions and responses, releasing a new iteration of the bot every few weeks. The objectives are to get better at answering queries on topics where the bot is under-performing, and to continue training it on new topics as customers raise them. You would also keep expanding the chatbot’s audience by opening it up to new customer segments on a regular basis.

2. Applying the learning: The second type of roll-out is where you apply the lessons from one or more pilot projects to completely new projects that are larger in scale and much closer to the harsh real world of business-as-usual.

For example, let’s say you have proved that a certain percentage of customers will interact with a chatbot for a few simple transaction types. You have demonstrated what the time and cost savings could be for your business and the customer, at least on a small scale, and critically you also know the limitations – what customers and agents will and won’t do; what’s technically difficult; and what else is possible but was beyond the scope or budget of the pilot.

This learning needs to be applied to other areas of the business. You might be in a position to spend a little more money to integrate the bot slightly deeper with your business systems, opening up new and perhaps more complex functions. The point is that you are not just continuing to do what you did in the pilot; rather you are applying what you learned to new projects and functions.

Attention, money, and resources

What do all pilot projects need when they grow up? Exactly the same as most people, they need love, money, and resources.

When it comes to scaling technology projects, the most important people are not the members of the transformation or innovation team that ran the pilot; their love and attention are taken as read. No, the people who will make or break the project are the staff in the business department that will eventually be responsible for deploying the technology, and the leaders of that department, who hold the budget that funds the project.

While any of these people might personally have a tangential interest in the technology itself, for its own sake, their main drivers are to deliver results for their team, department, customers, and boss. The type of self-interested questions they will be asking are: Will this technology save me and my team time to focus on more important, interesting, or challenging work? Will the money that comes out of my budget for this technology generate more revenue or profit, or allow me to save time and money elsewhere?

Ideally the project is tied into the mission and existing KPIs of the department whose budget is paying for the whole thing. This means its managers and staff are naturally incentivised to help. Whoever is paying ought to be someone with an interest in solving the problem the project is setting out to solve. If not, the budget and attention will both dry up. Projects with the wrong people in charge, with inappropriate KPIs, that live in the wrong business unit, and which try to solve the wrong challenges will inevitably fail.

So it might be surprising that we suggest not handing the technology or project over to the business-as-usual teams just yet. AI is still technically and culturally a big challenge for most people, and responsibility for it should remain for a while yet with the people who understand those challenges. Of course the day-to-day team should be involved in the production project – gradually more so as it rolls out – and eventually the transformation team should hand-hold the operations team through a transition period, but not until the technology is fully integrated into the business.

A thirst for data

For a machine learning project, such as a chatbot or voice bot, the quantity and quality of data available for ongoing training is one of the most critical success factors. For a chatbot your ideal data is real, live chat sessions and for a voice bot it is phone conversations.

Your PoC phase might have started with little to no data – indeed you might have hand-crafted data to get going – but you should have gathered transcripts or recordings as you go to continuously train the system. During pilot projects this process – of data collection and training – should have accelerated as the project scaled and new use cases were added.

Now that the chatbot is entering production the need for constant training is not going to stop. In fact, as use cases are added to a chatbot in production it becomes ever more thirsty for data.

Our take on data acquisition for training machine learning systems is not to over-complicate. While most companies these days have, if anything, an over-abundance of data on their customers and transactions, a lot of it will be ‘locked’ inside siloed systems, in proprietary or incompatible formats, and will need considerable data-preparation expertise and time before it can be used to train a bot on use cases.

For this reason we prefer to use real-world data where available. If it is not, then our preference is to fake it either by having agents play the role of the bot in interactions with real customers, or by writing up scenarios – intents and expressions – for likely use cases and feeding these to the bot. In our experience by the time you have analysed your company data and got it into a relevant format for use, you could have already launched and brought a bot up to speed with real customers using the ‘fake’ data method.
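To make the ‘fake’ data method concrete, here is a minimal sketch of hand-written scenarios in an intents-and-expressions format. The intent names, example phrases, and the word-overlap matcher are purely illustrative stand-ins for whatever schema and NLU engine your bot platform actually uses:

```python
# Hand-crafted training data: each intent maps to a few example
# expressions, the kind an agent or analyst might write up for
# likely use cases before real customer transcripts exist.
intents = {
    "check_balance": [
        "what is my account balance",
        "how much money do I have",
        "show me my balance",
    ],
    "reset_password": [
        "I forgot my password",
        "how do I reset my password",
        "I cannot log in to my account",
    ],
}

def classify(utterance: str) -> str:
    """Pick the intent whose examples share the most words with the
    utterance -- a toy stand-in for a real NLU model."""
    words = set(utterance.lower().split())

    def best_overlap(examples):
        return max(len(words & set(e.lower().split())) for e in examples)

    return max(intents, key=lambda name: best_overlap(intents[name]))

print(classify("how can I reset my password"))  # → reset_password
```

The same structure extends naturally as the bot scales: adding a new use case means adding a new intent with a handful of expressions, which can later be replaced or enriched with real customer transcripts.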

These methods also work very well when it comes to training up production bots to handle new intents and recognise new expressions. With a bot in full production, maximising the return on investment usually means handling more customer interactions and more use cases, enabling agents to focus only on the value creating or difficult cases that require empathy or ingenuity. This will mean constant training and iteration of the bot.

Positioning – internally and externally

For any new thing to be adopted, even embraced, people need either to experience the benefits for themselves or have them demonstrated. While you can demonstrate benefits using empirical data from your pilot projects, people tend to have a stronger reaction if they can feel something for themselves, either by watching a demonstration or having a go.

If the goal of your new technology project is to, say, cut the costs of providing customer service, and/or deliver better customer experiences, how do you demonstrate that it does that to the customer service and CX leaders in your organisation?

The answer is that your pilot projects should have proven the proposition that by automating a certain percentage of customer interactions you can enable customer service agents to spend more time on the interesting and challenging calls. These are likely the interactions that agents relish getting their teeth into anyway, and also the ones that most greatly impact customer satisfaction.

The new technology also needs to be positioned with customers, otherwise they will just stick with the old way of doing things that they are used to. For that reason the marketing team needs to buy in to the project so that they can positively sell the innovation to customers. When it comes to AI and automation, too many companies are waiting for the technology to be perfect, or for their users to accept it culturally, before rolling out.

Forward-looking companies that set their own agendas flip that around by marketing the benefits of a new technology to their staff, leaders, and customers. If you are rolling out a chatbot with no positive marketing, it’s easy for customers to see it simply as the company saving money by taking away live agent support. If you proactively control the message, however, what you’re telling customers is that they can now do in 1 minute what used to take 10. What’s not to like about that?

Coming soon: ‘Why AI Fails’ white paper

  • A whitepaper exposing failed AI projects and roll-outs
  • Focused on contact centre and customer services solutions
  • ‘Laid bare’ real-life examples (including some of our own painful lessons)

Fill in your email below and we’ll let you know when the white paper is available to download …
