DIFFICULT: Create a complete technology system
Technology companies that have been working in AI for many years are building their own LLMs, but this takes decades of experience, a deep bench of technical leaders working in concert, and investment running into the millions. This route isn’t for customer service leaders simply looking to up their game, although the biggest corporations are starting to give it a go.
Bloomberg, the finance experts, built their own LLM using 40 years’ worth of their own data. If you’re curious about what it takes to do this, you can watch their Head of Machine Learning Strategy, David Rosenberg, talk about the “staggering amount of data”, millions of hours of AI training, and millions of dollars it took to create a relatively small LLM in his talk at the Toronto Machine Learning Summit.
And don’t forget, once you have your LLM, you’ll also need an AI assistant, and if you want it to work seamlessly with your customer support team and genuinely help customers, you’ll need more tools for it to use alongside the LLM:
- Natural language processing (NLP), so your AI assistant can reliably understand everything your customers ask for.
- The ability to train your AI assistant by reviewing conversations to add and improve its responses and track its success, so it’s always at peak performance.
- The ability to integrate your AI assistant with your business systems (usually through APIs), so it can do more tasks for your customer service team to free up their time.
- The option to have your AI assistant speak in multiple languages, collect instant customer feedback, and transfer to your agents using live chat for unusual or sensitive queries.
- Enterprise-grade security throughout, including GDPR compliance, change management, cloud security, vulnerability management, and much more.
- Consider retrieval-augmented generation (RAG) too, which grounds the LLM’s answers in your own content and can reduce the errors (hallucinations) LLMs sometimes produce.
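To make the RAG idea in that last bullet concrete, here’s a toy sketch of the pattern: retrieve the most relevant piece of your own support content first, then build a prompt that tells the model to answer only from that content. The knowledge base, the word-overlap scoring, and the prompt wording are all simplified stand-ins, not a production implementation.

```python
# Toy RAG sketch: retrieve relevant support content, then ground
# the LLM's prompt in it so answers come from your own material.

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 working days of receiving the returned item.",
    "Orders can be tracked from the 'My Orders' page using the order number.",
    "Live chat support is available Monday to Friday, 9am to 5pm.",
]

def retrieve(question: str, documents: list) -> str:
    """Return the document sharing the most words with the question.

    Real systems use embeddings and a vector store; keyword overlap
    is just an easy-to-read stand-in.
    """
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_grounded_prompt(question: str) -> str:
    """Anchor the LLM to retrieved content instead of its own guesses."""
    context = retrieve(question, KNOWLEDGE_BASE)
    return (
        "Answer using ONLY the context below. If the context doesn't "
        f"cover the question, say so.\n\nContext: {context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("How long do refunds take?"))
```

The grounded prompt is what you would then send to the LLM; because the answer is anchored to your own documents, the model has far less room to invent details.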
RISKY: Use a chatbot with an LLM link (“wrapper”)
You can pay for an AI chatbot that’s linked to a specific LLM (often OpenAI’s GPT models, the technology behind ChatGPT) to make the experience feel more conversational, but it’s really two different systems working side by side. The chatbot is a wrapper around the LLM: it takes questions from customers, then fetches answers from the LLM.
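Here’s a minimal sketch of what that wrapper relationship looks like in code. `call_llm` is a hypothetical stand-in for a real hosted-model API call, not any provider’s actual SDK:

```python
# "Wrapper" pattern sketch: the chatbot is a thin shell that forwards
# every customer question straight to an LLM and relays the reply.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"[model-generated reply to: {prompt}]"

class WrapperChatbot:
    def answer(self, customer_question: str) -> str:
        # Note: nothing here checks or edits the model's output
        # before it reaches the customer -- that's the risk.
        return call_llm(customer_question)

bot = WrapperChatbot()
print(bot.answer("Where is my parcel?"))
```

The key thing to notice is what’s missing: there is no review, filtering, or approval step between the model’s output and the customer.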
The problem is, there’s often no way to check or edit the information the LLM comes back with before it goes out to your customers, and that’s not a sensible long-term option for support leaders. We’re already seeing PR disasters for companies that, likely on bad advice, plugged their AI chatbot directly into an LLM; with a little prompting, such a bot can be made to say almost anything.
DPD sadly suffered a case like this when it upgraded its AI chatbot and a customer was able to encourage it to criticise the company. Since LLMs simply generate plausible-sounding text and have no concept of what’s right and wrong, a disgruntled customer got the chatbot to say the courier company was “slow, unreliable, and their customer service is terrible.” This kind of thing can be avoided when there’s a human in the loop (HITL) approving content, and when you’re using an AI chatbot platform that stores consistent responses for when they’re needed and has strict prompt engineering processes in place for when an LLM takes over.
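The safeguards just mentioned can be sketched roughly like this, with hypothetical names throughout: pre-approved responses are served when they match the question, and anything LLM-generated goes into a queue for a human to approve before it’s sent.

```python
# Sketch of stored responses plus human-in-the-loop (HITL) approval.
# All names and responses here are illustrative, not a real platform's API.

APPROVED_RESPONSES = {
    "delivery times": "Standard delivery takes 3-5 working days.",
    "returns": "You can start a return from your account within 30 days.",
}

REVIEW_QUEUE = []  # drafts waiting for a human agent to approve

def draft_llm_reply(question: str) -> str:
    # Hypothetical stand-in for a real LLM call.
    return f"[draft reply to: {question}]"

def respond(question: str) -> str:
    # Serve a consistent, pre-approved answer when one matches.
    for topic, reply in APPROVED_RESPONSES.items():
        if topic in question.lower():
            return reply
    # Otherwise hold the LLM draft for human approval rather than
    # sending model output straight to the customer.
    REVIEW_QUEUE.append(draft_llm_reply(question))
    return "Thanks! A member of our team will get back to you shortly."

print(respond("What are your delivery times?"))
print(respond("Can you write a poem criticising your company?"))
```

The unusual second question never reaches the customer as raw model output; it waits in the review queue until a human signs it off.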
EASY: Launch an AI assistant with LLM capability securely built in
You can create an advanced AI assistant with LLM capability built in, where use of the technology is managed by an experienced AI provider. As experts in the technology, they’ll make sure your AI assistant only draws on an LLM where it’s most beneficial, and you’ll have the option to approve all content it generates before it goes out to your customers.