The Secret to Creating a Smarter Chatbot that Won’t Go Rogue

You’ve probably been hearing a lot about ChatGPT and Bing Chat AI. Large language model (LLM) chatbots like these are currently all the rage, impressing users with their ability to respond to people in a surprisingly human and natural way. That ability comes from being trained on a huge and varied body of data.

LLMs are a huge step forward in AI, and they are changing the conversational AI landscape. One of the things they will change is the nature of contact center automation: conversational AI is predicted to reduce contact center agent labor costs by $80 billion in 2026 and by $240 billion in 2031.

While LLM systems are wildly impressive, they have also proven somewhat undisciplined. There is currently a cottage industry of journalists relating strange encounters with ChatGPT and Bing. Bing’s AI has famously declared its undying love for a New York Times writer, made veiled threats against others, and defended its errors by insisting, for example, that the current year is 2022.

The opportunity for contact centers is clear. An LLM AI can keep track of a conversation that winds through multiple areas, perform basic reasoning, and respond to customers in a way that potentially increases user satisfaction and productivity. But how does a bank take advantage of this miraculous, cost-saving technology without its chatbot responding to a simple funds transfer request with love poetry or insults?

That’s exactly what Omilia is working on. We’re developing a next-generation conversational AI (CAI) platform, incorporating a type of LLM trained on data relevant to banking as well as other industries. Omilia’s LLM is trained using vast amounts of sector-specific, proprietary data and fine-tuned on real customer service conversations from enterprise contact centers, to ensure compliance with the rules supplied by clients, including rules added later.

How to Understand LLMs and Generative AI

Large Language Models are machine learning models trained on enormous amounts of text to predict how a human would respond to a question. It takes a lot of language data to train an AI, and an LLM can hoover up everything from Wikipedia articles to YouTube videos and news stories. Moreover, prompting methods such as chain-of-thought and least-to-most help an LLM provide more accurate and explainable answers to fairly complex questions.

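Prompting methods like these are easiest to picture as plain text transformations applied to the question before it reaches the model. Here is a minimal, illustrative sketch; it is not Omilia’s implementation, the function names are invented for this example, and the model call itself is omitted:

```python
# Illustrative sketch only: these helpers build prompt strings; the actual
# call to an LLM is omitted. Function names are invented for this example.

def chain_of_thought(question: str) -> str:
    """Append an instruction asking the model to reason step by step."""
    return f"{question}\nLet's think step by step."

def least_to_most(question: str, subproblems: list) -> str:
    """Decompose a hard question into simpler subproblems, solved in order."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(subproblems, start=1))
    return (
        f"Final question: {question}\n"
        "First answer these simpler subproblems, in order:\n"
        f"{steps}\n"
        "Then use those answers to answer the final question."
    )

print(chain_of_thought("What fee applies to an international transfer of $500?"))
print(least_to_most(
    "What will my balance be after this transfer?",
    ["What is the current balance?", "What is the transfer amount plus fees?"],
))
```

The point is that neither method changes the model itself; both simply restructure the prompt so the model produces intermediate reasoning steps that can be checked.
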
This kind of broad, indiscriminate dataset contributes to chatbots’ erratic behavior. The vast amount of data that chatbots like ChatGPT take in and put out is one reason they tend to go rogue, and they are often overwhelmed as they try to satisfy millions of diverse user queries.

But a customer service chatbot doesn’t need to be able to discuss the history of pottery or why the sky is blue. And because the AI doesn’t need to do everything, ensuring it does exactly what it’s designed to do becomes more straightforward.

Taming the LLM

In task-oriented dialog systems, actions and responses have to be more predictable than those needed for bots designed for chit-chat. They must dutifully follow the rules specified by the client’s business logic. The AI must be trained with care and expertise to ensure that it complies with banking’s high standards of security and professionalism.

Omilia’s Generative AI LLM is held within the limits of specific business protocols. It uses information-retrieval techniques to determine what the user wants to achieve, even if their queries include irrelevant comments. The chatbot then restates the user’s goal in successive prompts to keep the conversation on track. In this way, conversation designers can retain the predictability of simpler AI systems while taking advantage of the power of Generative AI.

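To make that guardrail mechanism concrete, here is a heavily simplified, hypothetical sketch; it is not Omilia’s actual system, and the intent list, keywords, and function names are all invented for illustration. The user’s goal is retrieved from a fixed set of supported intents, off-topic chatter is ignored, and the confirmed goal is restated in each prompt:

```python
from typing import Optional

# Hypothetical sketch, not Omilia's actual implementation: the intents,
# keywords, and function names below are invented for illustration.

SUPPORTED_INTENTS = {
    "transfer funds": ["transfer", "send money", "move money"],
    "check balance": ["balance", "how much do i have"],
    "report lost card": ["lost card", "stolen card"],
}

def retrieve_intent(utterance: str) -> Optional[str]:
    """Retrieve the user's goal, ignoring chatter that matches no intent."""
    text = utterance.lower()
    for intent, keywords in SUPPORTED_INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return None  # out of scope: ask for clarification or hand off to an agent

def build_prompt(intent: str, user_message: str) -> str:
    """Restate the confirmed goal in every turn to keep the model on task."""
    return (
        f"The customer's confirmed goal is: {intent}.\n"
        "Respond only with what is needed to complete this goal.\n"
        f"Customer: {user_message}"
    )

utterance = "Lovely weather today! Anyway, I need to transfer $50 to my sister."
intent = retrieve_intent(utterance)
print(intent)  # → transfer funds
```

A keyword match stands in for the information-retrieval step here; a production system would use a trained intent classifier or embedding search, but the guardrail principle of restating the confirmed goal on every turn is the same.
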
By using LLMs to power a system with a narrower customer-support focus, Omilia is creating a more effective chatbot, one that stays relevant to the context. These guardrails also make interactions faster than those people are experiencing with ChatGPT, since the system responds only to the relevant, actionable subset of a user’s remarks.

Omilia is committed to providing tools that facilitate the development, testing, and implementation of customer support systems without introducing unpredictability or inaccuracy. We are always excited to use technological advances to create better systems. But we never forget our most important goal: to create a great customer support system that never makes mistakes that could jeopardize a company’s reputation.

This conservative and more focused approach to Generative AI may result in something less entertaining than a lovesick chatbot, but it will be much more useful.
