
So, you’ve reached a point of conflict.
You know where things are headed. Your boss, the board, they keep insisting you need to implement AI agents. You know you need to do it. You know you’re going to do it.
But all you keep hearing in your head is that 2005 chatbot voice, screwing up people’s names, misunderstanding questions, and sharing sensitive information with customers.
You’re worried about putting your best leads and customers in front of that agent.
We get it. It’s unfamiliar territory. And that’s scary. We’re not here to downplay your fears. We’re here to chat about them.
Because, ultimately, if you’re holding back on AI because you’re afraid it might say the wrong thing, you’re missing the point. Here, we’re going to help you overcome that fear, so you can start tapping into the value of AI agents.
Many contact center leaders worry that AI agents will either say the wrong thing to customers or go rogue in a way they can't control.
And guess what… They will. It IS going to happen.
Your AI Agent is going to say the wrong thing at times. It’s just part of the process.
Doesn’t matter what AI company you’re working with. Doesn’t matter if you’re building them in-house or completely outsourcing. Doesn’t matter what the agent is for. It’s always going to happen when you’re first implementing an agent (regardless of what some AI companies tell you).
Let’s really air it out.
It's the equivalent of blurting out your ex's name on a first date. Embarrassing. Sloppy. Probably stopping a new relationship in its tracks before it could ever start.
The AI mispronounces an important customer’s name. Repeatedly goes off-script. Responds to, “I want to cancel my account,” with, “That’s great to hear!”
Or worst of all, the AI shares information that’s wildly inaccurate or non-compliant.
You fear that AI agents will make your customers feel like numbers, and go rogue in a way that makes you look like you’ve lost control of your contact center.
It’s cringe-inducing. It can be very costly. It can be career-threatening for you (if you’re in a highly-regulated industry).
1. Humans screw up, too.
They say the wrong thing at times. They sometimes misspeak or mispronounce names. They go off-script. They forget details (or their training altogether). They go rogue under pressure and give false information.
You’ve seen it. You’ve fired for it.
The risk with AI agents is easier to mitigate because of automated QA, and because you have ultimate control over what they can and cannot do or say.
Is there risk? Sure. But it's no greater than the risk you're already taking with human agents.
2. AI agents only need to be taught once.
Once they learn, they’re set. For good.
Over time, they'll actually become less risky and more predictable than your human agents at handling routine tasks.
So, it’s not a matter of whether AI agents are perfect or not, but about how they stack up against humans, and about how you’re able to mitigate risk through greater levels of control.
How you’re able to mitigate your AI fears will depend on how you build your agents, and who you’re working with to do so.
When you work with Regal to implement AI agents, we’ll always work with you to build, test, and monitor your AI agents, to make sure the scenarios above never become a reality.
To mitigate the fear of AI agents saying the wrong thing, you can use the Regal platform to easily set up and fine-tune what your agents can and cannot say.
To make sure AI agents are staying within your intended confines, you can use Regal to natively track metrics like escalation rate and containment rate, and to QA every conversation.
Escalation rate tells you how often the AI hands off to a human. You can see if they’re escalating when they should be.
A high escalation rate might signal that the AI is confused, unhelpful, or saying things that frustrate customers.
A low escalation rate means your customers' needs are being met and resolved. That means the AI is confidently handling questions, staying in-bounds, and making customers feel heard.
In short, escalation rate is your leading indicator of whether your AI agents are pleasant and competent enough to keep customers from demanding a human.
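If it helps to see the arithmetic, here's a minimal sketch of the calculation, assuming a simple list of call records with an illustrative "escalated" flag (not Regal's actual schema):

```python
# Hypothetical example: computing escalation rate from a day's call records.
# The "escalated" field is illustrative; it stands in for however your platform
# marks a handoff from the AI to a human agent.
calls = [
    {"id": "c1", "escalated": False},
    {"id": "c2", "escalated": True},
    {"id": "c3", "escalated": False},
    {"id": "c4", "escalated": False},
]

escalated = sum(1 for call in calls if call["escalated"])
escalation_rate = escalated / len(calls)

print(f"Escalation rate: {escalation_rate:.0%}")  # -> Escalation rate: 25%
```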
At what rate is your AI handling calls end-to-end?
Containment rate gives you an additional layer of insight into how your agents are handling calls. On one hand, it's simply the inverse of escalation rate. But it also gives you a sense of how competent your agents are at handling conversations: are your contacts actually having their issues resolved?
High containment shows that the AI is sticking to the script, providing contextually accurate answers, and getting customers to actual resolutions.
Low containment would mean that your agents are misunderstanding needs, or responding to contacts in a way that’s simply not helpful.
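Since containment is the flip side of escalation, a small sketch (again with made-up fields, not Regal's data model) shows how the two relate, and why it's worth tracking resolution alongside containment so "handled by the AI" actually means "helped":

```python
# Hypothetical outcomes: "contained" = the AI handled the call end-to-end,
# "resolved" = the customer's issue was actually solved.
calls = [
    {"contained": True,  "resolved": True},
    {"contained": True,  "resolved": True},
    {"contained": False, "resolved": True},   # escalated to a human, then resolved
    {"contained": True,  "resolved": False},  # handled by the AI, but not helpful
]

total = len(calls)
containment_rate = sum(c["contained"] for c in calls) / total
escalation_rate = 1 - containment_rate  # the two rates always sum to 100%

# Resolution among contained calls tells you whether "contained" really meant "helped".
contained = [c for c in calls if c["contained"]]
resolved_in_containment = sum(c["resolved"] for c in contained) / len(contained)

print(f"Containment: {containment_rate:.0%} | Escalation: {escalation_rate:.0%}")
print(f"Resolved among contained calls: {resolved_in_containment:.0%}")
```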
Just like with humans, you can run QA on your AI agents, but with way more transparency. Transcripts are available in real-time. Every line the AI said can be audited.
With Regal, you can qualitatively review individual conversations or automate reports on call sentiment, outcomes, and agent behavior at scale to quickly flag deviations from scripts, tonal mismatches, and moments of customer confusion/drop-off.
With human agents, it can be very hard to know when and why conversations are going south. With AI, it’s easy to define your key revenue-driving factors and measure them at scale, so you can address issues in real-time, no re-training needed.
If your containment rates aren't as high as you'd like (or, equivalently, your escalation rates aren't as low as you'd like), you can easily QA calls en masse and identify where the issues lie.
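For a feel of what QA en masse can look like, here's a simplified sketch, not Regal's implementation: it assumes transcripts stored as speaker-tagged turns and flags any call where the AI used a phrase you never want said, so a human can review it.

```python
# Hypothetical automated QA pass over call transcripts.
# Each transcript is a list of (speaker, text) turns; the phrase list is only an example.
PROHIBITED_PHRASES = [
    "that's great to hear",      # wrong tone on a cancellation request
    "i can't help with that",    # unhelpful dead end
]

transcripts = {
    "call-001": [("customer", "I want to cancel my account."),
                 ("ai", "That's great to hear!")],
    "call-002": [("customer", "What's my current balance?"),
                 ("ai", "Your balance is $42.10.")],
}

def flag_for_review(transcripts):
    """Return the IDs of calls where the AI used a prohibited phrase."""
    flagged = []
    for call_id, turns in transcripts.items():
        for speaker, text in turns:
            if speaker == "ai" and any(p in text.lower() for p in PROHIBITED_PHRASES):
                flagged.append(call_id)
                break
    return flagged

print(flag_for_review(transcripts))  # -> ['call-001']
```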
Once you identify conversation flows that could be improved, you can either self-manage prompt or configuration updates, or work with the Regal team to fine-tune the mitigation strategies above where needed.
Because we know, in the end, it’s all about mitigating risk. And AI isn’t as risky as you’d think.
We’ll leave you with this…
Your competitors have the same fears you do. But a lot of them are already leveraging AI agents to drive more conversations and conversions.
It’s normal to be scared, but you can’t let the simple fear of an AI saying something wrong hold you back.
Ready to see Regal in action?
Book a personalized demo.