If you’ve ever interacted with a chatbot or voice bot and felt the conversation going nowhere, I’m sure you’ve found yourself shouting “Agent Agent Agent!!!” into the phone, repeatedly hitting ‘0’ on the keypad, or typing “Get me a human!” into the chat window. You are not alone!
While many of us have plenty of experience with IVR-based flows (“Press 1 for Billing…”) or deterministic virtual agents (cue the robotic “In a few words, tell me what I can help you with…”), AI agents are relatively new on the scene, and businesses are looking to gain traction with them quickly.
Unlike IVR or deterministic, flow-based agents, well-designed AI agents can gracefully handle a wide range of customer queries and know when to transfer to a human agent without putting the customer through a series of unhelpful error messages.
So, when should AI agents attempt to handle conversations themselves versus transferring to a human agent?
It depends on the following:
- What is the goal behind deploying an AI agent?
- What will be the capabilities of the AI agent at launch?
Let’s say the goal of the AI agent is to reduce customer wait time for commonly asked questions while providing an exceptional customer experience. An FAQ-based AI agent that intelligently answers questions sourced from a website or knowledge base articles is a great solution. However, unless the AI agent is able to transact on behalf of the customer by making a reservation or paying a bill, for example, there will be a portion of the population that will want to speak with a human. In such cases, to provide the best customer experience, it is best to remove as much friction as possible in escalating the call to a human agent.
An even better experience would be for the AI agent to detect that the customer’s query needs a human, proactively offer up a human for resolution, and provide the human agent with a summary of what’s transpired thus far. At Cresta, our summaries include:
- Why the customer called / chatted in
- What the AI agent has tried thus far
- What still needs to be resolved and requires the live agent’s attention
Such a setup frees your human agents from answering repetitive questions, allowing them to have more focused, complex conversations with customers. It also spares customers the frustration of having to repeat themselves multiple times.
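As a purely illustrative sketch, the handoff context described above might be modeled as a small structure handed to the live agent’s console. The field names and rendering here are hypothetical assumptions, not Cresta’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class HandoffSummary:
    """Context passed to the live agent when an AI agent escalates.

    Hypothetical structure for illustration only.
    """
    reason_for_contact: str                               # why the customer called / chatted in
    ai_attempts: list[str] = field(default_factory=list)  # what the AI agent has tried thus far
    open_items: list[str] = field(default_factory=list)   # what still needs the live agent's attention

    def render(self) -> str:
        """Format the summary for display in the agent's console."""
        lines = [f"Reason: {self.reason_for_contact}"]
        lines += [f"Tried: {step}" for step in self.ai_attempts]
        lines += [f"Open: {item}" for item in self.open_items]
        return "\n".join(lines)

summary = HandoffSummary(
    reason_for_contact="Refund status for order #1234",
    ai_attempts=["Looked up order status", "Explained standard refund timeline"],
    open_items=["Customer disputes the refund amount"],
)
print(summary.render())
```

Keeping the three pieces separate means the live agent can scan straight to the unresolved items instead of re-reading the whole transcript.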
Now, if you have an AI agent that is set up to transact on behalf of a customer and execute processes such as making a payment or returning an order through integrations with a POS (Point of Sale) or OMS (Order Management System), you can focus on chat/call deflection much more than with a purely FAQ-based AI agent. A well-designed transactional AI agent has tools in its arsenal to perform a complex task, such as helping a customer book and pay for a flight, allowing for a greater scope of deflection while maintaining a professional and friendly conversational experience.
Given the AI agent’s transactional capabilities, especially for use cases that constitute a high volume of traffic for businesses, we can design it to confidently reassure customers that it can handle their request rather than defaulting to an immediate human agent transfer. Even nudging the customer a couple of times to give the AI agent a chance to proactively anticipate their request (“...are you calling about the order you placed on July 10?”) goes a long way in keeping the customer engaged with the AI agent.
For long-tail issues that are low traffic volume and require businesses to spend medium to high effort to build out integration capabilities, handing off to a human agent is a better solution for an initial launch. Over time, businesses can build out additional use cases to capture long-tail content. Cresta will be announcing human supervision functionality very soon, which will allow humans to make the decision for the AI agent instead of forcing an escalation!
With all of this, the AI agent still has to be intelligent enough to detect cues such as frustration or anger, acknowledge the customer’s feelings, and respond with empathy. While designing the AI agent, if we know that escalating to a human isn’t going to solve the problem (for example, if the customer is requesting an out-of-policy refund), we can have the AI agent explain the policy without needlessly escalating. Businesses can determine the areas that make the most sense for an AI agent to address, or escalate when they know the ideal resolution is to send the customer to a human.
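The routing considerations above can be sketched as a simple decision heuristic. The signal names, intent sets, and threshold here are illustrative assumptions, not a production policy:

```python
def should_escalate(intent: str,
                    frustration_score: float,
                    supported_intents: set[str],
                    out_of_policy_intents: set[str],
                    frustration_threshold: float = 0.8) -> str:
    """Decide the AI agent's next step. Illustrative heuristic only."""
    if intent in out_of_policy_intents:
        # A human can't approve this either; explain the policy instead of escalating.
        return "explain_policy"
    if frustration_score >= frustration_threshold:
        # Acknowledge the customer's feelings and hand off with a summary.
        return "escalate_with_summary"
    if intent in supported_intents:
        # High-volume, integrated use case: handle it confidently.
        return "handle"
    # Long-tail intent without an integration: route to a human for an initial launch.
    return "escalate_with_summary"

# Example: a supported billing request from a calm customer stays with the AI agent.
print(should_escalate("pay_bill", 0.1, {"pay_bill", "return_order"}, {"out_of_policy_refund"}))
```

In practice these signals would come from intent classification and sentiment models rather than hand-set values, but the ordering of the checks captures the design intent: policy first, emotion second, capability third.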
Customers have also endured years of bad experiences: deterministic bots, poor speech-to-text (STT) recognition, clumsy error handling and fallback behavior, and minutes or even hours spent in loops trying to reach a human.
These unpleasant experiences translate into preexisting biases when interacting with AI agents, and businesses will find customers asking for a human without engaging with the AI agent at all. The beauty of the AI agent is that its highly conversational nature lends itself to either L1 triage (“Let me get you some of the way there by figuring out why you’re calling / chatting”) or reassuring customers that it can help (“I can help you with most things that an agent can”, “Avoid long wait times and let me help you”). At Cresta, we partner with businesses to meet their goals while ensuring a great customer experience by rooting our AI Agent design in historical conversations between customers and businesses’ top performers. Our rapid iteration cycles after launch also allow us to adjust the AI agent’s behavior based on customer responses.
Ultimately, how persuasive your AI agent should be is a strategic design consideration centered on its core purpose, its transactional capabilities, the quality of information codified in the prompt, and its adeptness at understanding customer cues. By giving AI agents the tools to efficiently and empathetically handle routine and transactional tasks, businesses can finally move beyond the "Agent Agent Agent!!" era.