We’re hearing that companies across industries are striving to optimize and streamline services with AI, buying into the promises the AI vendors themselves are touting. But this is all too new for anyone to really understand best practices for AI deployment. Beyond software development, we have very little evidence to justify the rate of adoption we’re seeing. Sadly, many are jumping on the AI bandwagon out of FOMO – fear of missing out. They are not taking the time to understand how blindly deploying these agents in the wrong way may cause long-lasting damage, much of it hard to recover from. It’s a misguided optimization that ignores the customer.
Here’s the latest example – deployment of AI agents as the front line for customer service.
Quite a sobering read, really. It’s a pathetic, absurd move on the part of businesses that think deploying AI agents as customer service is going to win over customers. But don’t blame AI! Blame the AI vendors:
The widespread adoption of obstructive AI is not just a corporate decision; it is fueled by a booming industry of AI vendors that market their platforms with promises of massive cost savings. An analysis of marketing materials from leading vendors like Talkdesk, Intercom, Zendesk, and Five9 reveals that the primary value proposition is not better service, but radical efficiency achieved by reducing human involvement.
As an AI/ML researcher, let me explain what’s happening. AI models are algorithms that learn from data, and they can be trained to optimize for whatever problem the AI engineer is tasked with solving. In machine learning, we frame learning as an objective function – a simple, measurable quantity that we want the model to learn to minimize or maximize. It’s not clear what these companies are optimizing for; any number of metrics are possible here:
- Minimize escalation rate – reward the model for closed tickets that do not involve humans
- Minimize average handling time, regardless of whether the customer is satisfied
- Minimize refund approval rate – this one is diabolical – let the model learn the dialogue patterns that fatigue the customer most and lead them to abandon the refund entirely!
This is not the fault of AI; it is a deliberate design failure. AI companies will train models with whatever objectives they choose, and they have no incentive to do otherwise. When you fine-tune an AI agent to minimize escalations to humans, it will learn all sorts of delay tactics: repeating responses, asking for clarification, asking for documents already submitted, offering irrelevant options, and so on. There are many ways it can learn to fatigue a human!
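To make the misalignment concrete, here is a toy sketch (not any vendor’s actual system – the field names, penalty weights, and example outcomes are all hypothetical) of a reward function that only penalizes escalation. Notice what it teaches the model:

```python
# Toy, illustrative reward: penalize escalation to a human, reward a
# closed ticket. Customer satisfaction never enters the objective.
def escalation_only_reward(outcome: dict) -> float:
    reward = 1.0 if outcome["ticket_closed"] else 0.0
    if outcome["escalated_to_human"]:
        reward -= 5.0  # escalations are "expensive", so punish them hard
    return reward

# A stalling agent loops the customer through clarifying questions
# until they give up, so the ticket "closes" with no escalation.
stalling = {"ticket_closed": True, "escalated_to_human": False,
            "customer_satisfied": False}

# An honest agent recognizes it can't help and hands off to a human.
honest = {"ticket_closed": True, "escalated_to_human": True,
          "customer_satisfied": True}

print(escalation_only_reward(stalling))  # 1.0 -- stalling wins
print(escalation_only_reward(honest))    # -4.0 -- helping is punished
```

Under this objective, wearing the customer down is the optimal policy, and handing off to a human who can actually solve the problem is the worst move the agent can make.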
A well-designed AI agent should be optimizing for customer-side outcomes – maximizing resolution rates or minimizing downstream complaints, for example. Why aren’t these companies optimizing for this value proposition? (I smell greed!) The moment the company’s cost function and the customer’s satisfaction are treated as the same optimization target, the entire problem changes: these AI agents would learn to improve their dialogues in ways that leave both the customer and the company happy. But that’s not where companies are right now. Businesses are blindly jumping on the AI bandwagon.
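Here is what a customer-aligned objective might look like, again as a hypothetical sketch: the weights, field names, and example outcomes are invented purely to illustrate the point that company cost and customer outcomes can share one optimization target.

```python
# Toy, illustrative reward that jointly weights operational cost AND
# customer-side outcomes, so the agent is never rewarded for
# fatiguing the customer.
def aligned_reward(outcome: dict) -> float:
    reward = 0.0
    if outcome["issue_resolved"]:
        reward += 3.0                      # customer got what they needed
    if outcome["customer_satisfied"]:
        reward += 2.0                      # e.g. from a post-chat survey
    if outcome["complaint_filed"]:
        reward -= 4.0                      # downstream cost to the brand
    reward -= 0.1 * outcome["handle_time_minutes"]  # mild efficiency term
    return reward

# A quick human handoff that resolves the issue...
handoff = {"issue_resolved": True, "customer_satisfied": True,
           "complaint_filed": False, "handle_time_minutes": 5}

# ...versus a long AI dialogue that wears the customer down.
wear_down = {"issue_resolved": False, "customer_satisfied": False,
             "complaint_filed": True, "handle_time_minutes": 40}

print(aligned_reward(handoff))    # 4.5
print(aligned_reward(wear_down))  # -8.0
```

Efficiency still matters here (the handle-time term), but it can no longer dominate resolution and satisfaction, so stalling tactics stop being the winning strategy.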
Finally, it’s worth reminding all of us that when we engage with AI models, we are engaging with models that learned to respond with streams of tokens – the words that are most likely under the model’s objective function. There is absolutely no affective intelligence: the capacity to perceive, interpret, and respond appropriately to human emotional states. You and I do this automatically. We’re pretty awesome that way. We can detect frustration in word choices. Hesitations and pauses are meaningful. The phrase, “your silence is deafening,” would be meaningless to an AI agent. We can hear distress in tone. We can adapt and adjust in real time (or not!). Even the most sophisticated models today can, at best, predict a particular emotional state from selected words, but they can’t infer the meaning of that emotion in context. While this is an ongoing area of research, tonality in the human voice is so wide and varied that it is indeed a challenging problem for AI. None of these deployed agents are designed to properly channel empathy, moral intuition, compassion, dignity, respect, etc. Of course, a bigger question is, would we even want that? Personally, I would not. A compassionate AI agent is still just an algorithm.
I can’t help but wonder how long it will be before this practice reflects negatively on companies that deploy AI agents as the first-pass interface for their customer base. As an AI researcher who teaches students how to build these models, I find this (among a variety of other misguided uses of AI today) quite unfortunate; these choices only deepen the public’s distrust and growing wariness of AI.
So, here’s your PSA: Recent surveys and research show that humans have a strong preference not to interact with AI agents. (And if you’ve been subjected to an AI agent when you called for customer service, you get it!) For example, one recent online survey of 1,011 U.S. consumers found that 93.4% prefer interacting with a human over an AI agent. Another study suggested 70% of customers will abandon a brand after just one poor AI experience. It makes sense – if a company thinks so little of its customers that it is willing to make us talk to computers instead of humans, people will not remain loyal.
My most recent personal interaction with an AI agent is partly what drove this post. I recently had to call my local healthcare provider. Even in healthcare, front-desk receptionists are being replaced with AI agents. The whole experience left me wanting to search for an alternative provider. I ultimately needed a human agent anyway, who resolved my question in a small fraction of the time the AI agent had taken.
Let’s hope some of these companies learn to put humans back as the primary interface for the company and keep AI agents focused on what they are good at. If they don’t, I see a big opportunity for smaller, local businesses to enjoy a resurgence in many domains. Humans do not want to interact with AI! Why is this so hard for businesses to understand? (Greed blinds common sense.)
The big Gartner report last month gives me some hope – they are predicting that half of the companies that laid off customer service staff will be forced to rehire by 2027.
AI is a tool to assist humans in their work. It should not replace humans, especially in human-facing roles. Companies that choose to use it this way will experience quite a backlash in business.