How viable is AI customer care?

With sentiment analysis for chatbots alongside ML tools to hide accents, customer service is undergoing a major change

A cartoon illustration of a chatbot peeking out of a smartphone against a pink background, surrounded by pastel chat windows.
(Image credit: Getty Images)

While not all business functions are being equally affected by AI technology, customer service is one area that’s experiencing significant change. Across the customer service tech stack, AI is delivering benefits such as scaled intelligence, reduced waiting times, and more consistent support quality.

According to a survey of executives by Bain & Company, almost a third (32%) of AI projects within customer service now make it past the pilot stage, the second-highest rate across all business functions, and a clear signal that AI has moved beyond experimentation in this domain. Businesses are already seeing the benefits.

“The advantages of using AI chatbots are that you can offer customers 24/7 assistance and the speed of AI chatbots when surfacing information,” explains Andrew Leal, CEO and founder of Waggel, a pet insurance company that’s invested heavily in AI. “We’ve seen that AI-powered agent assistance tools can dramatically improve consistency, such as in claims.”

Elsewhere, Waggel has used AI to guide team members through processes, flagging missing information or suggesting next steps based on historical outcomes. AI also makes it possible to apply sentiment analysis scoring to almost all text and chat interactions.

“Quality assurance represents the strongest business case,” explains Anshuman Singh, CEO at HGS UK, which specializes in consumer engagement, digital CX, and business process management. “Traditional contact centres audit just one-to-three percent of interactions. But AI systems now enable 100% call and chat coverage with automated scoring and sentiment analysis.”
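To illustrate the kind of automated scoring Singh describes, the sketch below scores every transcript rather than a 1-3% sample. It uses a toy keyword-weighted scorer as a stand-in for a production sentiment model; the word lists and flagging threshold are illustrative assumptions, not HGS's implementation.

```python
# Toy sentiment scorer: a stand-in for a production sentiment model.
# The point is coverage: every interaction is scored, not a 1-3% sample.

NEGATIVE = {"frustrated", "angry", "useless", "cancel", "complaint", "terrible"}
POSITIVE = {"thanks", "great", "helpful", "resolved", "perfect", "brilliant"}

def sentiment_score(transcript: str) -> float:
    """Return a score in [-1.0, 1.0] from keyword counts (illustrative only)."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

def audit_all(transcripts: list[str], flag_below: float = -0.3) -> list[int]:
    """Score 100% of interactions; return indices flagged for human QA review."""
    return [i for i, t in enumerate(transcripts)
            if sentiment_score(t) < flag_below]

calls = [
    "Thanks, that was really helpful, issue resolved!",
    "This is useless, I am frustrated and want to cancel.",
    "Can you check my renewal date please?",
]
print(audit_all(calls))  # [1] - the second call is flagged for review
```

In a real deployment the keyword scorer would be replaced by a trained sentiment model, but the audit loop, score every interaction and surface only the outliers to human reviewers, stays the same.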

Pulling up the ladder

Clearly, there are many positives to using AI technology in customer care. But there are downsides, too, with four in ten business leaders now claiming that AI adoption has led to reduced head count, according to a global poll of 850+ business leaders from the British Standards Institution (BSI). And the BSI believes that there’s a balance to be found.

“When an organization opts to implement AI technology into their customer service operations, they should take a socio-technical approach,” says Laura Bishop, BSI’s digital sector lead for Artificial Intelligence & Cyber Security. “Their strategy must consider each style of customer interaction and determine those that would benefit from quick and consistent AI support and those more complex that require human support.”

To establish whether AI can deliver the best outcome, experts recommend reviewing business functions on a case-by-case basis. One example of this is the use of accent and identity masking technology, which alters the accents of call center workers to make them sound more familiar or local to listeners.

“Accent and identity masking technologies are now moving from prediction to reality. From a data and ethics standpoint, this is highly sensitive,” says Rohan Whitehead, data training specialist at the Institute of Analytics (IoA).

Under the EU AI Act and related guidance, people interacting with digital products must be clearly informed when they are dealing with AI systems, as well as with AI-generated or AI-altered content, and deceptive or opaque practices risk falling foul of the rules. Alongside this, rather than addressing the legitimate communication challenges within customer care, some experts believe that these tools risk simply hiding workplace diversity from view.

“Voice is integral to both personal and professional identity,” argues Singh. “Requiring employees to alter theirs – especially without genuine consent – sends a damaging message: that their natural speech is inadequate. This can erode confidence and sense of belonging.”

Know your agent

AI agents are transforming customer service, having already taken over many day-to-day tasks. A 2025 Gartner report claims that, within the next four years, agentic AI will resolve 80% of common customer service issues, something that will only be fully possible once agents can interact with each other.

“We will see more interactions where agents speak to agents,” explains Livia Bernardini, CEO at Future Platforms, a digital product agency. “This marks a shift from traditional Know Your Customer to a model of Know Your Agent. As intelligent agents begin to transact on behalf of people, brands must behave in ways that are interpretable, transparent and trustworthy to both humans and machines.”

It’s clear that retailers are already embracing agentic AI for customer interactions. Simon James, head of data science and AI at Publicis Sapient, says that rather than attempting to build all-purpose problem solvers, the most effective approach involves breaking customer service into discrete components, each handled by specialized agents with well-defined scope.

“What's particularly exciting is the emergence of agent-to-agent interactions,” James adds. “As customers increasingly use AI assistants for routine needs, organisations that optimise for agent experience (AX) alongside traditional customer experience (CX) will capture exponentially more value.”

Human-AI collaboration

As with any conversation around the impact of AI, it’s hard to ignore the looming threat of job losses. However, while experts agree that role requirements will change, they also believe that using AI will require strong governance and quality assurance, clear escalation pathways, and a willingness to course-correct quickly when something isn’t working as intended. All of which will require human input.

“Looking ahead to the late 2020s, my expectation is that customer service will become AI-first, with a human safety net,” says Whitehead. “A large share of simple contacts will be automated across voice and digital channels, but high-value, high-risk or vulnerable customers will continue to require human interaction, supported by increasingly capable AI assistants.”

Singh agrees, believing that the emerging customer service model will prioritize human-AI collaboration, not replacement. “AI will act as a ‘copilot’ for routine tasks and summaries, allowing agents to focus on complex problem-solving and empathy,” says Singh.

Collaborating in this way puts particular emphasis on ensuring that employees receive adequate AI literacy training, which will help them understand when and how these technologies should be applied.

“Beyond operational knowledge, staff need critical thinking skills to question and validate AI outputs rather than accepting them blindly,” explains Kate Field, global head of human and social sustainability at the British Standards Institution (BSI). “Without this capability, organizations risk poor decision-making and customer service outcomes.”

By implementing AI in a strategic and collaborative way, chatbots used within customer service are now able to field simpler questions, whilst handing off to human agents when queries become more complex.
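The hand-off logic described above can be sketched as a simple triage router: the bot answers known, simple intents and escalates anything complex, sensitive, or explicitly requesting a person. The intents and trigger words here are illustrative assumptions, not any vendor's product.

```python
# Minimal triage sketch: the bot handles known simple intents and
# escalates everything else to a human agent. All intents and trigger
# words are hypothetical examples for illustration.

SIMPLE_INTENTS = {
    "opening hours": "We're open 8am to 8pm, Monday to Saturday.",
    "reset password": "You can reset your password from the login page.",
}

ESCALATION_TRIGGERS = {"complaint", "vulnerable", "bereavement", "agent", "human"}

def route(message: str) -> tuple[str, str]:
    """Return (handler, response); escalate on trigger words or unknown intent."""
    text = message.lower()
    # Sensitive queries, or an explicit request for a person, skip the bot
    if any(trigger in text for trigger in ESCALATION_TRIGGERS):
        return ("human", "Connecting you to a customer service representative.")
    # Simple, well-understood intents stay with the bot
    for intent, reply in SIMPLE_INTENTS.items():
        if intent in text:
            return ("bot", reply)
    # Unknown queries default to a human rather than a guessed answer
    return ("human", "Connecting you to a customer service representative.")

print(route("What are your opening hours?"))  # handled by the bot
print(route("I want to make a complaint"))    # escalated to a human
```

The key design choice is the default: when the bot is unsure, it escalates rather than improvises, which is the "clear escalation pathway" the experts quoted above call for.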

“The opportunities are immense and organizations that approach AI deployment strategically will capture significant competitive advantages,” James tells ITPro. “The key is building robust frameworks from the ground up – treating AI governance like any successful digital transformation.”

However, it’s also key to consider the implications of an AI-led service on vulnerable customers, including those who may struggle with communication. UK regulators, including the FCA, Ofgem, and Ofwat, have emphasised that firms remain accountable for the outcomes of AI systems under existing frameworks such as the FCA's Consumer Duty.

“There is a huge risk of widening divides by companies assuming everyone understands how to engage with AI to get the answers they need – when in fact older people, or those with other vulnerabilities, may well not be comfortable,” warns Field. Businesses will need to bear all this in mind when it comes to mapping out their AI adoption plans.

Top 5 risks of using AI in customer interactions

Paul Dongha, head of responsible AI and AI strategy at NatWest Group, reveals where he sees the main risks in using AI in customer interactions, and what practical steps organizations can take to mitigate them.

  1. Complexity and empathy failure: chatbots struggle with ambiguous, non-standard, or highly emotional queries. The inability to deviate from scripts or show genuine empathy leads to customer frustration and abandonment.

    To mitigate: implement clear, immediate pathways for escalation to a human agent. Use sentiment analysis flags to highlight high frustration. Conduct extensive testing before deploying the AI, exposing it to varied chats in test scenarios.
  2. Data privacy and security risks: AI systems process and store massive amounts of sensitive customer data (voice recordings, personal details, transaction history). This increases the attack surface, creating significant new burdens for security and data governance.

    To mitigate: implement privacy by design by adopting techniques like federated learning or data minimization (only collecting data absolutely necessary). Ensure strict encryption and tokenization for all voice and biometric data handled by the AI system.
  3. Algorithmic bias and inconsistency: if the AI is trained on historical data that reflects past biases (e.g. in loan applications or sales history), the AI can perpetuate or amplify these unfair outcomes, leading to discriminatory service across customer groups.

    To mitigate: conduct algorithmic bias audits, and use quantitative metrics (like Disparate Impact Ratio) to test the AI on simulated demographic data. Regularly retrain the model with balanced, curated datasets to ensure fair outcomes.
  4. High implementation and maintenance costs: while operating costs can be low, the initial investment in integrating AI with legacy CRM/knowledge systems, and the ongoing costs of model retraining, updates, and compliance monitoring are substantial.

    To mitigate: focus initial AI deployments only on high-volume, low-complexity, repetitive tasks (e.g., basic FAQs) to quickly realize return on investment.
  5. User trust and acceptance challenges: customers often prefer human interaction for sensitive issues. Over-relying on automation can decrease customer satisfaction and damage brand loyalty if the AI is perceived as a blocker to resolution.

    To mitigate: ensure transparency by clearly labelling all bots (e.g. "You are speaking with an AI assistant"). Furthermore, always allow customers the option to drop out of an automated conversation and be put through to a human customer service representative.
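The Disparate Impact Ratio mentioned in point 3 is straightforward to compute: divide the favourable-outcome rate for one demographic group by the rate for a reference group. A common rule of thumb (the "four-fifths rule") treats a ratio below 0.8 as a warning sign. The sketch below uses simulated approval data; the group sizes and rates are illustrative assumptions.

```python
# Disparate Impact Ratio: the favourable-outcome rate for a protected group
# divided by the rate for the reference group. The "four-fifths rule"
# treats a ratio below 0.8 as a potential red flag worth investigating.

def disparate_impact_ratio(outcomes_a: list[bool], outcomes_b: list[bool]) -> float:
    """Ratio of group A's favourable-outcome rate to group B's (B as reference)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a / rate_b

# Simulated approval outcomes for two demographic groups (illustrative data)
group_a = [True] * 30 + [False] * 70  # 30% approved
group_b = [True] * 50 + [False] * 50  # 50% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"DIR = {ratio:.2f}")  # DIR = 0.60, below the 0.8 threshold
if ratio < 0.8:
    print("Potential disparate impact: audit the model and retrain on balanced data.")
```

A ratio below the threshold does not prove discrimination on its own, but it tells the audit team where to look, which is exactly the role Dongha assigns to quantitative bias metrics.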
Dan Oliver
Freelance writer

Dan Oliver is a writer and B2B content marketing specialist with years of experience in the field. In addition to his work for ITPro, he has written for brands including TechRadar, T3 magazine, and The Sunday Times.