Generative AI: Benefits and risks for innovative customer service

With predictions that AI (artificial intelligence) could boost the world economy by up to $15.7 trillion (€14.63 trillion) by 2030, and increase global growth by 4 to 6% per annum, every business leader should be galvanised by, and concerned with, how AI will shape future commerce. When it comes to customer experience, the emergence of generative AI tools such as ChatGPT is sparking lively debate across the industry and beyond, says Agam Kohli, director, CX solutions engineering at Odigo.

Customer contact centres across the globe are already leveraging the predictive capabilities of AI to anticipate customer needs, preferences and behaviour and provide rich insights for customer agents.  

Generative AI takes this up a gear. As it evolves, more and more innovative use cases are bursting onto the scene, and its rapid adoption is breathtaking.

While leaders should always support innovation, they must carefully weigh the rewards of generative AI against the possible risks. Here, we explore the implications of this new and rapidly developing technology, as well as the key considerations leaders should bear in mind when assessing its power and potential. 

Deployment requires careful consideration 

More than 60% of customer experience leaders worldwide expect that AI will give them a competitive advantage, indicating that AI is poised to make a real impact on the way companies interact with their customers. In fact, customer experience leaders were some of the early adopters of AI, harnessing new technologies to enhance customer interactions.

For instance, AI-driven chatbots, a global market projected to be worth around $4.9 billion (€4.56 billion) by 2032, are already addressing routine inquiries, allowing human agents to focus on more intricate tasks. Impressive innovations in AI sentiment analysis enable businesses to understand their customers' feedback, opinions and preferences by identifying the emotional tone and attitude behind their words, so they can tailor responses accordingly and deliver more impactful, personalised interactions.
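To make that idea concrete, the short sketch below shows how a contact centre might score the emotional tone of incoming messages with an off-the-shelf sentiment model from the Hugging Face Transformers library. The model choice and the escalation threshold are illustrative assumptions, not a description of any particular vendor's implementation.

```python
# Minimal sketch: scoring the emotional tone of customer messages with an
# off-the-shelf sentiment model. The default model and the 0.9 escalation
# threshold are illustrative assumptions.
from transformers import pipeline

# Downloads a default sentiment-analysis model on first use.
classifier = pipeline("sentiment-analysis")

messages = [
    "I've been waiting three weeks for my refund and no one replies.",
    "Thanks, the new plan works perfectly for me.",
]

for text in messages:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    # Hypothetical routing rule: flag strongly negative messages for a human agent.
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"Escalate to agent: {text!r}")
    else:
        print(f"Auto-handle ({result['label']}, {result['score']:.2f}): {text!r}")
```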

However, next-level generative AI is rapidly surpassing traditional chatbot capabilities, generating human-like text responses with far greater precision and detail. Easily integrated with existing CRM systems, such tools streamline access to customer history and preferences, instantly delivering highly personalised customer communications. And, as the technology evolves, it is becoming even more sophisticated. ChatGPT 4.0, shown to increase customer satisfaction by 20%, will take AI-powered customer interactions to new heights, enabling more personalisation, accuracy and prediction than ever before.
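As an illustration of the CRM-grounded personalisation described above, the sketch below drafts a reply by feeding a customer's history into a generative model via the OpenAI API. The CRM fields, the fetch_customer_record helper and the model name are assumptions made for the example, not part of any specific platform's integration.

```python
# Sketch: grounding a generative reply in CRM data. The CRM record,
# fetch_customer_record() and the model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_customer_record(customer_id: str) -> dict:
    # Hypothetical stand-in for a real CRM lookup.
    return {
        "name": "Sam",
        "plan": "Business Broadband 500",
        "open_ticket": "Intermittent connection drops since Monday",
        "preferred_tone": "concise and informal",
    }

def draft_reply(customer_id: str, new_message: str) -> str:
    record = fetch_customer_record(customer_id)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft customer-service replies for a human agent to review. "
                    f"Customer context: {record}"
                ),
            },
            {"role": "user", "content": new_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("cust-123", "Any update on my connection issue?"))
```

Note that the sketch keeps a human agent in the loop, which reflects the augmentation-first approach discussed later in the article.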

This technology undoubtedly holds enormous promise. However, deployment requires careful consideration, particularly for industries with highly sensitive data flowing through their contact centres. To benefit from AI's full power and potential, it's crucial that companies invest the necessary time, expertise and resources to do it safely and get it right.

Calls for tighter regulation gain momentum across the globe 

Alongside the fanfare of new tools, calls for greater AI regulation are growing louder. In Europe, for instance, GDPR already applies to AI systems, ensuring that individuals' data rights are upheld and that algorithmic decision-making is explainable. The United States Federal Trade Commission (FTC) enforces regulations that require businesses using AI to be transparent about their data practices. Such frameworks aim to balance AI innovation with safeguarding individuals' rights and privacy.

Later this year, the UK will host the first AI regulation summit. This will kick-start an international effort to coordinate regulation. As AI technologies proliferate across borders and sectors, governments now recognise the urgent need to foster innovation while creating parameters that safeguard personal rights. 

With tighter regulation coming into force across the globe at speed, businesses should waste no time in engaging with regulatory discussions. They should neither wait for policies to be mandated nor forge ahead without frameworks. For businesses dealing with personal data and sensitive information, the safe, secure use of generative AI must be a priority, and companies implementing new AI tools must ensure they work within a robust, all-encompassing AI framework.

There have been, for example, numerous instances of bias in AI-developed tools, which exacerbate existing societal inequalities. Organisations must adopt rigorous testing and quality control mechanisms to ensure AI-generated content aligns with their brand values and ethics. Other risks include generative 'hallucinations' (false statements), not to mention cybersecurity, compliance, legal and intellectual property issues. Since some employees will already be experimenting with the latest generative AI technologies, creating such a framework should be a priority for leaders.

Urgent need for workplace education 

Organisations must invest in AI education and upskill their workforce in these technologies. Research shows that 49% of US employees say they need training on using AI tools, while in the UK, although one in five companies claim to be using AI, the majority (73%) aren't investing in training to upskill staff in AI technologies. Many contact centre agents already use tools based on AI and natural language processing and may see generative AI as a natural next step.

However, there are huge distinctions between yesterday's AI programmes and today's generative AI tools. And with reports showing that 60% of service workers worldwide don't know how to get the most value out of generative AI at work, and over half saying they don't know how to use the technology effectively or safely, there is a real sense of urgency around workplace education.

Employers should consider involving employees in designing frameworks that meet the need for knowledge on the front line. When anyone can easily access online generative AI tools like ChatGPT, employees must understand their limitations. With a robust framework in place, employees can experiment within safe parameters.

This is especially important when dealing with customer data. Being clear about generative AI's potential and limitations allows employees to make informed decisions about its use (or not), promoting responsible, compliant use. It is also important to acknowledge the risk of over-reliance: generative AI should be promoted as a tool that augments the role of agents, not one that replaces them.

Such proactive education ensures a culture of transparency and accountability, where employees are empowered to use appropriate safeguards and judgements. In fact, the need for such human oversight should also help quell fears employees have that generative AI will replace them. It’s vital to remember that generative AI works alongside humans to enhance aspects of their work, not replace their very human talents. 

Paving the way for safe and compliant AI innovation 

The recent surge of interest in generative AI provides an unprecedented opportunity to reshape customer experience, not least within the contact centre. However, before organisations jump into AI adoption feet first, they must carefully assess the full extent of the risks involved with this as-yet-unregulated technology. The convergence of AI, customer service and data security demands careful navigation.

While businesses don't know what future legislation will look like, they can educate themselves on the issues it seeks to address. And they can rest assured that there are roadmaps to support safe and compliant AI innovation. Knowledge is power: with this knowledge, businesses can fully benefit from AI while adhering to stringent data security protocols and robust ethical principles.

The author is Agam Kohli, director, CX solutions engineering at Odigo.
