
Turning the tables on fraudsters: equipping call centres with AI

Nikolay Gaubitch of Pindrop

In a rapidly advancing digital age, with the power of the internet at our fingertips, voice and the telephony channel are far more prevalent in our daily lives than most people realise. This channel, however, leaves numerous opportunities for fraudsters to exploit us, says Dr Nikolay Gaubitch, director of research at Pindrop.

One of the many challenges facing contact centres that handle incoming calls is the inability to confirm who is on the other end of the line. Caller identity has traditionally been confirmed via knowledge-based authentication (KBA) questions, but fraudsters can defeat this method with social engineering and stolen information. To move beyond KBA, call centres are adopting technology powered by machine learning and artificial intelligence (AI) as a new way to verify caller identity and assist call agents.
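To see why KBA is fragile, consider a minimal Python sketch (the fields and stored answers below are hypothetical, and real systems would hash stored data rather than keep it in plain text): the check verifies knowledge, not identity, so anyone holding the stolen answers passes.

# Minimal sketch of knowledge-based authentication (KBA).
# All data here is hypothetical and for illustration only.

STORED_ANSWERS = {
    "date_of_birth": "1980-04-12",
    "mother_maiden_name": "smith",
    "last_four_account": "4321",
}

def kba_check(responses: dict) -> bool:
    """Pass if every supplied answer matches the record on file."""
    return all(
        responses.get(field, "").strip().lower() == expected
        for field, expected in STORED_ANSWERS.items()
    )

# The flaw: a fraudster holding a stolen data file gives exactly the
# same answers as the genuine customer, so the check cannot tell
# them apart -- it verifies knowledge, not identity.
print(kba_check({
    "date_of_birth": "1980-04-12",
    "mother_maiden_name": "Smith",
    "last_four_account": "4321",
}))  # True, whether the caller is the customer or a fraudster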

Keeping out of sight

There are several factors that attract fraudsters to the telephony channel: it is inexpensive to use, and it allows them to feel anonymous, giving them greater confidence in their attacks. Unlike physical, face-to-face interactions, callers can keep their identities hidden, providing the perfect shield behind which to conduct fraudulent activity. While verification processes have helped to combat this issue, they do not eliminate fraud. The obvious indicators of deceitful behaviour, such as stumbling over words, are no longer a reliable detection method; fraudsters have become far more skilful in the art of deception and rarely make these mistakes.

The main purpose of a call centre is to provide good customer service, not to treat every caller as a potential fraudster, and criminals seek to exploit this fact. Social engineering techniques allow adversaries to use agents as their very own data authenticators: information previously stolen or harvested by fraudsters can be fed back to call agents, who are manipulated into unintentionally confirming its validity. Once they know the information is legitimate, criminals can exploit their victims further, for example by accessing their bank accounts or committing insurance fraud.

Fraudsters can take advantage of both human workers and automated machine systems. Interactive voice response (IVR), for example, is popular among fraudsters for harvesting data, as it can be far quicker than waiting to speak to a human agent only to be stonewalled. In principle, machines can be used to automate fraudulent activity and perform large-scale attacks in the IVR, ultimately increasing a fraudster's prize pot if successful.
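A common defensive counterpart is a velocity check on IVR traffic. The sketch below is illustrative only (the log format, thresholds, and numbers are invented, and this is not a description of any vendor's product): a single phone number probing many distinct accounts in a short window is a classic harvesting signature.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical IVR call log: (caller_number, account_probed, timestamp)
ivr_log = [
    ("+15550100", "acct-001", datetime(2021, 9, 1, 10, 0)),
    ("+15550100", "acct-002", datetime(2021, 9, 1, 10, 2)),
    ("+15550100", "acct-003", datetime(2021, 9, 1, 10, 3)),
    ("+15550199", "acct-050", datetime(2021, 9, 1, 10, 5)),
]

def flag_harvesters(log, window=timedelta(minutes=30), max_accounts=2):
    """Flag caller numbers probing more than max_accounts distinct
    accounts within the time window -- a simple velocity heuristic."""
    by_caller = defaultdict(list)
    for caller, account, ts in log:
        by_caller[caller].append((ts, account))
    flagged = set()
    for caller, events in by_caller.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            accounts = {a for t, a in events[i:] if t - start <= window}
            if len(accounts) > max_accounts:
                flagged.add(caller)
                break
    return flagged

print(flag_harvesters(ivr_log))  # {'+15550100'}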

Flipping the script with AI

Call centres are turning the tables with technology powered by AI and machine learning, with the primary focus on detecting fraudsters before the damage is done.

Call centres are often inundated with calls, the majority of which come from genuine callers in need of help. As call centres don't want to treat their customers as fraudsters, the challenge is to identify the bad callers in a sea of genuine customers. It is not feasible for human workers alone to review tens of thousands of calls and detect a fraudulent caller in real time, so organisations are calling on technology to help. AI-powered anti-fraud systems designed specifically for the telephony channel can analyse audio, voice, behaviour, and metadata for subtle signs that indicate a potential fraudster. The system can assess the risk of a call in real time and thereby aid human fraud detection efforts. Put simply, these machines can be brought in to assist human workers in their roles on the front line of customer relations.
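As a toy illustration of the idea, the Python sketch below combines a few per-call signals into a single risk score. The features, weights, and threshold are invented for this example; a production system would learn them from labelled call data rather than hand-pick them.

from dataclasses import dataclass

@dataclass
class CallSignals:
    """Hypothetical per-call features an anti-fraud system might extract."""
    device_mismatch: float    # 0-1: audio characteristics vs claimed device
    geo_inconsistency: float  # 0-1: call origin vs customer's usual region
    voice_anomaly: float      # 0-1: distortion / possible voice changer
    behaviour_anomaly: float  # 0-1: unusual IVR navigation or answer timing

# Illustrative weights; a real system would learn these from labelled calls.
WEIGHTS = {
    "device_mismatch": 0.3,
    "geo_inconsistency": 0.2,
    "voice_anomaly": 0.3,
    "behaviour_anomaly": 0.2,
}

def risk_score(s: CallSignals) -> float:
    """Weighted combination of signals into a 0-1 risk score."""
    return (WEIGHTS["device_mismatch"] * s.device_mismatch
            + WEIGHTS["geo_inconsistency"] * s.geo_inconsistency
            + WEIGHTS["voice_anomaly"] * s.voice_anomaly
            + WEIGHTS["behaviour_anomaly"] * s.behaviour_anomaly)

call = CallSignals(device_mismatch=0.9, geo_inconsistency=0.7,
                   voice_anomaly=0.8, behaviour_anomaly=0.6)
score = risk_score(call)
print(f"risk={score:.2f}", "-> escalate" if score > 0.6 else "-> proceed")

For the example call this prints risk=0.77 and escalates: the agent sees a single actionable number rather than raw audio features, which is the point of bringing the machine alongside the human.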

Additionally, AI can go one step further and assist with the evolving threat of voice synthesis. While advanced technology such as deepfakes is not yet widespread in the fraudster community, the progress being made in that field strongly suggests the threat is quickly approaching. A powerful example of deepfake technology appears in the Anthony Bourdain documentary, which was released with a re-creation of Bourdain's voice, and it gives us an insight into a frightening reality. Should criminals successfully harness this technology, the power of AI could open up a broad spectrum of threats. At the moment fraudsters tend to use more rudimentary voice-changing tools, but the path to AI voice synthesis is clear.
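To make the detection side concrete, here is a deliberately naive sketch (this is not Pindrop's method; the heuristic and threshold are assumptions for illustration): synthesised speech can be statistically "too smooth", so one crude cue is how little the spectral flatness of the audio varies from frame to frame.

import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Values near 1 indicate noise-like spectra; near 0, tonal spectra."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(power))) / np.mean(power)

def looks_synthetic(audio: np.ndarray, frame_len=1024, threshold=0.02):
    """Naive heuristic: flag audio whose frame-to-frame spectral flatness
    varies unusually little. The threshold is invented for illustration,
    not tuned on real data."""
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len, frame_len)]
    flatness = np.array([spectral_flatness(f) for f in frames])
    return flatness.std() < threshold

# Usage with a one-second dummy signal at 16 kHz:
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 220 * t)  # a pure tone, stand-in for real audio
print(looks_synthetic(audio))  # a steady tone is artificially smooth

Real detectors rely on far richer learned features, but the shape of the problem is the same: find statistical fingerprints that a synthesiser leaves behind and a live human voice does not.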

Future-proofing is the strategic move for call centres ready to battle fraudsters with AI. As we have witnessed, despite huge advancements in technology, the human voice remains a powerful tool, and one that should not be underestimated. If businesses do not prepare for the evolution of voice fraud, they could find themselves victims of devastating attacks. And all because they were tricked by a voice.

The author is Dr Nikolay Gaubitch, director of research at Pindrop.

