How AI enhances connected device manufacturers’ fraud detection and prevention - The EE


Judd Bagley of Everise

As sure as the sun rises in the east, when money is changing hands, scammers will shortly get a whiff and show up to take what they can. And so, says Judd Bagley, VP of marketing at Everise, it is in the market for connected devices, where Statista reports that in 2019, new smartphone and smart home product revenues in the US alone combined to reach just under $160 billion (€135.23 billion) – more than the GDP of Ukraine.

What makes fraud in these markets so pernicious is that it disproportionately impacts young, highly innovative companies who’ve already cleared the high hurdle of taking a viable product to market. Their cash reserves, reputations and reseller relationships are fragile, and assaults against them can do enduring damage. 

The good news is, teams of smart people are stepping in to help connected device makers fight back, and they’re using artificial intelligence (AI) to tip the balance in their favour.

Why connected device fraud?

Before explaining the role of AI in combatting fraud, it’s worth understanding what makes connected devices so attractive to scammers. 

First, consider the unique attributes of consumer-focused connected devices, such as mobile phones and smart home technologies. These devices add so much convenience as to very quickly move from “nice to have” to “can’t live without” status. They frequently iterate, adding features and form factor changes which apply social pressure on users to regularly upgrade. Finally, these products are priced low enough to be generally accessible, thus keeping sales volume high, while being priced high enough to make stealing them well worth a fraudster’s while. 

Now, consider new pressures on these smart device makers. Product launches of some new mobile phones, for example, amount to truly global, cultural events. Largely for that reason, tech journalists and some would-be buyers are eager to publicly complain about any wrinkle in the purchasing, supply or distribution process. In response, these companies permit some loopholes in their order management systems, intended to simplify the purchasing process by erring on the side of a smooth buying experience and speedy fulfillment. 

Put these factors together, and you have a system wide open to illegal activity, on a scale that would leave most observers stunned. Of course, fraudsters know this and have learned how to illegally acquire and resell large quantities of product, inflicting losses on the order of many hundreds of millions of dollars each year.

Detecting COVID-era threat vectors

Without giving away too much, a typical scam might work like this: a fraud ring places a large order from multiple physical addresses. Upon delivery, the scammers file a wave of “buyer’s remorse” claims and are instructed to return the items. They return empty boxes, for which refunds are issued on arrival, leaving them with a mountain of free phones and a few days to run off with the money and cover their tracks before the scam is discovered.

For their part, the customer support agents processing these claims lack the time, tools and training needed to know when any particular claim fits a fraudulent pattern. Their training is understandably focused on quick-and-friendly issue resolution, not on poring through large data sets to make complex judgements on the fly.

To pick up that slack, progressive smart device makers are deputising dedicated fraud detection and prevention teams tasked with spotting illegal patterns and generating prevention policies. These teams are finding that the scale of the problem is such that it requires the help of AI-powered tools to tackle.

These tools consider the circumstances of each transaction, looking for risk factors such as the buyer’s contact information, location, IP address and payment card details, in addition to date and time (both of which, surprisingly, turn out to matter). When these factors combine to achieve a pre-determined score informed by past incidence of fraud, the tools can flag transactions for further review.
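In its simplest form, the scoring approach described above can be sketched as a weighted checklist compared against a threshold. The factor names, weights and threshold below are illustrative assumptions, not any vendor’s actual model; real systems derive these values from past incidence of fraud.

```python
# Hypothetical risk weights; in practice these would be learned from
# historical fraud data rather than hand-picked.
WEIGHTS = {
    "freight_forwarder_address": 3,
    "card_country_mismatch": 4,
    "anonymising_proxy_ip": 4,
    "unusual_order_hour": 1,      # date and time matter too
    "address_seen_before": 5,
}

def score_transaction(risk_factors, weights=WEIGHTS, threshold=7):
    """Sum the weights of the risk factors present in a transaction and
    flag it for human review when the total meets the threshold."""
    score = sum(weights[f] for f, present in risk_factors.items()
                if present and f in weights)
    return score, score >= threshold

# Example: three factors present -> score 3 + 4 + 1 = 8, so flagged.
score, flagged = score_transaction({
    "freight_forwarder_address": True,
    "card_country_mismatch": False,
    "anonymising_proxy_ip": True,
    "unusual_order_hour": True,
    "address_seen_before": False,
})
```

A single weak signal (an order at an odd hour, say) stays below the threshold, while several signals together trigger review – which is what lets these tools flag patterns without blocking every unusual but legitimate purchase.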

Where AI falls short

Where artificial intelligence comes up short thus far is in consideration of cultural nuances influencing how purchases are made locally. For example, certain types of shipping addresses or use of freight forwarders may be highly correlated with fraud in one region and not in another. Knowing the difference often requires having someone on staff with local knowledge. 

Another shortcoming of AI is its inability to catch typos intentionally added to contact information (such as a zero for the letter O, or the letter Z for the number 2) meant to make multiple instances of the same address appear distinct to AI, but identical to a delivery person.
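The look-alike trick described above can, in principle, be blunted by canonicalising addresses before comparing them. The sketch below illustrates the idea; the substitution table is an illustrative assumption, and a real system would use a much fuller list of confusable characters.

```python
# Fold look-alike characters into one canonical form so that deliberately
# varied copies of the same address compare as equal.
LOOKALIKES = str.maketrans({
    "0": "o",   # zero posing as the letter O
    "2": "z",   # pairs with lowercase "z" so 2/Z variants collide
    "1": "l",
    "5": "s",
})

def canonical(address: str) -> str:
    """Lowercase, fold look-alike characters, and drop punctuation and
    spaces, mimicking how a delivery person reads the address."""
    folded = address.lower().translate(LOOKALIKES)
    return "".join(ch for ch in folded if ch.isalnum())

# Two "distinct" addresses a delivery person would read identically:
canonical("Suite 2B, 16O0 Main St") == canonical("Suite ZB, 1600 Main St")
```

Both strings reduce to the same canonical key, so a duplicate-address check now sees one address where the order management system previously saw two.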

The irony is, fraud detection teams often observe as fraudsters methodically test systems for vulnerabilities, quickly learning from failures and successes, in ways that bear a striking resemblance to machine learning. 

Dedicated teams equipped with AI-powered tools are extremely effective at identifying and preventing fraud, but the economics of doing so often only work out for larger connected device makers. The upstarts operating at smaller volumes and margins often can’t take on the initial capital investment. Additionally, both humans and AI need experiential data points to draw upon to make accurate predictions, and that learning period dissuades many. 

Fortunately, companies can outsource to experienced, AI-augmented fraud prevention teams, which alters the calculus enough to make such a service not only affordable but, given the quick and substantial impact on the bottom line, one smart device makers can’t afford to be without.

To call both connected devices and AI “revolutionary” technologies is so obvious as to barely merit mention. But what is worthy of our attention is an examination of the ways AI is succeeding – and as yet occasionally failing – when deployed to help humans minimise the impact of bad actors in the new and quickly evolving consumer tech landscape.

The author is Judd Bagley, VP of marketing at Everise.

He blogs about outsourced fraud prevention here.

