Are deepfake videos the new blind spot for tech?

It used to be said that seeing was believing. Now that Artificial Intelligence (AI) can be used to manipulate images and audio to create entirely fake but completely plausible videos, are we reaching the point where we cannot trust our own eyes? Malou Toft, VP EMEA at Milestone Systems, reports.

Deepfake videos look and sound like the real thing, yet they are anything but genuine. Their creators use AI and machine learning to seamlessly stitch anyone into a scene. Many of these videos feature people who don’t exist, having been created from scratch by AI. Others feature people who most certainly do exist but are pictured saying things they never said.

Access is too easy

Such is the speed at which this technology is developing that convincing deepfakes are now fast and cheap to create: one report suggested it took just over $500 (€417.03) and two weeks to produce a deepfake of Mark Zuckerberg. Indeed, the ease with which they can be created explains why the number of deepfake videos found online in one study doubled in the space of a year.

In December last year, an alternative Christmas message made with deepfake technology showed Queen Elizabeth II dancing around her desk, attracting 350 viewer complaints. The show’s producers were upfront about the use of AI to create the scenes and said they wanted to draw attention to the power of the technology.

While a dancing monarch may not pass the most basic plausibility test, the increasingly sophisticated use of deepfake technology to put words in politicians’ mouths makes it harder for viewers to differentiate truth from falsehood. A deepfake message from the CEO of a publicly traded firm saying the wrong things to investors could ruin a company’s reputation and cause chaos in the financial markets.

Criminal tool

The technology also provides a powerful new tool for cybercriminals, and there have already been cases of financial fraud involving deepfakes. The CEO of one UK-based firm transferred €220,000 to a supplier’s bank account after being instructed to do so by what he believed was his boss’s voice, which had in fact been synthesised by AI. Indeed, research firm Forrester expected the cost of deepfake scams to exceed $250 million (€208.55 million) in 2020.

It’s no surprise, therefore, that more than three-quarters of US adults (77%) want measures taken to restrict the publication of deepfake videos, according to the Pew Research Center. Given the concerns of the public, businesses, NGOs and governments alike, what can be done?

Lessons can be drawn from other uses of video where the integrity of images is paramount. Law enforcement and the judiciary rely on video surveillance footage to investigate and prosecute criminal activity. When they do, there can be no question over the veracity of the video images that capture, for example, a bank robbery in progress.

Protecting video integrity

The integrity of video images used for evidential purposes is ensured by a range of protective measures. Access to the video surveillance system is restricted to prevent tampering at any point, all the way from the camera to the final video file.

The video footage is secured with encryption and watermarking, while the entire system is hardened against cyber-attacks. Those who install, monitor and rely on these video systems are trained to keep them secure.
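As a rough illustration of the tamper-evidence idea behind such watermarking, the Python sketch below signs exported footage with an HMAC so that any later alteration of the file can be detected. The key handling and file names here are hypothetical; real video management systems use more elaborate schemes, such as per-frame watermarks and hardware-backed keys.

```python
import hashlib
import hmac

# Hypothetical key; a real system would keep this in an HSM or other
# hardware-backed key store, never in source code.
SIGNING_KEY = b"replace-with-a-securely-stored-secret"

def sign_footage(path: str) -> str:
    """Compute an HMAC-SHA256 tag over a video file, read in chunks."""
    mac = hmac.new(SIGNING_KEY, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            mac.update(chunk)
    return mac.hexdigest()

def verify_footage(path: str, expected_tag: str) -> bool:
    """Re-compute the tag and compare it in constant time."""
    return hmac.compare_digest(sign_footage(path), expected_tag)

# Usage: the tag is stored alongside the exported evidence file.
# tag = sign_footage("camera01_2021-03-12.mp4")
# assert verify_footage("camera01_2021-03-12.mp4", tag)
```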

Technological solutions to the threats posed by deepfakes are emerging. Microsoft, Adobe and others are working on verification systems that will provide confidence to those who view videos online. This would help prevent businesses from being duped by cyber-criminals but wouldn’t stop deepfakes from being uploaded to social media platforms.

To do so would require Big Tech to closely control the content shared on its platforms, comparing a video’s source against the social media account uploading it and the camera on the device that captured it. Doing so is neither easy nor fool-proof.
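At its core, such verification rests on a simple idea: the capture device or publisher signs the video, and anyone can later check the signature against a trusted public key. The sketch below, assuming the third-party cryptography package and hypothetical key distribution, shows only that core check; real provenance standards also cover metadata, edit history and certificate chains, and this is not Microsoft’s or Adobe’s actual scheme.

```python
# A minimal sketch of signature-based provenance checking, assuming the
# third-party "cryptography" package (pip install cryptography). Key
# distribution and file names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_video(private_key: Ed25519PrivateKey, video_bytes: bytes) -> bytes:
    """The publisher (or capture device) signs the video at source."""
    return private_key.sign(video_bytes)

def is_authentic(public_key: Ed25519PublicKey, video_bytes: bytes,
                 signature: bytes) -> bool:
    """A viewer checks the file against the publisher's public key."""
    try:
        public_key.verify(signature, video_bytes)
        return True
    except InvalidSignature:
        # The file was altered after signing, or signed by someone else.
        return False

# Usage sketch:
# key = Ed25519PrivateKey.generate()
# clip = open("clip.mp4", "rb").read()
# sig = sign_video(key, clip)
# assert is_authentic(key.public_key(), clip, sig)
```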

If technology cannot currently provide a silver bullet for the deepfake issue, could regulation be the answer? The EU’s planned Digital Services Act will address how tech has exposed users “to a new range of illegal goods, activities or content”.

US President Biden has previously indicated he would revoke Section 230 of the Communications Decency Act, which protects social media platforms from liability for the content they host. Yet tighter regulation raises valid concerns about the implications for free speech.

In the final analysis, education may provide one of the best tools in the fight against deepfake technology. Developing greater online literacy would help internet users better identify visual misinformation and stop the sharing of deepfakes.

Making the public aware of just how convincing many deepfakes are should be a priority for all democracies. Ensuring that workers follow the correct procedures if they receive a TikTok from their CEO asking them to transfer money would also be sensible.

The author is Malou Toft, VP EMEA, Milestone Systems.
