ETSI has released a group report on artificial intelligence (AI) addressing the use of AI to manipulate multimedia identity representations, commonly known as deepfakes. The report, ETSI GR SAI 011, released by the Securing AI group (ISG SAI), focuses on the use of AI for manipulating multimedia identity representations, illustrates the consequential risks, and describes measures that can be taken to mitigate them.
“AI techniques allow for automated manipulations which previously required a substantial amount of manual work and, in extreme cases, can even create fake multimedia data from scratch. Deepfakes can also manipulate audio and video files in a targeted manner, while preserving high acoustic and visual quality in the results, which was largely infeasible using previous off-the-shelf technology. AI techniques can be used to manipulate audio and video files in a broader sense, e.g., by applying changes to the visual or acoustic background. Our ETSI report proposes measures to mitigate them”, says Scott Cadzow, chair of ETSI ISG SAI.
ETSI GR SAI 011 outlines the more immediate concerns raised by the rise of AI, notably the use of AI-based techniques for automatically manipulating identity data represented in various media formats, such as audio, video, and text (deepfakes, as well as AI-generated text software such as ChatGPT, although, as always per ETSI guidelines, the report does not address specific products or services). The report describes the different technical approaches and also analyses the threats posed by deepfakes in various attack scenarios. By analysing the approaches used, the report aims to provide a basis for further technical and organisational measures to mitigate these threats, while also discussing their effectiveness and limitations.
ETSI’s ISG SAI is a standardisation group that focuses on securing AI. It has already released eight group reports. The group works to rationalise the role of AI within the threat landscape and, in doing so, to identify measures that will lead to the safe and secure deployment of AI alongside the population that AI is intended to serve.