The Threat of Deepfakes and Their Security Implications
Advanced editing technology is making it easier for anyone to change online avatars or create synthetic personas.
Technology is making possible what we once thought unimaginable. Photos and audio of the deceased can now be brought back to life. Advanced editing technology, once the exclusive domain of the movie industry, is now available to the average internet Joe. Anyone can download a mobile phone app, pose as a celebrity, de-age themselves, or add realistic visual effects to spruce up their online avatars and virtual identities. All this and more is made possible by deepfake technology: a form of artificial intelligence (AI) capable of creating synthetic audio, video, images and virtual personas.
Common deepfake techniques
Early deepfakes were low-quality audio, video or images falsified by amateurs who superimposed ordinary faces onto movie clips or made celebrities say absurd things. This form of digital manipulation has since matured, producing media whose alterations are often indiscernible to the naked eye.
According to Europol, common ways to create deepfakes include:
- Face Swap: Superimposing the face of one person onto the body of another
- Attribute Editing: Altering characteristics of a person in the video such as speech, style, hair color, etc.
- Face Re-enactment: Transferring the facial expressions of a person in a source video onto the person in a target video
- Fully Synthetic Material: Models trained on real material learn what people look like and then generate images that are completely made up, for example, thispersondoesnotexist.com or www.generated.photos.
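To make the face-swap technique above more concrete, the following is a toy sketch of the shared-encoder, per-identity-decoder architecture that many face-swap tools are built on. This is an illustrative assumption about the data flow only: the weights are random and untrained, the "faces" are random vectors, and no real deepfake library is used. Production systems train deep neural networks on thousands of images per identity.

```python
import numpy as np

# Toy sketch of the shared-encoder idea behind face swapping.
# Faces of two identities, A and B, are compressed by the SAME encoder
# into a common latent space (capturing pose and expression), but each
# identity has its own decoder. "Swapping" means encoding a face of A,
# then decoding the latent code with B's decoder, so A's expression is
# rendered with B's appearance.

rng = np.random.default_rng(0)

DIM_FACE = 64 * 64   # flattened toy "face" image
DIM_LATENT = 32      # shared latent representation

# Shared encoder weights (hypothetical, randomly initialised)
W_enc = rng.normal(size=(DIM_LATENT, DIM_FACE)) * 0.01
# Separate decoder weights for each identity
W_dec_a = rng.normal(size=(DIM_FACE, DIM_LATENT)) * 0.01
W_dec_b = rng.normal(size=(DIM_FACE, DIM_LATENT)) * 0.01

def encode(face):
    """Map a face to the identity-agnostic latent space."""
    return np.tanh(W_enc @ face)

def decode(latent, w_dec):
    """Render a latent code in one identity's appearance."""
    return np.tanh(w_dec @ latent)

face_a = rng.normal(size=DIM_FACE)   # stand-in for a photo of person A

latent = encode(face_a)              # A's pose/expression, identity-free
swapped = decode(latent, W_dec_b)    # rendered as person B: the "swap"

print(latent.shape, swapped.shape)
```

The key design point the sketch illustrates is that the encoder is shared while the decoders are not: training forces the latent space to describe what is common to all faces, so decoding with the other identity's weights transfers expression and pose across identities.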
The implications of deepfakes for organisations
Although deepfake applications might seem like an innocent form of entertainment on the surface, they carry serious risks for businesses, governments and society as a whole. Threat actors can easily manipulate videos, swap faces, change expressions or synthesize speech to defraud and misinform individuals and companies. What's more, people are being bombarded with information, and it is becoming increasingly difficult to distinguish what's real from what's fake.
Attackers can combine social engineering with deepfakes to win victims' trust, exploit their weaknesses and manoeuvre them into taking a desired action. For several industries, deepfakes can have terrifying implications.
In 2020, fraudsters used AI voice-cloning technology to scam a bank manager into initiating wire transfers worth $35 million. Deepfakes also feature in other scams, such as ghost fraud, where the persona of a deceased person is used to access online services, apply for credit cards, secure benefits, open new accounts and take out loans that are never repaid. Deepfakes are particularly concerning for the property insurance sector, which is experiencing a rapid rise in touchless claims processing: scammers can upload altered, manipulated or synthetic photos and media to self-service platforms, significantly amplifying the risk of fraud.
Disinformation in politics is a long-established practice. Deepfakes can be leveraged as a strategic tool for spreading disinformation, manipulating public opinion, stirring civil unrest and deepening political polarisation. In one recent example, a deepfake video of Ukrainian president Volodymyr Zelensky, urging Ukrainians to lay down their arms, was broadcast on Ukrainian TV.
Consider stock-market manipulation: a threat actor seeking a quick profit creates deepfake profiles of leading influencers and shares them on social media and stock market forums. As the stock price jumps, the threat actor cashes out before the price corrects. This is not hypothetical: a deepfake video of Elon Musk promoting a fake trading platform went viral on social media.
Fake evidence created with deepfakes can be planted in a court of law, proceedings can be delayed or manipulated, and cross-examination becomes fraught when one party testifies to the contents of a deepfake video while the opposing party denies them. In a custody battle in the UK, for example, doctored audio files and footage were submitted to the court as evidence.
According to the FBI, deepfakes can enable Business Identity Compromise (BIC) attacks that result in significant financial and reputational damage. Deepfakes can also facilitate a range of criminal activities, including online harassment and bullying, fraud and extortion, non-consensual pornography and online child exploitation. The FBI has further noted an emerging trend of malicious actors using deepfakes to pose as job interviewees in order to gain access to company systems.
How can organisations mitigate the risks of deepfakes?
Read the full article to find out…