Deepfakes: A new threat to personal privacy and identity

25 August, 2022

By Freddie Lichtenstein

Concerns are growing over the increasing sophistication of deepfakes, and it seems that we’re entering a new era of media manipulation.

A deepfake is an image, video, or voice recording that has been edited to replace one person’s likeness or voice with another’s, and deepfakes can be used with innocent intentions. A recent music video is just one example, in which the artist morphs into different celebrities and personalities.

However, it’s important to understand that the term itself has harmful origins (it was coined in 2017 by online users producing non-consensual pornographic face swaps), and the potential for deepfakes to create lasting harm is huge.

From manipulated political videos to exploitative adult imagery, the risks of deepfake technology can’t be ignored.

Understanding the danger of deepfakes

By their nature, deepfakes enable a creator to place an unwilling or unknowing participant into a digital scene by swapping that person’s face and/or voice onto the media’s original subject.

As a result, influencers, politicians, media personalities, and others can be shown in any environment, doing or saying whatever the deepfake’s creator chooses. This is especially dangerous when, unlike with the music video discussed above, the audience is unaware that the images they see are false.

Currently, the two use cases of deepfakes with the most damaging consequences are political manipulation and adult imagery.

Scenario #1: political manipulation

As of 2022, deepfakes have been identified as a serious political risk.

Although the technology is in its early stages, deepfakes have already been used in political arenas to disrupt proceedings, harm candidates, and endanger international relations.

One widely shared video doctored to make US House Speaker Nancy Pelosi appear drunk was, strictly speaking, a slowed-down “cheapfake” rather than a deepfake, but genuine deepfake videos were deployed on both sides of the Russia/Ukraine war, urging people to surrender.

Political opponents have realised the potential that deepfakes offer. In a world of fast headlines and on-demand information, stopping these videos from gaining traction is increasingly critical to preventing harm.

Scenario #2: adult imagery exploitation

On the 25th of May 2020, law graduate Noelle Martin received an email notifying her that deepfake adult imagery using her face was circulating online.

As she revealed in an interview with Vox, the title of the video included her full name and could be discovered by anyone – from potential employers to family members.

While deepfake videos of this nature were previously made using celebrities, Noelle’s case showcases how capable, and how widespread, the practice has become: a subject’s likeness is copied without any ability to consent, and ordinary members of the public can now be abused or victimised.

Deepfakes with limited data

Noelle’s experience demonstrates an evolution in the ability of deepfakes to produce convincing results from limited data. Creating a deepfake video used to require a huge set of training data covering the subject’s facial expressions from every angle.

The more high-fidelity training data available, the more realistic a deepfake was.

Previously, that amount of footage existed only for celebrities and other public figures, but as deepfake models advance, and social media continues to play an ever-present role in daily life, it’s now increasingly possible to create a deepfake video of anyone.

Creating a deepfake now involves one of two techniques:

Individual targeting:

Individual targeting is just that: training a model on one specific subject for one specific purpose. The popular deepfake Tom Cruise TikTok videos (@deeptomcruise), for example, use this technique. This type of deepfake has traditionally been the most common, but a new kind of technology is on the rise.

Subject-agnostic models:

These models represent the possibilities of off-the-shelf deepfake solutions.

Subject-agnostic models are pre-trained models that can be applied with no further training, meaning that, unlike individually targeted models, they don’t need vast amounts of subject data to deploy. This significantly lowers the barrier to entry: almost anyone can create misleading or harmful media from just a single image, as the sketch below illustrates.
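To make that concrete, here is a minimal sketch of the subject-agnostic workflow. Every model-related name in it (load_pretrained_face_swap, model.swap, the weights file) is a hypothetical stand-in rather than a real library API; the point is simply that no per-subject training step appears anywhere.

```python
from PIL import Image

# Hypothetical sketch only: `load_pretrained_face_swap` and `model.swap`
# are illustrative stand-ins, not functions from any real library.

def subject_agnostic_deepfake(target_photo: str, driving_video: str) -> None:
    # The model ships pre-trained on many faces; no per-subject training happens.
    model = load_pretrained_face_swap("generic_face_weights.pt")  # hypothetical

    # A single photo of the target is the only subject data required.
    target_face = Image.open(target_photo)

    # Motion and speech come from the driving video; the target's face is
    # transplanted onto it frame by frame.
    model.swap(source=target_face, driving=driving_video,
               output_path="output.mp4")  # hypothetical
```

Contrast this with individual targeting, where the bulk of the work, and the data, goes into a fresh training step for every new subject.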

Deepfakes are advancing

We’re currently in the early days of deepfake technology, and viewers can often still visually distinguish authentic media from inauthentic, but this won’t always be the case.

As this tech evolves, it may become more commoditised, indistinguishable, and accessible than ever.

As deepfakes gain popularity, they may destabilise the world of online media. Sam Gregory, programme director for the NGO WITNESS, believes that our sentiments will shift to a “disbelief by default” mentality – holding all online media as false until proven otherwise.

However, countermeasures are already being developed to combat deepfakes. Deepfake detectors are being researched and produced, and awareness initiatives are being set up.
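Many of the detectors being explored are, at heart, binary image classifiers run frame by frame over a video. As a loose sketch, assuming a standard PyTorch/torchvision setup (the ResNet-18 backbone and single-logit head are illustrative choices, not a description of any particular product):

```python
import torch
import torch.nn as nn
from torchvision import models

def build_frame_detector() -> nn.Module:
    # Start from a network pre-trained on ordinary images...
    detector = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    # ...and swap its final layer for a single "fake probability" output.
    detector.fc = nn.Linear(detector.fc.in_features, 1)
    return detector  # fine-tune on labelled real/fake frames before use

def fake_probability(detector: nn.Module, frame: torch.Tensor) -> float:
    # `frame` is a normalised image tensor of shape (3, 224, 224).
    detector.eval()
    with torch.no_grad():
        logit = detector(frame.unsqueeze(0))  # add a batch dimension
    return torch.sigmoid(logit).item()        # map the logit to [0, 1]
```

Detection, though, is an arms race that technology alone won’t settle, which raises a bigger question: when dealing with deepfakes, where does the responsibility lie?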

The burden of responsibility

Creating and deploying a deepfake involves multiple parties, from those that create the technology itself to online hosts of videos and imagery.

With so much potential for deepfakes to invade our lives, establishing responsibility can help control and reduce the risk of misinformation, amongst many other consequences.

Deepfakes present an ever-growing danger through exploitative online imagery and political misinformation, so much so that it may change how we interact with online news and media.

As we continue to explore the role of deepfakes, and how best to combat them, we hope to help spread awareness of their dangers and understand their impact in greater depth.

For more industry thoughts and news, visit our blog.

