What is a deepfake, and how can you recognize it?

Deepfakes are videos, images, and audio recordings that have been doctored using artificial intelligence, and they’re a growing concern for governments, law enforcement agencies, and individuals alike. The first step in protecting yourself from such threats is education – learn how to recognize doctored media and understand the harm it can cause. Let’s dive in.

Jun 20, 2024


What is a deepfake?

Deepfake definition

Deepfake is a form of synthetic media created with artificial intelligence (AI) and deep learning techniques. Deepfake technology uses preexisting images and videos of a person to produce realistic videos, images, or audio of them doing or saying things they never did.


You can think of it as an evolution of Photoshop – a way to doctor images and videos of real people so they appear in situations they never were in. Deepfakes mislead audiences into believing that the portrayed individual did or said things they never actually did, often for malicious purposes.

How do deepfakes work?

Deepfakes leverage deep learning and generative AI to create highly realistic content. At the core of deepfake technology is a pair of neural networks known as a generative adversarial network (GAN). A GAN consists of two main components: a generator and a discriminator, both of which learn patterns from training data. The generator creates fake images or videos, while the discriminator examines them and tries to tell whether they are real or fake.

The generator and discriminator engage in a continuous loop. The generator improves its output to trick the discriminator, and the discriminator enhances its ability to detect fakes. Over time, the generator produces highly realistic fake media that can be difficult to differentiate from authentic content.

The process typically starts with a large dataset of images or videos of the target individual. The deepfake algorithm analyzes them to learn the subject’s facial expressions, body movements, and other nuances. The generator uses this training data to produce fake content.
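For the technically curious, here is a minimal, illustrative sketch of that generator-versus-discriminator loop in PyTorch. It trains two tiny networks on toy two-dimensional data rather than faces, and none of the names or numbers come from any real deepfake tool – but the structure is the one described above: the discriminator learns to separate real from fake, and the generator learns to fool it.

```python
# A minimal GAN sketch on toy 2-D data (illustrative only - real deepfake
# systems use far larger networks and huge image/video datasets).
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_real(batch_size):
    # "Real" data: points from a Gaussian centered at (2, 2) that the
    # generator must learn to imitate.
    return torch.randn(batch_size, 2) * 0.5 + torch.tensor([2.0, 2.0])

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: label real samples 1 and generated samples 0.
    real = sample_real(64)
    fake = generator(torch.randn(64, 8)).detach()  # don't update G on this pass
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(64, 8))), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)))  # samples should now cluster near (2, 2)
```

After a couple of thousand steps, the generated points cluster around the real distribution – the same dynamic that, at a vastly larger scale, produces convincing fake faces.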

What is an example of a deepfake?

Deepfakes showcase how AI can manipulate media in impressive ways. Imagine a video in which a celebrity’s face is seamlessly put onto another person’s body, making it appear as though the celebrity is performing actions or speaking words they never actually did. However, this is just one of the many applications of deepfake technology.

Textual deepfakes

Textual deepfakes involve using artificial intelligence to generate convincing fake text. This technology can create realistic articles and news stories or even impersonate someone’s writing style in emails and messages.

The AI analyzes large datasets of text to learn and mimic the way a person writes or speaks. While this can be useful for automating content creation, it poses significant risks. People can use textual deepfakes to spread false information or deceive individuals into believing they are communicating with someone they trust, causing significant harm.
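To show how low the barrier to entry is, the sketch below generates text with the small, publicly available GPT-2 model via the Hugging Face transformers library. The prompt and model are arbitrary examples; a bad actor impersonating a specific person’s style would fine-tune a model on that person’s writing, which is deliberately not shown here.

```python
# Illustrative only: producing fluent text with a small public model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Breaking news: the city council announced today that",  # example prompt
    max_new_tokens=60,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```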

Video deepfakes

Deepfake videos use advanced AI techniques to create realistic fake videos by swapping faces or altering appearances. The process involves using neural networks to map and animate facial features, making the altered video appear genuine.

AI-generated videos pose serious ethical concerns. Criminals can use such videos to fabricate events, create fake news, or impersonate individuals, leading to misinformation, reputational damage, and potential legal issues.
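The “mapping facial features” step usually begins with facial landmark detection. As a hedged illustration of that single step (not face swapping itself), the sketch below uses Google’s MediaPipe Face Mesh to extract landmark coordinates from one image; a deepfake pipeline would use landmarks like these to align, warp, and blend a generated face. The file name frame.jpg is hypothetical.

```python
# Illustrative only: detecting facial landmarks with MediaPipe Face Mesh.
import cv2
import mediapipe as mp

image = cv2.imread("frame.jpg")               # hypothetical input frame
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
    results = mesh.process(rgb)

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    height, width = image.shape[:2]
    # Landmark coordinates are normalized; convert a few to pixel positions.
    for point in landmarks[:5]:
        print(int(point.x * width), int(point.y * height))
```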

Audio deepfakes

Deepfake audio technology uses AI to generate realistic speech that mimics a person’s voice. The AI analyzes a target’s voice recordings and produces an audio track that sounds just like that person.

People can use this technology to create fraudulent recordings and impersonate others for malicious purposes. For example, scammers can use audio deepfakes to make fake phone calls and extract money.
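As a rough illustration of where such systems start, the sketch below uses the librosa library to convert a (hypothetical) recording of a target’s voice into mel-spectrogram features – the representation most voice-cloning models learn from. The synthesis model itself is deliberately left out.

```python
# Illustrative preprocessing only: no voice model is trained here.
import librosa
import numpy as np

# "target_voice.wav" is a hypothetical file name.
audio, sample_rate = librosa.load("target_voice.wav", sr=22050)

# Mel spectrograms are the typical input representation for speech-synthesis models.
mel = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=80)
log_mel = librosa.power_to_db(mel, ref=np.max)

print(log_mel.shape)  # (80 mel bands, number of time frames)
```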

Live deepfakes

Live deepfakes involve real-time manipulation of video and audio feeds to alter a person’s appearance and voice during live broadcasts or video calls. This technology can be used in virtual reality, live performances, or interactive gaming to create immersive, engaging experiences.

However, the potential for misuse is significant. Live deepfakes can be used to impersonate others during video conferences, spread false information, or carry out fraud.

Deepfake nude bots

Deepfake nude bots use AI to create explicit images or videos of people by manipulating existing content without their consent. One notorious example is the Telegram deepfake bot. Researchers discovered that this bot forged nude photos of over 100,000 women. The bot’s AI algorithm replaces the clothed parts of an uploaded image with artificially generated nudity.

The images were shared online, mainly targeting women, some of them underage. Cybercriminals promoted the bot through various social media platforms, mostly Telegram and VKontakte.

Such bots raise the deepfake threat to a new level because they target private individuals and are dangerously accessible. If you have even one photo of yourself online, you could become a victim of this kind of abuse. Staying alert online is crucial to preventing such threats, and limiting the content you share publicly makes it harder for threat actors to use it for malicious activities.

Is there a positive use case for deepfakes?

Deepfake technology can be put to positive use – for example, creating realistic aged-up images of missing people, recreating historical events, or enhancing realism in games. However, deepfakes are typically produced and distributed without any form of consent, and using another person’s image without their agreement is ethically, and often also legally, wrong. The potential misuse of deepfakes can lead to misinformation, privacy violations, and reputational damage.

How are deepfake videos made?

Although deepfake AI technology is still evolving, it is already highly advanced and capable of producing realistic results quickly with minimal human intervention. Various free apps, such as Wombo and FaceApp, enable anyone to create deepfake media. While these programs may produce less convincing results, they represent the beginning of what is possible with this technology.

How to identify a deepfake

It’s not always easy to detect a deepfake. While some videos are clearly fake, with facial expressions giving off a surreal, uncanny valley effect, others are more sophisticated.

Several factors can help you determine whether you’re looking at a convincing deepfake or not. If the video contains a face, focus your attention there and look for these giveaways:

Smooth or blurry patches

The connecting points where the deepfake overlay meets the face of the person underneath can sometimes appear oddly smooth and textureless. Even in more polished examples, sudden head movements or changes in lighting can momentarily reveal blurry facial borders.
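If you want to go beyond eyeballing a frame, one simple (and imperfect) heuristic is to compare the sharpness of the face region with the rest of the image using the variance of the Laplacian – unusually smooth face regions score low. The sketch below assumes a saved frame called frame.jpg and uses OpenCV’s bundled Haar cascade face detector; treat it as a rough screening aid, not a deepfake detector.

```python
# A rough sharpness check, not a deepfake detector.
import cv2

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

def sharpness(region):
    # Variance of the Laplacian: low values indicate blur or smoothness.
    return cv2.Laplacian(region, cv2.CV_64F).var()

# Locate faces with OpenCV's bundled Haar cascade (a basic detector).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_score = sharpness(frame[y:y + h, x:x + w])
    print(f"face sharpness: {face_score:.1f}, whole frame: {sharpness(frame):.1f}")
```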

Inaccurate non-facial features

If the video represents a public figure, like a politician, you can find real images of that person to compare. Look at elements outside the main facial features that might not have been altered – hands, hair, body shape, and other details that don’t sync up between the video in question and older, more reliable visual sources.

Unnatural movement

Deepfake technology is relatively new, but it has improved considerably in the last few years and can now recreate a person’s movements quite convincingly. However, some details can still give a deepfake away.

If the subject’s body and head seem oddly stiff, it could be a sign that the video creators are trying to make it easier for the deep-learning AI to map an image onto the person’s face without having to track too much movement.
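One way to put a number on that stiffness is to measure how much the picture actually moves from frame to frame with dense optical flow. The sketch below (assuming a local file called clip.mp4) averages per-pixel motion across a clip; unusually low, uniform values in a talking-head video can be a hint, though on their own they prove nothing.

```python
# Average per-frame motion via dense optical flow - a rough heuristic only.
import cv2
import numpy as np

capture = cv2.VideoCapture("clip.mp4")   # hypothetical input clip
ok, prev = capture.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

motion = []
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion.append(np.linalg.norm(flow, axis=2).mean())  # mean pixel displacement
    prev_gray = gray

capture.release()
print(f"average motion per frame: {np.mean(motion):.3f}")
```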

An unconvincing voice

Deepfake technology is rapidly evolving, but for now, training computers to create audio simulations seems to produce poorer results than synthesizing convincing deepfake images and video.

The creator of the deepfake has to choose between two options if they want their subject to speak – either use an AI-generated voice or an actor who can impersonate the source material. If the subject is a public figure, compare the voice in the video with authentic recordings of that person speaking, and you may notice differences.

Are deepfakes illegal?

Creating a deepfake is not, in itself, a crime in most jurisdictions. However, deepfakes are increasingly associated with criminal activities such as disinformation, fake news, and the creation of malicious adult content.

Legitimate applications exist, but the potential misuse overshadows them. Cybercriminals use deepfakes to create explicit videos and images illegally, often featuring female celebrities and public figures. There have even been reports of deepfake nude bots that can automatically generate this material.

While deepfakes are not inherently illegal, individuals who create or distribute them for illegal purposes may face criminal charges.

Is deepfake technology a threat?

Deepfake technology poses several very real threats to individuals and society at large. Convincing deepfakes can be extremely harmful if they are created as revenge porn and shared online, which is why many countries are beginning to institute laws to criminalize this activity.

However, deepfakes pose several other growing threats, which we’ll explore here.

Fake news and propaganda

A convincing deepfake video can be used as propaganda to smear political opponents and rival governments. For example, we can look to 2022 when, shortly after the Russian invasion of Ukraine, deepfakes appeared online showing the Ukrainian president surrendering.

While this video was exposed as fake, it’s easy to imagine how damaging this strategy could be once the technology becomes more challenging to detect. A deepfake could be used to smear a political opponent and influence voters. Alternatively, widespread use of this technology could discredit a genuinely incriminating video.

If detecting a deepfake video becomes impossible, fabricated media has the potential to supercharge the risks of fake news and fuel distrust in the mainstream media.

Scams and social engineering

Deepfake technology could also be used to trick people into exposing private information or even giving away money with phishing attacks. You might be suspicious if you receive a Facebook message from a friend explaining that they are stranded overseas and need urgent financial assistance. Perhaps you call or text them on a different app and discover their Facebook account has been hacked.

Now imagine the same scenario, but you’re sent a compelling video of your friend instead of a message. The audio and video seem genuine. You now have visual proof that they are stuck in an airport without enough money for the ticket home. Many people would send the money, feeling no need to ask for further reassurance — but what they’ve just seen could be a deepfake.

Thanks to the growing sophistication of generative adversarial networks and other machine learning systems, a fraudster could soon use these convincing videos to facilitate identity theft and aid in further scams.

The future of deepfakes

As technology improves, it will become increasingly difficult to detect deepfakes. If we reach the point where anyone with a computer and a basic understanding of VFX software can produce a video of the president saying something outrageous, we’re in trouble.

It’s already hard to spot deepfakes, but they could soon become impossible to detect. With a polarized political system, it’s not unlikely that a deepfake might be used as part of a covert campaigning strategy. Though the technology’s use in recent movies and TV shows is helping to raise awareness among the public, many people could still be misinformed by a convincing video.

One simple step you can take today to counter the risks posed by deepfake scammers is to limit how many images you post online. Creating a convincing AI-generated video relies on the system accessing photos and footage of the subject it’s trying to recreate. We suggest that you keep your social media profiles private and avoid posting regular images of your face.


Aurelija Skebaite

Aurelija is passionate about cybersecurity and wants to make the online world safer for everyone. She believes the best way to learn is by doing, so she approaches cybersecurity topics from a practical standpoint and aims to help people protect themselves online.