Deepfakes are videos that have been doctored using artificial intelligence, and they’re a growing concern for governments, law enforcement agencies, and individuals. These pieces of synthetic media can be very convincing, and could one day undermine our ability to trust our own eyes. But what is deepfake technology? And why is it so dangerous?
Deepfakes can have a variety of purposes, ranging from de-aging actors in Hollywood blockbusters to waging dangerous propaganda wars against political rivals.
Deepfake technology can create convincing videos of people, including politicians and celebrities, appearing to say and do things that they never actually did. Deepfake programs are relatively accessible and easy to use and may soon play a major role in the movie industry.
Combine this with the fact that artificial intelligence systems can also recreate the audio of specific human voices in the same way, and a good deepfake becomes a powerful — and potentially dangerous — tool.
Deepfakes are made with artificial intelligence systems, which use a process called deep learning to analyze and recreate media. In some cases, these deep learning systems use a generative adversarial network: the AI is split into two rival components, a generator that produces fakes and a discriminator that tries to spot them, and the two compete to learn and improve their output.
A deepfake usually starts with video footage of a person, often an actor. This footage is just the puppet onto which another face will later be projected. The artificial intelligence then views hundreds of images of a different individual and learns to realistically recreate their features. The new face is then mapped onto the actor's movements, syncing expressions and lip motions.
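The adversarial training loop described above can be sketched in miniature. The toy example below is an illustrative assumption, not a real deepfake system: instead of faces, a one-parameter generator learns to mimic a simple one-dimensional Gaussian distribution, while a logistic discriminator tries to tell real samples from generated ones. Real systems use deep convolutional networks, but the rivalry works the same way.

```python
# Toy sketch of a generative adversarial network (GAN) — NOT a deepfake tool.
# Generator g(z) = w*z + b turns noise into samples; discriminator
# d(x) = sigmoid(a*x + c) estimates the probability that x is "real".
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

w, b = 1.0, 0.0      # generator parameters
a, c = 0.1, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.normal(size=batch)                  # noise fed to the generator
    x_fake = w * z + b                          # generated ("fake") samples
    x_real = rng.normal(4.0, 0.5, size=batch)   # target "real" distribution

    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)

    # Discriminator step: ascend log d(real) + log(1 - d(fake)),
    # i.e. get better at separating real samples from fakes.
    a += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log d(fake), i.e. get better at fooling
    # the freshly updated discriminator.
    d_fake = sigmoid(a * x_fake + c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

samples = w * rng.normal(size=1000) + b
print(round(float(np.mean(samples)), 2))  # generated mean drifts toward 4.0
```

Neither network is ever told what the target distribution looks like; the generator improves only because the discriminator keeps punishing unconvincing output, which is exactly the dynamic that makes deepfake faces steadily more realistic.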
Though deepfake AI is still being polished and improved, it’s already at a very advanced stage and can produce results rapidly with minimal human oversight.
Anyone can make deepfake videos and images, using a variety of free apps like Wombo and FaceApp. These are simple programs that don’t produce extremely convincing results, but they’re just the tip of the iceberg.
Visual effects artists have been looking for ways to de-age older actors and even bring deceased celebrities back to the screen. We’ve seen attempts to de-age Arnold Schwarzenegger in “Terminator: Dark Fate” and to resurrect Peter Cushing in “Rogue One: A Star Wars Story.”
These were not deepfakes but careful digital reconstructions built by visual effects artists. Strikingly, many amateur VFX artists later made their own versions of the relevant scenes and used deepfake software to generate even better facial recreations. This demonstrates an exciting and legitimate use case for the technology.
You can view some of these convincing AI-generated videos on YouTube along with numerous fake videos of public figures like Donald Trump, Barack Obama, and Tom Cruise, all recreated through this same technology.
Deepfakes are quickly garnering a bad reputation as tools of disinformation, fake news, and malicious adult content. They are not illegal in and of themselves, however.
There are legitimate uses for deepfakes – their role in digital effects in movies foremost among them – but they can also be used for illegal practices, which is one of the main reasons they've entered the public consciousness recently.
Deepfakes are sometimes used to illegally create pornographic videos and images, often using the likeness of female celebrities and public figures. There have even been reports of deepfake nude bots which can automatically generate this material.
As if this weren't bad enough, these fake videos can also form part of a government's propaganda strategy, deployed to undermine confidence in its enemies.
It’s not always easy to detect a deepfake. While some are clearly fake videos, with facial expressions giving off a surreal uncanny valley effect, others are more sophisticated.
Several factors can help you determine whether you’re looking at a convincing deepfake video or not. If the video contains a face, focus your attention there and look for these giveaways:
The connecting points where the deepfake video overlay meets the face of the person underneath can sometimes appear oddly smooth and textureless. Even on better examples, any sudden head movements or changes in lighting can momentarily reveal blurry facial borders.
If the video represents a public figure, like a politician, you can find images of that person to compare. Look at elements outside the main facial features that might not have been altered: hands, hair, body shape, and other details that don't sync up between the video in question and older, more reliable visual sources.
At least for now, deepfakes look more convincing when the subject isn't moving too much. If the body and head of the subject seem oddly stiff, it could be a sign that the creators of the video are trying to make it easier for the deep learning AI to map an image onto the person's face without having to track too much movement.
Deepfake technology is rapidly evolving, but for now training computers to create audio simulations seems to produce poorer results than synthesizing convincing deepfake images and video.
The creator of a deepfake has to choose between two options if they want their subject to speak: use an AI-generated voice, or hire an actor who can impersonate the source material. Compare the voice to genuine recordings of the celebrity or politician speaking, and you may notice some differences.
Deepfake technology poses a number of very real threats to individuals and society at large. Convincing deepfake videos can be extremely harmful if they are created as revenge porn and shared online, which is why many countries are beginning to institute laws to criminalize this activity.
Several other growing threats are posed by deepfakes, however, which we’ll explore here.
A convincing deepfake video can be used as propaganda to smear political opponents and rival governments. For example, shortly after the Russian invasion of Ukraine in 2022, deepfake videos appeared online showing the Ukrainian president surrendering.
While this video was exposed as a fake, it's easy to imagine how damaging this strategy could be once the technology becomes harder to detect. A deepfake could be used to smear a political opponent and influence voters. Alternatively, widespread use of this technology could be used to discredit a genuinely incriminating video.
If detecting a deepfake video becomes impossible, fabricated media has the potential to supercharge the risks of fake news and fuel distrust in the mainstream media.
Deepfake technology could also be used to trick people into exposing private information or even giving away money with phishing attacks. If you receive a Facebook message from a friend explaining that they are stranded overseas and need urgent financial assistance, you might be suspicious. Perhaps you call them or text them on a different app and find out that their Facebook account has been hacked.
Now imagine the same scenario, but instead of a message, you’re sent a completely convincing video of your friend. The audio and video seem genuine; you now have what seems like visual proof that they really are stuck in an airport without enough money for the ticket home. A lot of people would send the money, feeling no need to ask for further reassurance — but what they’ve just seen could be a deepfake.
Thanks to the growing sophistication of generative adversarial networks and other machine learning systems, a fraudster could soon use these convincing videos to facilitate identity theft and aid in further scams.
It will become increasingly difficult to detect deepfakes as the technology improves. If we reach the point where anyone with a computer and a basic understanding of VFX software can produce a video of the president saying something outrageous, we’re in trouble.
It's already hard to spot deepfakes, and they could soon become impossible to detect. In a polarized political climate, it's easy to imagine a deepfake being used as part of a covert campaign strategy. Though the technology's use in recent movies and TV shows is helping to raise public awareness, many people could still be misled by a convincing video.
One simple step you can take today to counter the risks posed by deepfake scammers is to limit how many images you post online. Creating a convincing AI-generated video relies on the system accessing photos and footage of the subject it's trying to recreate. We suggest that you keep your social media profiles private and avoid posting regular images of your face.