Everything you need to know about fake news

Social media has ushered in a new age of disinformation, fake news, and propaganda. In recent years, the number of countries using social media manipulation campaigns increased by 150%. What goals do these campaigns serve, how are they executed, and what can you do to see through the fake news?

Why use fake news?

A growing number of governments around the world are using fake news and deceptive online media practices. Back in 2018, over 70 countries were found to have used some form of social media manipulation, and that number is almost certainly higher now.

These activities were usually ordered by government agencies, political parties, or politicians. Sometimes, private contractors or civil organizations launched them instead. Such campaigns tend to focus on three key areas – pro-government or pro-party propaganda, propaganda attacking the opposing party, and messages to divide society.

Broadly speaking, the goal of these strategies is to mislead social media users locally and abroad. But why? When you dig deeper, it becomes evident that misleading information can have detrimental effects and can be used to:

  • Distract or divert conversations away from important issues. Magicians divert your attention from their hands to work their tricks, and disinformation campaigns do the same; as a result, the campaign’s organizers can achieve their goals unnoticed.
  • Incite violence, amplify hate speech, and increase polarization between religious, political, or social groups. One such example was the genocide of Muslims in Myanmar from 2017 onwards, which was incited primarily by propaganda spread on Facebook. And since 2014, Russian-backed media outlets have repeatedly spread fake news stories about Ukrainian violence against people in the contested Donbas area, in an apparent attempt to sow division between the regions.
  • Micro-target voters to influence presidential elections or other public votes, such as referendums. This occurred during the 2016 US presidential election and the Brexit referendum. To achieve this, bad actors harvest data about users’ demographics, hobbies, income, and more to work out how best to polarize them.
  • Suppress fundamental human rights such as the right to freedom of expression or freedom of information. Propaganda puts you in a manufactured bubble of potentially false information, meaning that it becomes more difficult to identify what’s right and what’s real.
  • Influence global readers, which is especially important for authoritarian regimes. Popular social media channels that are banned for Chinese citizens are still used by the government to spread favorable news worldwide.

How does disinformation spread?

Disinformation spreads like wildfire on social media because people have started trusting social media as their primary source of information. Social media allows entities to gather and harness a lot of information about you. This makes it easier to target you, tailor messages to your taste, and make them more convincing.

For example, politicians are more likely to spend their money targeting voters whose views are ambivalent. It’s much easier and more cost-effective to convert someone who doesn’t yet have a strong opinion about certain issues.

How is this achieved?

  • Using bot accounts created to mimic real users, or fake accounts operated by humans. Some campaigns use a mixture of fake accounts and automated messages, or hacked and stolen accounts.
  • Working with the public, including civil society organizations, youth groups, social media influencers, and volunteers who support the cause. Hiding behind these entities can make it incredibly difficult to recognize false information.
  • State-controlled media platforms. Viral news stories, especially in the form of video clips, can be promoted by state-backed media outlets. This is a common strategy within oppressive regimes, but it can also be very effective abroad. RT, an English-language broadcaster broadly seen as a mouthpiece for the Russian government, has been spreading pro-Russian propaganda for years, although European regulators have finally started limiting its reach since the war in Ukraine began.
  • Choosing different types of media to spread disinformation. You may not even consider them propaganda. Memes, videos, fake websites, social media posts, comments, and content produced by influencers can all be used to spread fake news and propaganda. In addition, the widely accessible AI chatbot ChatGPT is now often used to create content for various media platforms. However, the chatbot’s tendency to provide inaccurate information and to convincingly mimic the tone and style of particular people has raised questions about ChatGPT’s security.
  • Choosing the right platform. Facebook is still one of the biggest platforms used to spread disinformation, followed by Twitter, Instagram, WhatsApp, and YouTube. Cybersecurity experts warn that artificial intelligence, virtual reality, and the internet of things will soon be used more widely to create and support propaganda.
  • Deepfake technology has become an increasingly effective tool of misinformation in recent years. Deepfakes are pieces of media involving convincing facsimiles of people’s faces and voices, generated by artificial intelligence. In 2022, after the outbreak of Russia’s war in Ukraine, a deepfake video circulated online showing the Ukrainian president, Volodymyr Zelensky, telling his country’s troops to surrender. The video was completely fabricated, of course.

If it’s becoming so hard for the average user to distinguish truth from fiction, should social media companies be responsible for monitoring what is posted on their platforms?

What is social media doing to stop this?

In the US, where most social media companies are based, a 1996 federal law (Section 230 of the Communications Decency Act) states that social media platforms are not news publishers and thus are not responsible for content posted on their websites. However, fake news and hate speech are on the rise, and more and more people are pressuring these channels to weed out objectionable content. So what are they doing?

Facebook uses various technologies to detect fake news and fake accounts. It works with external fact-checkers and employs thousands of people to review suspicious content by hand. The social media giant has also made its ad-buying policies much stricter, especially for political campaigns. Twitter has targeted automated accounts and bots, while YouTube has altered its algorithms to make such content more difficult to find.

However, these methods are not foolproof. The algorithms are not advanced enough to catch fake news every time, and manual review leaves room for human error. Filtering also raises difficult questions. What are its implications for freedom of speech? What if your post was genuine but was flagged as suspicious? Could filtering serve other purposes, such as favoring one political party while blocking the ads and posts of the opposition?

With or without filtering, those who create misinformation are constantly adopting new tools and techniques to fool us. So it’s important for YOU to take charge of the information you consume and learn to separate the wheat from the chaff.

Fake news: propaganda’s latest evolution

Propaganda has long been a part of warfare, but the 2022 war in Ukraine has shown us how it’s evolving in the 21st century. A huge amount of fake news and fabricated media is being posted and shared online in relation to the Russian invasion.

We’ve already touched on Volodymyr Zelensky’s deepfake video, but there are many other instances of Russian authorities using fake or misleading videos to promote their cause. For example, Russian accounts posted videos that they claimed showed recent Ukrainian aggression against the Russian-backed Donbas region. However, the videos’ metadata was quickly used to prove that they were actually several years old.
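
This kind of metadata check is not limited to professional fact-checkers. As a rough illustration of the idea (for images rather than video, and not the exact method used in that case), here is a minimal Python sketch that reads date-related EXIF tags with the Pillow library; the file name is a hypothetical example, and many platforms strip this metadata on upload.

    # Minimal sketch: read date-related EXIF tags from an image file.
    # Requires the Pillow library; "suspicious_post_photo.jpg" is a
    # hypothetical example file.
    from PIL import Image, ExifTags

    def image_dates(path: str) -> dict:
        """Collect whatever date tags the image carries in its EXIF data."""
        exif = Image.open(path).getexif()
        dates = {}
        if 306 in exif:                      # DateTime lives in the main IFD
            dates["DateTime"] = exif[306]
        sub = exif.get_ifd(0x8769)           # Exif sub-IFD holds the capture dates
        for tag_id in (36867, 36868):        # DateTimeOriginal, DateTimeDigitized
            if tag_id in sub:
                dates[ExifTags.TAGS.get(tag_id, str(tag_id))] = sub[tag_id]
        return dates

    print(image_dates("suspicious_post_photo.jpg"))

A photo presented as breaking news but carrying a capture date from years earlier is a strong hint that it has been taken out of context.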

On social media, and especially on Twitter, images and videos from movies, video games, and other conflicts around the globe have all been passed off as footage from the current war. Worryingly, these posts are often shared very widely by other users, who don’t realize they’re spreading fake news.

This highlights one of the biggest changes in modern propaganda. Where once misinformation was spread mainly through state media and other official channels, now individual internet users voluntarily share fake news, without verifying its authenticity.

How to see through disinformation

1. Identify

When you see a post on social media, take it with a pinch of salt and ask yourself:

  • Does this post evoke any emotions? If it evokes any negative emotions such as anger or resentment, you may want to question that news story and look for the facts. It may be that it was designed to achieve exactly that emotive reaction;
  • Does the post have any grammatical errors? Most trustworthy news sources carefully edit and check their work before publishing it, so errors might be signs of content created for a different purpose;
  • Could the images have been taken out of context or edited? Try searching for the original;
  • Does the post highlight any stereotypes or try to ignite cultural or religious separation?
  • What is the source? Check the link of the post. Does it seem legitimate? Bad actors might use techniques like spoofed URLs to fool you into thinking that the source is trustworthy (a simple domain check is sketched after this list);
  • Has this been reported elsewhere? Is this particular event or news story covered by newspapers you trust? If not, it may not be true.
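
As a rough illustration of the link check above, here is a minimal Python sketch (standard library only) that looks at where a link really points and flags two common spoofing tricks: punycode (“xn--”) hostnames that can hide lookalike characters, and hostnames that merely contain a trusted outlet’s name without belonging to its domain. The trusted-domain list and the example URLs are hypothetical.

    from urllib.parse import urlparse

    # Hypothetical list of outlets the reader already trusts.
    TRUSTED_DOMAINS = {"bbc.co.uk", "reuters.com", "apnews.com"}

    def check_link(url: str) -> str:
        """Return a short verdict about where a link really points."""
        host = (urlparse(url).hostname or "").lower()
        if host.startswith("xn--") or ".xn--" in host:
            return f"'{host}' uses punycode and may imitate another site"
        for domain in TRUSTED_DOMAINS:
            if host == domain or host.endswith("." + domain):
                return f"'{host}' belongs to the trusted domain '{domain}'"
            if domain.split(".")[0] in host:
                return f"'{host}' only looks like '{domain}' - check it carefully"
        return f"'{host}' is not on your trusted list - verify before sharing"

    # Hypothetical examples: a genuine subdomain vs. a lookalike host.
    print(check_link("https://www.reuters.com/world/some-story"))
    print(check_link("https://reuters.breaking-news-today.net/some-story"))

No automated check replaces judgment, but the point stands: the domain a link actually resolves to, not the headline or the logo, tells you who published the content.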

2. Report

If you think you’ve come across a fake news story, a fake account, or any other type of disinformation – report it. Most social media channels have built-in buttons that allow you to flag such content. If they don’t, contact them directly.

Don’t spread the disinformation any further – resist the temptation to share it with your friends. You never know who might believe it.

3. Prevent

It’s easier said than done, but you can prevent yourself from being sucked into a “propaganda bubble.” How? Only reveal the necessary information about yourself on social media and don’t overshare. This will make it more difficult to profile you and, as a result, serve you targeted information. You can make your profiles more private by following our tips on how to make your social media profiles private.

If you want a foolproof method – stop using social media. Or at least don’t consider social media a trustworthy news source. Go back to traditional media – TV, press, and news websites. And even then, consume information with your critical thinking hat on.