Google’s security team recently found software vulnerabilities that were being exploited by government hackers to fight terrorism. While tech companies usually keep this kind of information private, Google publicized their findings, apparently disrupting the whole operation. Why did they expose the counterterrorism efforts of a US ally? And were they right to do so?
Apr 01, 2021 · 3 min read
Google’s Project Zero is a team of security researchers who study zero-day vulnerabilities (security flaws that haven’t been patched yet) in popular software. They analyze mobile apps, web browsers, operating systems, and open source libraries. When the team discovers a vulnerability, they report it to the software vendor and inform the public.
Researchers from Project Zero, along with Google’s Threat Analysis Group (which tracks government-backed hacking), announced that throughout 2020 highly sophisticated hackers exploited 11 zero-day vulnerabilities in Android, iOS, and Windows. The exploits used infected websites to deliver malware to visitors. These findings brought the hackers’ activities to an end, and the bugs were fixed.
At first it seemed as though the story would end there, but an unexpected twist brought it back into the public spotlight. Soon after the initial fixes were implemented, Google revealed that those vulnerabilities were used by a US ally in counterterrorist operations. It’s unknown which allied government was implicated, or whether Google informed them before publicly announcing the discovery.
Google’s actions have rekindled a heated debate around the ethics of cybersecurity and the responsibility of private companies. Many commentators have been left wondering whether Google did the right thing by compromising the operation, or whether it had a duty to act as it did. While nearly everyone agrees that governments must take necessary measures to fight terrorism, should that be done by weakening security for everyone?
If researchers were able to find the zero-day vulnerabilities, they could have also been discovered by criminals and used to launch hacks and cyberattacks.
The recent SolarWinds hack has raised fresh suspicions that American government agencies rely on encryption backdoors, which could eventually compromise the security of thousands of companies and institutions.
Let’s travel a couple of years back in time to understand how these situations can escalate. In 2017, the world was hit by a vicious ransomware attack called WannaCry, which targeted Windows users. Cybercriminals infected more than 200,000 computers in 150 countries and demanded ransom payments in Bitcoin.
The hackers used EternalBlue, an exploit targeting a vulnerability in Windows’ SMB protocol, which the NSA (National Security Agency) had known about and taken advantage of for years. The NSA didn’t warn Microsoft about EternalBlue and may have used it for its own purposes.
Information about the vulnerability was leaked one month before the WannaCry attack by a hacker group called the Shadow Brokers. Criminals quickly started exploiting EternalBlue to steal passwords from browsers and install malware on devices.
After WannaCry was released into the wild, Microsoft blamed the NSA for not sharing the information about the bug and accused them of putting users at risk. “We need governments to consider the damage to civilians that comes from hoarding these vulnerabilities and the use of these exploits,” stated Brad Smith, the president of Microsoft, in a blog post following the attack.
Now let’s go back to Google’s decision to reveal the 11 vulnerabilities that were being used in the counterterrorism operation. Was the company just unhappy about foreign governments sniffing around their backyard? Or were these vulnerabilities too risky to be left unpatched?
No service provider wants to leave bugs in their software that could later be used by bad actors. With cyber attacks getting more sophisticated every year, ignoring unpatched flaws could lead to untold damage later on.
On the other hand, national security experts have argued that it can take a great deal of time and resources to develop such exploits. What if they were part of an important operation that could have stopped potential terrorist attacks? There are no easy answers here.
We don’t know exactly why Google decided to go public and reveal sensitive information about these vulnerabilities. However, one thing is clear: governments and corporations still haven’t found common ground on the subject of privacy and threat prevention. Until they do, tensions between them will continue to rise.