Information has recently surfaced about a NordVPN breach caused by vulnerabilities in a third-party datacenter’s server. We’d like to give you a clear timeline of the events, followed by some key facts about the story.
A few months ago, we became aware of an incident from March 2018: a server we had been renting from a datacenter in Finland was accessed without authorization. The intruder got in through an insecure remote management account that the datacenter had added without our knowledge. Rather than notifying us, the datacenter deleted the user accounts the intruder had exploited.
The intruder did not find any user activity logs because they do not exist. They did not discover users’ identities, usernames, or passwords because none of our applications send user-created credentials for authentication.
The intruder did find and acquire a TLS key that has since expired. With this key, an attack could only have been performed on the web against a specific target and would have required extraordinary access to the victim’s device or network (an already-compromised device, a malicious network administrator, or a compromised network). Such an attack would be very difficult to pull off. Expired or not, this TLS key could not have been used to decrypt NordVPN traffic in any way: a TLS key of this kind serves to authenticate a server, not to encrypt VPN tunnel traffic.
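As a small illustrative aside (not part of NordVPN’s actual tooling): Python’s standard `ssl` module exposes certificate validity dates in strings like `'Oct 15 00:00:00 2018 GMT'`, and `ssl.cert_time_to_seconds` converts them to epoch seconds. A minimal sketch of checking whether such a `notAfter` timestamp has passed, with a hypothetical 2018 expiry date:

```python
import ssl
import time


def cert_expired(not_after, now=None):
    """Return True if a certificate's notAfter timestamp (as formatted by
    the ssl module, e.g. 'Oct 15 00:00:00 2018 GMT') lies in the past."""
    expiry = ssl.cert_time_to_seconds(not_after)  # parse GMT string -> epoch seconds
    return (time.time() if now is None else now) > expiry


# Hypothetical expiry string for illustration; any 2018 date is long past.
print(cert_expired("Oct 15 00:00:00 2018 GMT"))
```

In practice one would pull the `notAfter` field from the dictionary returned by `SSLSocket.getpeercert()` after a handshake; the helper above only checks the date arithmetic.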
This was an isolated case, and no other servers or datacenter providers we use have been affected.
Once we found out about the incident, we first terminated our contract with the provider and destroyed the server, which we had operated since January 31, 2018. We then immediately launched a thorough internal audit of our entire infrastructure to ensure that no other server could be exploited this way. Unfortunately, thoroughly reviewing the providers and configurations for over 5,000 servers around the world takes time. As a result, we decided not to notify the public until we could be sure that such an attack could not be replicated anywhere else on our infrastructure. Lastly, we raised our standards even further for current and future datacenter partners to ensure that no similar breach can happen again.
We want our users and the public to accurately understand the scale of the attack and what was and was not at risk. The breach affected one of the over 3,000 servers we had at the time, for a limited period, but that’s no excuse for an egregious mistake that never should have been made. Our goal is not to downplay the severity and significance of this breach. We should have done more to filter out unreliable server providers and ensure the security of our customers.
Since the discovery, we have taken all the necessary measures to enhance our security. We have undergone an application security audit, are working on a second no-logs audit right now, and are preparing a bug bounty program. We will give our all to maximize the security of every aspect of our service, and next year we will launch an independent external audit of all of our infrastructure.
Our goal here is to notify and educate the public about this breach. That’s the only way we’ll be able to recover from this significant setback and make our security even tighter.
NOTE: Post updated 10/25.
UPDATE (10/26): We've published a detailed plan for how we're going to improve our security following this incident. Click here to check it out.