The history of cybersecurity
Cybersecurity has a long history — a history that is still being written today. The tools we use now to protect our data have their origins in millennia past and are still evolving to meet the threats and challenges of the future. Let’s explore the history of cybersecurity so far.
The fundamentals of cybersecurity
Before we journey through the history of cybersecurity, we need to understand the fundamentals of cybersecurity. The word cyber has its roots in “cybernetics”, a field of study related to communication and control systems and the flow of information. However, the key terms we really need to define are cybersecurity, malware, antivirus software, and encryption.
What is cybersecurity?
The term cybersecurity encompasses all areas of computer security and internet and network safety. Offline systems and devices are also included in this field, although the majority of cybersecurity threats relate to devices with internet connectivity. Cybersecurity protects data and devices from unauthorized access, and protects people from the threats posed by bad actors online.
To define cybersecurity, we also need to understand what this security is meant to protect against: cyberattacks. Most cyberattacks involve someone either trying to disrupt the normal operations of a network or connected device or trying to access parts of a network or device without authorization.
An example of the first kind is a DDoS (distributed denial-of-service) attack, in which attackers flood servers with artificially inflated traffic, causing a website to crash. In the second instance — unauthorized access — a hacker might try to bypass cybersecurity defenses and steal sensitive data from a company or individual.
Cyberattack methods and tools continually evolve, just like the cybersecurity systems built to repel them. The history of cybersecurity is, in its simplest form, the story of an arms race between attackers and defenders.
What is malware?
Malware is any kind of software created for a malicious purpose. A self-replicating virus, invasive spyware, browser hijackers — these are just a few of the thousands of malware variants out there, and new ones are continually being created.
Malware is usually installed on a victim’s device without their knowledge or consent. It can then do whatever its creator programmed it to do: for instance, steal data, encrypt files, or facilitate remote control of the host device.
Terms like virus, trojan, or ransomware all refer to different subsets of malware.
What is encryption?
Encryption is the process by which data is scrambled into indecipherable code to prevent unauthorized access. A digital “key” code is created which allows the intended viewer (or an application on their device) to unscramble the code.
Encryption doesn’t always have to be digital. Cryptography, as the process is known, has been used in some form for almost 4,000 years.
An early example of cryptography was found in the tomb of the ancient Egyptian nobleman Khnumhotep II, dating from around 1900 BC. A clay tablet from around 1500 BC appears to contain an encrypted recipe for pottery glaze, noted down and encoded by a Mesopotamian craftsman who wished to protect his intellectual property. Millennia later, the fundamental process of securing our valuable information still builds on these foundations.
Encryption today depends on “protocols,” systemized rules built into whatever program is carrying out the encryption. These rules govern how the data is scrambled, what key unscrambles it, and how that key is generated and verified. For example, most websites use an encryption protocol called HTTPS, which encrypts the data traveling between your browser and the site so that outsiders can't read it.
Unlike antivirus software, which responds to threats when they’re detected, encryption is a way to proactively keep data safe, even when you’re not expecting an imminent threat.
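To make the key idea concrete, here is a minimal sketch of key-based (symmetric) encryption in Python. The use of the third-party cryptography package and the sample message are assumptions for illustration only; any modern encryption library follows the same pattern of generating a key, scrambling data with it, and unscrambling data only for whoever holds that key.

```python
# A minimal sketch of symmetric encryption.
# Assumes the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the digital "key" that locks and unlocks the data
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"Meet me at noon")
print(ciphertext)                  # scrambled bytes, unreadable without the key

plaintext = cipher.decrypt(ciphertext)
print(plaintext.decode())          # original message restored using the key
```

Anyone who intercepts the ciphertext but not the key sees only gibberish, which is exactly what protocols like HTTPS rely on when your browser talks to a website.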
What is cybersecurity software?
Cybersecurity software is any software that protects us from online threats and intrusions. The most common example of this is antivirus software, also referred to as anti-malware.
Anti-malware programs can do a lot to limit online risks. They can block our access to websites known for hosting malware, scan our devices for dangerous or unwanted files, and be automated to carry out security processes without human involvement.
The basic mechanism used by much of this software is a blocklist: a database (usually stored in the cloud) which contains lists of known threats. These could be dangerous websites and file types or even just certain actions that a program might take that seem suspicious. When the software detects something that matches an entry on its database, it takes steps to neutralize the threat.
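As a rough sketch of that blocklist mechanism, the Python snippet below fingerprints a file and checks it against a small local list of known-bad hashes. The hash value and function name are invented for illustration; real products compare against far larger, cloud-hosted threat databases and also watch for suspicious behavior.

```python
import hashlib

# A tiny, purely illustrative blocklist of SHA-256 file fingerprints.
# The entry below is a made-up placeholder, not a real malware signature.
KNOWN_BAD_HASHES = {
    "0f1e2d3c4b5a69788796a5b4c3d2e1f00112233445566778899aabbccddeeff",
}

def is_flagged(path: str) -> bool:
    """Hash a file in chunks and check the result against the blocklist."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_HASHES

if __name__ == "__main__":
    # Scan this script itself: it isn't on the list, so nothing is flagged.
    print("Threat detected" if is_flagged(__file__) else "File looks clean")
```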
The history of cybersecurity: the 1960s to the 2020s
Cybersecurity is a relatively new innovation, emerging in the second half of the 20th century, but it’s already gone through multiple iterations to become the collection of tools and strategies we use today. From the birth of the internet to global cyber conflicts, let’s explore the history of cybersecurity through the decades.
The 1960s: The birth of cyber
Though computers predate the internet (the first mechanical computer was designed in 1822, and the earliest electronic digital computer, known as the ABC, appeared in 1942), cybersecurity didn’t really come into the picture until computers began to be connected, forming networks. This started to happen in the 1950s, when the first computer networks and modems were developed. However, it was in the 1960s that the internet as we know it today began to take shape.
Prior to the invention of early forms of the internet, the only way to hack a computer was to physically access it. If someone did so illegally, the crime they would have been committing was trespassing, not hacking or cyber espionage.
The invention of the internet
In the late 1960s, the Pentagon’s Advanced Research Project Agency (ARPA) developed a system to allow computers to communicate with each other over large distances. Previously, most computers could only be networked if they were in the same area, and even then they were limited in their ability to exchange data. ARPA wanted to change that.
In 1969, ARPA’s new network, known as ARPANET, used a technique called packet switching to send a message from a computer at the University of California, Los Angeles across the state to a device at the Stanford Research Institute. Suddenly, multiple computers could send and receive packets of data over long distances. Cyberspace was born.
The 1970s: A new rivalry
If the 1960s set the stage for the world of cybersecurity, the decade that followed introduced us to the main characters, the great rivals of our story: malware and cybersecurity software.
Creeper and Reaper
In 1971, just two years after the first message was sent across ARPANET, a researcher working on the project created Creeper. This was a simple program that operated independently of human control, moving from one connected computer to another and displaying the message, “I’m the creeper. Catch me if you can.”
The researcher, Bob Thomas, wasn’t a cybercriminal; he was just playing with this rapidly evolving technology. However, his experiment was a sign of things to come. That template, a self-operating and self-replicating program spreading from one device to another, foreshadowed malware as we know it now.
As a response to Creeper, another team member — Ray Tomlinson, the inventor of email — created a program to pursue and eliminate the virus. He called it Reaper, and it is the first example we have of cybersecurity software. This arms race between malware and anti-malware continues to drive the development of cybersecurity to this day.
Adoption and risk
As the 1970s continued, adoption of these relatively new technologies — computers and internet connectivity — began to increase. The US government, having developed ARPANET, was an early mover in this space, seeing the potential these systems had to revolutionize military communications.
Adoption drives risk, however, and ever greater amounts of data — including sensitive government information — were now being stored and accessed on connected devices. The US government began developing software to limit unauthorized access, launching a new ARPA project called Protection Analysis to find automated security solutions.
Large companies and corporations were involved too, producing computers, chipsets, and operating system software. One of these was Digital Equipment Corporation (DEC). During the late 1970s, DEC used a computer system called The Ark to develop operating systems for other computers.
In 1979, a high schooler in the US called Kevin Mitnick hacked The Ark and stole copies of DEC’s new operating systems. This cyberattack was notable for several reasons: the youth of the attacker, the severity of the punishment he received when he was caught, and the ease with which he carried out the crime.
All it took was a phone call. Using a technique we now refer to as social engineering, the young Mitnick called someone inside DEC and convinced them that he was a lead software engineer who had been locked out of his account. He talked his contact into giving him the login details he needed and soon had unauthorized access to huge amounts of sensitive company data.
Encryption is standardized
Another major leap forward in cybersecurity came with the development of the Data Encryption Standard (DES). In the early 1970s, the US government came to understand that data stored in and moved through computer networks needed to be protected.
In response, the DES was developed by researchers at the tech company IBM, with some involvement from the NSA. In 1977 it was officially published as a Federal Information Processing Standard, encouraging large-scale adoption of the protocol.
The DES wasn’t the most robust encryption protocol, but it worked well enough to be adopted and endorsed by the NSA and, in turn, the wider security community. It remained a widely used method of encryption until it was superseded by the Advanced Encryption Standard (AES) in 2001.
While cybersecurity was still in its infancy, people in the 1970s developed an understanding that encryption could protect data and proactively prevent cyberattacks and data breaches. However, as the Kevin Mitnick incident proved, hackers still had many other ways to access sensitive data. Social engineering and human error are still valuable cybercriminal assets to this day.
The 1980s: Cybersecurity goes mainstream
By the 1980s, internet-enabled computers were being used in government, financial institutions, and many other walks of life. That meant an ever-growing number of opportunities for hackers to steal valuable information or simply cause disruption with viruses and other malware.
Cyberattacks make headlines
Throughout the 1980s, high-profile cyberattacks against AT&T, National CSS, and other major institutions began making the news. In 1983, hackers truly entered the mainstream after the movie WarGames depicted a fictional story in which a hacker gains access to nuclear weapons systems.
While most early media depictions of hackers and cybercriminals were inaccurate and melodramatic, the public was becoming aware of “cyber” as a concept. The internet was here, and though the technology still had a long way to go, people were coming to understand the benefits that came with it — and the risks.
One piece of malware that caught the public’s imagination was the Vienna virus, a self-replicating program that could corrupt files on an infected device. Many similar threats were in circulation by this time, but Vienna earned its place in history not because of what it did, but how it was stopped.
In the mid-1980s, German cybersecurity expert Bernd Fix realized that his device had been infected by the Vienna virus. In response, he coded a piece of antivirus software that located and removed the Vienna malware. This was one of the first examples of modern antivirus software as we know it today.
The cybersecurity market expands
With the threat of cyberattacks growing, in practice and in the public discourse, software vendors started selling cybersecurity programs. In 1988, commercial antivirus software appeared.
In the US, the security company McAfee brought VirusScan to market. In Europe, programs like Ultimate Virus Killer and NOD antivirus were made available. Cybersecurity experts began selling their services across the globe as companies and governments raced to keep up with the hackers that were probing their new systems for weaknesses.
This explosion of new cybersecurity software was really the beginning of cybersecurity as we know it. Programs and applications were being created to automatically mitigate or neutralize the threats posed by hackers and their malware online.
The 1990s: The Internet Age begins
The 1990s continued the trends of growing adoption and risk, but it was in this decade that widespread internet proliferation began to accelerate.
The new normal
Microsoft released multiple new and improved versions of its Windows operating system throughout the 1990s, focusing increasingly on individual consumers rather than businesses or government agencies. It also launched Internet Explorer alongside Windows 95 in 1995, and the browser went on to dominate the market for well over a decade.
This step was both a reflection of and a driving force behind the fact that computers were becoming more affordable and widely available. Throughout the 1980s, public awareness of this new technology increased sharply, and now people wanted to be able to access the internet from the comfort of their own homes.
Microsoft’s affordable, consumer-facing products made the internet more accessible than ever before, and suddenly millions of people around the world were sending emails, carrying out research, and even playing online games.
Cyberspace was no longer the sole domain of tech companies and the military. A digitally connected society was the new normal, and everyone wanted to be involved.
The dangers of email
One of the first useful roles the internet played for individual users was email. Services like Microsoft Outlook gave people a taste of rapid electronic messaging, something that had never really been an option before.
Understandably, many internet users eagerly adopted email as a new communication form and, predictably, so did cybercriminals. One of the most striking and expensive attacks of the decade came in 1999, when the Melissa virus began spreading through Outlook inboxes.
The malware arrived inside an email, with the subject line “Important Message.” Attached to the email was a file entitled “list.doc,” which contained the Melissa virus. As soon as the file was opened, the malware installed itself onto the device and started causing trouble.
First, it opened multiple pornographic sites, and while users rushed to close them, it quietly disabled Outlook’s security safeguards. Finally, with Outlook vulnerable, the virus generated new emails with the same format and attachment and sent them to the first 50 people in the victim’s contact list. Melissa spread like wildfire through the ever-expanding cyberspace, causing an estimated $80 million in total damage.
This incident demonstrated two things. First, the new global network of internet communications allowed malware to spread at an unprecedented speed. Second, current security protocols were still woefully inadequate, especially when a little social engineering was involved. Robust security software was still no match for the human curiosity that led so many to open an “important message.”
The 2000s: A new level of connectivity
The 1990s laid the groundwork for the internet we have today, with all its attendant threats and security protocols. However, it was in the 2000s that our modern cyberspace took shape.
Cybercrime evolves
The main goal of cybercriminals continued to be the spread of malware, and a new method that is still used today began to be employed in the early 2000s. People were becoming more wary of email attachments, and some email services now scanned attachments for risks. To bypass these defenses, hackers realized that they could trick people into leaving the relative safety of their email services and visiting a web page the hacker had set up.
This technique, known today as phishing, involves convincing the victim that the email comes from a trusted sender, such as a bank or a government agency. The email asks the receiver to click a link, perhaps to cancel an unexpected bank transfer or claim a prize. In reality, the link takes them to a website where malware can be installed onto their device or where their personal data can be exposed.
Once again, hackers were realizing that they could use social engineering to trick people into putting themselves at risk in ways that their limited security software could not prevent. This technique is still used today and is still depressingly effective.
In response to the escalation of cybercrime, the Department of Homeland Security in the US founded its National Cyber Security Division. For the first time, the American government and the world at large recognized the fact that cybersecurity was now an issue of national and even global significance. Defending cyberspace from criminals and bad actors was a matter of both personal safety and state security.
Cybersecurity evolves
As always, the arms race between crime and security continued. Cybersecurity companies like Avast realized that the demand for cybersecurity products was skyrocketing and responded by releasing the first free mainstream security software.
A wider range of security tools became available in the mid-2000s, with the first commercial virtual private networks appearing. A VPN, unlike antivirus software, allows users to encrypt the data they send and receive online.
Despite the growth in new security tools, from VPNs to advanced anti-malware, it soon became clear that many people couldn’t or wouldn’t use them, because the software took up too much space on their devices. Computer memory was still fairly restricted in the 2000s, and so another solution had to be found.
It came in 2007, when companies like Panda Security and McAfee released the first cloud-based security solutions, allowing cybersecurity tools to be used much more widely. The improved accessibility of cybersecurity products couldn’t have come at a better time, as the arrival of smartphones and social media was now supercharging global connectivity, making the public ever more vulnerable to hackers.
The 2010s: Conflict in cyberspace
With the modern internet now fully established, the 2010s saw a number of key developments: the evolution of new cyber warfare tactics, the growing tensions around personal data privacy, and the massive risks posed by corporate data breaches.
Cyber warfare
In 2010, computers involved in Iran’s controversial nuclear program were infected with malware, causing large-scale disruption across its networks. The malware was called Stuxnet, and though its origins have not been officially confirmed, it is widely believed to have been the product of American and Israeli security forces.
This incident heralded a new direction for international conflicts and espionage. Cyberattacks could be weaponized, allowing governments to target their rivals covertly. Iran could point a finger at its rivals, but it could never prove its accusations beyond reasonable doubt.
Of course, it wasn’t just the Americans who could play this game. Major rivals of the US, including both China and Russia, could use these same tactics. Because so much of the world’s infrastructure was now connected to the internet, the potential damage of a successful cyberattack was catastrophic.
Suddenly, cybersecurity was no longer just about preventing crime and protecting data. It was now a matter of national security.
The privacy debate
While Russia and America probed each other’s cyber defenses, another battle was beginning to heat up: the battle for online privacy.
In the early 2010s, public awareness began to grow around data collection. Companies like Facebook and Google were gathering huge troves of information about their users and were either using it to target advertising on their own platforms or selling it to third-party advertisers.
Government regulation lagged behind, so many corporations were able to carry out massive, invasive data collection without breaking any laws. Landmark legislation such as the EU’s General Data Protection Regulation (GDPR) eventually followed, and similar laws have since been passed around the world, but many individuals also took steps to enhance their own security. During the 2010s, a new sector of the cybersecurity market emerged: privacy products.
Internet users could now buy apps and other software solutions to help them maintain their privacy online. Privacy-focused browsers and search engines were in growing demand. The popularity of VPNs spiked dramatically. For the first time, people began to realize that they could limit the data collection practices of major companies rather than waiting for slow-moving governments to step in.
Corporate data breaches
You might think that privacy and security are two different things, but they’re closely linked. To understand why online privacy enhances personal cybersecurity, we need to look at the third feature of the 2010s: data breaches.
A data breach is an unauthorized leak of information. It could be something that happens accidentally, but more often it is the result of a hacker deliberately targeting a website or an organization to steal data. A breach might include user information, private internal communications, customer payment details, and anything else that wasn’t meant to be released to an entity outside of the organization.
If a company gathers information on its users and then suffers a data breach, that information could end up for sale on the dark web. There it can be bought by other criminals and used to launch targeted phishing attacks or to carry out identity theft.
For anyone who still had doubt about the security risks attendant upon rampant data collection, the 2010s brought numerous massive breaches to underline the point. The decade saw too many huge leaks to list here, but a few notable events included:
- The 2019 Facebook leak, which exposed information from more than 500 million Facebook users.
- The 2019 First American breach, in which 850 million sensitive documents were leaked (including social security numbers).
- The 2013 Yahoo breach, the largest known data breach to date, which exposed details from 3 billion user accounts. Incredibly, the company chose not to report the breach publicly until 2016.
Protecting privacy and limiting data collection is a matter of principle for many, but it’s also a security issue, as the incidents above make clear.
The 2020s — and beyond
Finally we come to the present decade, and the future of cybersecurity. While we’re only a few years into the 2020s, a lot has already happened in the cybersecurity space. We’ve seen new risks emerging as a result of Covid-19 and remote work, massive attacks against critical infrastructure in the US, and cyber warfare taken to new heights in the war between Russia and Ukraine.
The new normal (again)
The outbreak of the Covid pandemic in early 2020 had a profound impact on the evolution of cybersecurity and data privacy.
For one thing, it accelerated a process that began in the 1990s as computers and the internet became more widely available. More people than ever were connected to the internet, and with stay-at-home orders in place in many countries, organizations around the world realized that their employees could work remotely, attending online meetings without ever setting foot in an office.
The shift to remote work resulted in millions of people connecting to company networks and databases from their own homes, often using their personal devices. It was a golden opportunity for hackers, who had a much easier time attacking people’s personal computers and smartphones than they would have had those same people been using work devices loaded with security software. According to Sophos Group, a British security software company, more than half of all businesses were hit by ransomware attacks in 2020 alone.
There was also a huge uptick in Covid-related phishing attacks. While stuck at home, many people began ordering more products online, making them vulnerable to an increasing number of delivery email scams (in which an attacker claims to be emailing from a courier service and asks the victim to click a link to arrange delivery of an unspecified parcel).
Millions also received text messages offering them vaccines and Covid medication or warning them about having had close contact with an infected person. Of course, each message then urged the receiver to click a link — and you know the rest.
Covid reminded us that, four decades after Kevin Mitnick talked his way into The Ark systems, social engineering is still an effective way to bypass security protocols.
Infrastructure under attack
For years, experts had predicted that the integration of essential infrastructure with online systems created heightened risks from cyberattacks. In May 2021, they were proven right in dramatic fashion.
Colonial Pipeline, the company responsible for pumping huge amounts of gas to the East Coast of America, was hit by a ransomware attack. The hackers stole at least 100 gigabytes of data, locked the company’s IT networks with ransomware, and took large portions of its billing network offline.
The attack was traced back to a Russian hacking collective, but Colonial Pipeline ended up paying a ransom to regain access to its data. By the time its systems were up and running again, the price of gas had spiked and chaotic scenes played out across the East Coast as Americans rushed to fill up their cars.
It was a stark reminder that the stakes in cybersecurity have never been higher. Our energy grids, water filtration systems, hospitals, and communication networks can all be targeted by hackers — including state-backed agents from rival nations.
Cyber warfare digs in
The Colonial Pipeline attack in 2021 may have hinted at the dangerous potential of cyber warfare tactics, but less than a year later those same methods were being employed by rival combatants in a European ground war.
In February 2022, Russian tanks rolled across the Ukrainian border, marking the start of the largest land war in Europe in decades. Yet even before the outbreak of war, Ukraine was under attack in cyberspace. Aggressive malware was regularly distributed across Ukrainian government devices, and official websites were defaced with threatening messages about the coming war.
In response, a coalition of European nations, led by Lithuania, launched a Cyber Rapid Response Team. This group of cybersecurity specialists, backed by the European Union, has been working with Ukrainians to defend their country from online attacks.
If anyone had any doubts that cyber warfare would play a role in the conflicts of the future, recent events have dispelled them.
What comes next?
The history of cybersecurity is still being written. The fundamental pattern of risk and response will continue. New technologies will be developed and adopted, causing new threats to emerge and be counteracted with new cybersecurity tools. Using this basic pattern as a template, what can we predict as we look ahead?
AI guardians
As early as the 1980s, the first cybersecurity specialists were looking for ways to automate their defenses, creating systems that could recognize and neutralize a threat without constant human supervision.
Artificial intelligence (AI) is already playing a key role in this space, and that role will only grow as time goes on. Thanks to a process called deep learning, sophisticated AI systems can continually improve their threat detection, picking up on subtle risk indicators that a human being might never spot.
In the future, it’s likely that cybersecurity will increasingly become the responsibility of deep learning AI systems – self-educating software robots. Cyberspace may eventually be patrolled by AI guardians with enough processing power to predict and understand online threats in ways that are almost incomprehensible to us.
Cyber World War
In light of recent events, it seems reasonable to assume that cyber warfare will only intensify as time goes on. A successful cyberattack against a rival nation can be devastating, doesn’t put the aggressor’s military personnel in direct danger, and can rarely be definitively traced back to its source.
We can theorize that the US attacked Iran’s nuclear computer systems or that Russian hackers disrupted the Colonial Pipeline, but we cannot be certain. A missile strike on an Iranian facility or American energy infrastructure would cause enormous diplomatic repercussions, but in cyberspace these attacks can escalate without real accountability.
It’s easy to see how some kind of large-scale cyber war could eventually break out between superpowers like America and China without either side taking responsibility for their actions. Yet this kind of warfare may still cause tremendous damage and must be guarded against.
If we’re going to continue to integrate every aspect of our lives and national infrastructure with the internet, we have to be ready to defend ourselves with robust cybersecurity measures.
Our future in cyberspace
The one thing we can be certain about as we look ahead is that we will continue to merge our lives with cyberspace. Our homes are filled with smart devices, our movements are tracked and logged by applications on our phones, and it is hard to imagine any area of society that will not, eventually, be dependent on the internet.
Of course, hackers aren’t going anywhere; the same old arms race will continue. It’s been more than half a century since Creeper and Reaper began a game of cat and mouse across the ARPANET computer network, and the same game is still playing out around us today.
The stakes we’re playing for are just much higher now.