The problems with facial recognition software
There are plenty of problems with facial recognition technology, problems that threaten both our personal liberties and society at large.
Here are some of our biggest concerns:
The potential for abuse
Facial recognition systems gather and use gigantic biometric databases full of your personal data, and they’re owned by private companies run with little to no oversight. How much can we trust a private company that holds huge databases of our biometric data? What’s stopping them from selling our data?
Exploiting these biometric databases would be at the top of every hacker’s list, and once your identity has been leaked, how difficult would it be to get it back? How difficult would it be to prove that you didn’t commit a crime if your fingerprints or face scans were stolen? The ways in which these systems can be abused are disastrous and far too risky.
Bias and inaccuracy
Several independent studies have found that facial recognition systems exhibit racial and gender bias. NIST (the National Institute of Standards and Technology) found that for one-to-one matching, most systems had a higher rate of false positive matches for Asian and African American faces than for Caucasian faces. Algorithms developed in the US consistently performed worse at matching Asian, African American, and Native American faces. NIST also found that in one-to-many searches, the systems had the worst false positive rates for African American women, which puts entire populations at greater risk of being falsely accused of a crime.
Clearview AI, a facial recognition company, was recently exposed for collecting over 3 billion images from social media to feed its facial recognition database without users’ consent. Meanwhile, the UK government has installed live facial recognition cameras across London, ready to be integrated into everyday policing. While the city’s police say only individuals on “bespoke” watch-lists will be flagged, independent researchers found that four out of five people identified by the Metropolitan Police’s facial recognition software are innocent.
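That “four out of five” figure is less surprising than it sounds: when almost everyone walking past a camera is not on a watch-list, even a system with a very low false positive rate will flag far more innocent people than wanted ones. The sketch below works through the arithmetic with purely illustrative numbers (the crowd size, watch-list count, and error rates are assumptions for the example, not the Metropolitan Police’s actual figures):

```python
# Illustrative base-rate calculation: why most people flagged by a live
# facial recognition system can be innocent, even with a low false
# positive rate. All numbers here are hypothetical.

def flagged_innocent_fraction(crowd_size, watchlist_count, fpr, tpr):
    """Return the fraction of flagged people who are NOT on the watch-list.

    fpr: false positive rate (innocent person wrongly flagged)
    tpr: true positive rate (watch-listed person correctly flagged)
    """
    innocents = crowd_size - watchlist_count
    false_alarms = innocents * fpr        # innocent people wrongly flagged
    true_alarms = watchlist_count * tpr   # watch-listed people correctly flagged
    return false_alarms / (false_alarms + true_alarms)

# Suppose 10,000 people pass a camera, 5 of them are on a watch-list,
# the false positive rate is 0.1%, and the true positive rate is 80%.
frac = flagged_innocent_fraction(10_000, 5, 0.001, 0.80)
print(f"{frac:.0%} of flagged people are innocent")
```

With these assumed numbers, roughly seven in ten flags land on innocent passers-by, simply because innocents vastly outnumber watch-list matches in the crowd.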
With global databases full of our images, officials could use live facial recognition cameras as an overt surveillance tool to track anyone at any time. If these databases got into the hands of tyrannical governments in oppressive countries, the data could be used to target, frame, blackmail or even imprison innocent civilians.
How would this data be stored, and under which state’s laws would it be regulated? Can we opt out? How long would a person remain on a “bespoke” watch-list after being convicted? These are all privacy and security issues that have quietly slipped under the radar, much to the convenience of private corporations.
Why it matters
Facial recognition technology – with all of its inaccuracies and biases – is already being used by governments and law enforcement with dire effects. It’s only a matter of time before the world is blanketed in surveillance under the guise of ‘public safety’. Police could track anyone at any time by using live facial recognition. A stroll in the park, a night out with friends, grocery shopping, or even entering your apartment could all be captured on CCTV, processed by facial recognition companies, and supplied to law enforcement.
The most horrific part is that you’d have no say in the matter. The invasion of our privacy is tentacular. Our most valuable biometric data could be bought, sold, exploited, and abused to such extremes that you could be framed for a crime you didn’t commit.
The future of facial recognition
Before we introduce facial recognition into society, we must take the following precautions to avoid becoming a surveillance state run by law enforcement:
- Demand tighter regulations: We need to consider the civil rights violations that facial recognition technology might cause. We need deeper insight into where and for how long our data is stored, how it’s gathered, and what it’s used for. Even if regulations are passed, we shouldn’t ignore the fact that fragmented US privacy laws have left huge data loopholes for Facebook and others to exploit.
- Wait for the technology to improve: With all of the unanswered questions surrounding the use of facial recognition, it’s clear that the technology is still in its infancy. While airports have embraced the technology to speed up the boarding process, big tech companies like Google, Amazon, and IBM have backed a one-year moratorium on the use of facial recognition technology, while others push for a nationwide facial recognition law. These are all opportunities to improve its accuracy and restrict its scope.
The power these private companies wield could reshape civilization as we know it. We hope Western legislation will ensure that this technology respects our rights to security and privacy, but right now, we wouldn’t bet on it.