Why We Absolutely Must Ban Private Use of Facial Recognition

Wired just reported that Uber Eats drivers in the UK are being fired because of the company’s faulty facial recognition software. Uber requires drivers to submit selfies to confirm their identity; when the technology fails to match a driver’s photo to their account, the driver gets booted off the system, unable to work and thus unable to pay their bills. This isn’t the first time something like this has happened: in 2019 a Black Uber driver in the US sued the company over its discriminatory use of facial recognition.

This case clearly shows how private use of facial recognition, by institutions and even individuals, poses just as much of a threat to the future of human civilization as government use. And it’s one of countless examples of how this technology automates and exacerbates existing power imbalances to control, target, and discriminate.

Workplaces are using facial recognition in recruitment, to replace traditional timecards, and to monitor workers’ movements and “productivity.” Retailers and restaurants can use it to harvest data on our purchases and then target us with specific messaging and products, making decisions about what options are presented to people.

It’s widely known that most facial recognition algorithms exhibit systemic racial and gender bias, and that bias is causing real harm that must be addressed now. But we also have to worry about a future in which the technology improves while the bias of the systems it is embedded in persists. Just as Black and brown communities are over-policed, companies can aim their surveillance at those same communities. Even with a perfectly performing algorithm, a store could use a publicly available mugshot database to ban everyone with a criminal record, a policy that would fall disproportionately on the Black and brown people who are over-policed in the first place.

Over the past year, the movement to ban facial recognition has gained momentum. Cities and towns have banned it; Fight for the Future led a campaign with Students for Sensible Drug Policy that’s gotten more than 60 colleges and universities to commit to not use it, and another campaign with Tom Morello and other artists that got more than 40 of the world’s largest music festivals to pledge not to use it; and federal legislation banning government and law enforcement use of the technology was introduced last year (and will be reintroduced in the new Congress).

So far, much of the campaigning against facial recognition has focused on banning government and law enforcement use of the technology, and that makes sense. But solely focusing on government use doesn’t fully address the issue.

Some argue that we can’t ban private use of facial recognition without giving the government too much power over individual choices, and that we should instead create a regulatory framework governing how the technology can and can’t be used. But rules like these normalize and codify its use, and in practice they are applied unjustly against certain groups.

Biometric surveillance is more like lead paint or nuclear weapons than firearms or alcohol. The severity and scale of harm that facial recognition technology can cause requires more than a regulatory framework. The vast majority of uses of this technology, whether by governments, private individuals, or institutions, should be banned. Facial recognition surveillance cannot be reformed or regulated; it should be abolished.

Our friends at EFF have suggested that an opt-in consent framework is enough to address the potential harms of private use of facial recognition. We disagree. While we support harm reduction legislation like the Illinois Biometric Information Privacy Act (BIPA), which requires private companies to get permission before collecting your biometric data, these measures are not sufficient, and they specifically fail to protect the people most vulnerable to discrimination and abuse.

Policy that relies on companies obtaining “informed consent” is dangerous. It puts the onus on the individual to understand the risks of handing over their sensitive biometric information, risks that are often far from clear, and it disproportionately harms Black, brown, and poor communities.

This approach also assumes that people are able to opt out of situations that require facial recognition. If the only hospital in your area uses facial recognition on patients (or visitors, staff, doctors, and nurses), you might not have the opportunity to find another hospital to address your health emergency. As airlines deploy facial recognition in airports, it isn’t reasonable to tell people to find other ways to travel if they don’t want facial scans at check-in, or to force them to wait in much longer lines to avoid the technology. If a private school or college uses facial recognition and a student (or parent, for K-12 schools) must give consent in order to attend, or teachers, janitors, and administrators must consent in order to work there, that isn’t actually a choice. In the Uber Eats driver case, Uber gives drivers the option of AI or human verification, but the system in place for human verification doesn’t actually work. Requiring workers to accept facial recognition as a condition of employment is not meaningful consent for someone who needs a job.

Even seemingly innocuous use cases create problems. A music festival or sporting arena using facial recognition for ticketing could offer discounts or shorter lines for those who consent. This harms people who agree without understanding the risks: they could be tracked by event organizers to see what food they bought, how often they went to the bathroom, or what artists they came to see. That data could be sold to other corporations and used for profiling or advertising, and could end up in many databases without their knowledge. This scenario also puts the people who pay more or wait longer to protect their privacy and biometric data at a disadvantage. Is it really “informed consent” if people are heavily incentivized to choose the option that lets a corporation collect their biometric data?

Regulation also allows the technology to be normalized and to spread. Companies promote their facial recognition tech as convenient (and, in light of the pandemic, touch-free and thus “safe”), and we’ve seen it adopted for everything from unlocking phones to checking in at the airport. These conveniences may make people more comfortable with the technology, embed it in our day-to-day lives, and make it harder to ban moving forward, while it still poses the same threat.

It is worth noting that legislative bans on facial recognition — including the groundbreaking Portland, OR ban on private use of facial recognition — include an exception for people accessing their own personal devices, like cell phones. While we generally advise people not to use biometrics to unlock their phones, and believe systems should offer this only when the biometric data is stored on-device rather than in the cloud, we think this exception is reasonable.

We also think a reasonable exception can be made for research. Research like that of Joy Buolamwini and Timnit Gebru, which helped expose the racial and gender bias built into facial recognition algorithms, should absolutely be allowed to continue. The Portland, OR ban again points the way: because it smartly bans use only in places of public accommodation as defined by the Americans with Disabilities Act, this kind of research would still be allowed. We believe such research can continue without allowing the technology to be used on the public at large.

In a world where private companies are already collecting our data, analyzing it, and using it to manipulate us for profit, we can’t afford to naively believe that private entities can be trusted with our biometric information. A technology that is inherently unjust, that has the potential to exponentially expand and automate discrimination and human rights violations, and that contributes to an ever-growing and inescapable surveillance state is too dangerous to exist. The dangers of facial recognition far outweigh any potential benefits, which is why banning both government and private use is the only way to keep everyone safe.

We believe there's hardly anything as important as ensuring that our shared future has freedom of expression and creativity at its core.