Trending Topics

Facial Recognition Technology: Programmed Racial Discrimination Looms

You’re walking down a city block at lunchtime and, as you pass a pizza place, an ad for $1 off a small pie flashes on your phone. You pause briefly to consider, then decide to keep walking. In an attempt to win your business, the ad responds by upping the ante, offering $1.50 off the same pie. You realize that, while you were briefly examining the ad, the ad, or at least the technology behind it, was closely examining you.

Given recent advances in biometric technology like facial recognition, such a scenario is far from inconceivable. Variations of this artificial intelligence (AI) are already in use; beyond reading emotions, these technologies target and collect demographic information like age range, gender and, yes, race, and can do so without your knowledge or consent. As far back as 2013, a “60 Minutes” segment reported on companies developing digital billboards for shopping malls that surreptitiously scanned consumers’ faces to determine such demographics.
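To get a feel for how little machinery such a billboard needs, consider a minimal sketch using the open-source deepface Python package, an assumed stand-in for illustration only (the “60 Minutes” report did not identify the vendors’ proprietary software), which infers exactly the attributes described above from a single photo:

```python
# Minimal sketch of face-based demographic inference. Assumes the
# open-source `deepface` package (pip install deepface); the commercial
# billboard systems described in the article are proprietary.
from deepface import DeepFace

# Analyze one captured frame for the attributes ad systems reportedly target.
results = DeepFace.analyze(
    img_path="passerby.jpg",  # hypothetical input image
    actions=["age", "gender", "race", "emotion"],
)

for face in results:  # recent deepface versions return one dict per detected face
    print("estimated age:", face["age"])
    print("gender:       ", face["dominant_gender"])
    print("race:         ", face["dominant_race"])
    print("emotion:      ", face["dominant_emotion"])
```

That a few lines of freely available software can do this underscores how cheaply the scenario above could be deployed at scale.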

The implications are considerable. For example, what happens if the technology has been designed, based on your race, to automatically steer you into an assumed income bracket that is shown only certain types of products? Or only certain hotels when you travel? Echoing existing product-steering and discrimination in the housing market and online, could such technology bring about an enhanced form of digital redlining for African-Americans?

“We’re already seeing big data analytics being used to segment digital audiences with more precision and to make predictions and judgments about how people buy, shop and behave,” said Jay Stanley, senior policy analyst with the ACLU Speech, Privacy, and Technology Project. “A lot of the existing racial biases that we have in this country end up getting sucked into those algorithms and hidden inside the computer,” said Stanley, noting that “face recognition definitely has the potential to join that stream of data collection.”

Contrary to popular belief, machines are not neutral: machine bias mimics human bias, because a system’s scope and inputs are determined by flesh-and-blood programmers. “If there is one objective lesson machines have learned, it’s this: garbage in, garbage out,” Forbes contributor Daniel Newman recently wrote. “When the data is skewed by human bias, the AI results will be skewed, as well.”
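A toy model makes the mechanism concrete. In the sketch below, all data is synthetic and scikit-learn stands in for whatever system a vendor actually uses: a classifier is trained on historical loan approvals in which one group was held to a harsher standard, and the trained model faithfully reproduces that double standard.

```python
# Toy illustration of "garbage in, garbage out": a model trained on
# biased historical decisions reproduces the bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

credit = rng.normal(650, 50, n)   # a legitimate signal (credit score)
group = rng.integers(0, 2, n)     # a protected attribute (0 or 1)

# Historical human decisions: group 1 was held to a harsher cutoff ("garbage in").
approved = (credit > 640 + 30 * group).astype(int)

X = np.column_stack([credit, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The trained model applies the same double standard ("garbage out"):
for g in (0, 1):
    same_score = np.column_stack([np.full(100, 650.0), np.full(100, g)])
    rate = model.predict(same_score).mean()
    print(f"group {g}: {rate:.0%} predicted approval at an identical score of 650")
```

Nothing in the code mentions race or intent; the bias rides in entirely on the training labels, which is exactly why it is so easy to hide.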

Given that the programmers behind the face-based technologies enabling these interactive ads are overwhelmingly white men (only 5 percent of technology sector jobs are held by African-Americans and Latinxs), the potentially discriminatory impact on their subjects is far from surprising.

“Face recognition is a tool, and like all tools it can have good and bad uses,” acknowledged Stanley, suggesting there will be developments for it that “make people’s lives better and easier.”

Still, he highlighted its darker side. “It can generally intensify the very detailed spying that the advertising industry is doing on internet users by taking existing biases and disparities and amplifying them,” said Stanley.

“And not just amplifying them,” clarified Stanley, “but hiding them” given that “human bias is baked into the computer algorithm.”

Facial recognition technology, long employed by law enforcement yet largely hidden from its subjects, has been plagued by flaws and charges of racial bias for years. Although 48 percent of American adults (over 117 million people) are in a law enforcement face recognition network, many with no idea they have been included, the process is largely unregulated: the FBI and state and local police departments operate their own face recognition systems, FBI face recognition searches occur more frequently than federal court-ordered wiretaps, and at least 26 states allow law enforcement to run searches against their databases of driver’s license and ID photos, some managed by third-party contractors. No state has passed a law comprehensively regulating facial recognition searches.

Meanwhile, studies have documented the skewed and disparate impact of such state-sponsored systems on African-American citizens. Because African-Americans are disproportionately represented in the criminal justice system, they are disproportionately represented in the databases law enforcement searches, greatly increasing the chances a search will return a “match” against their faces. This is particularly problematic given that a major study by the Center on Privacy & Technology at Georgetown Law revealed serious flaws and accuracy errors in matching individuals with darker skin, presenting a scenario in which such technology is likely to be “overused on the segment of the population on which it underperforms.”
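Back-of-the-envelope arithmetic shows why that combination is so corrosive. In the sketch below, every number is invented purely for illustration (the Georgetown study does not report these figures): a group searched three times as often, against a system with twice the false-match rate, suffers six times the wrongful matches.

```python
# Hypothetical illustration of how overuse and underperformance compound.
# All rates and counts are invented for this sketch.
searches = {"group A": 1_000, "group B": 3_000}        # overuse: 3x more searches
false_match_rate = {"group A": 0.01, "group B": 0.02}  # underperformance: 2x error rate

for group, n_searches in searches.items():
    expected = n_searches * false_match_rate[group]
    print(f"{group}: {expected:.0f} expected false matches")

# group A: 10 expected false matches
# group B: 60 expected false matches -- the disparities multiply (3x * 2x = 6x)
```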

Beyond the public sector, there is even less accountability, as NFL teams, cruise lines, airlines and giant retailers like Walmart have tested and deployed facial recognition software in their own security systems. The NFL employed it at the 2001 Super Bowl to monitor the crowd without fans’ knowledge, and this year two NFL teams contracted with security company IDEMIA to implement a face-based entry system for fans entering their stadiums.

After stressing the need for “strong checks and balances” and strict limitations for police or government entities when employing such technology, Stanley acknowledged that “when it comes to the private sector, regulation is harder. But we can incorporate some general overarching privacy and fairness principles into our laws that can be used to hold companies to account.”

He suggested a number of legal principles, including that information cannot be collected without a consumer’s permission or used for purposes beyond those consented to, along with, ultimately, “a lot of transparency,” given these companies are “using algorithms that make decisions that are affecting people’s lives.”

“We’re in a brand new world here,” quipped Stanley, noting that with a fingerprint at least the individual knows they are being recorded in a database. “It is not at all clear how these technologies are going to work in terms of fairness. So the first principle is we need transparency so people can study what’s happening.”

“And if we don’t get rid of all the injustices,” continued Stanley, “we will at least know they are there.”
