Trending Topics

Built for Bias: Artificial Intelligence Produces Real Racism

The ideological divisions in the United States, which have been widening for the past generation and are now being exacerbated by President Donald Trump, have created a toxic environment for companies that depend on advertising. For one thing, advertisers are suddenly hypersensitive about where their ads, logos, and videos appear. At the same time, they have ceded control of their online ad placements to algorithms that broker transactions and make placement decisions automatically, without any human intervention whatsoever.

The result: brands sometimes discover, usually after the fact via a Twitter storm, that their content has appeared alongside “offensive” content. (Imagine a Breitbart piece praising the virtues of Bill O’Reilly being preceded by an NAACP public information spot about inner-city policing.)

Advertisers have responded by pulling their ads entirely, costing publishers ad revenue and advertisers brand impressions. Still, according to eMarketer, close to 80 percent of all online advertising will be bought and placed through these algorithm-driven programmatic buying networks this year, up from 65 percent in 2015.

Artificial intelligence is one proposed solution to this problem. AI is being used to create algorithms that “feel offended” and then factor those “feelings” into their placement decisions. Because they are AI, these algorithms would also subject themselves to constant, intense self-scrutiny and self-modification to improve the accuracy of their decisions over time. While such a system might guarantee a World Wide Web free of offense, artificial intelligence is fundamentally flawed in a way that disproportionately affects minority populations.
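To make the idea concrete, here is a minimal sketch of what such a gatekeeper could look like. Everything in it is hypothetical: the scoring function, the PlacementGate class, and the feedback rule are invented for illustration and are not drawn from any real ad network.

```python
def score_offensiveness(text: str) -> float:
    """Stand-in for a learned classifier; returns a score in [0, 1]."""
    flagged = {"slur", "violence", "hate"}  # toy keyword list
    words = text.lower().split()
    hits = sum(1 for w in words if w in flagged)
    return min(1.0, hits / max(1, len(words)) * 10)

class PlacementGate:
    """Blocks ad placement on pages scored above a threshold, and tightens
    the threshold whenever advertisers complain (the crude
    'self-modification' described above)."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def allow(self, page_text: str) -> bool:
        return score_offensiveness(page_text) < self.threshold

    def report_complaint(self) -> None:
        # A placement slipped through and offended someone: tighten the gate.
        self.threshold = max(0.05, self.threshold * 0.9)

gate = PlacementGate()
print(gate.allow("local team wins championship"))  # True: placement allowed
print(gate.allow("hate hate hate"))                # False: blocked
```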

Last year the University of Washington published a report demonstrating gender bias in Google’s voice recognition technology: Google’s voice-activated personal assistant was significantly more responsive to commands issued by men than by women. The effect has a simple explanation: the natural biases of AI engineers are unconsciously hard-coded into the algorithms they create.

It is possible that representatives from each of the teams building the various speech recognition components (the teams themselves probably scattered across the globe, as is the custom at Google, to maximize the number of working hours in each 24-hour cycle) conspired to create a barrier between women and the technology they were building. A far more likely explanation, however, is that the Google engineering teams working on the project were overwhelmingly male, and their natural bias toward data sets based on male voices unconsciously influenced their programming decisions. (In fact, only 18 percent of Google engineers are female.)
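As a toy illustration of the mechanism (not the actual Google system, whose internals are not public), consider a detector that accepts a voice command only when the speaker’s pitch resembles its training data. All numbers below are invented; with a corpus that is 90 percent male, the acceptance window settles around male pitch and female speakers are rejected far more often.

```python
import random
import statistics

random.seed(0)

# Invented figures: adult male voices around 120 Hz, female around 210 Hz.
def sample_pitches(n_male: int, n_female: int) -> list[float]:
    male = [random.gauss(120, 20) for _ in range(n_male)]
    female = [random.gauss(210, 25) for _ in range(n_female)]
    return male + female

# "Train" on a skewed corpus: 900 male speakers, 100 female speakers.
train = sample_pitches(900, 100)
mu, sigma = statistics.mean(train), statistics.stdev(train)

def accepts(pitch: float) -> bool:
    # Accept a command only if the speaker's pitch looks like training data.
    return abs(pitch - mu) <= 2 * sigma

male_test = [random.gauss(120, 20) for _ in range(1000)]
female_test = [random.gauss(210, 25) for _ in range(1000)]

print(f"male acceptance:   {sum(accepts(p) for p in male_test) / 1000:.0%}")
print(f"female acceptance: {sum(accepts(p) for p in female_test) / 1000:.0%}")
```

On this synthetic data the detector accepts nearly every male voice but rejects most female voices, even though no one ever wrote a line of code that mentions gender.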

Another telling example is Beauty.AI 1.0, which in January 2016 mounted the first international beauty contest judged entirely by artificial intelligence: an exploration of human beauty writ large, supposedly free of the cultural biases that limit our aesthetic sensibilities. More than 6,000 people from over 100 countries submitted selfies as directed. Of the 44 winners across all seven age groups and both genders, only one had dark skin.

Immediately after Beauty.AI 1.0, the organizers began planning improvements to the “thinking” that led to the blatantly racist results. This year, Beauty.AI 3.0 will incorporate a diversity algorithm. But the very need for a diversity algorithm is proof positive of AI’s fundamental flaw: AI is not, and cannot be, objective.
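Beauty.AI has not published how its diversity algorithm works, but one plausible shape is a post-hoc re-ranking pass that reserves winner slots per demographic group rather than taking the raw top scores. The sketch below is purely speculative; the function name and group labels are hypothetical.

```python
def diverse_top_k(candidates: list[tuple[str, str, float]], k: int,
                  min_per_group: int) -> list[tuple[str, str, float]]:
    """candidates: (name, group, score) triples. Guarantees every group
    at least min_per_group winners, then fills the rest by raw score."""
    by_score = sorted(candidates, key=lambda c: c[2], reverse=True)
    winners: list[tuple[str, str, float]] = []
    for g in sorted({group for _, group, _ in candidates}):
        # Reserve the quota slots for this group first.
        winners += [c for c in by_score if c[1] == g][:min_per_group]
    for c in by_score:  # top up remaining slots with the best leftovers
        if len(winners) >= k:
            break
        if c not in winners:
            winners.append(c)
    return winners[:k]

pool = [("A", "light", 0.99), ("B", "light", 0.97), ("C", "light", 0.96),
        ("D", "dark", 0.90), ("E", "dark", 0.88), ("F", "light", 0.95)]
print(diverse_top_k(pool, k=4, min_per_group=1))
# The raw top four would all be "light"; the quota guarantees "D" a slot.
```

Note what the patch concedes: the underlying scores are left untouched, and fairness is bolted on afterward, which is precisely the point the paragraph above makes.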

We are fast propelling ourselves into a world in which artificial intelligence is the primary gatekeeper of all the information used by the institutions that judge us, based on historical data and demographic trends rather than on our merits as individuals. Our loan eligibility; the educational institutions that solicit, accept, and reject us; the terms of our vehicle leases and credit card agreements; the real estate to which we have access; the ways in which we are treated by landlords and co-op boards; whether we’re able to purchase courtside tickets for the Hawks’ season, and whether the fans sitting next to us in the bleachers also happen to look like us: artificial intelligence will be the autonomous puppet master determining who has access to what, and why.

The Beauty.AI project was an anomaly in that its programmers immediately recognized the flaw in their system and set about correcting it. But as similar systems become more widespread, will we have visibility into the racial disparities in their results? Will we know how accurately artificial intelligence is able to identify the candidates most likely to succeed at institutions like Harvard Law? To become U.S. senators? Nobel laureates? U.S. presidents? And will the systems be transparent enough for us to ensure that for every cherry-picked candidate who looks like Ronald Reagan, Bill Clinton, or Donald Trump, candidates who look like Hillary Clinton, Barack Obama, or Kamala Harris are fairly represented?

Michael Maliner is a mixed-race writer who frequently covers race, culture, and identity. You can give him a shout on Twitter @PlasticSpoon.
