
‘The Wrong People Were In the Room’: On the Heels of Facebook Labeling Black Men as Primates, Tech Advocates Talk Racial Bias In AI and What Real Change Looks Like

“No matter where you go in this conversation, you’re going to come back to this: The wrong people were in the room. If the right people were in the room, this wouldn’t have happened,” said Dr. Charles Isbell, professor and John P. Imlay Jr. dean of the College of Computing at Georgia Tech.

This month, Facebook issued an apology after one of its algorithms labeled Black men as primates. It is not the first time a major tech company has had to apologize for labeling Black people as primates: in 2015, Google’s image recognition software labeled Black people as gorillas.

These racially charged slights will continue until the tech industry, from the lowest levels all the way to the top, becomes more diverse, Isbell says.

“You’re asking people to change fundamentally the way they do things, and that structure of the way you do things, the way you build systems, the people you involve in the conversation, that’s culture, and culture takes a very long time to fix,” Isbell said.

These bias issues extend far beyond social media sites, the professor explained: algorithms such as facial recognition software used by police and programs designed to clear up pixelated photos are also subject to racial bias.

Isbell referenced PULSE (Self-supervised Photo Upsampling via Latent Space Exploration of Generative Models), a program designed to clear up pixelated or blurry photographs. 

The program transformed a blurry image of former President Barack Obama into a white man. Isbell also plugged a photo of himself into the software, and he too turned into a white man.
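Broadly speaking, PULSE does not sharpen the pixels it is given: it searches the latent space of a pretrained generative model (the paper uses StyleGAN) for a realistic high-resolution face that, when scaled back down, matches the blurry input. The Python sketch below is a minimal illustration of that idea, not the researchers’ code; `generator` and `downscale` are placeholders for a real pretrained model and a downsampling function.

```python
# Minimal sketch of the PULSE idea (illustrative, not the authors' code):
# search a pretrained GAN's latent space for a high-resolution face whose
# downscaled version matches the low-resolution input.
import torch

def pulse_upsample(low_res, generator, downscale, steps=500, lr=0.1):
    """low_res: (1, 3, h, w) blurry target.
    generator: maps a latent vector to a (1, 3, H, W) face image.
    downscale: maps the high-res image back down to (1, 3, h, w)."""
    latent = torch.randn(1, 512, requires_grad=True)   # initial latent guess
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        candidate = generator(latent)                  # plausible high-res face
        loss = torch.nn.functional.mse_loss(downscale(candidate), low_res)
        loss.backward()                                # gradients flow to the latent only
        optimizer.step()
    return generator(latent).detach()                  # whatever face the GAN prefers
```

The bias enters at the return value: the output face is drawn from the generator’s learned distribution, not from the photo itself, so a model trained predominantly on white faces will tend to resolve ambiguous pixels into a white face, which is what happened with the Obama image and Isbell’s own photo.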

“The system was optimized, the hyperparameters were optimized for and toward white people. In fact, cameras are designed that way,” Isbell said.

He points to other examples of flawed artificial intelligence (AI), including cameras developed in ways that failed to detect brown skin tones.

It took lobbying from furniture and chocolate manufacturers for digital camera developers to improve detection of shades of brown so furniture and chocolate could be used in advertising. While not the intended beneficiaries of the lobbying effort, Black people benefited as well, because the improved cameras could capture more detail in Black skin.

Netia McCray, a graduate of the Massachusetts Institute of Technology, recounted an encounter with what she characterized as AI bias while struggling to use an automatic paper towel dispenser.

“It was not until a colleague of mine on campus who said, ‘Oh no, it’s because it uses red and green LEDs,’ and nobody in the room thought, do green LEDs penetrate Black skin, how about red, did anyone look into that?” McCray said. (Automatic paper towel dispensers are activated by motion detectors.)
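The mechanism behind McCray’s dispenser story is simple to sketch. A reflective optical sensor emits light and fires when enough of it bounces back off a hand; if its trigger threshold was only ever tuned against lighter, more reflective skin, darker skin may never clear it. The Python below is a toy illustration with made-up numbers, not real hardware specifications.

```python
# Toy model of a reflective optical sensor (all numbers are made up):
# the dispenser fires when enough emitted light bounces back off a hand.
THRESHOLD = 0.40  # hypothetical fraction of emitted light that must return

def hand_detected(skin_reflectance, distance_factor=0.8):
    """skin_reflectance: fraction of incident light the skin reflects.
    distance_factor: signal loss from hand-to-sensor distance."""
    returned_signal = skin_reflectance * distance_factor
    return returned_signal >= THRESHOLD

print(hand_detected(0.65))  # lighter skin: 0.52 >= 0.40 -> True, towel dispensed
print(hand_detected(0.35))  # darker skin:  0.28 <  0.40 -> False, nothing happens
```

The failure here is not intent but calibration: a threshold that was never tested against the full range of skin tones, which is exactly the wrong-people-in-the-room problem Isbell describes.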

McCray is the founder and executive director of Mbadika, a nonprofit that uses science, technology, engineering and math to engage students, particularly students of color. She says algorithmic and programming bias revealed itself again when she was working with her students on a project to create a sleep monitor for babies prone to seizures. One of her students could not get the prototype to register her vitals.

“She was like, why is this freaking sensor not working? It’s not reading my pulse rate, it’s not reading my body temperature,” McCray said.

“I had to sit down with a group of 25 16-year-olds and explain to them how certain technology companies have created sensors and still, despite research paper after research paper and complaints, have not adapted the sensor to read through Black skin. Even the Apple Watch and Fitbit used by company health insurance programs were misreading blood pressure stuff because they didn’t have the right sensors.”
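The wrist-worn devices McCray mentions typically measure pulse optically (photoplethysmography, or PPG): a green LED shines into the skin and a photodiode reads the small fluctuations in reflected light as blood pulses through. Melanin absorbs green light, so the same pulse can return a weaker signal on darker skin, and a signal-quality gate tuned on lighter skin may then discard the reading entirely. The sketch below illustrates that failure mode with assumed, illustrative values.

```python
# Toy PPG model (assumed values): reflected green light fluctuates with
# the pulse; melanin absorption scales the whole signal down, and a fixed
# quality gate can then reject an otherwise healthy reading.
import math

QUALITY_CUTOFF = 0.05  # hypothetical minimum usable pulse amplitude

def ppg_signal(t, melanin_absorption):
    """Reflected-light reading at time t (seconds); pulse modeled as a 1 Hz sine."""
    baseline, pulse_amplitude = 1.0, 0.12
    raw = baseline + pulse_amplitude * math.sin(2 * math.pi * t)
    return raw * (1.0 - melanin_absorption)  # darker skin returns less light overall

def can_read_pulse(melanin_absorption, samples=100):
    """Estimate pulse amplitude over one second, then apply the quality gate."""
    readings = [ppg_signal(i / samples, melanin_absorption) for i in range(samples)]
    amplitude = (max(readings) - min(readings)) / 2
    return amplitude >= QUALITY_CUTOFF

print(can_read_pulse(0.3))  # lighter skin: amplitude ~0.084 -> reading accepted
print(can_read_pulse(0.7))  # darker skin:  amplitude ~0.036 -> reading rejected
```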

It was not clear from McCray’s remarks how these sensor devices amounted to artificial intelligence.

In April of this year, the Pew Research Center reported that Black and Hispanic workers are underrepresented in the overall STEM workforce: Black workers made up 9 percent, Hispanic workers 8 percent, Asian workers 13 percent, white workers 67 percent, and other racial groups 3 percent.

Is breaking the ongoing cycle of racial bias in AI as simple as bringing more people of color into the tech industry? Dr. Isbell says diversity is essential, but it will take more than simply bringing in more Black and brown faces.

“What makes it more complicated, and not so simple, is that there’s more to it: you have to make certain that the people who are involved are listened to,” Isbell said, adding that they should be “appropriately supported” and “given the skills, the talents, the experiences, whatever they need in order to be a real participant in the design of these systems.”

“This argument always boils down to: How do you feel about Facebook calling you an ape? How do you feel about not being able to dry your hands, and doing this when you walk out of the bathroom, because we didn’t build the paper towel dispenser correctly? Yeah, I have feelings about it, but what I don’t appreciate is how those little mistakes build up to all of a sudden I’m wanted by the police because you couldn’t figure out the difference between two-twists and braids,” said McCray.
