Facebook has issued an apology after its artificial intelligence system labeled a video of Black men as “primates.”
Users who watched a video featuring Black men were faced with an automated prompt asking if they wanted to “keep seeing videos about Primates.”
Facebook issued an apology and said on Friday, Sept. 3 that it was looking into the recommendation feature to “prevent this from happening again.”
The video, posted June 27, 2020, by The Daily Mail, featured interactions between Black men and white civilians and police officers, and had no connection to monkeys or apes.
Facebook tailors content to a user’s viewing habits, and sometimes asks users if they are interested in watching videos under certain categories.
Following the appearance of the message, Facebook disabled the feature that caused the prompt to appear after the video, and called it “an unacceptable error.”
The company learned about the prompt after it was posted to a product feedback forum for current and former Facebook employees.
Former Facebook employee Darci Groves posted about the incident on Twitter on Sept. 2.
“Um. This ‘keep seeing’ prompt is unacceptable, @Facebook. And despite the video being more than a year old, a friend got this prompt yesterday. Friends at FB, please escalate. This is egregious,” Groves wrote.
A Facebook spokesperson told The New York Times in a statement, “As we have said, while we have made improvements to our A.I., we know it’s not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations.”
Other tech companies have previously been embroiled in controversy for the way facial recognition software classifies Black people.
In 2015, Google’s artificial intelligence classified Black people as gorillas. Two years later, words like gorilla, chimp, chimpanzee and monkey remained censored from its image-labeling results.
Facebook has sought to improve diversity in its workforce in recent years. Last year, the company hired a vice president of civil rights, and by July 2021, Facebook announced the share of its U.S. employees who are Black Americans had risen from 3.9 percent to 4.4 percent.
The company said last year it was studying whether its algorithms were racially biased.
The U.S. Federal Trade Commission warned in April that AI systems have demonstrated “troubling” racial bias that could violate consumer protection laws if used in housing and employment decisions.