New Study Suggests Self-Driving Cars Have Trouble Detecting Dark-Skinned Pedestrians

As if there weren’t enough concerns about the safety of self-driving cars, a new study has found that people with darker skin may be more likely than their lighter-skinned peers to be hit by the automated vehicles, owing to what researchers call “algorithmic bias.”

The study, published by researchers at the Georgia Institute of Technology, examined state-of-the-art object-detection models of the kind used in self-driving cars and found the technology may be better at detecting pedestrians with lighter skin. The research was largely driven by the question of just “how accurately do these models … detect [pedestrians] from different demographic groups?” according to Vox.

Self-Driving Car Bias

The study found that, on average, object-detection models were 5 percent less accurate at detecting pedestrians with dark skin. (Photo: Chombosan / Alamy Stock)

“The few autonomous vehicle systems already on the road have shown an inability to entirely mitigate risks of pedestrian fatalities (Levin & Wong, 2018),” the report reads. “A natural question to ask is which pedestrians these systems detect with lower fidelity, and why they display this behavior.”

For the study, researchers reviewed a large data set of photos featuring pedestrians, who were then divided using the Fitzpatrick Scale, a system used to classify human skin tones from light to dark. Researchers then analyzed how often the models were able to correctly detect the presence of individuals in the light-skinned group versus individuals in the dark-skinned group.
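
To make that methodology concrete, here is a minimal sketch of how per-group detection rates might be computed. It is an illustration of the general approach, not the study’s actual code: the `run_detector` function and the annotation format (a bounding box plus a Fitzpatrick-based group label of “light” or “dark”) are assumptions introduced for the example.

```python
# Illustrative sketch (not the study's code): measure how often a detector
# finds annotated pedestrians, split by Fitzpatrick skin-tone group.
from collections import defaultdict

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def per_group_recall(images, annotations, run_detector, iou_threshold=0.5):
    """Fraction of annotated pedestrians the detector finds, per group."""
    found = defaultdict(int)
    total = defaultdict(int)
    for image, people in zip(images, annotations):
        detections = run_detector(image)          # predicted bounding boxes
        for person in people:                     # ground-truth pedestrians
            group = person["fitzpatrick_group"]   # "light" or "dark"
            total[group] += 1
            if any(iou(person["box"], d) >= iou_threshold for d in detections):
                found[group] += 1
    return {g: found[g] / total[g] for g in total}
```

Comparing the two numbers returned by `per_group_recall` is, in spirit, the kind of light-versus-dark comparison behind the gap the researchers report.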

The result? On average, the detection models were 5 percent less accurate for pedestrians in the dark-skinned group. The discrepancy persisted, researchers said, even when they controlled for variables such as the time of day the pictures were taken or a partially blocked view of a pedestrian.

“The main takeaway from our work is that vision systems that share common structures to the ones we tested should be looked at more closely,” Jamie Morgenstern, an author of the study, told Vox in an interview.

The “Predictive Inequity in Object Detection” report comes with limitations, however. For one, it hasn’t been peer-reviewed yet. The study also didn’t test the models currently deployed in self-driving cars or the data sets used by autonomous car manufacturers. Instead, researchers assessed various models used by academic researchers and trained on publicly available data sets, Vox reported.

The reason for this is that companies typically don’t make their data available for public scrutiny — which is a problem in itself.

The study’s findings also point to the larger issue of algorithmic bias in the development of automated systems and technology. A growing body of research has found that racial bias, whether implicit or explicit, has a way of seeping into automated decision-making systems, including self-driving cars. Just last year, Amazon’s facial recognition system came under scrutiny when it mistakenly matched the faces of 28 members of Congress with criminal mugshots and disproportionately misidentified people of color.

So why are these disparities happening? Vox reports that, “because algorithmic systems ‘learn’ from the examples they are fed, if they don’t get enough examples of, say, black women … they’ll have a harder time recognizing them when deployed.”

Researchers behind the self-driving car study also pointed to the fact that the object-detection models had mostly been trained on examples of light-skinned pedestrians. Moreover, they found that the models didn’t place much weight on learning from the few examples of dark-skinned pedestrians that they did have.
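
To see how under-representation alone can produce this kind of gap, consider a toy, hypothetical example that has nothing to do with the study’s data: a simple classifier trained on a set where one group supplies 95 percent of the examples. Every group, feature, and label below is synthetic and chosen purely for illustration.

```python
# Toy illustration (not from the study): a model trained mostly on one group
# tends to perform worse on the under-represented group, because the two
# groups look different and the model rarely sees the smaller one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, center):
    """Synthetic examples for one group; the label rule depends on the group."""
    X = rng.normal(center, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > sum(center)).astype(int)
    return X, y

# 950 training examples from group A, only 50 from group B.
Xa, ya = make_group(950, (0.0, 0.0))
Xb, yb = make_group(50, (2.0, 2.0))
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized test sets for each group.
for name, center in [("group A", (0.0, 0.0)), ("group B", (2.0, 2.0))]:
    Xt, yt = make_group(1000, center)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```

Because the training set is dominated by group A, the learned decision boundary fits group A well and group B poorly, which mirrors Vox’s point about systems that “learn” from skewed examples.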

To address the inequity, the study’s authors suggested including more images of dark-skinned pedestrians in the training data and weighting those examples more heavily during training.
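
One common way to implement that second suggestion is to scale each training example’s loss by the inverse frequency of its group, so rarer groups count for more. The sketch below assumes a PyTorch-style setup with per-example losses; it is a generic illustration of loss reweighting, not the study’s implementation.

```python
# Illustrative sketch (an assumption, not the study's method): reweight each
# example's loss so every group contributes comparably despite skewed counts.
import torch

def group_weights(group_labels):
    """Inverse-frequency weights: rarer groups get proportionally more weight."""
    labels = torch.as_tensor(group_labels)
    counts = torch.bincount(labels).float()
    weights = counts.sum() / (len(counts) * counts)   # mean weight ~ 1.0
    return weights[labels]

def weighted_detection_loss(per_example_loss, group_labels):
    """Average an existing per-example loss after group-frequency reweighting."""
    return (group_weights(group_labels) * per_example_loss).mean()

# Example: group 0 ("light") has 900 examples, group 1 ("dark") has 100.
losses = torch.rand(1000)                       # stand-in per-example losses
groups = torch.tensor([0] * 900 + [1] * 100)
print(weighted_detection_loss(losses, groups))
```

With inverse-frequency weights, the 100 minority-group examples contribute roughly as much to the average loss as the 900 majority-group examples, which is the effect the authors’ suggestion aims for.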