Social media harassment has become a rising public concern as social media companies scramble to find solutions to the growing wave of online racism seemingly brought on by the election of Donald Trump.
In search of a concrete fix, New York University Ph.D. candidate and researcher Kevin Munger conducted an experiment on Twitter to test whether peer-to-peer sanctioning could curb incidents of social media racism. He found that empowering users to discourage others from using racially charged language online worked, but only if the sanctioning user was perceived to be white.
To test his hypothesis, Munger deployed two anti-racism bots, one Black and one white. The bots were directed at the accounts of users who tweeted the N-word, after which each harasser received a message that read: “@[subject] Hey man, just remember that there are real people who are hurt when you harass them with that kind of language.”
To control the context of the sanctioning, Munger sent each harasser the same message, The Washington Post reports. He also restricted the pool to Twitter users who appeared to be white men over the age of 18, and excluded incidents in which the harasser and the recipient of the harassment appeared to be friends.
“It was essential to keep the race and gender of the subjects constant to test my central question: How would reactions to my sanctioning message change based on the race of the bot sending the message?” Munger wrote.
The race of each bot was signaled by using the same cartoon avatar for its profile picture and simply changing the skin color. The bots were also given characteristically white- and Black-sounding names: the white bot was named Greg, while the Black bot was named Rasheed.
To make the bots appear more like real people, Munger had each one follow a few celebrity and news accounts and tweet generic messages here and there. He also varied the number of followers each bot had, based on the notion that “higher-status” people are more effective at influencing others’ behavior. The “high-status” bots had 500 to 550 followers, while the “low-status” bots had just 10.
In all, Munger deployed four types of bots: high-follower/white, low-follower/white, high-follower/Black and low-follower/Black. His experiment found that the high-follower/white bots were the most effective at reducing the rate at which harassers used racial slurs. A sanction from an “influential” white bot led each subject to tweet the slur 0.3 fewer times per day in the week after being sanctioned, according to the study.
On the flip side, sanctioning tweets from the low-follower/Black bots actually caused an increase in the use of racial slurs.
Though his messages were identical, Munger found that the effectiveness of the peer-to-peer sanctioning largely depended on the race and status of the bot, as well as on the harasser’s degree of anonymity. Munger also noted that online users who call out others’ bad behavior can inadvertently emphasize the distance between themselves and the harasser, making the sanction less effective.
“Physical distance, anonymity and partisan bubbles online can lead to extremely nasty behavior, but if we remember that there’s a real person behind every online encounter and emphasize what we have in common rather than what divides us, we might be able to make the Internet a better place,” he wrote.
Black activists have long called on white people to publicly denounce the racist actions of their white peers, and Munger’s study lends support to that approach. Leaders of the Black Lives Matter movement have emphasized that the only way to end white supremacy is for white people to speak out against it.
Munger’s study, published in the journal Political Behavior, comes just days after social media giant Twitter announced that it would suspend the accounts of prominent users associated with the alt-right movement, which also has ties to white supremacy. The company finally decided to take action after years of allowing racist, bigoted behavior to go unpunished on its platform.
“We believe that everyone should have the power to create and share ideas and information instantly, without barriers,” the social media site stated. “In order to protect the experience and safety of people who use Twitter, there are some limitations on the type of content and behavior that we allow. All users must adhere to the policies set forth in the Twitter Rules.”