It’s an about-face for Facebook, whose operators have been hesitant to constrain user activity. “As abhorrent as some of this content can be, I do think that it gets down to this principle of giving people a voice,” founder and CEO Mark Zuckerberg told tech journalist Kara Swisher in a 2018 Recode interview.
Zuckerberg was defiant when Facebook was accused of allowing false information about the 2016 election to flourish on the platform. Just one day after Donald Trump won the presidency, Zuckerberg told attendees at the tech conference Techonomy that the claims were “pretty crazy,” The Verge reported in 2016.
When white nationalists held a rally in Charlottesville, Va., that resulted in the 2017 death of counterprotester Heather Heyer, Facebook “pushed to re-educate its moderators about American white supremacists in particular,” but made a distinction between white nationalism and white supremacy, as Motherboard revealed when it obtained leaked Facebook moderation training materials last year.
“We don’t allow praise, support and representation of white supremacy as an ideology,” a training slide reads. “We allow praise, support and representation of white nationalism.” The leaked documents said that white nationalism “doesn’t seem to be always associated with racism (at least not explicitly.)”
Citing multiple conversations with “more than 20 members of civil society,” Brian Fishman, Facebook’s policy director of counterterrorism, told Motherboard, “We decided that the overlap between white nationalism, [white] separatism, and white supremacy is so extensive we really can’t make a meaningful distinction between them.”
Next week, when users search for terms linked to white supremacy, they’ll see a link to Life After Hate, an organization that encourages people to leave hate groups.
But Facebook’s success in implementing its new policies depends not only on its stated commitment, but also on its content moderation infrastructure, both human and automated. “Hate speech can be tricky to detect since it is context and domain dependent. Trolls try to evade or even poison such [machine learning] classifiers,” Aylin Caliskan, a computer science researcher at George Washington University, told Wired in September.
In his April 2018 congressional testimony, Zuckerberg expressed confidence that within five to 10 years, “We will have [artificial intelligence] tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging things for our systems.” As Louise Matsakis explains in Wired, however, “For that to happen … humans will need first to define for ourselves what hate speech means—and that can be hard because it’s constantly evolving and often dependent on context.”
As for Facebook’s human moderators, a February investigation by The Verge found that the 1,000 based in Phoenix (there are 15,000 worldwide) earn just $28,000 per year in an environment that workers told writer Casey Newton is “perpetually teetering on the brink of chaos.” Employees are required to watch thousands of violent videos and read countless hateful posts and “can be fired after making just a handful of errors a week,” Newton writes. “[T]hose who remain live in fear of former colleagues returning to seek vengeance. One man we spoke with started bringing a gun to work to protect himself.”
Fishman told Motherboard that Facebook’s new rules will be enforced partly through artificial intelligence and machine learning, but did not explain how those methods will work. Facebook’s announcement did not address whether it will hire additional moderators, or describe any new AI technology.
Activists and technology experts are divided on the ban’s prospects for success.
Becca Lewis, an affiliate researcher at Data & Society, which studies technology, told Wired, “I’m cautiously optimistic about the impact that it can have.”
Kristen Clarke, president and executive director of the Lawyers’ Committee for Civil Rights Under Law, one of the groups Facebook consulted, told CNN, “It took a lot of hard work to get Facebook to where they are today. But the hard work lies ahead. We will be watching closely how they implement the policy.”
Madihha Ahussain, a lawyer for Muslim Advocates, a civil rights group, told The New York Times that her group has questions. “We need to know how Facebook will define white nationalist and white separatist content,” she said. “For example, will it include expressions of anti-Muslim, anti-Black, anti-Jewish, anti-immigrant and anti-LGBTQ sentiment—all underlying foundations of white nationalism?”