
On 25 April 2019, Vice Motherboard journalists Joseph Cox and Jason Koebler reported that, during a recent Twitter company meeting, an employee remarked that:

Twitter hasn’t taken the same aggressive approach to white supremacist content [as it has to ISIS] because the collateral accounts that are impacted can, in some instances, be Republican politicians. The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material.

Indeed, it is well known that most machine-learning algorithms used in content moderation produce a significant number of false positives (i.e. content flagged as extremist which turns out not to be). Even classifiers sophisticated enough to identify the vast majority of material correctly will still produce these ‘false flags’, and when scaled up to millions of posts this inevitably causes non-extremist material to be removed. Many archivists of human rights abuses in the Syrian Civil War – an issue that Dima Saber is currently researching – have had their content removed from YouTube for precisely this reason: it was incorrectly flagged as extremist.
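The scale effect behind these false flags can be made concrete with some back-of-the-envelope arithmetic. The figures below are purely hypothetical, not drawn from any platform's actual volumes or classifier accuracy; the point is only that a small error rate multiplied by an enormous volume yields a large absolute number of wrongly flagged posts:

```python
# Illustrative base-rate arithmetic (all numbers are hypothetical).
# Even a classifier that correctly clears 99% of benign posts produces
# millions of false positives per day at platform scale.

posts_per_day = 500_000_000   # hypothetical daily post volume
extremist_rate = 0.0001       # hypothetical share of truly extremist posts
specificity = 0.99            # share of benign posts the classifier clears

benign_posts = posts_per_day * (1 - extremist_rate)
false_positives = benign_posts * (1 - specificity)

print(f"Benign posts wrongly flagged per day: {false_positives:,.0f}")
# Roughly five million benign posts flagged daily under these assumptions.
```

This is the archivists' predicament in miniature: their material sits inside that small error margin, which is tiny in relative terms but very large in absolute terms.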

The use of machine-learning tools to enhance content moderation is inevitable, given the scale of the content that needs to be moderated and the profit motives guiding social media companies. False positives, then, will also be inevitable.

When is extremism extremism?

What Cox and Koebler’s piece points out is that we are more willing to accept these false positives when it comes to the consensus against ISIS content. The calculus is inevitably more complex for right-wing content: the current US administration, Republican politicians, and right-wing social movements – who have far more capacity to pressure social media platforms than those swept up as ISIS false positives – have constantly accused platforms of censoring legitimate conservative voices. A false positive that sweeps up someone allied with this broader right-wing online ecosystem can thus result in significant backlash against these companies. We can therefore hypothesize that political pressure affects how and when these machine-learning tools are deployed.

It therefore becomes imperative that we think about the power relations at play in the use of machine learning and the risks of false positives. Antecedent claims of conservative censorship have created a situation in which technology deployed against white extremist, white nationalist, or white supremacist content cannot be treated in the same way as that deployed against Islamic extremist content.

