
Twitter is ‘researching’ whether or not they should allow white nationalists to send racist tweets


Twitter has long faced criticism for seemingly being soft on abusive voices, including white nationalists. Now the social networking service says it is conducting in-house research to determine what to do about the problem.

The platform is trying to determine whether it should ban such speech from the site outright, or whether giving it a platform will allow those views to be debated and changed by others.

Speaking to Motherboard, Twitter’s head of trust and safety, legal and public policy, Vijaya Gadde, said Twitter believes “counter-speech and conversation are a force for good, and they can act as a basis for de-radicalization, and we’ve seen that happen on other platforms, anecdotally.”

Twitter has claimed for several years that it is addressing hate speech on its service, most recently in an April blog post asserting that it was making strides in proactively halting abusive content.


Nevertheless, many feel that the company has not done enough.

At an appearance at TED2019, Twitter’s CEO, Jack Dorsey, was cut off by moderator Chris Anderson. 

“Jack, just picking up on some of the questions flooding in,” said Anderson. “A lot of people [are] puzzled why, like, how hard is it to get rid of Nazis from Twitter?”

“We have policies around violent and extremist groups,” said Dorsey. “And the majority of our work and our terms of service works on conduct, not content. So, we’re actually looking for conduct. So conduct being using the service to periodically or episodically to harass someone, using hateful imagery that might be associated with the KKK or the American Nazi Party. Those are all things that we act on immediately.”

Yet in a recent exposé in Motherboard, a Twitter staff member claimed that the company was unwilling to use an algorithm to solve the issue simply because it would also affect Republicans who share many of the same views as white nationalists.

“The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued,” stated the article.

Rather than pursuing a software solution, Gadde says the platform has hired external researchers to look into the issue.

“We’re working with them specifically on white nationalism and white supremacy and radicalization online and understanding the drivers of those things; what role can a platform like Twitter play in either making that worse or making that better?” said Gadde. “Is it the right approach to deplatform these individuals? Is the right approach to try and engage with these individuals? How should we be thinking about this? What actually works?”

Gadde declined to name the researchers involved, noting that they are under non-disclosure agreements and therefore not permitted to speak on the issue.
