Never a day goes by without some lie about LGBTQ+ people being posted on social media. Maybe it’s that the community is going to add P to its acronym to welcome pedophiles. Or perhaps it’s conspiracy theories about how monkeypox is spread. Whatever the falsehood, it barely matters. Haters know the quickest way to spread it is to post it on social media.
So how are the big tech companies dealing with the ever-growing flood of lies and misinformation? By slashing the staff who are supposed to counter it.
As big tech cuts budgets and shifts its priorities, the few resources it had to combat damaging lies on social media are dwindling even more. YouTube, which Google owns, cut two of its five misinformation experts and reduced the team enforcing its policies.
At the same time, the tech companies are arguing before the Supreme Court that the law itself is on their side. This week, the Court is hearing two cases, one involving Twitter and the other Google, in which plaintiffs argue that the companies should be held accountable for the content promoted on their sites.
The Google case is especially shocking. It was brought by the family of a woman killed by ISIS terrorists. They argue that YouTube served as a recruitment tool for the terrorist group and that Google did nothing to stop it.
At issue is Section 230, part of a law passed in 1996. The provision essentially holds tech companies harmless for content that users post on their sites.
Predictably, the reduced staffing and the longstanding belief that the law protects the companies amount to a field day for anti-LGBTQ+ posters. YouTube’s minute-long Shorts are a haven for hateful attacks, as right-wing activists like Ben Shapiro and Candace Owens misgender trans people and brand drag queens “groomers” in videos viewed millions of times.
After the watchdog group Media Matters for America flagged a slew of offensive videos, YouTube removed six of them but left most up, saying they didn’t violate the company’s rules on speech.
Meta, the company that owns Facebook, made a big deal in 2020 about setting up an election integrity unit specifically to fight misinformation. That group has slowly dwindled from hundreds of people to dozens as the company shifts its focus away from Facebook to the metaverse. Meanwhile, the company has welcomed back one of the leading purveyors of misinformation, Donald Trump, after a two-year ban.
Then there’s Twitter, a case unto itself.
Upon buying Twitter, Elon Musk promptly eviscerated the company, firing most of its staff. Content moderation, which had classified calling LGBTQ+ people “groomers” as hate speech, went out the window.
That was bad enough. But then Musk made Twitter a carnival for the crazies, starting with himself. He amplified a homophobic conspiracy theory about the attack on Nancy Pelosi’s husband. He reinstated accounts that were banned for hate speech and misinformation.
The result was about what you’d expect. Hate speech and misinformation are out of control on Twitter.
The lies have real-world consequences. Children’s National Hospital in Washington, D.C., was targeted with bomb and death threats after the far-right anti-LGBTQ+ Twitter account Libs of TikTok falsely claimed that the hospital was performing hysterectomies on transgender children.
“I wouldn’t say the war is over, but I think we’ve lost key battles [in the fight against misinformation on social media],” Angelo Carusone, president of Media Matters for America, told The New York Times. “I do think we, as a society, have lost the appetite to keep battling. And that means we will lose the war.”
Sadly, that means the lies will keep escalating, along with the hatred and violence that invariably accompany them. Perhaps a tragedy would force the tech companies to reconsider their policies. But the human cost of waiting for one is too much to contemplate.