Twitter's stand against QAnon

The platform takes steps to be an arbiter of truth. Will Facebook follow?

Last night, Twitter took the unusual step of standing up for ... something.

The company announced that it was effectively choking QAnon off the platform—the conspiracy-laden community of Americans who believe that an anonymous tipster by the name of Q, writing in coded language, has proof that Donald Trump is prosecuting a secret war against pedophiles at the highest levels of government (e.g., Hillary Clinton), media (Chrissy Teigen) and business, playing multi-dimensional chess and using numerology to guide their misguided, if not dangerous, perception of reality.

(Image via Rick Loomis/Getty Images)

Twitter said it “will permanently suspend accounts Tweeting about these topics.”

NBC News reports that Twitter has taken down 7,000 QAnon accounts and will limit the reach of 150,000 others that harass or “swarm” the platform with abusive messages or flood it with hashtags to push them into trending topics.

QAnon conspiracy theorists falsely claimed on Twitter, Reddit and TikTok this month that the furniture company Wayfair was shipping trafficked children because price glitches raised the prices of pillows and cabinets to tens of thousands of dollars. The company's name was the top trend on Twitter in the U.S. on July 10 as Twitter users posted links to expensive furniture.

Platforms have typically shied away from policing content in this specific way. They lean on the “we are not the arbiters of truth” mentality, punting on the responsibility that comes with being the mass communications media they are.

Through this new policy, however, Twitter is standing up for, at the very least, truth. Sure, conspiracies and abuse will still litter the site. But by putting these policies in place, Twitter is also signaling to the Commander-in-Chief that his tactics of retweeting QAnon accounts to stir the pot are no longer appreciated. 

If Twitter was slow off the blocks in approaching conspiracy theories and the tributaries that run from them (let alone all the Nazis), Facebook isn’t even in the race. 

The Wall Street Journal reports that Facebook, on the heels of its civil rights audit, is

“creating new teams dedicated to studying and addressing potential racial bias on its core platform and Instagram unit, in a departure from the company’s prior reluctance to explore the way its products affect different minority groups.

The newly formed “equity and inclusion team” at Instagram will examine how Black, Hispanic and other minority users in the U.S. are affected by the company’s algorithms, including its machine-learning systems, and how those effects compare with white users, according to people familiar with the matter.”

It really is a tale of two different philosophies. Twitter seems to recognize its role in shaping discourse, if not society, while Facebook continues to bury its head in the sand. It’s a passive maneuver to study something you already know the answer to, especially as we are 22 days into the StopHateForProfit advertiser boycott.

(Facebook says it already catches 90 percent of hate speech; that it’s investing in hiring Black and Hispanic employees; it’s working on sticking its fingers in the collapsing dam of user and advertiser trust.)

Facebook’s decision should be obvious: establish controls that blunt the spread of conspiracy and hate. It would be good for society and clearly good for business, as companies seem to be engaging in high-level conversations with other platforms. For example, on Snap’s earnings call, CNBC reports, chief business officer Jeremi Gorman said

although it was difficult to determine the exact revenue impact of the boycott so far, the conversation has “opened the door” for Snap to have conversations with high-level executives at brands.

“What we do know is that it’s always positive to engage at the highest levels of an organization, and this conversation has opened up the door for us to do that extremely frequently at the CEO and CMO levels,” she said.

It’s also a moment when companies are weighing the perception of advertising on Facebook against the platform’s rampant ad fraud. This morning, the Washington Post reported that:

Between February and July, Patagonia says it received more than 1,500 reports of fake Facebook ads for its products. In that same time period, the company submitted 236 reports to Facebook about problematic ads, sometimes containing up to two dozen examples.  

More than 70 companies have “been impacted by fraudulent ads” on Facebook and Instagram since 2017, the paper reports. 

It seems as if Facebook is getting hit on multiple fronts: advertisers pulling spend, advertisers mad about ad fraud, companies choosing other platforms, and, of course, its CEO getting made fun of for wearing sunblock. (I don’t condone the latter; wearing sunscreen should not be a shameable offense.)

But one thing Facebook could do, right now, is follow Twitter and ban conspiracy content from the platform. It could also ban hate speech, but that’s a story for another day.

Thank you for allowing me in your inbox. If you have tips or thoughts about the newsletter, drop me a line. Or you can follow me on Twitter.

Also, I want to highlight something cool: The Media Nut was listed as one of the best single-operator newsletters by InsideHook. This just tickles me.

Van Halen, “Running With the Devil”

Some interesting links:

  • NYT names Meredith Levien new CEO (NYT)

  • Slack Files EU Competition Complaint Against Microsoft (BusinessWire)

  • Experiential Marketing -- Especially Live Consumer Events -- Take $13 Billion Hit In 2020 (MediaPost)

  • DDB names new North American and global CEOs (Business Insider)

  • Trump’s Request of an Ambassador: Get the British Open for Me (NYT)

  • There are now just 4 Black CEOs in the Fortune 500 as Tapestry boss resigns (Fortune)

  • R.I.P. Cable TV: Why Hollywood Is Slowly Killing Its Biggest Moneymaker (Vanity Fair)