Tired: Infrastructure Week
Wired: Safety Week
Over the past seven days, tech companies took to their blogs to reaffirm their commitment to “safety.”
August 19: Keeping People Safe and Informed About the Coronavirus. Facebook.
August 19: An Update to How We Address Movements and Organizations Tied to Violence. Facebook.
August 19: Introducing the new Twitter Transparency Center. Twitter.
August 25: Our Commitment to Safe Conversations. LinkedIn.
August 25: Responsible policy enforcement during Covid-19. YouTube.
Each of these platforms has, in one form or another, a safety problem. Whether it’s their role in spreading disinformation and conspiracy theories or their tolerance of harassment and abuse on their networks, the platforms continually find themselves at the center of challenges facing both society and business.
One thread runs through all of these posts about safety: a curious black-box alchemy of machine learning and human eyes.
From YouTube:
We normally rely on a combination of people and technology to enforce our policies. Machine learning helps detect potentially harmful content, and then sends it to human reviewers for assessment. Human review is not only necessary to train our machine learning systems, it also serves as a check, providing feedback that improves the accuracy of our systems over time. Each quarter, millions of videos that are first flagged by our automated systems are later evaluated by our human review team and determined not to violate our policies.
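To make that loop concrete, here’s a minimal sketch of what a “machines flag, humans check” pipeline can look like. To be clear, this is not YouTube’s actual code: the classifier, the threshold, and every name below are invented stand-ins, just to illustrate the feedback loop the quote describes.

```python
# Hypothetical human-in-the-loop moderation pipeline (invented names throughout).
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    features: list[float]   # whatever signals the classifier consumes

@dataclass
class ReviewDecision:
    video_id: str
    model_score: float       # the classifier's "potentially harmful" score
    violates_policy: bool    # the human reviewer's verdict

def harm_score(video: Video) -> float:
    """Stand-in for an ML classifier; returns a probability-like score."""
    return sum(video.features) / max(len(video.features), 1)

REVIEW_THRESHOLD = 0.5  # above this, a human takes a look

def triage(videos: list[Video]) -> list[Video]:
    """Step 1: machine learning flags potentially harmful content."""
    return [v for v in videos if harm_score(v) >= REVIEW_THRESHOLD]

def labels_for_retraining(decisions: list[ReviewDecision]) -> list[tuple[str, bool]]:
    """Step 2: human verdicts flow back as training labels. The false
    positives here are the 'millions of videos ... determined not to
    violate our policies' that YouTube's post mentions."""
    return [(d.video_id, d.violates_policy) for d in decisions]
```

The check runs both ways: humans catch the model’s mistakes, and those corrections become the data that, in theory, makes the model wrong less often.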
From LinkedIn:
We’re doing more with AI and ML, including an ongoing partnership with Microsoft, to help keep the LinkedIn feed appropriate and professional. More recently, we’ve scaled our defenses with new AI models for finding and removing profiles containing inappropriate content. We created a LinkedIn Fairness Toolkit (LiFT) to help us measure multiple definitions of fairness in large-scale machine learning workflows. We’ve used LiFT to improve our prototype anti-harassment classification systems and now we’re making it broadly available to other companies by open sourcing it.
Also, we’ve implemented AI that helps stop inappropriate, inflammatory, harassing, or hateful content when sent privately via messaging to individual members. When content is detected as possible harassment, you’ll also see an in-line warning at the top of the message that gives you the option to view (and report), or dismiss and mark as safe. This will signal to us any unwanted messages from senders, and allow us to take appropriate action, including reminding senders of our professional guidelines.
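For the curious, here’s roughly what that message-level flow might look like, sketched in Python. None of this is LinkedIn’s implementation (and it isn’t LiFT, which is a fairness-measurement toolkit, not a message filter); the toy classifier, threshold, and action names are all assumptions for illustration.

```python
# Hypothetical in-line harassment warning on private messages (invented names).
from dataclasses import dataclass
from enum import Enum, auto

class RecipientAction(Enum):
    VIEW_AND_REPORT = auto()   # open the hidden message and flag it
    DISMISS_AS_SAFE = auto()   # mark the warning as a false alarm

@dataclass
class Message:
    sender_id: str
    text: str

def harassment_score(message: Message) -> float:
    """Stand-in for the real classifier; returns a probability-like score."""
    toy_terms = {"idiot", "loser"}  # a real system would use a trained model
    hits = sum(word in toy_terms for word in message.text.lower().split())
    return min(1.0, hits / 3)

WARNING_THRESHOLD = 0.3

def deliver(message: Message) -> dict:
    """Attach the in-line warning when the score crosses the threshold."""
    return {"message": message,
            "show_warning": harassment_score(message) >= WARNING_THRESHOLD}

def handle_action(message: Message, action: RecipientAction) -> None:
    """Either choice is a signal back to the platform."""
    if action is RecipientAction.VIEW_AND_REPORT:
        print(f"flag sender {message.sender_id}; remind them of the guidelines")
    else:
        print("log as safe; a negative label for the next round of training")
```

Note what the design does: the recipient’s choice, report or dismiss, is itself training data, which is how these classifiers are supposed to get better over time.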
Facebook announced this week that it has “removed over 790 groups, 100 Pages and 1,500 ads tied to QAnon from Facebook, blocked over 300 hashtags across Facebook and Instagram, and additionally imposed restrictions on over 1,950 Groups and 440 Pages on Facebook and over 10,000 accounts on Instagram.”
Tech using tech to combat bad humans. Growing up in the 1980s and 1990s, we were promised a future involving hoverboards. Instead, we got ~this~.
The interesting thing, at least for me, is that for years the platforms worked very hard to build an environment where companies could feel comfortable advertising (you know, that whole ‘brand safety’ canard; whether they succeeded is another question) while ignoring, or simply not caring about, the safety problems created by improper or nonexistent moderation. As the Stop Hate for Profit boycott in July tried to emphasize, it’s easier for the platforms to make money from the spread of hate than it is to curtail it.
But now that we are dealing with multiple society-shifting crises, the platforms are putting resources into doing ... something? One question, of course: is it too little, too late?
Another question: OK, and then what?
If the goal is to keep users “safe,” but the users who made the platforms successful are the ones spreading conspiracies and dropping into people’s DMs to harass, where does that leave us? One could argue that the platforms are a mirror to society, showing us exactly who we are. And it plays out every day in tweets and Facebook Groups and radicalized YouTube videos.
We have several candidates running for national office who believe that there is a deep state controlled by liberal Satan-worshiping pedophiles, and that the only person who can defeat them is Donald Trump. The platforms have played a fundamental role in shaping our collective consciousness. And it’s not pretty.
I’m not sure where all this leads, but I do think it’s a good thing that the tech companies are starting to take safety a little more seriously. They, of course, have an uphill battle, as the lies and the conspiracies and the hate have spread, occupying significant portions of real estate inside many people’s heads.
The platforms’ incentives, as much as their policies, need to change. These companies are now utilities, which means they need to be regulated as such. Regulate them like telephone companies or the airwaves, and perhaps they’ll take their role not just as arbiters of truth but as conduits of information more seriously.
But I’ll say this: even with all these societal safety concerns, at least these companies aren’t forcing their employees back to the office anytime soon. Out of safety, of course.
Thanks for allowing me in your inbox, today and every day. If you have tips, or thoughts about this newsletter, drop me a line. Or you can follow me on Twitter. If you enjoyed this edition, please consider sharing it across your social platforms in a safe way.
Jackie Wilson, “Better Play It Safe”
Some interesting links:
How TikTok’s talks with Microsoft turned into a soap opera (NYT)
Lowe’s to partner with NYFW on shoppable runway backdrops (FashionWeek Daily)
Mystery radio signal from space that’s on 157-day cycle just woke up right on schedule (NYPost)
As Summer Wanes in N.Y.C., Anxiety Rises Over What Fall May Bring (NYT)
Here’s why Airbnb is going public during a pandemic (Bloomberg)
‘I moved on her very heavily’: Trump’s accusers speak (The Atlantic)
Artificial intelligence is here to calm your road rage (Time)