When news organizations get manipulated
Artificial intelligence and machine learning, coupled with the spread of misinformation, are a recipe for disaster.
Yesterday, Defense News, a trade publication for the defense industry, published a story explaining that it had been the victim of a disinformation campaign.
The reporter, Joe Gould, described how he discovered the fabrication:
The attached jpeg resembled a Defense News story complete with my byline, but instead of my work, it was a cocktail of lies and paranoia. There was the layout of one of my stories about the new defense secretary, Lloyd Austin, but some of the words matched a Washington Post article ― and it included both a false headline about Austin “defunding and dismantling” the U.S. Army and an equally false quote about America looking to China for its national defense.
Over the next few days, I’d receive more than five dozen more emails from strangers, most of them fearful this viral fake news was true. Meanwhile, I watched the meme stubbornly spread on Twitter, Instagram, Facebook and maybe most of all, Telegram, a chat app popular with QAnon adherents that was first developed in Russia.
This story is one that reporters, newsrooms and yes, even corporations, need to pay attention to, as it sits at the intersection of two prevalent themes that have risen in the last several years and promise to only become more intricate: the rapid evolution of artificial intelligence and machine learning, and the contagious spread of mis- and disinformation at a global scale.
A third leg of this stool: the role of the platforms spreading manipulated images, stories, what-have-you.
As Gould writes:
It was dispiriting that my flagging of several dozen individual posts as “false news” through Facebook’s own notification system had no immediate effect. Only after Defense News and Memetica contacted Facebook did the memes disappear en masse.
For its part, Facebook said its team determined that the posts violated its policies toward digitally altered images leading to misinformation, that it deleted them and that similar posts should be automatically screened in the future.
Several users I contacted acknowledged the meme was false but stubbornly left it up because, to them, it showed larger truths about the new Biden administration.
“Not as farfetched as you think,” one man replied with regard to Austin. “What do you know of his past? Why was he chosen? Why did Biden already move in Syria? Why did he just make us dependent on foreign oil? Why is he against jets just sold to UAE? Why did he support past funding for Iran? Why did his past administration take out Seal Team Six and not lend aid to Bengahzi? Not farfetched at all. Corruption runs very deep with this guy.”
But anyone with an internet connection can watch Austin’s confirmation hearing and get the facts.
This, of course, assumes people are intellectually curious and honest. Watching memes spread on social, I have my doubts.
The danger of being able to replicate a news organization's work in a way designed to trick people as the image bubbles up in their news feeds is, naturally, multifaceted. On one level, it sows distrust; on another, it creates (or amplifies) an echo chamber of lies and propaganda. And since we aren't taught media literacy, it becomes all too easy to convince people that there is a ring of Democratic politicians secretly running the government while masterminding a pedophile criminal operation that only Donald Trump can stop.
Imagine instead of Defense News, someone decided to create a fake New York Times or Wall Street Journal.
(This is a different scenario of someone scraping a website’s content, re-purposing it—The New York Times Post—and taking in ad revenue. The scams are elaborate.)
The trouble gets magnified when media manipulation enters the political world. Indeed, in 2019, the New York Times reported on deepfakes and their impact on politics across the globe.
In recent months, video evidence was at the center of prominent incidents in Brazil, Gabon in Central Africa and China. Each was colored by the same question: Is the video real? The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video. Opponents claimed it had been faked. Experts call that confusion “the liar’s dividend.”
“You can already see a material effect that deepfakes have had,” said Nick Dufour, one of the Google engineers overseeing the company’s deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”
With advancements in manipulating media, we can now take the words out of your mouth. Conversely, we can also add them.
Here’s a good CNN video on deepfakes.
Companies are already exploring commercial uses of this technology. While early versions simply inserted long-dead stars like Ginger Rogers and Fred Astaire into commercials, last year, for example, State Farm used 1998 footage of a SportsCenter clip with ESPN anchor Kenny Mayne to “predict” 2020.
Each day, we see how a healthy percentage of our population is detached from reality. The ability to alter information and then distribute it to unknowing people is a crisis. We saw the tip of the iceberg over the last four or five years, and it will be hard to stamp much of it out. It will take regulation, corporate cooperation, and media literacy programs teaching people how to absorb information. This is a years-long process, and it may already be too late.
Twitter, for its part, is looking to ask its users to help combat misinformation with its new “Birdwatch” forum. But what if users can’t tell a real video from a faked one?
It may be innocuous to have a Snap filter that swaps your face with someone else’s, but, though the creators of South Park might disagree, it’s not all fun and games.
Serially.
Phish, “The Final Hurrah”
Thank you for allowing me in your inbox, today and every day. If you have tips, or thoughts on the newsletter, drop me a line. Or you can follow me on Twitter. If you arrived here via social or through a colleague, please consider signing up. And if you appreciated this edition, please consider sharing. Thanks for reading, and I’ll see you tomorrow!
Some interesting links:
For retirees:
The Post’s Marty Baron set to #30 on Feb. 28 (WaPo)
For giving the keys to your outlet to a bad-faith actor:
100+ Politico Staffers Send Letter to Publisher Railing Against Publishing Ben Shapiro (Daily Beast)
For newsletters:
Twitter acquiring newsletter publishing company Revue (Axios)
Newsletters are growing up and leaving the coop (Axios)
For journalists getting fired for irrational reasons:
‘It’s a shot at my reputation’: Lauren Wolfe reacts to NYT’s statement about her dismissal (WaPo)
An Idaho newspaper editor struggled to get Excel access for staff. After tweeting about it, she was fired. (WaPo)
For looming financial crisis:
When SPACs Attack! A New Force Is Invading Wall Street. (WSJ)
For video services not named YouTube:
IAC’s Vimeo Raises New Funds at $6 Billion Valuation (Bloomberg)
Videos scraped from Parler reveal incitement on Capitol Hill (Tech Policy Press)
For the fox guarding the hen house:
Rupert Murdoch, Accepting Award, Condemns ‘Awful Woke Orthodoxy’ (NYT)