In the first hours after the Texas school shooting that left at least 10 dead Friday, online hoaxers moved quickly to spread a viral lie, creating fake Facebook accounts with the suspected shooter’s name and a doctored photo showing him wearing a “Hillary 2016” hat.
Several were swiftly flagged by users and deleted by the social network.
But others rose rapidly in their place: Chris Sampson, a disinformation analyst for a counterterrorism think tank, said he could see new fakes as they were being created and filled out with false information, including images linking the suspect to the anti-fascist group Antifa.
It has become a familiar pattern in the all-too-common aftermath of U.S. school shootings: A barrage of online misinformation, seemingly designed to cloud the truth or win political points.
But some social media watchers said they were still surprised at the speed with which the Santa Fe shooting descended into information warfare.
Sampson said he watched the clock after the suspect was first named by police to see how long it would take for a fake Facebook account to be created in the suspect’s name: less than 20 minutes.
“It seemed this time like they were more ready for this,” he said. “Like someone just couldn’t wait to do it.”
The fakes again reveal a core vulnerability for the world’s most popular websites, whose popularity as social platforms is routinely weaponized by hoaxers exploiting the fog of breaking news.
Facebook officials said the company removed the suspect’s real account and was working to remove impersonating accounts.
Facebook said this week it had disabled more than 500 million fake accounts on the social network in the first three months of the year, although it estimated tens of millions more were probably still online.
Christopher Bouzy, whose site Bot Sentinel tracks more than 12,000 automated Twitter accounts often used to spread misinformation, said four of the top 10 phrases tweeted by bot or troll accounts over the past 24 hours were related to the Santa Fe shooting, reaching the top 10 within less than three hours. “That is significant activity for our platform,” he said.
The fake accounts included the name of Dimitrios Pagourtzis, the 17-year-old student and suspect who police say is now in custody, and included a photo taken from his Facebook page that had been altered to add a hat from Hillary Clinton’s 2016 presidential campaign.
It’s unclear who created the false accounts. In the past, similar accounts have been created as part of disinformation campaigns, including by Russian-linked trolls or people just out to spread havoc.
“For some people, they have no stake in the game, and life is just a big joke,” Sampson said.
Conspiracy theories, hoaxes and unsubstantiated news reports by anonymous online posters have increasingly run rampant on message boards such as 4chan and other dark corners of the Web in the wake of school shootings.
Alt-right news sites also quickly spread unsubstantiated allegations claiming the shooter was part of the Antifa movement.
But that wave of misinformation can also pierce into the mainstream: In February after the school shooting in Parkland, Fla., a video labeling a shooting survivor as a “crisis actor” whose involvement was faked to boost gun control soared to the top of YouTube’s “Trending” list.
The site blamed algorithms that rewarded the video for attracting a large number of views in a short period of time.
Several of the fake Facebook accounts named for the Santa Fe shooter were disabled within a half-hour on Facebook, but others could be seen popping up sporadically through Friday afternoon, including one fake profile that featured a banner from the campaign of President Trump.
Facebook said it has 10,000 human moderators watching the site and intends to double that number by the end of the year.
Some critics suggested the site should force new accounts into a waiting period before they are publicly available or that the company should more aggressively watch names in the news for potential fakes.
YouTube, the Google-owned video giant, appeared by Friday night to have avoided some of the issues it faced after the Parkland shooting, including the spread of videos falsely alleging “crisis actors” were involved.
Still, a dozen videos with limited viewership had been posted claiming with zero evidence that the shooting was a “false flag” operation.
The company has said it is working to bolster its content-moderation staff and crack down on objectionable content, though it remains a challenge: More than 400 hours of video are uploaded there every minute.
Twitter also struggled to combat hate speech and misinformation from its often-anonymous user base. Some tweets suggested with no evidence that a young Santa Fe student, whose quiet resignation over increasingly routine school shootings went viral, had been an actor.
One tweet, claiming she was “obviously reading from a script,” remained online for five hours, and counting.
Abby Ohlheiser contributed to this report.
© 2018 The Washington Post