Looking at 126,000 stories shared by some 3 million people, researchers found that humans, not bots, were primarily responsible for the spread of disinformation.
It’s comforting to imagine that when faced with outright falsehoods, readers would recognize “fake news” for what it is and stop it in its tracks. Indeed, some have argued that the only reason fake news stories have penetrated the national conversation is because bots and nefarious outside actors have tried to push lies on a virtuous public. But reporting on a new study, Robinson Meyer at The Atlantic writes that data science contradicts that idea. In fact, it seems we like fake news, seek it out and spread it much more quickly than the truth.
To investigate how fake news spreads, MIT data scientist Soroush Vosoughi and his colleagues collected 12 years of data from Twitter. They then looked at tweets that had been investigated and debunked by fact-checking websites. Using bot-detection software, they were able to exclude any traffic created by bots from their results. As Katie Langin at Science reports, that left them with a set of 126,000 “fake news” stories shared on Twitter 4.5 million times by some 3 million people. They then compared how quickly those stories spread against tweets that were verified as true. What they found was that fake stories reached more people and propagated faster through the Twittersphere than real stories.
“It seems to be pretty clear [from our study] that false information outperforms true information,” Vosoughi tells Meyer. “And that is not just because of bots. It might have something to do with human nature.” The research appears in the journal Science.
Based on the study’s findings, it appears that people are more willing to share fake news than accurate news. A false story was 70 percent more likely to earn a retweet than verified news, Meyer reports. While fake news was found in every category, from business to sports and science, false political stories, not surprisingly, were the most likely to be retweeted.
So why are people seemingly drawn to these false tweets? The study doesn’t address that directly, but the researchers hypothesize that the novelty of fake news makes it more appealing to share. Brian Resnick at Vox reports that studies have shown people are more likely to believe headlines or stories they’ve read or heard many times before, but are less likely to share them. Instead, people tend to share novel stories on social media that are emotionally or morally charged, even if they are not verified.
It’s that urge that fake news is designed to appeal to. “Fake news is perfect for spreadability: It’s going to be shocking, it’s going to be surprising, and it’s going to be playing on people’s emotions, and that’s a recipe for how to spread misinformation,” Miriam Metzger, a UC Santa Barbara communications researcher not involved in the study, tells Resnick.
So what can be done to combat fake news? According to a press release, the team points out that the platforms themselves are currently complicit in spreading fake news by allowing false stories to appear on things like trending lists and to game their algorithms. The researchers suggest that social media companies should take steps to assess those publishing information on their sites, or they risk some sort of government regulation.
Twitter’s cooperation with the study was a good start. In a perspective paper published alongside the study, David Lazer of Northeastern University and Matthew Baum of the Harvard Kennedy School are now calling for more cooperation among social media companies and academics to get a handle on the anything-but-fake problem.