Study Finds Twitter Bots, Russian Trolls Fuel Anti-Vaccination Conflict
Measles cases have reached record highs in Europe as experts raise concerns about resurgences of whooping cough and mumps. Now, new research reports that internet bots and Russian trolls are "weaponizing" anti-vaccination messages on Twitter.
Why do 50 percent of tweets argue against vaccination when a 2017 Pew survey reports the vast majority of Americans support it?
A new study in the American Journal of Public Health has found one possible contributing factor: a rogue's gallery of fake accounts, sophisticated bots and Russian trolls tweeting about vaccination at significantly higher rates than other users.
Some of the Russian troll accounts — particularly those linked with the hashtag #VaccinateUS, which the authors say "is designed to promote discord using vaccination as a political wedge issue" — have been previously identified by the U.S. Congress and NBC News as linked to the Internet Research Agency, a company with ties to the Russian government.
These accounts foster doubt, sow discord and inflate the appearance of false equivalence; but they also drive clicks, leading users to inflammatory websites or tricking them into downloading malware.
David Broniatowski of George Washington University, who led the study, says that, as with spam emails, users need to focus on the dubious source, not the content.
"You want to shine light on that and warn people about that, rather than trying to argue, 'Oh, that anti-vaccine stuff is not true,' because, if that's what you do, you're kind of missing the point."
The University of Maryland, College Park, and Johns Hopkins University also contributed to the study, which examined more than 1.7 million tweets over 3.25 years.
They found that bots designed to spread malware and unwanted content disseminated anti-vaccine messages heavily, whereas Russian trolls mainly sought to foster discord. Accounts that pretended to be legitimate users focused on inflating the appearance of false equivalence.
Broniatowski's future work will explore what makes messages like these so compelling.
For now, he says addressing the problem requires moving beyond automated approaches and helping users respond to bot- and troll-based messages in a way that takes context into account.
"This is where it becomes really important to understand the content, the context and the source of the message, altogether. And that's something that I feel is really a big gap in terms of how we've been approaching Twitter. It's an area that needs really good, solid scientific research."