Automated accounts or "bots" played a "disproportionate" role in spreading misinformation online during the 2016 US presidential election, according to an analysis of 14 million messages and four lakh articles shared on Twitter.
Researchers from Indiana University in the US found that a mere six per cent of Twitter accounts, identified as bots, were enough to spread 31 per cent of the "low credibility" information on the network.
These accounts were also responsible for 34 per cent of all articles shared from "low credibility" sources.
"This study finds that bots significantly contribute to the spread of misinformation online as well as shows how quickly these messages can spread," said lead author Filippo Menczer, Professor at the varsity.
The analysis, published in the journal Nature Communications, also revealed that despite representing only a small fraction of the accounts that spread viral messages, bots amplify a message's volume and visibility until it is more likely to be shared broadly.
"People tend to put greater trust in messages that appear to originate from many people," added co-author Giovanni Luca Ciampaglia, an assistant research scientist from the varsity.
"Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them," he noted.
Other tactics for spreading misinformation included amplifying a single tweet -- potentially controlled by a human operator -- across hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts.
The team also ran an experiment inside a simulated version of Twitter and found that deleting the 10 per cent of accounts most likely to be bots resulted in a major drop in the number of stories from low-credibility sources in the network.
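The logic of that experiment can be illustrated with a toy simulation. This is a minimal sketch, not the authors' actual model: it assumes a population where roughly six per cent of accounts are bots that reshare low-credibility stories far more often than humans (the share-rate numbers here are invented for illustration), and measures what happens to total low-credibility shares when the 10 per cent of accounts ranked likeliest to be bots are removed.

```python
import random

random.seed(42)

N_ACCOUNTS = 1000
BOT_FRACTION = 0.06      # roughly 6% bots, as in the study
BOT_SHARE_RATE = 20      # assumed: bots reshare low-credibility stories often
HUMAN_SHARE_RATE = 1     # assumed: humans reshare them rarely

def low_credibility_shares(accounts):
    """Total low-credibility shares produced by a list of accounts,
    where True marks a bot and False marks a human."""
    return sum(
        BOT_SHARE_RATE if is_bot else HUMAN_SHARE_RATE
        for is_bot in accounts
    )

# Build the population: each account is a bot with probability BOT_FRACTION.
accounts = [random.random() < BOT_FRACTION for _ in range(N_ACCOUNTS)]

before = low_credibility_shares(accounts)

# "Delete" the 10% of accounts ranked most bot-like. In this toy version
# the ranking is perfect, so actual bots are removed first.
cutoff = int(0.10 * N_ACCOUNTS)
ranked = sorted(accounts, reverse=True)   # bots (True) sort first
after = low_credibility_shares(ranked[cutoff:])

print(f"Low-credibility shares before removal: {before}")
print(f"Low-credibility shares after removing top 10%: {after}")
```

Because the bots account for a small share of users but a large share of the amplification, removing them cuts the low-credibility volume far more than removing 10 per cent of accounts at random would.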
"This experiment suggests that the elimination of bots from social networks would significantly reduce the amount of misinformation on these networks," Menczer said.
Although their analysis focused on Twitter, the researchers stressed that other social networks such as Snapchat and WhatsApp are also vulnerable to manipulation.
To combat misinformation, companies should improve algorithms to automatically detect bots and add a "human in the loop" to reduce automated messages in the system, the researchers suggested. For example, users might be required to complete a 'Captcha' before sending a message.