Scientists have developed a computer model that can predict when civil conversations on the internet might take a turn for the worse and degenerate into personal attacks.
The researchers, from Cornell University in the US, hope the model will be used to rescue at-risk conversations and improve online dialogue, rather than to ban specific users or censor certain topics.
After analysing hundreds of exchanges between Wikipedia editors, they developed a computer programme that scans for warning signs in the language used by participants at the start of a conversation.
The programme looks for signals such as repeated, direct questioning or use of the word "you" to predict which initially civil conversations will go awry.
Early exchanges that included greetings, expressions of gratitude, hedges such as "it seems," and the words "I" and "we" were more likely to remain civil, the study found.
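As a rough illustration of the kind of cue-counting the study describes, the sketch below scans a conversation's opening comments for these markers. The cue lists, scoring rule, and function name are invented for this example; the researchers' actual model is a trained classifier built from the Wikipedia data, not a hand-written rule set.

```python
import re

# Cue lists paraphrased from the findings reported above; the exact
# lexicons and the +1/-1 weighting are invented for illustration.
AWRY_CUES = {
    "second_person": re.compile(r"\byou(r|rs)?\b", re.I),
    "direct_question": re.compile(r"\bwhy (do|did|would|are|is)\b.*\?", re.I),
}
CIVIL_CUES = {
    "greeting": re.compile(r"\b(hi|hello|hey)\b", re.I),
    "gratitude": re.compile(r"\bthank(s| you)\b", re.I),
    "hedge": re.compile(r"\b(it seems|perhaps|maybe|i think)\b", re.I),
    "first_person": re.compile(r"\b(i|we)\b", re.I),
}

def risk_score(opening_comments):
    """Crude risk estimate from a conversation's first exchanges:
    +1 per 'awry' cue present in a comment, -1 per 'civil' cue.
    Higher scores suggest the conversation is likelier to derail."""
    score = 0
    for comment in opening_comments:
        score += sum(bool(p.search(comment)) for p in AWRY_CUES.values())
        score -= sum(bool(p.search(comment)) for p in CIVIL_CUES.values())
    return score

# A brusque opener scores higher (riskier) than a hedged, polite one.
print(risk_score(["Why did you delete my edit? You clearly didn't read it."]))  # 2
print(risk_score(["Hi! Thanks for the review. It seems I misread the policy."]))  # -4
```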
"There are millions of such discussions taking place every day, and you can't possibly monitor all of them live. A system based on this finding might help human moderators better direct their attention," said Cristian Danescu-Niculescu-Mizil, an assistant professor at Cornell University.
"We, as humans, have an intuition of whether a conversation is about to go awry, but it's often just a suspicion. We can't do it 100 per cent of the time. We wonder if we can build systems to replicate or even go beyond this intuition," Danescu-Niculescu-Mizil said.
The computer model, which also considered Google's Perspective, a machine-learning tool for evaluating "toxicity," was correct around 65 per cent of the time. Humans guessed correctly 72 per cent of the time.
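Perspective is a real, publicly documented Google API; the article does not say how the researchers combined its scores with their own linguistic features, but a caller can fetch a comment's toxicity score roughly as follows (the helper name and the placeholder key are this example's own):

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text):
    """Return Perspective's summary TOXICITY score (0.0-1.0) for a comment."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```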
People can test their own ability to guess which conversations will derail by taking an online quiz that accompanies the study.
The study analysed 1,270 conversations that began civilly but degenerated into personal attacks, culled from 50 million conversations across 16 million Wikipedia "talk" pages, where editors discuss articles or other issues.
They examined exchanges in pairs, comparing each conversation that ended badly with one that succeeded on the same topic, so the results were not skewed by sensitive subject matter such as politics.
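Given that paired design, "correct" plausibly means identifying, within each same-topic pair, which conversation later derailed; that would make the 65 and 72 per cent figures directly comparable. A hypothetical evaluation loop in that spirit, reusing the risk_score sketch above:

```python
def paired_accuracy(pairs, score_fn):
    """pairs: (derailed, civil) tuples of opening-comment lists on the
    same topic. A pair counts as correct when the model assigns the
    strictly higher risk score to the conversation that derailed
    (ties count as incorrect)."""
    correct = sum(score_fn(bad) > score_fn(good) for bad, good in pairs)
    return correct / len(pairs)
```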
Some online posters, such as non-native English speakers, may not realise they could be perceived as aggressive, and nudges from such a system could help them self-adjust, the researchers said.
"If I have tools that find personal attacks, it's already too late, because the attack has already happened and people have already seen it," said computer science student Jonathan P Chang.
"But if you understand this conversation is going in a bad direction and take action then, that might make the place a little more welcoming," he said.