'New algorithms can help identify cyber-bullies on Twitter'

Press Trust of India, New York
Last Updated : Sep 17 2019 | 3:45 PM IST

Scientists have developed new machine learning algorithms that can identify bullies and aggressors on Twitter with 90 per cent accuracy.

Effective tools for detecting harmful actions on social media are scarce, as this type of behaviour is often ambiguous in nature and exhibited via seemingly superficial comments and criticisms, said researchers from Binghamton University in the US.

The study, published in the journal Transactions on the Web, analysed the behavioural patterns exhibited by abusive Twitter users and how they differ from other Twitter users.

"We built crawlers -- programmes that collect data from Twitter via variety of mechanisms," said Binghamton University computer scientist Jeremy Blackburn.

"We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them," Blackburn said.

The researchers then performed natural language processing and sentiment analysis on the tweets themselves, as well as a variety of social network analyses on the connections between users.
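The paper's exact feature set is not reproduced in the article, but sentiment and simple network features of the kind described can be computed with off-the-shelf tools such as NLTK's VADER sentiment analyser and NetworkX (an illustrative choice of tools, not necessarily the study's own):

```python
import networkx as nx
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# Requires: nltk.download("vader_lexicon")

sia = SentimentIntensityAnalyzer()

def tweet_sentiment_features(tweets):
    """Average VADER sentiment scores over a user's tweets."""
    scores = [sia.polarity_scores(t) for t in tweets]
    n = max(len(scores), 1)
    return {
        "mean_negative": sum(s["neg"] for s in scores) / n,
        "mean_positive": sum(s["pos"] for s in scores) / n,
        "mean_compound": sum(s["compound"] for s in scores) / n,
    }

def network_features(user_id, friends, followers):
    """Simple ego-network statistics built from follow relationships."""
    g = nx.DiGraph()
    g.add_edges_from((user_id, f) for f in friends)     # accounts the user follows
    g.add_edges_from((f, user_id) for f in followers)   # accounts following the user
    reciprocal = len(set(friends) & set(followers))
    return {
        "friends": len(friends),
        "followers": len(followers),
        "reciprocity": reciprocal / max(len(friends), 1),
    }
```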

The researchers developed algorithms to automatically classify two specific types of offensive online behaviour: cyberbullying and cyberaggression.

The algorithms were able to identify abusive users on Twitter with 90 per cent accuracy, researchers said.

These are users who engage in harassing behaviour, for example those who send death threats or make racist remarks to other users.

"In a nutshell, the algorithms 'learn' how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples," said Blackburn.

While this research can help mitigate cyberbullying, it is only a first step, he said.

"One of the biggest issues with cyber safety problems is the damage being done is to humans, and is very difficult to 'undo,'" he said.

"For example, our research indicates that machine learning can be used to automatically detect users that are cyberbullies, and thus could help Twitter and other social media platforms remove problematic users," Blackburn said.

"However, such a system is ultimately reactive: it does not inherently prevent bullying actions, it just identifies them taking place at scale.

"And the unfortunate truth is that even if bullying accounts are deleted, even if all their previous attacks are deleted, the victims still saw and were potentially affected by them," Blackburn said.

