As Twitter rolls out a limited test that lets users record audio tweets and attach them to their original posts, concerns are now being raised about how the company will moderate them, as tackling hateful, abusive or racist audio messages requires more effort than using AI to curb disinformation in ordinary text tweets.
One saving grace is that audio can only be added to original tweets; users cannot include it in replies or retweets with a comment. This makes it somewhat easier to identify a person who posts an abusive audio tweet, so that moderators can swing into action to flag or block the tweet or account.
However, unlike Facebook, which currently has over 15,000 third-party content moderators policing its main app as well as Instagram, Twitter has only a small team of human moderators.
In the case of an audio tweet, someone has to listen to it to determine whether the recording contains inflammatory or abusive content that needs to be flagged. Alternatively, AI models could be put to work on audio tweets, but then how are they supposed to scan voice tweets in various languages?
Even Facebook's moderators make blunders. Tasked with reviewing about three million posts a day, Facebook moderators make about three lakh (300,000) mistakes in 24 hours in deciding what should stay online and what should be taken down, according to a new report from New York University's Stern Center for Business and Human Rights.
The number of blunders was derived from a statement made by Facebook CEO Mark Zuckerberg in a white paper in November 2018, in which he admitted that moderators "make the wrong call in more than one out of every 10 cases."
According to a report in Vice, at a time when online platforms are struggling to remove misinformation and fake content, audio tweets may be "a new mechanism to harass people".
"As we've previously reported, Twitter has far fewer human moderators than other social media giants, so adding such a labor-intensive type of content to moderate seems like it could go poorly," the report said.
In the case of Facebook, the research found that to sanitise the platform effectively, the company needs to end the outsourcing of content moderation, double the number of people who moderate content on a daily basis, and significantly expand fact-checking to debunk misinformation.
Most of these workers are employed by third-party vendors, the report said, adding that the frequently chaotic outsourced environments in which content moderators work impinge on their decision-making.
The onus is now on Twitter to sort these things out while voice tweets are still in the testing phase, and to create a good mix of AI and human moderation to control what people say via voice tweets, before users flood the micro-blogging platform with complaints.