Filtering poison on WhatsApp

Devangshu Datta
Last Updated : Jul 21 2018 | 5:59 AM IST
The US presidential election of 1960 marked a media watershed. John Fitzgerald Kennedy beat Richard Milhous Nixon because JFK came across as by far the more telegenic personality, and TV had by then overtaken print and radio. 

The 2008 presidential election initiated the social media epoch. Barack Hussein Obama's campaign leveraged Facebook's networking capacities brilliantly to micro-target voters. The 2016 presidential campaign was also social media-oriented. But it depended on dissemination of fake news and misinformation rather than just networking. 

In India, social media started becoming important circa 2012. Most political formations have learnt to effectively use both the networking capacity and the potential for misinformation. Even more than Facebook and Twitter, WhatsApp is the favoured tool for spreading poison. Over 50,000 WhatsApp groups were created for the last Karnataka assembly elections alone. 

There are sound reasons for this preference. Political formations use social media for many purposes. One is to rally the faithful, craft digital strategy, generate content, and so on. This requires closed groups that can discuss things without fear of oversight. Then, there's the need to amplify, and virally spread, any message. Third, there's the need to connect with potential converts one-on-one.

Facebook disapproves of anonymity. While closed FB groups can rally the faithful, messages are hard to spread beyond the network of the converted. Twitter is anonymous, and it's possible to trend a message virally, but it's hard to use Twitter for organising since it's an open platform. A viral Twitter trend based on lies can be refuted equally openly and virally.

On WhatsApp, it's possible to create closed groups, protected by end-to-end encryption. Those groups can plot strategy as they please. WhatsApp is an anonymous platform for practical purposes, since content can be generated, copied and forwarded without attribution, or with false attribution, as desired. It is possible to take content viral, as on Twitter. But unlike on Twitter or Facebook, where content is visible to all, a WhatsApp message remains private. The service provider doesn't know what's being spread.

Over 200 million Indians use WhatsApp, including many who use no other form of social media. Most messages — around 90 per cent, according to WhatsApp — are one-on-one. A WhatsApp forward can gain "pseudo-authenticity" if it comes from a known person. Unless the recipient chooses to check factual content proactively, there is no means of validating or invalidating that content. 

As everybody reading this is aware, WhatsApp has been used to propagate all sorts of fake news and misinformation. In the past six months or so, WhatsApp has been the core enabler of many instances of lynching. Similar "experiments with truth" have been carried out in Brazil and Mexico by politicians who have leveraged large WhatsApp bases in those countries. 


In the run-up to the recent Mexico elections, WhatsApp actively enabled message verification. It partnered with Verificado, a local organisation that fact-checks social media content, and set up helplines where users could forward messages to Verificado for verification. In the Indian context, it is already working with Boom Live, a similar fact-checking organisation, and it intends to adapt the Mexican model to try and filter fake news in the run-up to the 2019 general elections.


WhatsApp also intends to roll out an experimental system where forwards are marked as forwards. This could, perhaps, nail originators of fake content, or those who instigate lynch mobs. Another measure WhatsApp intends to put in place is filters for automated spam — messages churned out in greater volume than humans could manage. Again, this may reduce the effectiveness of political propaganda.

Will these measures bring accountability to lynch mobs, or raise the red flag for credulous voters consuming pernicious nonsense? One stumbling block is behavioural. End-to-end encryption means that content cannot be verified unless a message is actively sent to the verifier. Somebody who believes a forward will not forward it for verification. Somebody who's out to lynch a stranger will not bother to check if that stranger is indeed a criminal. 

Nevertheless, this is a beginning and it does set up a framework for identifying poison and watermarking it. One can only hope it works.
Twitter: @devangshudatta


Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper