
On WhatsApp, fake news is nearly impossible to moderate. Is that bad?

WhatsApp (which is owned by Facebook) is the leading messaging app for mobile users outside the US

Kisholoy Mukherjee | Global Voices 


With the number of social media users in India rapidly rising, the dissemination of fake news has become a widespread phenomenon in recent years.

So-called “information overload” has made it difficult to separate the wheat from the chaff, and in some cases, misinformation spread via WhatsApp appears to have precipitated real-life violence, sometimes with fatal consequences.

In one recent incident, Twitter users in India expressed their anger when a ruling party member shared an image taken out of context, in what seemed like an effort to stoke social tensions during a riot in the Indian state of West Bengal. Several such images were circulated through social media to skew public opinion in this period. In 2015, a possibly fake image circulated via WhatsApp was later linked to the lynching of a Muslim man in India, on the suspicion that he had slaughtered a cow.

In India, reporting misinformation to the police can be a first step towards prosecuting its sender under Indian laws like Section 67 of the IT Act, if the information is perceived as likely to be “harmful to young minds”, or Section 468 of the IPC if the news is considered “detrimental” to someone's reputation. But policies like these are hard to implement effectively, routinely running afoul of protections for free expression.

Online civil society is also increasingly proactive, with the emergence of several hoax-slaying initiatives run by do-gooders from different spheres of life who try to expose fake news for what it is. But research has shown that civilian reporting of fake news is often not swift or thorough enough to curb the problem.

At the moment, the most likely mitigators of online misinformation may be the social media companies themselves. But experts are still undecided on whether or how companies might change their behaviors — by choice or by regulation — in order to diminish the problem.

Facebook's “Trending” tweaks

As a major venue for the spread of fake news, Facebook has found itself at the center of this debate. After the 2016 US election, critics charged that the prevalence of false stories smearing Hillary Clinton, spread mostly on Facebook, may have shaped the outcome of the vote. These allegations triggered an ongoing debate about how Facebook might moderate misinformation on its network, along with multiple technical tweaks by Facebook in an attempt to make its network less friendly to fake news distributors.

Most recently, Facebook updated the formula behind its “Trending” feature. In the past, the posts with the most engagement appeared in the “Trending” section; now only stories that have been shared by “reputable sources” will appear there. Users are also invited to contribute to the system by reporting false news stories directly to the company.
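
As a rough illustration of that change, consider the following hypothetical sketch in Python. Facebook's actual formula is not public; the source list, data shapes, and scoring below are all assumptions made for illustration only:

```python
# Hypothetical sketch (not Facebook's actual code): rank trending stories by
# how many distinct "reputable sources" have shared them, rather than by raw
# engagement. The source list and data shapes are assumptions.
REPUTABLE_SOURCES = {"example-news.com", "example-wire.org"}

def trending_score(post):
    """Count the distinct reputable outlets that shared the story."""
    return len(post["shared_by"] & REPUTABLE_SOURCES)

posts = [
    {"title": "Viral rumour", "shared_by": {"random-blog.net"}, "engagement": 90000},
    {"title": "Verified report",
     "shared_by": {"example-news.com", "example-wire.org"}, "engagement": 4000},
]

# The rumour has far more engagement but no reputable coverage, so it is
# excluded entirely; ranking follows reputable coverage alone.
trending = sorted(
    (p for p in posts if trending_score(p) > 0),
    key=trending_score,
    reverse=True,
)
print([p["title"] for p in trending])  # ['Verified report']
```

The point of such a design is that virality alone no longer earns visibility: a story must first be picked up by outlets the platform deems credible.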

But Facebook CEO Mark Zuckerberg says it is difficult to rely on feedback from users, who may flag accurate content as false for self-interested reasons. In fact, recent research seems to indicate that most people fail to distinguish between real and fake online content. This, along with the fact that most of the news we receive on social media sites comes from people in our close circles (and therefore people we generally trust), makes Facebook an ideal platform for propagating fake news.

The only thing that is certain is that there are major pitfalls for any entity — whether a company, a government, or an individual — that aims to separate out the real from the fake.

Thanks to encryption, WhatsApp can't moderate messages

While misinformation continues to circulate on standard platforms, all of the above examples from India reportedly went viral on WhatsApp. As the internet-based app has become a key platform for disseminating news and information, for groups of friends and media houses alike, it has also increasingly served as a mechanism for distributing misinformation.

But the picture becomes more complex when it comes to news and information spread through WhatsApp.

WhatsApp (which is owned by Facebook) is the leading messaging app for mobile users outside of the US. It is often easier to access via mobile phone than Facebook or other platforms that carry a higher volume of content and code.

But in contrast to the architecture that supports Facebook, which allows the company to see and analyze what users post, WhatsApp's operators have no way of seeing the content of users’ messages.

This is because WhatsApp uses end-to-end encryption, where only the sender (on one end) and receiver (on the other end) can read each other's messages. This design feature has been a boon for users — including journalists and human rights advocates — who wish to keep their communications private from government surveillance.
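
To make the idea concrete, here is a minimal sketch of end-to-end message encryption using the PyNaCl library. This is illustrative only: WhatsApp's real implementation is the Signal protocol, which is considerably more sophisticated, with per-message keys and forward secrecy.

```python
# A minimal sketch of end-to-end encryption using the PyNaCl library.
# Illustrative only: WhatsApp actually implements the far more elaborate
# Signal protocol.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; the private half never leaves the device.
sender_key = PrivateKey.generate()
receiver_key = PrivateKey.generate()

# The sender encrypts using their private key and the receiver's public key.
ciphertext = Box(sender_key, receiver_key.public_key).encrypt(b"a private message")

# Any server in the middle relays only this ciphertext; without either
# party's private key, it cannot read the message.
plaintext = Box(receiver_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"a private message"
```

Because the relaying server only ever handles ciphertext, there is nothing for moderators to inspect unless a recipient reports the decrypted content themselves.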

But when it comes to the proliferation of misinformation, this presents a significant hurdle. In a recent interview with the Economic Times, software engineer Alan Kao explained that WhatsApp's underlying encryption makes it difficult to tackle the challenge of fake news, as operators have no way of seeing what kind of information is being spread on their networks, unless it is reported to them directly by users.

Like other Facebook-owned products, WhatsApp has a policy on acceptable use which prohibits the use of the app, among other things, to publish “falsehoods, misrepresentations, or misleading statements.” But this seems more like a suggestion than a hard and fast rule. The app doesn't offer a user-friendly way to report violating content, apart from its “Report Spam” option. In its FAQ on reporting “issues” (i.e. problems) to WhatsApp, the company writes:

We encourage you to report problematic content to us. Please keep in mind that to help ensure the safety, confidentiality and security of your messages, we generally do not have the contents of messages available to us, which limits our ability to verify the report and take action.

When needed, you can take a screenshot of the content and share it, along with any available contact info, with appropriate law enforcement authorities.

While it is easy to see why the company would encourage users to report violating behaviour to law enforcement, this might not render the best outcome in a country like India (alongside many others). Indeed, there have been several cases of arrests of people who have criticized politicians on WhatsApp. And in April 2017, an Indian court ruled that a group administrator could even be sentenced to jail time for “offensive” posts.

No matter what, it seems there is always the risk of the powers-that-be taking undue advantage of their influence over online activity.

First Published: Fri, September 08 2017. 09:58 IST