Last Updated : Oct 26 2021 | 10:27 PM IST
Internal documents released by whistle-blowers, former employees of Facebook, indicate the social media giant has a large and growing moderation problem in India, its biggest market with a user base of about 350 million. The company's own studies show it has long been aware that the network is used to propagate fake news, hate speech, and violent images, yet it does not appear to have taken adequate steps to mitigate this.

One of the leaked studies, released by data scientist Frances Haugen, outlined the methods used to push right-wing, anti-Muslim agendas: groups created and run by individuals operating multiple fake IDs were allowed to remain active long after they had been identified. Another study, released by data scientist Sophie Zhang, offered evidence of the selective removal of groups spreading politically charged fake news in the run-up to the Delhi Assembly elections of 2020. Groups run by two major political parties were removed, but similar networks run by the ruling party were not.
A third study, also released by Ms Zhang, describes a dummy account Facebook set up in February 2019, during the run-up to the general elections, purporting to be the profile of a young woman based in Jaipur. The account followed only the news-feed suggestions generated by Facebook's own algorithm, and its feed soon turned into a barrage of fake news, graphic faked images of beheadings and bombings, and torrents of anti-Muslim abuse. The study says Facebook's recommendations left the test account "filled with polarising and graphic content, hate speech and misinformation". Implementing better moderation would require a major effort. Indian Facebook users typically post in an idiomatic mix of three or more languages, with casual references to popular culture thrown in, and moderators must be comfortable in that argot to make sense of it.
Financial allocations must also change. Facebook spends 87 per cent of its moderation budget on US content, leaving only 13 per cent for the rest of the world, including India, even though India alone has more Facebook users than the US has people. A second, deeper problem lies in the nature of Facebook's business model, which monetises engagement through advertising. In 2020, the network earned $84 billion from advertising out of total standalone revenues of $85 billion, of which around $1.2 billion came from India. Considerable analytical power is devoted to learning how long users engage with specific kinds of content so that advertisements can be targeted accordingly.
If hate speech, fake news, or graphic violent images draw more engagement, it is in Facebook's financial interest to allow such content. Similar problems have been noted at Facebook's subsidiary Instagram, where other leaked studies show that teenagers have been pushed towards eating disorders and suicidal thoughts by content that is "highly engaging". WhatsApp, also a Facebook subsidiary, has reportedly been used by some groups as a "central office" for coordinating violence and propagating rumours and fake news. The network has to find better ways to moderate content, especially non-English content. The solutions could come from within: the leaked documents and the testimony of whistle-blowers indicate that many employees are concerned and could find ways to reduce this menace. But they will be allowed to do so only if top management rethinks its model of encouraging engagement at all costs.