The scale and breadth of the court order are astonishing: it targets about 25,000 posts, videos and links spread across multiple platforms, some dating back to at least 2014. Many of these posts consist of legitimate content about the product in question, including opinions by food bloggers, reports and satirical jokes. Although the court order allows for "just exceptions" in theory, there is no review mechanism in practice, and it is not easy for a content creator to appeal for the review and restoration of legitimate content. A mechanism is obviously required to remove fake news and malicious web content. But a broad blanket order on these lines restricts any commentary whatsoever about the subject on social media. That cannot be a healthy state of affairs, since it effectively amounts to the censorship of critical opinion and stifles feedback from consumers.

In addition, there have been technical problems in the past with over-enthusiastic compliance with broad Ashok Kumar orders. Blocking a specific link requires technical skill, and 2017 saw at least two instances when such orders resulted in the blocking of entire domains, including the invaluable non-profit Internet Archive. The take-down mechanism is also prone to misuse: companies have used it several times to try to force the take-down of adverse financial analysis of their balance sheets.
Ideally, the rules, guidelines and laws in question should be reviewed to plug such gaps. The law must certainly uphold the right of corporates, and of individuals, to secure the removal of anonymous, hateful or malicious content and of fake news. However, this cannot come at the expense of free speech and the stifling of opinion. There should be an explicit, clear-cut review mechanism by which content creators facing a take-down order may appeal for the review and timely restoration of legitimate content.
Ironically, the enforced take-down of web content often results in greater adverse publicity and wider dissemination of the very content that is sought to be blocked. This is known as the "Streisand Effect", after an incident involving the American star Barbra Streisand. In 2003, Streisand attempted to have aerial photographs of her California beachfront property removed from a database documenting coastal erosion. Not only did she lose the case; nearly half a million people downloaded the pictures in question once the lawsuit drew attention to their existence in the database. A similar effect may occur in this case. Indeed, it arguably already has: the ban has hit the headlines across mainstream media, and many humorous tweets and social media posts about it have been generated and circulated in the past few days. This may well result in the absurdity of an endless cycle of repeated take-down orders followed by more content creation, and so on. That cycle will only be broken by a more balanced and effective mechanism for take-downs.