Monday, January 19, 2026 | 05:23 PM IST
Business Standard

Increasing discomfort: AI and social media need new norms of regulation

The Grok controversy shows how fast-moving AI is outpacing laws, forcing governments to rethink how to curb explicit content without undermining free speech



Business Standard Editorial Comment Mumbai


The global firestorm around the proliferation of pornographic and violent content generated by the Artificial Intelligence (AI) tool Grok and posted on the social-media platform X points to a persistent issue: Legislation and regulation always lag technological change. Most nations have laws designed to limit the creation and dissemination of pornographic and violent content, especially images depicting minors. But those laws did not envisage a situation in which an AI tool could be deployed to generate thousands of explicit photorealistic images every hour and post them on a platform where they can be viewed by hundreds of millions. India, the United Kingdom, Malaysia, and reportedly several European Union nations as well as the United States have started to investigate this phenomenon and have asked X and Grok (both owned and controlled by Elon Musk) to put a stop to it. While X has reportedly responded to enquiries from the Indian government within the time provided, such explicit content continues to be visible, and new content of this nature apparently continues to be created and disseminated. If the government is not satisfied with the response, which is reportedly the case as of now, X could lose its safe-harbour status or even be banned in India, and possibly in other jurisdictions.
AI can be used to generate realistic images of generic individuals with a specified age, physical attributes, race, body type, and attire. It can also be used to alter images (and clone voices) of real people. Most publicly available AI has guardrails to prevent these algorithms from being used to generate pornographic or violent content (although there is a cottage industry in finding ways to bypass those restrictions). On most AI models, producing realistic depictions of violence requires some technical skill and an ability to navigate the Dark Web. So there are barriers to generating and posting such content in meaningful volume. Grok Imagine has far fewer controls than most AI models. Moreover, it is easy to post such images on a platform such as X, which in turn has few guardrails to prevent any user from accessing them. Such technologies can also easily be deployed to manipulate images of real people, to create disturbing images of children, or to portray horrifying acts of violence. Given very high and increasing levels of realism, such images may be indistinguishable from actual photographs to the naked eye.
Many well-known people, including a former partner of Mr Musk, have already been targeted by Grok Imagine users. While the platform says it takes down such images as soon as it can detect them, the sheer volume makes this a difficult task. This obviously violates the consent of the targeted individuals. At the same time, blanket bans would probably be unenforceable and would curtail the free-speech rights of users to post and disseminate legitimate content. Policing such content would also carry a huge cost and, if done indiscriminately, would amount to censorship. AI creators and social-media platforms in principle have a moral duty to self-regulate content, but discharging that duty in practice is difficult. New approaches will have to be found, either by the platforms themselves or by regulators. The current position is certainly far from ideal.