Cancer cells use glucose and other nutrients from one’s body to grow, spread, and eventually kill the person. If one starves them of these nutrients, could the disease stop spreading or even reverse? That thinking is driving much research into the use of fasting to stop cancer. It is also the logic media and entertainment firms are using in their fight against artificial intelligence’s (AI’s) appropriation of their work.
Much like cancer, generative AI gets its “intelligence” from everything that has been published, written, recorded, and made available online and offline. Without all that knowledge, information, music, film, books, and newspapers, there would be nothing to train the AI on. That is why all the cheering around it seems so fantastical.
In a world where information and knowledge are the fuel that runs entire economies, we are happily handing the collective knowledge of humankind to a handful of companies. These firms then sell or offer the same knowledge back to us in bits and pieces, through apps or summaries.
There is an inherent conflict of interest here: AI is destroying the very thing it feeds on. A senior publisher told me recently that he can train an AI tool to write like, say, Salman Rushdie or Ian McEwan. Why on earth should Mr Rushdie, or for that matter anybody doing original work, bother to create anything then? And if all creators stopped creating, what would it mean for the “development” of AI?
Imagine a world where human beings have stopped producing anything original and are completely dependent on AI for knowledge, for the very function of thinking. It is the world of The Matrix, a 1999 film that envisaged human beings as battery cells in an apocalyptic world ruled by machines.
Snap back to the present. The fact is that AI is hugely useful, even transformational, for people and businesses. From dubbing and subtitling to cleaning up workflows, it has cut costs and improved efficiencies in the media industry. Ditto for a range of other industries; think of the bots that answer your questions on travel or hotel sites. But a travel company training bots on its own data is very different from an app that charges to help with research, write a script, or compose a piece of music.
Take news. Between June 2024 and June 2025, the top 10 English-language publications in India lost anywhere from 20 to 30 per cent of their online audience, according to Comscore. Overall, the number of people seeking news online fell by around five per cent in 2024 compared with 2023, while the time spent on news dropped by a third. Google’s AI summaries, OpenAI’s ChatGPT, and Perplexity serve up information gleaned from existing news and media sites, ensuring that people never go to the source. You could argue that this would be fine if news firms were compensated for the use of their copyrighted material. They aren’t.
The AI ecosystem is made up of startups; how can they pay? That is the argument the companies, and even regulators, make. OpenAI, now nearing $500 billion in valuation, hawks ChatGPT at $20 (₹1,760) a month in the United States and at ₹399 a month in India. Yet Indian publishers’ attempts to even engage with the San Francisco-based firm have been difficult: talk of compensation remains distant. Incidentally, the same company will reportedly pay $250 million to the US-based News Corp over five years for the use and display of content from The Wall Street Journal, The Times, and other titles. Google, for its part, maintains that its use of publicly available information for training AI models is covered by the legal principle of fair use.
This complete denial that AI is being built on existing copyrighted data, a position governments and businesses support, is worrying. It is particularly so for the creative arts. A report on copyright and AI released in May this year by the US Copyright Office said: “Making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.” In the same month, US President Donald Trump fired Copyright Office Director Shira Perlmutter. And in two cases brought by authors, American judges have already ruled in favour of AI firms.
In India, the battle has just begun. Late last year, Asian News International, or ANI, filed a lawsuit against OpenAI in the Delhi High Court. It alleges that the AI company used ANI’s content to train its models and generated false information attributed to the news agency. The lawsuit was soon joined by the Digital News Publishers Association, or DNPA, which represents 22 mainstream publishers (including this newspaper), and the Indian Music Industry, or IMI, the body representing music labels. Several publishers (The New York Times) and authors (George R R Martin, John Grisham) are going the legal route too. However, their contention that AI violates copyright law invites ridicule from an ecosystem that firmly believes that whatever Big Tech is doing with AI is correct.
Besides the legal fight, the creative ecosystem is fighting back with technology. Services such as Cloudflare and TollBit, which block the AI crawlers and bots that scrape data from websites, or make them pay for access, are seeing rising usage. More editorially driven news organisations are going behind paywalls. For creative businesses, then, the “starve the AI” route seems logical.
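At its simplest, that starvation diet is a few lines of configuration. Here is a minimal sketch of a robots.txt file asking the major AI crawlers to stay away, using user-agent tokens the companies themselves have published (GPTBot for OpenAI, Google-Extended for Google’s AI training, CCBot for Common Crawl, PerplexityBot for Perplexity):

    # robots.txt: ask AI training crawlers not to scrape this site
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /

    # Ordinary search crawlers remain welcome
    User-agent: *
    Allow: /

The catch is that compliance with robots.txt is voluntary, which is why services such as Cloudflare and TollBit exist: they enforce the same rules at the network level, turning away or tolling crawlers that ignore polite requests.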
There is another scenario: once all original content has been guzzled and regurgitated and the firms that produced it have died, AI will be feeding on AI-generated content. It will be interesting to ask, of an AI perhaps, what it will produce then.
https://x.com/vanitakohlik