Development dilemma: Cheap replication of AI models is raising questions
The concern has intensified with the rise of DeepSeek and other high-performance Chinese models at significantly lower prices
OpenAI, Anthropic PBC, and Alphabet Inc's Google are reported to be working together to tackle the problem of unauthorised replication of their artificial-intelligence (AI) systems. They are coordinating through the Frontier Model Forum, a non-profit industry body originally set up with Microsoft to promote the safe and responsible development of advanced AI models.

OpenAI and Google Gemini are banning accounts that violate terms of service and proactively removing users who appear to be attempting to distil models through obfuscated third-party routers. However, the issue at hand is not just technological misuse but a deeper shift in the economics and geopolitics of AI, one that has direct implications for countries like India.

At the centre of the debate is distillation, a machine-learning compression technique in which a compact "student" AI model is trained to replicate the behaviour and performance of a larger, more complex "teacher" model. It retains high accuracy while reducing model size, speeding up inference, and lowering computational costs. But when used without authorisation to replicate proprietary systems, it becomes what the big tech firms call adversarial distillation. For the top generative AI companies based in the United States (US), this is a form of free-riding, allowing competitors to build similar products without bearing the massive costs of innovation.
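The core of the technique can be sketched in a few lines. The following is a minimal, illustrative Python example, not any company's actual method: the student is trained to minimise the divergence between its output distribution and the teacher's temperature-softened outputs (the function names, temperature value, and logits here are hypothetical).

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: a higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong answers.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between the teacher's softened output distribution (p)
    # and the student's (q) -- the quantity the student minimises in training.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

teacher = [3.0, 1.0, 0.2]
# A student that matches the teacher exactly incurs zero loss;
# a mismatched student incurs a positive penalty, pushing it to imitate.
print(distillation_loss(teacher, [3.0, 1.0, 0.2]))       # 0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]) > 0)   # True
```

In practice the student also trains on ordinary labelled data, but it is this imitation term, applied at scale to a proprietary model's outputs, that the US firms characterise as adversarial distillation when done without authorisation.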
