Earlier this month, Anthropic, after testing another AI tool, Claude Mythos Preview, decided that it was too dangerous to release to the public. Reason: Mythos could remotely identify and exploit cybersecurity vulnerabilities, and could thus be used by dangerous criminals to hold vulnerable entities to ransom. Anthropic said it is sharing the tool with large corporations so they can identify their vulnerabilities before similar tools are developed by others.
One should think of April 2026 as a moment similar to J Robert Oppenheimer’s testing of the atomic bomb under the Manhattan Project in 1945, which reminded him of Sri Krishna’s dark message in the Gita. When Arjuna asks him who he really is, Sri Krishna replies in Chapter 11, Verse 32: “I am mighty Time, the source of destruction that comes forth to annihilate the worlds. Even without your participation, the warriors arrayed in the opposing army shall cease to exist.”
In less than three-and-a-half years after the release of ChatGPT, we have arrived at a critical juncture in AI’s development: Even before it builds anything real, AI will empower those who want to destroy our world. It is the new WMD. The D in WMD could stand for anything from Disruption (of jobs) to Destruction (AI’s use in warfare) to Detection (mass surveillance) to Deception (the generation of deepfakes that could fool anyone).
In the hands of the government, the atomic bomb was used to destroy two cities in Japan. Now we depend on Anthropic’s moral qualms to save us from a potent destroyer of digital integrity, on which much of world commerce depends. It is also notable that Anthropic is giving this tool largely to American tech giants like Amazon, Apple, Broadcom, Cisco, Google, Microsoft and Nvidia to fix their cyber vulnerabilities, which implies that this is, shorn of the moral grandstanding, a tool meant to preserve the American tech edge.
While China and other tech powers will surely develop their own Mythos — assuming they haven’t already done so — Europe, India, Asia (ex-China), Africa and the rest of the tech laggards of the world will be reduced to mere consumers of tech, subservient to America’s geopolitical priorities. What Mythos demonstrates is that everyone can now be threatened. In less than a year, bad actors, state or non-state, may get to lay their hands on a similar tool.
Just in case you think American corporations are wringing their hands in anguish at the thought of putting such lethal tools in the hands of bad actors, forget it. When it comes to the American military-industrial complex and the Deep State, conscientious objectors to the military use of AI, especially its use without human oversight, can simply be set aside. To go back to Oppenheimer: when he opposed the building of the hydrogen bomb, he was quickly labelled a Communist and sidelined. The military-industrial complex does not care for dissent that stands in the way of its power. In the current context, the US Department of War (formerly Defense) has labelled Anthropic a “supply-side risk”, and new players are eager to fill the gap: A Reuters report says small AI startups hope to gain from Anthropic’s exit. Uncle Sam has the ability to make any supplier of AI tools for defence rich.
Mythos is yet another wake-up call for India: It cannot afford to remain merely a supplier of tech labour for American corporations building their platforms. We have to build our own platforms. It will take a lot of money and effort, but if we do not incentivise this process, it will never happen. We must remember that we are more cyber-vulnerable than many other countries.
But how exactly do we incentivise sovereign tech?
First, it is not about companies lacking money; it is about misallocation. Take one example. Between 2014 and today, according to a Crisil report, nearly ₹1.2 trillion of the profits of listed companies has gone towards funding CSR (corporate social responsibility) projects. This is stupidity disguised as do-goodism. Individuals and corporations already pay taxes to enable social welfare projects; expecting them to fund more welfare directly is folly.
Second, India’s tech companies pay out thousands of crores to investors as dividends and buybacks. In 2024-25, the top three software services companies, TCS, Infosys and HCL Tech, had more than ₹1 trillion of free cash flows. Even if half that money were paid out to investors, what is to stop the government from offering tax benefits for using the bulk of the balance to build sovereign platforms and products?
The problem is not a lack of funding for building Indian intellectual property. It is a lack of appetite for taking real risks to create something larger than quarterly profits.
The hegemonic West will use its huge lead in AI tech to increase its share of global power and reduce us to tech coolies. If India really wants to change the world order to make it less one-sided, it has to put its money where its mouth is. To be at the high table, you have to be able to show what you bring to the table, not just that you have an appetite. If we are only bringing cheap labour to the table, we will be treated as such.
The lesson to learn from the Iran-US war is this: It takes willpower, not just money and high tech, to win a war. Also note the late Intel boss Andy Grove’s advice: Only the paranoid survive. By merely surviving and giving it back in spades, Iran has shown what national will can accomplish. It has demonstrated clarity of vision on what it wants: to be its own master. This is what India must aspire to.
The author is a senior journalist