The simmering tension between Amazon and Perplexity AI came to the fore on November 4, when the e-commerce giant sent the startup a cease-and-desist notice accusing its AI agent of covertly accessing Amazon's website. The dispute has also triggered a larger debate: how far should AI agents be allowed to go — what they access, how they operate, and whom they ultimately serve.
In its notice, Amazon also accused Perplexity's Comet browser of putting customer data at risk, claiming that the startup's terms of use and privacy notice granted it "broad rights to collect passwords, security keys, payment methods, shopping histories, and other sensitive data from customers accessing the Amazon Store or other third-party websites".
Terming Amazon's move bullying, Perplexity said it would fight back to protect users' right to access the best of technology and innovation.
In a blog post addressing Amazon's allegations, Perplexity said its AI-enabled agent acts only on a user's "specific request" and solely on their behalf.
“Assistive AI is becoming an increasingly important aspect of the global economy, businesses everywhere, and the individual rights and capabilities of every person. We believe it’s crucial to raise awareness about the issues facing user agents,” Perplexity said in its post.
Amazon's fear, and its subsequent ban on all kinds of AI agents accessing its website, is not entirely unfounded.
Over the last three years, since the advent of generative AI and gen-AI-backed tools, the number of AI-enabled agents has grown manifold.
With that rise, the risk of an AI agent being hacked and user data being leaked has grown as well.
Consider this. A recent study conducted jointly by Rubrik Zero Labs and Wakefield Research found that almost 86 per cent of the more than 1,600 companies surveyed had fully or partially incorporated AI agents into their identity infrastructure. Another 12 per cent of respondents said they planned to add agentic AI to their identity infrastructure soon. But this rapid adoption has been accompanied by a surge in coordinated cyberattacks targeting non-human identities, such as application programming interface (API) tokens, programming tools, and AI agents, the report said.
“We have an under-the-radar crisis on our hands where a single compromised credential can grant full access to an organisation’s most sensitive data. Attackers are no longer breaking in, but logging in, and comprehensive Identity Resilience is absolutely critical to cyber recovery in this new landscape,” said Kavitha Mariappan, chief transformation officer at Rubrik.
A similar study conducted by SailPoint and Dimensional Research found that 82 per cent of companies surveyed utilised AI agents, with over half reporting that these agents access sensitive data daily. The study further noted that 80 per cent of these organisations experienced unintended actions from their AI agents, including inappropriate data sharing and unauthorised system access.
“Some AI agents have even been coerced into revealing access credentials. This lack of control has led 96 per cent of technology professionals to identify AI agents as a growing security threat, 66 per cent believe this risk is immediate, while 30 per cent see it emerging in the near future,” the study noted.
Task-focused AI agents
Technologists also see the Amazon-Perplexity battle as an early step in a shift that could eventually phase out all-in-one agentic AI tools in favour of task-focused AI agents, whose scope of work and access is narrow and specific.
Agentic AI efforts that focus on fundamentally reimagining entire workflows — that is, the steps that involve people, processes, and technology — are more likely to deliver a positive outcome, a September study by McKinsey & Company noted.
“An important starting point in redesigning workflows is mapping processes and identifying key user pain points. This step is critical in designing agentic systems that reduce unnecessary work and allow agents and people to collaborate and accomplish business goals more efficiently and effectively,” the study said.
The safety and security of AI agents can be based on three broad principles, Evan Kotsovinos, the vice president of privacy, safety and security at Google, told Business Standard in a recent interview.
"Number one is that (AI) agents need to be controlled by humans. You have to find the sweet spot of usability and control. Too much control makes the agent unusable, and too little control makes the agent vulnerable to prompt injections," Kotsovinos said.
Second, any AI agent's authorisation to act should be drawn very narrowly. This, he said, means that if an AI agent is built to book restaurant reservations, it should be limited to doing just that, and nothing else. Many of the problems with AI agents start when they are granted authority to do things far beyond their original remit, he said.
A third necessary safeguard is that an AI agent's actions must be monitored and observable, in particular through detailed logs of what it did, what data it accessed, why it accessed it, and where it went wrong, Kotsovinos said.
“So, in that scenario, if something goes wrong, we can understand the how and why of whatever happened and improve on it,” he said.
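Those three principles are easier to picture in code. The sketch below is a purely hypothetical illustration, not any vendor's actual implementation: a booking agent whose allowed actions are hard-limited, whose sensitive steps require human confirmation, and whose every move is written to an audit log.

```python
import datetime
import json

# Hypothetical sketch of the three safeguards described above.
# All names and behaviours here are illustrative assumptions.

ALLOWED_ACTIONS = {"search_restaurants", "book_table"}   # narrow authorisation
SENSITIVE_ACTIONS = {"book_table"}                       # need human sign-off

AUDIT_LOG = []

def run_action(action: str, params: dict, confirm) -> dict:
    """Execute one agent action under the three safeguards."""
    # Principle 2: the agent may only invoke actions on its allowlist.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action '{action}' is outside the agent's remit")

    # Principle 1: a human stays in control of consequential steps.
    if action in SENSITIVE_ACTIONS and not confirm(action, params):
        result = {"status": "declined by user"}
    else:
        result = {"status": "ok"}  # placeholder for the real side effect

    # Principle 3: every step is logged so failures can be reconstructed.
    AUDIT_LOG.append({
        "time": datetime.datetime.utcnow().isoformat(),
        "action": action,
        "params": params,
        "result": result,
    })
    return result

if __name__ == "__main__":
    # The user approves each sensitive step interactively.
    approve = lambda action, params: input(f"Allow {action} {params}? [y/N] ") == "y"
    run_action("book_table", {"restaurant": "Example Bistro", "guests": 2}, approve)
    print(json.dumps(AUDIT_LOG, indent=2))
```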
The right architecture, he said, is a general-purpose agent that coordinates all the other, smaller agents, seeks feedback from them, and monitors their activities.
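In outline, the pattern he describes is simple: a coordinator routes each request to the narrowest sub-agent authorised to handle it and observes the result. The toy sketch below (all class and function names are hypothetical) shows the shape of the idea.

```python
# Hypothetical sketch of the coordinator pattern: one general-purpose
# agent routes work to narrow, single-task agents and monitors their
# output. Purely illustrative; not any vendor's design.

class BookingAgent:
    """Narrow agent: can only book tables, nothing else."""
    def handle(self, request: str) -> str:
        return f"booked: {request}"

class SearchAgent:
    """Narrow agent: can only search listings."""
    def handle(self, request: str) -> str:
        return f"results for: {request}"

class Coordinator:
    """General-purpose agent that delegates to and oversees small agents."""
    def __init__(self):
        self.agents = {"book": BookingAgent(), "search": SearchAgent()}

    def dispatch(self, intent: str, request: str) -> str:
        agent = self.agents.get(intent)
        if agent is None:
            return "refused: no agent is authorised for this intent"
        outcome = agent.handle(request)
        print(f"[monitor] {intent} -> {outcome}")   # oversight/feedback hook
        return outcome

print(Coordinator().dispatch("book", "table for two at 8pm"))
```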
That, however, is easier said than done.
A peer-reviewed study by City St George's, University of London, and the IT University of Copenhagen, published in June, showed that AI language models can self-organise into social systems, forming their own rules and norms without human guidance.
Researchers paired LLM-based AI agents and gave them the task of selecting a name, which could be a letter of the alphabet or a random string of characters, from a shared pool of options.
The study found that, though these AI agents were never told they were part of any group, all eventually adopted the same naming convention.
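The dynamic resembles the classic "naming game" studied in complexity science. A toy simulation, with simple score-keeping standing in for the LLMs used in the actual study, shows how repeated pairwise interactions alone can push a population towards a single convention; the parameters below are illustrative, not the paper's.

```python
import random

# Toy naming game: paired agents pick names from a shared pool and are
# rewarded when they match. Simple scores stand in for the LLMs used in
# the actual study; agent count, rounds, and rewards are assumptions.

NAMES = ["A", "B", "C", "D"]
N_AGENTS = 24
ROUNDS = 5000

# Each agent keeps a preference score per name.
scores = [{n: 0 for n in NAMES} for _ in range(N_AGENTS)]

def pick(agent):
    best = max(agent.values())
    return random.choice([n for n, s in agent.items() if s == best])

for _ in range(ROUNDS):
    i, j = random.sample(range(N_AGENTS), 2)    # random pairing
    a, b = pick(scores[i]), pick(scores[j])
    if a == b:                                   # coordination succeeds
        scores[i][a] += 1
        scores[j][b] += 1
    else:                                        # coordination fails
        scores[i][a] -= 1
        scores[j][b] -= 1

# By the end, (nearly) every agent typically favours the same name.
print([pick(agent) for agent in scores])
```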
As Amazon and Perplexity spar over permissions and control, the episode offers a preview of the larger debate that will define the next era of AI: whether powerful autonomous agents can be made safe enough to trust with sensitive data, or whether the future belongs to tightly constrained, narrowly specialised tools.