The Trump administration is following through with its threat to designate artificial intelligence company Anthropic as a supply chain risk, an unprecedented move that could force other government contractors to stop using the AI chatbot Claude. The Pentagon said in a statement Thursday that it has "officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately."

The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after President Donald Trump and Defence Secretary Pete Hegseth accused the company of endangering national security. Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance of Americans or autonomous weapons. The San Francisco-based company didn't immediately respond to a request
The short-lived rise of Moltbook, a social network built for AI agents, reveals more about human fascination with artificial intelligence than about the arrival of truly autonomous machines
Moltbook, a new social media platform, is built only for AI agents, while humans are allowed to quietly watch
St. Clair, a conservative influencer, said some users asked the chatbot to change her photos without her permission, and Grok obliged
The Elon Musk-owned platform said the restriction will apply to all users, including paid subscribers, after the IT ministry sought action over misuse of AI-generated images
The Grok controversy has exposed gaps in India's tech laws, reviving calls for AI-specific regulation, conditional safe harbour, and stronger safeguards against misuse of generative tools
ChatGPT Health introduces a dedicated space for health queries, allowing users to link medical records and fitness apps, even as OpenAI warns it is not a substitute for medical care
China has proposed new rules to stop AI chatbots from emotionally influencing users or encouraging self-harm, with strict safeguards, human intervention in crises and special protections for minors
Some researchers believe the popular interactive tools should act more like software and less like humans
A balanced level of humanised Artificial Intelligence (AI) design in chatbots enhances customer comfort and trust, while excessive human resemblance can cause discomfort, new research by the Goa Institute of Management (GIM) has found. The research studied customer behaviour towards Artificial Intelligence-enabled service agents, including chatbots, digital assistants and service robots. Conducted in collaboration with researchers from Cochin University of Science and Technology (CUSAT), Kerala, the findings have been published in the International Journal of Consumer Studies.

With AI reshaping Frontline Service Encounters (FLSE), the study set out to explore "how do consumers perceive and interact with AI in everyday service interactions". To that end, the research team consolidated findings from 157 peer-reviewed articles to identify the key drivers, theories, and outcomes shaping consumer and AI interactions. The research team reviewed 44 top-tier
A recent leak from ChatGPT's Android beta app, along with public signals from OpenAI's Sam Altman, points to the company preparing for an advertising-supported version of its AI assistant
The growing popularity of chatbots makes it extremely important to understand the safety guardrails on digital systems
Amazon and Flipkart are updating select product listings so they appear better on ChatGPT and other AI chatbots, as more shoppers use these tools to search, compare and buy products online
Leading global AI companies are racing to fix security flaws in chatbots that hackers are exploiting to steal data and launch cyberattacks
AI-powered systems are subsuming jobs done by headset-wearing graduates in technical support, customer care and data management, sparking a scramble to adapt
In this Manager’s Mantra episode, Ashish Tiwari shares his rich marketing experience spanning multiple sectors. His suggestions can help you land your dream marketing job
The FTC is seeking details from AI chatbot firms like OpenAI, Meta, and Snap on how they handle user data, monitor safety, and manage potential risks from their technology
Meta is hiring contractors fluent in Hindi, Spanish, Portuguese, and Indonesian to design culturally relevant AI chatbots for Instagram, WhatsApp, and Messenger
Anthropic will pay $1.5 bn to settle a lawsuit by authors claiming it used their books without consent to train AI chatbot Claude, in what may be the largest copyright recovery ever
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement.

"As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors - thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson - sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case