The Trump administration is following through on its threat to designate artificial intelligence company Anthropic a supply chain risk, an unprecedented move that could force other government contractors to stop using the company's AI chatbot, Claude.

The Pentagon said in a statement Thursday that it has "officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately." The decision appeared to shut the door on further negotiation with Anthropic, nearly a week after President Donald Trump and Defence Secretary Pete Hegseth accused the company of endangering national security.

Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns that the company's products could be used for mass surveillance of Americans or for autonomous weapons. The San Francisco-based company didn't immediately respond to a request for comment.
A balanced level of humanised Artificial Intelligence (AI) design in chatbots enhances customer comfort and trust, while excessive human resemblance can cause discomfort, new research by the Goa Institute of Management (GIM) has found.

The research studied customer behaviour towards AI-enabled service agents, such as chatbots, digital assistants and service robots. Conducted in collaboration with researchers from the Cochin University of Science and Technology (CUSAT), Kerala, the findings have been published in the International Journal of Consumer Studies.

With AI reshaping Frontline Service Encounters (FLSE), the study set out to explore how consumers perceive and interact with AI in everyday service interactions. To that end, the research team consolidated findings from 157 peer-reviewed articles to identify the key drivers, theories and outcomes shaping consumer-AI interactions. The research team reviewed 44 top-tier .