A balanced level of humanised Artificial Intelligence (AI) design in chatbots enhances customer comfort and trust, while excessive human resemblance can cause discomfort, new research by the Goa Institute of Management (GIM) has found. The study examined customer behaviour towards AI-enabled service agents, including chatbots, digital assistants and service robots. Conducted in collaboration with researchers from the Cochin University of Science and Technology (CUSAT), Kerala, the findings have been published in the International Journal of Consumer Studies. With AI reshaping Frontline Service Encounters (FLSE), the study set out to explore how consumers perceive and interact with AI in everyday service interactions. To that end, the research team consolidated findings from 157 peer-reviewed articles to identify the key drivers, theories and outcomes shaping consumer-AI interactions. The research team reviewed 44 top-tier ...
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company used pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement, which accounts for the $1.5 billion total. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors - thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson - sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case ...
Tech companies looking to sell their artificial intelligence technology to the federal government must now contend with a new regulatory hurdle: proving their chatbots aren't "woke." President Donald Trump's sweeping new plan to counter China in the race for global dominance in AI promises to cut regulations and cement American values into the AI tools increasingly used at work and at home. But one of Trump's three AI executive orders signed Wednesday, the one barring "woke AI" from the federal government, marks the first time the U.S. government has explicitly tried to shape the ideological behavior of AI. Several leading providers of the AI language models targeted by the order, products like Google's Gemini and Microsoft's Copilot, have so far been silent on Trump's anti-woke directive, which still faces a study period before it is written into official procurement rules. While the tech industry has largely welcomed Trump's broader AI plans, the anti-woke order forces the industry to leap into ...
A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history. "The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," said first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added. The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots) process information before responding to 1,000 benchmark questions -- 500 multiple-choice and 500 subjective. Each model responded to 100 ...
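For readers curious how such per-prompt measurements can be made in practice, below is a minimal sketch, not the study's actual harness, using the open-source codecarbon library to track emissions while a locally hosted model answers prompts of differing complexity. The model choice ("gpt2") and the two prompts are illustrative assumptions, not details from the paper.

# A minimal sketch, assuming a local Hugging Face model; it measures
# per-prompt CO2 emissions with the codecarbon library. The model
# ("gpt2") and the prompts below are placeholder assumptions.
from codecarbon import EmissionsTracker
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

prompts = {
    "simple (history)": "In which year did World War II end?",
    "complex (algebra)": "Explain why every finite abelian group is a "
                         "direct product of cyclic groups.",
}

for label, prompt in prompts.items():
    tracker = EmissionsTracker(log_level="error")  # suppress info logging
    tracker.start()
    generator(prompt, max_new_tokens=256)  # generation is what burns energy
    kg_co2 = tracker.stop()  # codecarbon returns kg of CO2-equivalent
    print(f"{label}: {kg_co2:.6f} kg CO2eq")

The gap the study reports follows from the same mechanism this sketch exposes: reasoning-heavy prompts produce longer generations, which means more compute time per answer and therefore higher measured emissions.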