A balanced level of humanised Artificial Intelligence (AI) design in chatbots enhances customer comfort and trust, while excessive human resemblance can cause discomfort, new research by the Goa Institute of Management (GIM) has found. The research studied customer behaviour towards AI-enabled service agents, such as chatbots, digital assistants and service robots. Conducted in collaboration with researchers from the Cochin University of Science and Technology (CUSAT), Kerala, the findings have been published in the International Journal of Consumer Studies. With AI reshaping Frontline Service Encounters (FLSE), the study set out to explore "how do consumers perceive and interact with AI in everyday service interactions". To that end, the research team consolidated findings from 157 peer-reviewed articles to identify the key drivers, theories and outcomes shaping consumer-AI interactions. The research team reviewed 44 top-tier …
A recent leak from ChatGPT's Android beta app, along with public signals from OpenAI's Sam Altman, points to the company preparing for an advertising-supported version of its AI assistant
The growing popularity of chatbots makes it extremely important to understand the safety guardrails on digital systems
Amazon and Flipkart are updating select product listings so they appear better on ChatGPT and other AI chatbots, as more shoppers use these tools to search, compare and buy products online
Leading global AI companies are racing to fix security flaws in chatbots that hackers are exploiting to steal data and launch cyberattacks
AI-powered systems are subsuming jobs done by headset-wearing graduates in technical support, customer care and data management, sparking a scramble to adapt
The FTC is seeking details from AI chatbot firms like OpenAI, Meta, and Snap on how they handle user data, monitor safety, and manage potential risks from their technology
Meta is hiring contractors fluent in Hindi, Spanish, Portuguese, and Indonesian to design culturally relevant AI chatbots for Instagram, WhatsApp, and Messenger
Anthropic will pay $1.5 bn to settle a lawsuit by authors claiming it used their books without consent to train AI chatbot Claude, in what may be the largest copyright recovery ever
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors - thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson - sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case …
A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering questions that pose the highest risk to the user, such as specific how-to guidance. But they are inconsistent in their replies to less extreme prompts that could still harm people. The study, published Tuesday in Psychiatric Services, a medical journal of the American Psychiatric Association, found a need for further refinement in OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The research - conducted by the RAND Corporation and funded by the National Institute of Mental Health - raises concerns about how a growing number of people, including children, rely on AI chatbots for mental health support, and seeks to set benchmarks for how companies answer these questions. "We need some guardrails," said the study's lead author, Ryan McBain, a senior policy researcher at RAND. "One of the things that's ambiguous about chatbots is whether …
China's Tiangong station has deployed Wukong AI, its first large-scale artificial intelligence assistant, to support taikonauts during missions and spacewalks
Bhavish Aggarwal's AI venture cuts its workforce in second round of layoffs as fundraising falters and Kruti's training nears completion
The controversy over the now-deleted inflammatory posts erupted just days after X CEO Elon Musk announced major upgrades to Grok, saying the chatbot had improved significantly
AI chatbots often give biased or incomplete health advice when users ask leading or vague questions, warns a new Google-backed study
Marketers are turning to generative engine optimisation, which uses GenAI to create and optimise content for improved search engine ranking
A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history. "The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added. The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions -- 500 multiple-choice and 500 subjective. Each model responded to 100 …
Social media platform Reddit sued the artificial intelligence company Anthropic on Wednesday, alleging that it is illegally "scraping" the comments of Reddit users to train its chatbot Claude. Reddit claims that Anthropic has used automated bots to access Reddit's content despite being asked not to, and intentionally trained on the personal data of Reddit users without ever requesting their consent. Anthropic said in a statement that it disagreed with Reddit's claims "and will defend ourselves vigorously." Reddit filed the lawsuit Wednesday in California Superior Court in San Francisco, where both companies are based. "AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data," said Ben Lee, Reddit's chief legal officer, in a statement Wednesday. Reddit has previously entered licensing agreements with Google, OpenAI and other companies to enable them to train their AI systems on Reddit commentary. …
Elon Musk's artificial intelligence company said an unauthorised modification to its chatbot Grok was the reason it kept talking about South African racial politics and the subject of "white genocide" on social media this week. An employee at xAI made a change that directed Grok to provide a specific response on a political topic, which violated xAI's internal policies and core values, the company said in an explanation posted late Thursday that promised reforms. A day earlier, Grok had kept posting publicly about "white genocide" in South Africa in response to users of Musk's social media platform X who asked it a variety of questions, most having nothing to do with South Africa. One exchange was about the streaming service Max reviving the HBO name. Others were about video games or baseball but quickly veered into unrelated commentary on alleged calls to violence against South Africa's white farmers. Grok was echoing views shared by Musk, who was born in South Africa and frequently opines …