The Grok controversy has exposed gaps in India's tech laws, reviving calls for AI-specific regulation, conditional safe harbour, and stronger safeguards against misuse of generative tools
ChatGPT Health introduces a dedicated space for health queries, allowing users to link medical records and fitness apps, even as OpenAI warns it is not a substitute for medical care
China has proposed new rules to stop AI chatbots from emotionally influencing users or encouraging self-harm, with strict safeguards, human intervention in crises and special protections for minors
Some researchers believe the popular interactive tools should act more like software and less like humans
A balanced level of humanised Artificial Intelligence (AI) design in chatbots enhances customer comfort and trust, while excessive human resemblance can cause discomfort, new research by the Goa Institute of Management (GIM) has found. The research studied customer behaviour towards AI-enabled service agents such as chatbots, digital assistants and service robots. Conducted in collaboration with researchers from the Cochin University of Science and Technology (CUSAT), Kerala, the study has been published in the International Journal of Consumer Studies. With AI reshaping Frontline Service Encounters (FLSEs), the study set out to explore "how consumers perceive and interact with AI in everyday service interactions". To that end, the research team consolidated findings from 157 peer-reviewed articles to identify the key drivers, theories and outcomes shaping consumer-AI interactions. The research team reviewed 44 top-tier …
A recent leak from ChatGPT's Android beta app, along with public signals from OpenAI's Sam Altman, points to the company preparing for an advertising-supported version of its AI assistant
The growing popularity of chatbots makes it critical to understand the safety guardrails built into these systems
Amazon and Flipkart are updating select product listings so they appear better on ChatGPT and other AI chatbots, as more shoppers use these tools to search, compare and buy products online
Leading global AI companies are racing to fix security flaws in chatbots that hackers are exploiting to steal data and launch cyberattacks
AI-powered systems are subsuming jobs done by headset-wearing graduates in technical support, customer care and data management, sparking a scramble to adapt
In this Manager’s Mantra episode, Ashish Tiwari shares his rich marketing experience, which spans multiple sectors. His suggestions can help you land your dream marketing job
The FTC is seeking details from AI chatbot firms like OpenAI, Meta, and Snap on how they handle user data, monitor safety, and manage potential risks from their technology
Meta is hiring contractors fluent in Hindi, Spanish, Portuguese, and Indonesian to design culturally relevant AI chatbots for Instagram, WhatsApp, and Messenger
Anthropic will pay $1.5 bn to settle a lawsuit by authors claiming it used their books without consent to train AI chatbot Claude, in what may be the largest copyright recovery ever
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors, thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson, sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case …
A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering the questions that pose the highest risk to the user, such as requests for specific how-to guidance, but are inconsistent in their replies to less extreme prompts that could still harm people. The study, published Tuesday by the American Psychiatric Association in the medical journal Psychiatric Services, found a need for further refinement in OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The research, conducted by the RAND Corporation and funded by the National Institute of Mental Health, raises concerns about how a growing number of people, including children, rely on AI chatbots for mental health support, and seeks to set benchmarks for how companies answer these questions. "We need some guardrails," said the study's lead author, Ryan McBain, a senior policy researcher at RAND. "One of the things that's ambiguous about chatbots is whether …
China's Tiangong station has deployed Wukong AI, its first large-scale artificial intelligence assistant, to support taikonauts during missions and spacewalks
Bhavish Aggarwal's AI venture cuts its workforce in a second round of layoffs as fundraising falters and Kruti's training nears completion
The controversy over the now-deleted inflammatory posts erupted just days after X CEO Elon Musk claimed major upgrades to Grok, mentioning that the chatbot had improved significantly
AI chatbots often give biased or incomplete health advice when users ask leading or vague questions, warns a new Google-backed study