Business Standard

Global coordination, clear standards key as AI scales: Experts at AI Summit

As deepfakes, misinformation and AI system failures rise, global experts call for coordinated oversight, standardised reporting and faster response frameworks to address growing risks


Elham Tabassi, Senior Fellow at Brookings Institution and Caio Vieira Machado of Berkman Klein Center at Harvard University at the AI Impact Summit in New Delhi. (Photo: YouTube/@IndiaAI)

Rahul Goreja New Delhi


With the rapid mainstreaming of artificial intelligence (AI) tools and their growing accessibility across sectors, AI-related incidents, ranging from deepfakes and misinformation to system failures and cybersecurity breaches, are rising sharply. Experts at the AI Impact Summit in New Delhi called for stronger global systems to monitor and respond to AI incidents, warning that technology is evolving faster than policy frameworks.
 
Marko Grobelnik, digital champion of Slovenia at the European Commission, said the rapid advances in generative tools over the past six months have led to the creation of high-quality deepfakes. "The cost is close to zero," he said, adding that such generative AI capabilities can be misused, particularly in situations such as elections.
 
 
Referring to the Organisation for Economic Co-operation and Development (OECD) taxonomy of AI incidents and hazards, Grobelnik said incidents are classified into 14 groups, with patterns fluctuating over time. "Before elections, you get a spike," he said. 
 
Grobelnik further said that one of the major gaps in current reporting and monitoring systems is the lack of clear causality analysis. "We need to know why the accident happened. At this moment, we don’t have any notion of causality," he said, adding that the focus should be on a framework covering detection, analysis, reporting and a feedback loop.
 
Grobelnik also said that rapid response to such incidents is critical since AI systems are becoming more complex.
 
Elham Tabassi, senior fellow at the Brookings Institution and former chief AI advisor at the US National Institute of Standards and Technology (NIST), said governance efforts remain overly focused on pre-deployment checks rather than post-deployment monitoring.
 
"The tech is moving much faster for policy to keep up with it. Most risk management or governance looks at pre-deployment. But the majority of incidents we have to worry about cannot be reliably predicted before deployment. We don’t quite know how to do the evaluations the right way," she said.
 
Tabassi said incident reporting must generate "results and technical facts that are relevant in decision-making" and called for standardised reporting frameworks. "We need standardised definitions of incidents and accidents," she said, adding that flexible standards could support multiple regulatory regimes if designed properly.
 
Hugo Valadares of Brazil’s Department of Science, Technology and Digital Innovation said his country’s national AI plan is built around five pillars and 54 actions, with more being added as the technology evolves. Key focus areas include human resources, data sovereignty and supercomputing, with cybersecurity now emerging as an additional dimension.
 
"We are working in all dimensions together," he said.
 
With national elections due this year, Brazil's President Luiz Inácio Lula da Silva is "very concerned about fake news and generative AI", Valadares said. He also stated that Brazil had recently faced an "unprecedented attack" on its systems and is investing heavily across sectors, including acquiring large-scale AI systems to support research.
 
“We need to take care of children in this dangerous environment,” he added.


First Published: Feb 18 2026 | 1:20 PM IST
