Modern AI systems good but inconsistent: Google DeepMind CEO Demis Hassabis
Demis Hassabis says current AI systems lack consistency, creativity and the ability to keep learning after deployment
Google DeepMind | Photo: Bloomberg
Artificial general intelligence (AGI), AI that can perform any intellectual task a human can, is still about five to eight years away, and systems will need further training before they can learn on their own, said Demis Hassabis, chief executive officer and co-founder of Google DeepMind.
Hassabis, one of the world’s best-known AI researchers, said the bar for achieving AGI remains high because such systems would need to integrate the full range of human cognitive capabilities.
“Today’s systems are impressive but they still have flaws and one of the major ones is inconsistency across tasks,” he said on Wednesday in a discussion with Balaraman Ravindran, head of the department of data science and AI at Indian Institute of Technology Madras, at the AI Impact Summit in New Delhi.
Hassabis said current systems lack general learning capabilities — the ability to continue learning after deployment. “We train these systems, freeze them and put them out in the world. What you would like is for these systems to continuously learn from experience, from the context they are in, and personalise themselves to the situation and the tasks you have for them. They do not do that now. They also have difficulty in doing long-term coherent planning but are good at short-term planning,” he said.
This results in what Hassabis and Ravindran described as “jagged intelligence” — systems that perform exceptionally well in some areas but poorly in others. “They may win gold medals in mathematics Olympiads but also make mistakes in elementary mathematics. A true general intelligence system should not have that sort of jagged intelligence,” he said.
When asked whether humans also display such jagged intelligence, Hassabis disagreed. “Humans are not jagged in that way. If you are an expert, you will not make a mistake in trivial problems and you will always find a way to work through it. The general foundation models are poor at playing chess, almost at amateur levels,” he said.
He also highlighted the issue of creativity, describing it as a major gap in current artificial intelligence systems. “AI is useful as a scientific tool or system for solving specialised areas, just as AlphaFold does with protein structures. But what separates great from good scientists is creativity, and their sense of what is a good question and hypothesis. It is always harder to come up with the right question and hypotheses than to solve a conjecture. Systems do not have that capability,” he said.
Hassabis also pointed to the risks that accompany such a powerful technology. “AI is a dual-purpose technology. It can be most transformative, and these international summits are needed to discuss how to mitigate those risks through international cooperation and dialogue,” he said.
First Published: Feb 18 2026 | 11:15 AM IST