Google DeepMind CEO and Co-founder Demis Hassabis on Wednesday said current artificial intelligence (AI) systems still lack consistency, continual learning and long-term planning, the key attributes required to reach Artificial General Intelligence (AGI).
In a fireside chat with Balaraman Ravindran, head of the Department of Data Science and AI at IIT Madras, at the AI Impact Summit in New Delhi, Hassabis said that while today’s systems are “very impressive”, they fall short of exhibiting the full range of human cognitive capabilities.
“If we talk about solving intelligence, meaning building artificial general intelligence, then my measure for that is having a system that can exhibit all the cognitive capabilities humans can, including in creativity, long-term planning, things like that,” he said, adding that AGI could be on the horizon in the next five to eight years.
‘Jagged intelligences’
Hassabis described current systems as “jagged intelligences”, meaning systems that are highly capable in some domains but inconsistent in others.
“Today’s systems can get gold medals in the International Maths Olympiad, really hard problems, but sometimes can still make mistakes on elementary maths, if you pose the question in a certain way,” he said. “A truly general intelligent system shouldn’t have that kind of jaggedness.”
He argued that human experts do not typically fail at simpler versions of problems within their area of expertise, highlighting the need for greater consistency in AI systems.
Why continual learning is still missing
Another major limitation, he said, is the lack of continual learning. Current models are trained and then “frozen” before deployment. What is needed, according to him, are systems that can learn online from experience, adapt to context and personalise themselves to users and tasks.
He also pointed to weaknesses in long-term coherent planning, saying today’s systems can plan over short horizons but not over extended periods in the way humans can.
Creativity remains a high bar
Hassabis said creativity remains one of the highest benchmarks for AGI.
AI tools such as AlphaFold have proven effective at solving specialised scientific problems, but he argued that identifying the right question or hypothesis, rather than merely solving a given problem, is a hallmark of higher intelligence.
“It’s much harder to come up with the right question and the right hypothesis than it is to solve a conjecture,” he said.
‘A new era for science’
Despite these limitations, Hassabis said AI could usher in a “new golden era” of scientific discovery over the next decade.
He reiterated his long-standing view that AI could become “maybe the ultimate tool for science”, given its strength in identifying patterns in vast datasets.
Systems such as AlphaFold, which predicts protein structures, are early examples of how AI can accelerate research in drug discovery, materials science and climate-related challenges.
In the near term, he expects AI to function as a powerful assistant for scientists. Over time, as systems become more autonomous, they could act as “co-scientists”, although he cautioned that such capabilities may still be years away.
He also expressed particular enthusiasm for multidisciplinary science, where AI tools can help researchers connect insights across multiple domains.
Robotics and the ‘agentic era’
Hassabis said the field is entering a more “agentic era”, in which AI systems become increasingly autonomous.
He expects significant breakthroughs in robotics within the next two to three years, driven by foundation models capable of understanding vision and the physical world.
Both humanoid and non-humanoid robots are likely to emerge, he said, although more research is needed before widespread deployment.
Safety and dual-use risks
Hassabis cautioned that AI is a dual-use technology and could be among the most transformative technologies in human history.
He identified two broad categories of risk: misuse by bad actors — including individuals and nation states — and technical risks arising from increasingly autonomous systems.
Bio and cyber risks, he said, are among the most immediate concerns. As AI systems grow more capable in cyber domains, he argued, cyber defences must remain stronger than offensive capabilities.
He called for international dialogue and globally agreed minimum standards to mitigate risks, noting that AI, as a digital technology, cannot be contained by borders.
Opportunity for the Global South
On the role of countries such as India, Hassabis said young populations have unprecedented access to cutting-edge AI tools, often within months of their development in frontier labs.
He encouraged students and entrepreneurs to become highly proficient in these tools, likening the moment to the early days of the internet or mobile computing.
“The generation that grows up native with that technology will end up doing incredible things that we can only dream of right now,” he said.