AI delivering real-world scientific outcomes: Google DeepMind exec
Google DeepMind's Pushmeet Kohli discusses AI's role in science, India's place in global research, and DeepMind's footprint in the country
Pushmeet Kohli, vice-president of research at Google DeepMind
Last Updated: Feb 11, 2026 | 12:17 AM IST
Pushmeet Kohli, vice-president of research at Google DeepMind, believes artificial intelligence (AI) and science can finally address some of the world’s biggest problems. In an interview with Shivani Shinde ahead of his visit to India for the AI Impact Summit, Kohli discusses AI’s role in science, India’s place in global research, and DeepMind’s footprint in the country. Edited excerpts:
The buzz around AI is largely about large language models (LLMs). How do you see AI impacting science and everyday life?
When we talk about AI, there’s an important distinction to make: AI takes many forms, just as intelligence itself exists at multiple levels. There is common intelligence, which all of us possess. Then there is expert-level intelligence. Beyond that is what we can call superhuman intelligence.
A good example is the protein-folding problem. Given a sequence of amino acids that makes up a protein, can you predict its three-dimensional structure? This is an extraordinarily important problem. If you understand the shape and function of proteins, you can understand diseases, design drugs, build vaccines, and even engineer enzymes — for example, to decompose plastics. Yet no human has been able to solve this problem. Today, AI models can.
What we are seeing now are AI systems that demonstrate common intelligence, expert-level intelligence, and even superhuman intelligence. All three will have a transformational impact on the world.
If AI is bringing this level of capability into research, would you agree that AI for science is shifting us from discovery to validation and real-world problem-solving at speed?
Yes. There are several areas where the impact is already clearly visible. In biology, AlphaFold is a good example. We released AlphaFold 2 at the end of 2020, and by 2024, it had already had a profound effect.
Today, AlphaFold has been used by nearly three million scientists across 180 countries. Researchers have used it to design new drugs, including work across pharmaceutical companies. Beyond that, it has been used to design enzymes that decompose plastics and to study and develop disease-resistant crops. There are examples from India as well.
Even with just one application, you can see a wide range of real-world research outcomes. This extends to other scientific domains. In weather prediction, for instance, models like WeatherNext can predict weather patterns and track cyclones and hurricanes more accurately than traditional methods. Overall, AI is accelerating progress across a broad range of scientific disciplines.
How do you see AI research evolving in India, and how is the broader ecosystem shaping up? What role do you see India playing in this AI-led transformation?
India has a very important role to play in the global AI transformation. First, there is a strong need for AI in the country. Take healthcare as an example. Expert-level intelligence is increasingly being democratised through AI, which creates an opportunity for India to deliver healthcare at scale.
Another factor is India’s immense linguistic diversity. With so many languages spoken across the country, it is essential to build and adapt AI models that can understand and operate across Indian languages. There is a lot of interesting work underway on building foundational models and tailoring them for applications that matter in the Indian context.
These applications span healthcare, citizen services, and the delivery of more advanced public and private services. Together, these factors position India to play a distinctive role in the global AI ecosystem.
Do you also have teams in India working with you?
Yes, absolutely. We have always seen India as a strong source of talent. Google has a significant presence in the country, including a large team in Bengaluru. These teams work across several areas — from fundamental machine learning research and LLMs to improving performance across multiple languages.
Language support is a critical focus, given the global diversity that AI models must handle. Teams in India also work on applying AI to local use cases, such as agriculture and other region-specific challenges. There is ongoing work on applied and inclusive AI as well.
AnthroKrishi is one example, where we are organising India’s agricultural data to create a unified understanding of the landscape, enabling better crop management for millions of farmers.
India, therefore, plays a key role not just as a talent base, but also as a centre for research and real-world AI applications.
When LLMs are combined with research — where bias can materially affect outcomes — how do you ensure that bias does not creep into the results?
One of the remarkable things about AlphaFold was not only its high accuracy but also another important property: when it was uncertain or likely to make a mistake, it explicitly indicated that uncertainty. That uncertainty estimate is extremely important. We trained models like AlphaFold to be very good at recognising when they do not know the answer.
In the context of LLMs, this has been harder, but it is an active area of research, and we are continuously improving their reliability and their ability to express uncertainty.
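To make the uncertainty point concrete: AlphaFold writes its per-residue confidence score (pLDDT, on a 0–100 scale) into the B-factor column of the structure files it produces, so the signal can be read directly from a prediction. The sketch below assumes a locally downloaded prediction file (the filename is illustrative) and the Biopython library, and simply averages that score per residue to flag low-confidence regions; it is an illustration of the idea, not DeepMind's own tooling.

    # Minimal sketch: read AlphaFold's per-residue confidence (pLDDT) from a
    # predicted structure. AlphaFold stores pLDDT in the B-factor column of its
    # output PDB files; values below ~50 are generally treated as low confidence.
    # Assumes Biopython is installed and "AF-P69905-F1-model_v4.pdb" (a
    # hypothetical local download from the AlphaFold Database) is present.
    from Bio.PDB import PDBParser

    parser = PDBParser(QUIET=True)
    structure = parser.get_structure("prediction", "AF-P69905-F1-model_v4.pdb")

    low_confidence = []
    for residue in structure.get_residues():
        atoms = list(residue.get_atoms())
        plddt = sum(a.get_bfactor() for a in atoms) / len(atoms)
        if plddt < 50:  # flag regions the model is unsure about
            low_confidence.append((residue.get_id()[1], round(plddt, 1)))

    print(f"{len(low_confidence)} residues flagged as low confidence")

Downstream users rely on exactly this kind of signal to decide which parts of a predicted structure are trustworthy enough to build on.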
A related challenge is creativity. The creativity of LLMs can be a problem when they generate things that are not true, but it can also lead to genuinely new insights. The key question is how you distinguish between the two.
One example is our agent called AlphaEvolve. AlphaEvolve takes an LLM and applies it to very hard mathematical and computational problems — problems where progress can translate into savings of hundreds of millions of dollars, such as improving algorithms that run data centres around the world. The system generates many different solutions. Many of them are poor, but some are genuinely novel and insightful — ideas that nobody has had before.
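A rough way to picture that workflow is a propose-and-evaluate loop: a language model proposes many candidate solutions, an automatic scorer measures each against the problem, and only the best survive to seed the next round. The sketch below is a deliberately simplified illustration of that idea, with a toy cost function and a placeholder generate_candidate() standing in for an LLM call; it is not AlphaEvolve's actual code.

    # Toy illustration of a propose-and-evaluate loop in the spirit of systems
    # like AlphaEvolve: generate many candidates, score them with an automatic
    # evaluator, and keep the best ones to seed the next round.
    # generate_candidate() is a placeholder for "ask the LLM for a variation";
    # here it just perturbs numbers, and the "problem" is minimising a cost.
    import random

    def evaluate(candidate):
        # Automatic scorer: lower cost is better (stand-in for, say, the
        # measured runtime of a proposed scheduling heuristic).
        return sum((x - 3.0) ** 2 for x in candidate)

    def generate_candidate(parent):
        # Placeholder for an LLM proposing a variation of an existing solution.
        return [x + random.gauss(0, 0.5) for x in parent]

    population = [[random.uniform(-10, 10) for _ in range(4)] for _ in range(20)]
    for generation in range(50):
        # Propose many variants of the current best solutions...
        parents = sorted(population, key=evaluate)[:5]
        children = [generate_candidate(random.choice(parents)) for _ in range(20)]
        # ...and keep only the ones the evaluator scores well.
        population = sorted(parents + children, key=evaluate)[:20]

    best = min(population, key=evaluate)
    print("best candidate:", [round(x, 2) for x in best],
          "cost:", round(evaluate(best), 3))

The essential point of the design is that the evaluator, not a human, filters the flood of generated ideas, so occasional poor or false proposals are discarded automatically while genuinely better solutions are kept.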
What are the top three problems you are trying to solve right now?
There are several problems we are working on, but I’ll highlight a few key ones.
The first is biology. Biology is incredibly complex. If you think about proteins as the building blocks of life — the Lego blocks that make up every living thing on the planet, including you, me, bacteria, and viruses — then the genome is essentially the recipe.
The big question is: how do we interpret the genome, and what does that mean for understanding human health? What does it tell us about what happens to us when we are 40, 50, 60, or 70 years old? What does it tell us about children, or about which diseases we may be susceptible to? All of this information is encoded in our genome, but we do not yet truly understand it. That is one major problem we are trying to solve.
The second area is materials science. The question here is what the next generation of materials will look like, and what new — almost magical — properties they might have, including things that today may seem impossible. This includes high-temperature superconductors and other advanced materials.
There are many other problems as well: quantum computing, nuclear fusion, and the development of agents that can design better algorithms in computer science. Since we were discussing commercial impact earlier, AlphaEvolve is a good example. It has delivered real commercial value for Google by optimising core algorithms, enabling data centres and LLMs to run much more efficiently. That efficiency translates directly into lower power consumption and clear commercial benefits, showing how fundamental research can have real-world impact.