According to sources, the government is likely to formally launch India’s first indigenously developed large language model (LLM), built by Sarvam, during the summit, along with other LLMs and small language models (SLMs) from companies and start-ups.
The IT ministry has also asked all participating companies to conduct extensive dry runs of their products and AI-enabled services, another official said.
“There could be some large announcements in terms of funding for AI data centres or on the onboarding of more GPUs (graphics processing units), up nearly 2–3 times from the current number of nearly 40,000,” one of the officials quoted above said.
The government has so far approved 12 players to build indigenous LLMs and SLMs in India, including Sarvam, which is developing an open-source 120-billion-parameter model.
A second open-source suite of multilingual and multimodal models is being developed by a BharatGen team led by the Indian Institute of Technology (IIT) Bombay, whose flagship Param2, a 17-billion-parameter sovereign multilingual foundation model, will be launched at the India AI Impact Summit 2026.
Built fully in India using a Mixture-of-Experts architecture, the model supports 22 Indian languages and is trained on large India-centric datasets under Bharat Data Sagar.
BharatGen, the first government-supported initiative of its kind, is also advancing sovereign AI capabilities beyond text models, spanning speech models in 12 languages and document vision models, with deployment-ready platforms for governance, healthcare, education, finance, and cultural preservation.
“The India AI Impact Summit 2026 is a major milestone platform for showcasing India’s sovereign AI progress at a global level. For BharatGen, it marks the go-live of the 17B model and a large set of sector-ready AI solutions,” Professor Ganesh Ramakrishnan of the Department of Computer Science and Engineering at IIT-Bombay and principal investigator at BharatGen told Business Standard.
The summit, he said, will also open fast go-to-market and developer access pathways, enabling start-ups, researchers, and enterprises to build on BharatGen models.
“We see it as a strong catalyst for global collaboration and accelerated adoption of India-built AI systems,” he added.
Additionally, BharatGen models are already being integrated into real operational use cases across multiple sectors. These include MahaGPT for government departments in Maharashtra; regulatory AI assistants with IFSCA; governance transformation initiatives with Goa Electronics; healthcare solutions such as the Medsum app; spoken English assessment tools with Kotak Education Foundation; a public sector culture knowledge digitisation initiative; and an AI-powered policy explainer for a leading insurance company.
Apart from these, Avataar.ai is developing a library of AI avatars that will be fine-tuned for Indian languages and for domains such as agriculture (crop advisory), healthcare (patient chatbots), and governance (public query handling). Fractal, meanwhile, is building the country's first large reasoning model, with up to 70 billion parameters, emphasising structured reasoning for STEM disciplines and medical diagnostics. Both plans have also been approved by the IT ministry under the Rs 10,372 crore IndiaAI Mission.
In an interaction with Business Standard, Srikanth Velamakanni, chief executive officer and co-founder of Fractal Analytics, an AI-native firm shortlisted under the IndiaAI Mission, said: “We are working on building India’s large reasoning model — around 70 to 100 billion parameters. We recently presented this to the Prime Minister last month. Our mission is to see if India can provide healthcare AI services to the country.”
He added that the company wants to build a large reasoning model focused on healthcare that sits on top of India’s digital health infrastructure, including Ayushman Bharat, Aarogya Setu, and health IPs, and acts as an intelligence layer. “It is progressing well and we expect to release something later this year,” he said.
Fractal had already showcased and launched Fathom-R1, a 14-billion-parameter reasoning model, in June last year.