In India, data privacy has taken on greater urgency this year following the passage of the Digital Personal Data Protection (DPDP) Act and the subsequent notification of its rules. At a time when artificial intelligence (AI) is becoming ubiquitous and cybersecurity has firmly entered boardroom conversations, protecting data privacy is no longer just a compliance requirement — it is a strategic priority for both individuals and enterprises.
How AI is reshaping privacy and data governance
As AI adoption accelerates, nearly all companies are expanding privacy programmes and governance frameworks to protect their data and innovate at scale. The growing demand for high-quality data to power AI is exposing gaps in oversight, raising the stakes for trust, security, and competitiveness. Experts say that to succeed in the AI era, organisations must build scalable, responsible AI strategies on a mature, integrated approach to privacy and data governance.
Cisco’s Data and Privacy Benchmark Study 2026 bears this out. The study identifies AI as the primary catalyst: 90 per cent of companies report expanded privacy programmes, and 93 per cent plan further investment to keep pace with the complexity of AI systems and the expectations of customers and regulators. Globally, 38 per cent of organisations surveyed spent at least $5 million on their privacy programmes in the past year, up 14 per cent from 2024.
“AI is forcing a fundamental shift in the data landscape, calling for holistic governance of all data — both personal and non-personal,” said Jen Yokoyama, senior vice president, legal innovation and strategy, at Cisco.
What India’s DPDP Act means for companies
For India, the provisions of the DPDP Act cover almost all facets of online markets, including ensuring that the personal data of Indian users is not shared with governments of other countries. The Act also places heavier obligations on certain enterprises. These, designated Significant Data Fiduciaries (SDFs), can be classified by the government based on the volume and sensitivity of personal data being processed, the risks to user rights, and the potential impact on the country’s sovereignty and integrity, among other factors.
With consent at the core of the Indian rules, companies will also have to rethink how they use customer data to train internal AI models, including removing training data for which consent has not been granted.
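As an illustration only — the Act does not prescribe any particular technical mechanism — a consent-gated filter over a training dataset might look like the sketch below. The record schema and the `consent_granted` flag are hypothetical stand-ins for however an organisation records purpose-specific consent.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    # Hypothetical schema: a real system would map consent to specific
    # processing purposes as recorded under its DPDP compliance framework.
    user_id: str
    text: str
    consent_granted: bool  # consent specifically for model-training use

def filter_training_data(records: list[CustomerRecord]) -> list[CustomerRecord]:
    """Keep only records whose users have consented to training use."""
    return [r for r in records if r.consent_granted]

records = [
    CustomerRecord("u1", "support chat transcript", True),
    CustomerRecord("u2", "purchase history notes", False),
    CustomerRecord("u3", "product review", True),
]

training_set = filter_training_data(records)
print([r.user_id for r in training_set])  # → ['u1', 'u3']
```

In practice, companies may also need the reverse operation — locating and deleting already-ingested data when consent is withdrawn — which is considerably harder once data has been used to train a model.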
Why cybersecurity is critical in an AI-driven world
That puts the spotlight on robust cybersecurity systems. As organisations increasingly adopt AI, cloud, and digital-first operating models, the volume and sensitivity of data being created and processed continue to grow, making that data an attractive target for cybercriminals.
“Systems must incorporate privacy and cybersecurity by design, bolstered by robust governance, ongoing monitoring, and swift incident response. Equally important is building a culture of accountability and awareness across the organisation, because technology alone cannot address data risk. Organisations that prioritise data privacy and security will be better positioned to earn trust, meet compliance requirements, and drive sustainable digital growth,” said Sunil Sharma, managing director and vice president of sales, India & SAARC, at Sophos, a software security company.
Rubal Sahni, AVP India and Emerging Markets at Confluent, said we live in an age where apps know more about us than those closest to us. “But do we really know how our data is collected, shared, or used — and with whom? I feel Bharat’s DPDP Act is a welcome reset. It puts the power back into the hands of the individual, and rightly so. As AI tools become smarter, the real risk isn’t just machines replacing jobs; it is machines acting without permission. We must protect citizens not just from external threats, but from silent digital overreach. This isn’t about stifling innovation. It is about building responsible, India-first AI that respects our values and our people,” he added.