Last Updated : Aug 21 2025 | 12:13 AM IST
The Reserve Bank of India’s (RBI’s) Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) committee released its final report last week, proposing guidelines for the responsible use of artificial intelligence (AI) in the financial sector. The framework is anchored in seven guiding “sutras”, or principles — trust, people first, innovation, fairness, accountability, explainability, and resilience — and is backed by 26 actionable recommendations across six pillars: infrastructure, policy, capacity, governance, protection, and assurance.
The report arrives at a critical juncture. A World Economic Forum study (2025) estimates that global AI investment in finance could reach $97 billion by 2027, up from $35 billion in 2023 — a measure of how quickly the technology is scaling. In India, AI can transform everything from fraud detection to financial inclusion. Yet the report cautions that unchecked adoption risks replicating biases, undermining trust, and exposing financial institutions to systemic vulnerabilities. The committee recommends using the “sutras” to foster innovation while mitigating risks, treating these objectives as complementary rather than competing. To lower entry barriers, it proposes a common AI infrastructure offering pooled datasets, computing resources, and a regulatory “sandbox” for safe experimentation before deployment. This is especially critical for smaller banks and non-banking financial companies, most of which report little AI adoption owing to high costs, skill gaps, and poor data quality. Without such support, only the largest banks may benefit, leaving smaller ones behind.
Global precedents strengthen the case. The Monetary Authority of Singapore (MAS) introduced its “FEAT” (fairness, ethics, accountability, and transparency) principles in 2018, and later launched the Veritas toolkit to enable financial institutions to evaluate their AI solutions against those principles. The Hong Kong Monetary Authority recently unveiled a generative AI (GenAI) sandbox, while the United Kingdom’s (UK’s) Financial Conduct Authority offers a “supercharged sandbox” for AI experimentation. These examples show that trust and innovation can reinforce one another when regulators act early. The FREE-AI committee also emphasises indigenous AI models trained on Indian data and languages. Off-the-shelf large language models, largely built on Western datasets, might overlook the country’s diversity, risking exclusion and unfair outcomes.
For India, the stakes are high. Poorly governed AI-based credit models could entrench social biases and erode public trust. The FREE-AI framework seeks to pre-empt such risks by aligning technological progress with ethical safeguards. Strong data governance will also be needed. Financial institutions must establish frameworks for sourcing, cleaning, anonymising, encrypting, sharing, and purging data, while addressing bias detection, synthetic data validation, and open standards. Without secure and transparent data pipelines, AI adoption risks magnifying systemic vulnerabilities rather than reducing them. That said, principles and reports are only the beginning; the real test lies in execution. The RBI and financial institutions must now invest in capacity building, interoperability, and accountability structures. India’s own digital public infrastructure — from Aadhaar to the Unified Payments Interface — shows how inclusive design can create global benchmarks. If the country can replicate that success by moving from the opacity of today’s AI “black box” to tomorrow’s “sandbox” experimentation, it could enable the adoption of responsible AI in the banking and financial sector.