RBI’s move toward ethical and responsible AI in the financial sector

On 13 August 2025, the Reserve Bank of India (RBI) released its report on the much-anticipated Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) for the financial sector. This comprehensive blueprint, developed by a high-level committee after months of consultation with banks, NBFCs, fintech businesses and technology experts, marks a significant policy milestone in India’s approach to balancing AI-driven innovation with robust risk management.

The rapid growth of Artificial Intelligence (AI) has led to its gradual integration into business functions across the financial sector, such as fraud detection, customer service and risk management. While AI adoption in the financial sector has unlocked opportunities for improved customer engagement, enhanced credit assessment, and more precise risk monitoring and fraud detection, it also comes with its own set of challenges, including increased exposure to risks related to data privacy, operational complexity, market manipulation and cybersecurity vulnerabilities. As a result, there is a growing need for a framework for the responsible and ethical adoption of AI in the financial sector, one that harnesses AI’s true potential while safeguarding against the associated risks.

Recognising this need, the RBI set up a committee on 26 December 2024[1] to develop the Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the financial sector. After several rounds of deliberation with stakeholders, the committee submitted its report outlining a framework to guide the use of AI in the financial sector.

In this blog, we highlight the key recommendations and action points from the perspective of regulated entities (REs) and outline the AI-specific enhancements suggested to the existing RBI Master Directions.

Key Recommendations for REs for the Responsible Integration of AI

  1. Capacity building for responsible AI governance: REs are advised to develop AI-related governance capabilities at the Board level and to run structured, ongoing training, upskilling and reskilling programs for employees involved in AI use.
  2. Implementation of a Board-approved AI policy: REs are directed to establish an AI policy, approved by the Board, addressing key areas such as governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection measures, AI disclosures, the model lifecycle framework and the liability framework.
  3. Establishing a robust data governance framework: The data governance framework should ensure compliance with applicable legislation, including the DPDP Act, span the entire data lifecycle, and set out internal controls and policies for data collection, access, usage, retention and deletion.
  4. Implementation of model governance mechanisms: These mechanisms, spanning the entire AI model lifecycle, must provide for model documentation, validation and monitoring. REs are also advised to establish mechanisms for detecting and addressing model drift and degradation to ensure safe usage.
  5. Integration of AI into product approval processes: All AI-enabled products and solutions should be brought within the scope of the institutional product approval framework, and AI-specific risk evaluations should form part of that framework.
  6. Formulation of a Board-approved consumer protection framework: This framework should provide transparent, fair and accessible recourse mechanisms for customers. REs must also run consumer education campaigns to raise awareness of safe AI usage and of customers’ rights.
  7. Strengthening cybersecurity measures: REs must identify the potential security risks arising from the use of AI and strengthen their cybersecurity ecosystems, including hardware, software and processes, to address them. AI tools themselves, such as dynamic threat detection and response mechanisms, can assist in this effort.
  8. Implementation of structured red teaming processes spanning the entire AI lifecycle: The scope and frequency of red teaming must be proportionate to the assessed risk and potential impact of the AI application. Trigger-based red teaming should also be considered to address evolving threats.
  9. Formulation of a Business Continuity Plan (BCP) for AI systems: REs’ existing BCP frameworks must cover both traditional system failures and AI model-specific performance degradation, providing fallback mechanisms and periodic testing of AI model resilience through BCP drills.
  10. Adoption of an AI incident reporting and sectoral risk intelligence framework: REs must maintain an AI incident reporting framework based on a tolerant, good-faith approach that encourages timely disclosure and reporting of AI-related incidents.
  11. Maintenance of a comprehensive internal AI inventory: REs must maintain a comprehensive internal AI inventory and update it at least half-yearly. The inventory must cover all models, use cases, target groups, dependencies, risks and grievances.
  12. Alignment of the AI audit framework with a Board-approved risk categorisation:
    1. Internal audits: REs should conduct internal audits proportionate to the level of risk associated with AI applications.
    2. Third-party audits: For high-risk or complex AI use cases, REs should commission independent third-party audits.
    3. Periodic review: The audit framework must be reviewed and updated at least biennially, incorporating emerging risks, technologies and regulatory developments.
  13. Making AI-related disclosures: REs must include AI-related disclosures in their annual reports and publish them on their websites. Regulators may prescribe a standardised disclosure framework to ensure consistency and adequacy of information across institutions.

AI-Specific Enhancements in RBI Master Directions

Additionally, the committee’s report proposed the following targeted enhancements to the existing Master Directions, which already address some AI-related risks implicitly:

| Serial No. | Name of Existing Law | AI Risks Covered | Suggested AI-Specific Enhancements |
|---|---|---|---|
| 1 | RBI Guidelines on Managing Risks and Code of Conduct in Outsourcing of Financial Services by Banks dated 3 November 2006 | (i) Lack of accountability for outsourced activities; (ii) lack of oversight on risk management and governance of third-party services | (i) Incorporate clauses in the Outsourcing Agreement addressing AI-specific risks such as algorithmic bias; (ii) incorporate clauses outlining disclosure obligations for third-party use of AI |
| 2 | RBI Circular on Cyber Security Framework in Banks dated 2 June 2016 | (i) Confidentiality, integrity and availability of data; (ii) lack of proper incident reporting and response mechanisms | (i) Include AI-specific threats such as model poisoning and adversarial attacks in the risk assessment processes under the Cyber Security Policy; (ii) incorporate protocols for monitoring and mitigating AI-related cybersecurity incidents |
| 3 | Reserve Bank of India (Digital Lending) Directions, 2025 dated 8 May 2025 | (i) Issues related to data privacy and consent in digital lending; (ii) lack of accountability for third-party digital lending apps | (i) Provide for transparency in AI-driven credit assessments; (ii) conduct fairness audits to detect and mitigate algorithmic biases |
| 4 | RBI Master Circular on Customer Service in Banks dated 1 July 2015 | (i) Customer rights in AI interactions and lack of proper grievance redressal mechanisms; (ii) unclear Board-level oversight of customer service | (i) Mandate customer awareness during interactions with AI systems; (ii) allow customers to contest or appeal AI-driven decisions |
| 5 | Master Directions on Fraud Risk Management in Commercial Banks (including Regional Rural Banks) and All India Financial Institutions dated 15 July 2024 | (i) No proper framework for early warning signals and fraud detection; (ii) Board oversight of fraud risk management | (i) Provide for AI-driven fraud detection mechanisms under the Framework for Early Warning Signals for Detection of Frauds; (ii) regularly test AI models for accuracy and bias in fraud detection |
| 6 | Master Direction on Information Technology Governance, Risk, Controls and Assurance Practices dated 7 November 2023 | (i) IT governance concerns; (ii) lack of oversight on information systems and related risks | Include provisions for access control measures for autonomous AI |
| 7 | Master Direction on Outsourcing of Information Technology Services dated 10 April 2023 | (i) Inadequate risk assessment and due diligence for IT service providers; (ii) inadequate AI-related data protection and incident reporting obligations | (i) Mandate disclosure of AI use in service delivery by service providers; (ii) incorporate AI-specific risk assessments for service providers |

Conclusion

RBI’s gradual approach to integrating AI in the financial sector not only opens up exciting opportunities but also brings risks that must be managed proactively. To meet these challenges, entities need to stay closely aligned with the evolving regulatory environment.

At Lexplosion, we have extensive experience in guiding regulated entities, including Banks and NBFCs, through evolving compliance landscapes and keeping them updated on the latest regulatory developments. For more information and support, please feel free to reach out to us at inquiries@lexplosion.in.

[1] RBI Press Release on Framework for Responsible and Ethical Enablement (FREE) of Artificial Intelligence (AI) in the Financial Sector – Setting up of a Committee, dated 26 December 2024.

Written by: Nishtha Chakrabarti

Co Authored by: Amiya Mukherji

Disclaimer

This content is intended for informational purposes only and does not constitute a legal opinion. Despite our efforts to maintain accuracy, we do not make representations, warranties or undertakings regarding the quality, completeness or reliability of the content. Readers are encouraged to seek legal counsel prior to acting upon any of the information provided herein. This content, including the design, text, graphics, their selection and arrangement, is Copyright 2024, Lexplosion Solutions Private Limited or its licensors. ALL RIGHTS RESERVED, and all moral rights are asserted and reserved.

For any clarifications, please reach out to us at 91-33-40618083 or inquiries@lexplosion.in. Refer to our privacy policy by clicking here.
