Navigating AI Risk Management in Financial Institutions in Singapore

On 13th November 2025, the Monetary Authority of Singapore (MAS) issued a Consultation Paper proposing a set of Guidelines on Artificial Intelligence (AI) Risk Management for the financial sector. The consultation is open for feedback until 31st January 2026.
Recent technological advances such as Generative AI and AI agents have expanded the use of AI in business processes, customer interactions and employee workflows. According to MAS, these technologies amplify known AI risks such as model error and bias, and introduce new ones such as hallucination, prompt injection and data leakage.
With the proposed Guidelines, MAS is moving from high-level responsible AI principles towards clearly articulated standards that can be assessed. These Guidelines are an extension of earlier efforts by MAS to supervise AI risks.
In 2018, MAS established the FEAT principles (Fairness, Ethics, Accountability, Transparency) to guide the responsible use of AI. In November 2019, MAS launched the Veritas Initiative, which produced assessment methodologies, toolkits and case studies to help apply the FEAT principles. MAS also established Project MindForge to assess the risks and opportunities of Generative AI; the project produced a GenAI risk framework in 2023 and is now working towards an industry-led AI Risk Management Handbook, which will support financial institutions in implementing the proposed Guidelines.
Applicability of the proposed Guidelines
The proposed Guidelines will be applicable to all financial institutions (FIs), while allowing proportionate implementation depending on the institution’s size, activities, AI use and risk profile.
- FIs using AI as an integrated part of business processes are expected to implement comprehensive AI risk management measures.
- FIs not using AI in an integrated way are expected to institute basic AI policies aligned with their level of AI adoption.
FIs that are branches or subsidiaries of foreign parent entities may leverage the AI risk management frameworks of their parent entities. MAS also proposes a 12-month transition period for implementation after the final Guidelines are issued.
Key elements of the proposed AI risk framework
The proposed Guidelines focus on the following aspects of AI use:
1) AI oversight
MAS states that the Board and senior management play critical roles in establishing and overseeing frameworks, policies and procedures for AI risk management. Their responsibilities include identifying AI, assessing materiality, maintaining an inventory, governing AI within risk appetite, managing AI through its lifecycle and developing capabilities and capacity.
Further, MAS proposes that FIs with material AI risk exposure establish dedicated cross-functional oversight of AI.
2) The foundational systems: identification, inventory and materiality assessment
MAS positions three capabilities as prerequisites for controlling AI at scale:
- AI identification: Consistent identification of AI usage across relevant business and functional areas, supported by definitions, criteria, processes and robust systems with clear ownership including a designated control function.
- AI inventory: An accurate and up-to-date register of AI systems or models. It should capture important characteristics such as purpose, scope, model type, data used, dependencies, lifecycle status, risk materiality rating, validation status, roles and documentation links (a sketch of such a record appears below).
- AI risk materiality assessment: An approach applied consistently across AI to evaluate inherent and residual risk materiality, covering impact, complexity and reliance.
Together, these capabilities provide supervisors with visibility over an institution’s AI landscape, including the purpose, ownership, risk profile and applicable control standards for each use case.
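To make these expectations concrete, the sketch below shows one way an inventory record could capture the characteristics listed above. This is a minimal, illustrative example in Python; the field names, enumerations and structure are our own assumptions, not prescribed by MAS.

```python
# Illustrative only: field names and enumerations are assumptions,
# not prescribed by the proposed Guidelines.
from dataclasses import dataclass, field
from enum import Enum


class LifecycleStatus(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    DEPLOYED = "deployed"
    DECOMMISSIONED = "decommissioned"


class RiskMateriality(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIInventoryRecord:
    """One entry in the AI inventory, mirroring the characteristics
    the proposed Guidelines expect an FI to capture."""
    system_id: str
    purpose: str                      # business purpose of the AI system
    scope: str                        # business/functional areas covered
    model_type: str                   # e.g. "gradient boosting", "LLM"
    data_used: list[str]              # datasets feeding the model
    dependencies: list[str]           # upstream systems, third-party services
    lifecycle_status: LifecycleStatus
    risk_materiality: RiskMateriality
    validated: bool                   # has independent validation been completed?
    owner: str                        # accountable business owner
    control_function: str             # designated control function
    documentation_links: list[str] = field(default_factory=list)
```

Structuring records this way makes it straightforward to filter the inventory by lifecycle status or materiality rating when preparing board reporting or responding to supervisory queries.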
3) Lifecycle controls
AI risk management is framed as a continuous obligation that extends beyond initial development and deployment, applied proportionately to the assessed risk materiality. The Guidelines list focus areas for lifecycle controls.
Several lifecycle expectations carry particular compliance weight:
- Human oversight should account for human-factor risks such as automation bias and decision fatigue, and should also include documented intervention and review processes.
- Evaluation and testing of Generative AI should pay particular attention to issues such as hallucination, undesirable or harmful outputs, bias, data leakage or disclosure, and vulnerabilities.
- Technology and cybersecurity controls are expected as standard: secure environments, access controls and governance of third-party plugins, supported by proactive activities such as vulnerability assessments, penetration testing and red teaming.
- Documentation of the AI development process should be detailed enough so that an independent reviewer or auditor can understand and potentially replicate the implementation as well as the results.
- Ongoing monitoring and incident response arrangements are emphasised, given the potential for AI performance to change over time; a minimal monitoring sketch follows this list.
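As a simple illustration of the monitoring point above, the sketch below flags performance drift against a validated baseline. The metric, observation window and tolerance are assumptions for illustration; an FI would calibrate these to each model and its risk materiality.

```python
# Illustrative only: the metric, window and tolerance are assumptions;
# real monitoring would be calibrated per model and use case.
def performance_has_drifted(baseline_accuracy: float,
                            recent_accuracies: list[float],
                            tolerance: float = 0.05) -> bool:
    """Return True if mean recent accuracy falls more than `tolerance`
    below the validated baseline, signalling re-assessment is needed."""
    if not recent_accuracies:
        raise ValueError("no recent observations supplied")
    recent_mean = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_mean) > tolerance


# Example: model validated at 92% accuracy; recent daily scores have slipped.
if performance_has_drifted(0.92, [0.85, 0.84, 0.83, 0.85, 0.84]):
    print("Drift detected: trigger re-validation and incident response")
```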
4) Third-party AI
The Guidelines propose that third-party AI should be subject to onboarding, development and deployment controls proportionate to risk materiality, including testing in the context of the institution’s use case (including with its own data) and compensatory testing where vendor disclosure is insufficient. They also call out concentration risk, contingency planning, supply chain assessments and the updating of legal agreements with clauses such as audit rights and notification when AI is introduced.
MAS also notes that its expectations and requirements on outsourcing and third-party services apply to the use of third-party AI. Engaging third-party AI providers does not relieve a financial institution of responsibility for the risks arising from the use of those systems.
5) Capability and capacity
The Guidelines extend beyond policy: they expect personnel involved in AI development and deployment to possess adequate skills, proper conduct and training, and institutions to regularly review whether their AI teams have sufficient capacity. They also expect technology infrastructure that supports performance, scalability and resilience.
Compliance Preparedness for Financial Institutions
While the consultation process is ongoing and the final Guidelines are awaited, many financial institutions are likely to already have elements of AI-related governance, risk management and controls in place, particularly where AI is used within core business processes.
Against this backdrop, the proposed Guidelines provide an opportunity for institutions to validate existing practices, identify gaps and ensure that current frameworks are sufficiently aligned with MAS’s evolving supervisory expectations. In doing so, financial institutions can also draw on MAS’s existing guidance, including the FEAT principles and the GenAI risk framework developed under Project MindForge, as practical reference points.
From the perspective of the proposed Guidelines, the following steps may be particularly relevant.
1. AI Identification and Inventory
- Create systems to identify the use of AI across business functions and support lines, including embedded or third-party AI. This will ensure that AI usage is monitored consistently and in line with governance standards.
- Keep an up-to-date inventory for AI systems, ensuring that every entry captures the relevant attributes such as purpose, scope, risk materiality and validation status.
Compliance tip: Early alignment between AI inventory and existing risk, governance and outsourcing registers can significantly reduce duplication of controls and ownership ambiguity.
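As a sketch of the alignment suggested in the tip above, the snippet below cross-checks an outsourcing register against the AI inventory to surface third-party services that embed AI but have no inventory entry. The register structure, field names and IDs are hypothetical.

```python
# Illustrative only: register fields and IDs are hypothetical.
def find_inventory_gaps(outsourcing_register: list[dict],
                        inventoried_service_ids: set[str]) -> list[dict]:
    """Return outsourced services flagged as AI-enabled that are not
    yet linked to an entry in the AI inventory."""
    return [
        service for service in outsourcing_register
        if service.get("uses_ai")
        and service["service_id"] not in inventoried_service_ids
    ]


# Example: one vendor service embeds AI but is missing from the inventory.
register = [
    {"service_id": "OSR-001", "vendor": "Acme Analytics", "uses_ai": True},
    {"service_id": "OSR-002", "vendor": "SecureHost", "uses_ai": False},
]
for gap in find_inventory_gaps(register, inventoried_service_ids=set()):
    print(f"Inventory gap: {gap['service_id']} ({gap['vendor']})")
```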
2. Risk Materiality Assessment
- Use a standardised approach to assess the risk materiality of each AI system. Any evaluation should consider factors such as impact, complexity and reliance.
- Plan regular reviews to evaluate AI systems and update the risk materiality based on new developments or external changes like any regulatory updates or AI advancements.
Compliance tip: Define clear escalation thresholds so that changes to an AI system’s data sources or deployment trigger enhanced governance and reporting.
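One possible shape for such an assessment, with escalation thresholds built in, is sketched below. The 1-5 scale, the "highest factor wins" rule and the thresholds are assumptions for illustration; the proposed Guidelines name impact, complexity and reliance as factors but do not prescribe a scoring formula.

```python
# Illustrative only: the scale, combination rule and thresholds are
# assumptions; MAS does not prescribe a scoring formula.
def assess_risk_materiality(impact: int, complexity: int, reliance: int) -> str:
    """Combine 1-5 factor scores into a materiality rating.

    A 'highest factor wins' rule is used here so that a single severe
    factor cannot be averaged away; an FI might equally choose a
    weighted or matrix-based approach.
    """
    for score in (impact, complexity, reliance):
        if not 1 <= score <= 5:
            raise ValueError("factor scores must be between 1 and 5")
    worst = max(impact, complexity, reliance)
    if worst >= 4:
        return "high"      # escalate: enhanced governance and board reporting
    if worst >= 3:
        return "medium"    # standard lifecycle controls
    return "low"           # baseline policies


# A change in data source or deployment should trigger re-assessment:
print(assess_risk_materiality(impact=4, complexity=2, reliance=3))  # "high"
```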
3. Implement AI Lifecycle Controls
- Ensure that all AI systems are governed with sound management controls from development through to decommissioning, and that these controls scale with the risk materiality of different AI applications.
- Establish change management procedures to control modifications to AI systems and prevent improper updates from introducing new risks.
Compliance tip: Have processes in place to document control decisions and proportionality judgements, as these can be critical evidence during regulatory reviews.
4. Oversight and Reporting
- If the AI risk exposure is high, set up a cross-functional committee to oversee AI risk management and maintain a consistent approach.
- Actively involve senior management in AI risk oversight, with clear roles and responsibilities.
- Create a reporting system that keeps senior management and the board informed about AI-related risks, incidents and breaches.
Compliance tip: Maintain clear records of how AI risk exposure is identified and how incidents and breaches are escalated to senior management and the board, so that the evidence can be provided if required during any audits or regulatory reviews.
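In the spirit of the tip above, the snippet below sketches one way to create auditable incident records with a built-in escalation route. The severity levels and recipients are assumptions for illustration; an FI would map them to its own governance structure.

```python
# Illustrative only: severity levels and escalation routes are assumptions.
from datetime import datetime, timezone

ESCALATION_MATRIX = {
    "low": ["ai_risk_owner"],
    "medium": ["ai_risk_owner", "senior_management"],
    "high": ["ai_risk_owner", "senior_management", "board"],
}


def record_ai_incident(description: str, severity: str) -> dict:
    """Create an auditable incident record with its escalation route,
    so the evidence trail can be produced during regulatory reviews."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "severity": severity,
        "escalated_to": ESCALATION_MATRIX[severity],
    }


incident = record_ai_incident("Model drift breached accuracy threshold", "high")
print(incident["escalated_to"])  # ['ai_risk_owner', 'senior_management', 'board']
```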
5. Technology and Cybersecurity Measures
- Ensure that AI systems operate in secure environments with proper cybersecurity safeguards. The security posture of AI applications should also be reviewed periodically to protect them against vulnerabilities.
- FIs that use third-party AI solutions should have appropriate risk management measures for monitoring security, compliance and operational risks.
Compliance tip: Consider how AI specific incidents will be integrated into existing incident management, breach notification and regulatory reporting processes.
6. Compliance with AI Guidelines
- Apply the AI risk management guidelines in a proportionate manner, considering the size and scope of the AI implementation: smaller FIs may apply basic policies, while larger institutions should implement more sophisticated governance frameworks.
- Implement the guidelines within the 12-month transition period, ensuring that all new AI systems and models adhere to the requirements outlined by MAS in the final Guidelines.
Compliance tip: Maintain a clear implementation roadmap that explains how proportionality has been applied and how legacy AI systems will be aligned over the transition period.
Conclusion
As AI technologies evolve, the regulatory landscape for managing AI-related risks will mature with them. The proposed Guidelines signal regulators moving from principles and toolkits towards supervisory expectations that can be examined, tested and evidenced. By adopting these guidelines, financial institutions can not only ensure compliance but also position themselves as leaders in the responsible use of AI.
How Komrisk can help
Komrisk is a comprehensive compliance management tool that helps financial institutions streamline their compliance functions by keeping track of internal processes and regulatory requirements. It can help you strengthen your governance structure, accountability mapping and control standards, and record auditable evidence that regulators can assess. Get in touch with us for a customised demo.
Authored by: Snigdha Sanganeria
Co-authored by: Swapna Umakanth
Disclaimer
The information provided on this blog is for general informational purposes only and is not a substitute for professional legal advice. We are not a law firm and are not authorized to practice law in your jurisdiction. Laws and regulations are complex and constantly changing, and information that may be true in one jurisdiction may not apply in another. Before acting on any information you read here, you should consult with a qualified lawyer practicing in the relevant jurisdiction for your specific legal issues or concerns. While we strive to provide accurate and up-to-date information, we make no guarantees that the information on this blog is completely current or error-free. We disclaim any liability for any actions taken or not taken based on the information on this blog.