OSFI E-23 Final Guideline (2025): What Canadian Banks and Insurers Must Do Before May 2027
OSFI published the final E-23 Guideline on September 11, 2025. Effective May 1, 2027, it extends to all federally regulated financial institutions and all models — including third-party AI. This post covers what changed from the 2017 version, the AI/ML-specific obligations, the roughly 20-month transition window, and a gap-assessment checklist for Canadian FRFIs.
By AI Compliance Vendors Editorial · April 26, 2026 · 10 min read · Last reviewed April 26, 2026
TL;DR
- OSFI published the final Guideline E-23 — Model Risk Management on September 11, 2025, effective May 1, 2027.
- It replaces the 2017 guideline, which applied only to deposit-taking institutions. The 2025 version applies to all federally regulated financial institutions (FRFIs), including banks, foreign bank branches, life and P&C insurance companies, and trust and loan companies. Pension plans are excluded.
- The guideline now explicitly covers all models, regardless of technology or purpose, including AI/ML systems and third-party vendor models.
- Key new expectations include: enterprise-wide model risk rating, AI-specific explainability controls, independent validation for all non-negligible risk models, and formal third-party model governance aligned with OSFI Guideline B-10.
- The transition period began September 11, 2025 and runs roughly 20 months, to May 1, 2027. FRFIs that have not started gap assessments are behind.
Why This Matters Now
The financial services industry's use of AI and machine learning has accelerated substantially since OSFI's original E-23 was published in 2017. That version applied only to deposit-taking institutions and was largely technology-agnostic, following the same principles as the US Federal Reserve's SR 11-7 guidance published in 2011. It did not address AI/ML models explicitly, contained no explainability requirements, and did not formally extend to third-party or vendor AI.
The landscape in 2025 is fundamentally different. Canadian banks, insurers, and their subsidiaries are deploying models for credit adjudication, fraud detection, underwriting, customer segmentation, regulatory capital calculation, and increasingly, generative AI applications. OSFI's September 11, 2025 backgrounder notes that "institutions are increasingly relying on models to support or drive decision-making including in business areas that traditionally did not rely on models."
The 2025 final guideline is OSFI's substantive response to this transformation.
Scope: Who Is Covered, and by What
Entities in Scope
The 2025 guideline applies to all FRFIs, as confirmed by the OSFI E-23 final text:
- Banks and bank holding companies
- Foreign bank branches
- Life insurance and fraternal benefit companies
- Property and casualty insurance companies
- Trust and loan companies
The Blakes law firm analysis notes that federally regulated pension plans (FRPPs), which were included in the 2023 draft, were ultimately excluded from the final guideline due to differences in supervisory mandate. OSFI expects FRPP administrators to follow CAPSA Guideline No. 10 instead.
Models in Scope
The 2025 guideline applies to all models, defined broadly to capture any methodology that processes input data to generate results used in decision-making. This explicitly includes:
- Traditional actuarial and statistical models
- Machine learning and AI/ML systems
- Third-party and vendor-supplied models
- Models from foreign offices and parent institutions
The definition is intentionally broad. As the OSFI letter accompanying the guideline states, OSFI deliberately left the model definition broad so that institutions can make risk-intelligent decisions about which models carry non-negligible risk and therefore require full lifecycle governance.
Low-risk use cases — such as using a commercial LLM for document summarization or marketing email drafting — may warrant a lower risk rating that imposes fewer requirements. But those determinations must be documented.
What Changed from the 2017 Guideline
The 2017 E-23 was titled "Enterprise-Wide Model Risk Management for Deposit-Taking Institutions." It applied only to banks, bank holding companies, federally regulated trust and loan companies, and cooperative retail associations. It divided institutions into Internal Model Approved Institutions (IMAIs) and Standardized Institutions (SIs), with different compliance expectations for each group. Foreign bank branches were explicitly out of scope.
The 2025 final guideline removes these distinctions almost entirely. Key changes confirmed by the OSFI letter and BLG analysis include:
| Dimension | 2017 Guideline | 2025 Final Guideline |
|---|---|---|
| Entity scope | Deposit-taking institutions only | All FRFIs including insurers and foreign branches |
| Model scope | Material models, capital-relevant models | All models with non-negligible risk, regardless of purpose |
| AI/ML treatment | Technology-agnostic, no AI-specific sections | Explicit AI/ML guidance throughout, explainability requirements |
| Third-party models | Referenced under vendor model section | Formally governed under Guideline B-10 third-party risk management |
| Risk rating | Materiality classification | Formal model risk rating driving governance intensity |
| Residual risk | Not formally distinguished | Explicitly distinguished from inherent risk |
| Pension plans | Not covered | Explicitly excluded |
| Implementation | 2017 | May 1, 2027 (roughly 20-month transition from Sep 11, 2025) |
As the LinkedIn analysis by Agus Sudjianto summarizes, E-23 now explicitly embeds AI/ML throughout the model lifecycle, whereas SR 11-7 remains pre-generative-AI and technology-agnostic.
The AI/ML-Specific Sections
The 2025 guideline contains AI/ML-specific guidance at multiple points in the model lifecycle. These are not isolated sections but integrated throughout the principles.
Model Rationale and Design
For AI/ML models, institutions must consider transparency and explainability as part of model design. Where a model is a "black box" or operates autonomously, institutions must document alternative controls. Bias, ethical risk, and privacy risks must be assessed during development. The OSFI full guideline text states that for AI/ML, data governance must be integrated enterprise-wide, with checks for bias, data quality, outliers, missing data, and consistency.
Explainability Requirements
The guideline incorporates explainability requirements that vary by a model's intended use, autonomy, and impact. During model review, institutions must evaluate whether outputs are explainable and whether that explainability aligns with how the model is used. This is a material new requirement relative to 2017.
For high-autonomy AI systems making customer-facing decisions — loan approvals, insurance underwriting, fraud flags — the explainability bar is higher. Institutions cannot simply assert that a model is accurate; they must demonstrate that they understand why it produces the outputs it does, and that they can communicate those explanations to affected parties.
Model Monitoring for AI/ML
AI/ML models present monitoring challenges that traditional statistical models do not. The guideline specifically calls out:
- Drift detection: Dynamic and self-learning models may shift behavior in production without an explicit code change. Institutions must have processes to detect elevated drift.
- Autonomous re-parametrization: If a model can update its own parameters, the institution must have controls to detect when that self-learning constitutes a material model modification requiring formal re-approval.
- Scalability: Monitoring frameworks must scale to the volume and velocity of AI/ML model outputs.
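The drift-detection expectation above can be operationalized in several ways. A common starting point is the Population Stability Index (PSI), which compares a model's production score distribution against its validation-time baseline. The sketch below is illustrative only: the quantile bucketing and the 0.1/0.25 rules of thumb are conventions many MRM teams use, not thresholds E-23 prescribes.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score sample and a production sample.

    Common rule of thumb (not an E-23 threshold):
    < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
    """
    # Bucket both samples on quantiles of the baseline distribution,
    # widening the outer edges so out-of-range production scores still land.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty buckets to avoid log(0).
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores frozen at validation
drifted = rng.normal(0.4, 1.0, 10_000)   # production scores after a shift
psi = population_stability_index(baseline, drifted)
status = "stable" if psi < 0.1 else "monitor" if psi < 0.25 else "investigate"
print(f"PSI = {psi:.3f} -> {status}")
```

In production this comparison would run on a schedule per model and per input feature, with breaches feeding the escalation process the guideline expects.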
Third-Party AI Models
This is among the most operationally significant sections of the 2025 guideline. Institutions must apply model risk management to all externally sourced models, including:
- Vendor-supplied AI tools and platforms
- Models from foreign parent institutions
- Third-party libraries and automated development pipelines
All externally developed models are assessed for model risk rating on a standalone basis — a risk rating from a parent institution does not automatically apply to its subsidiary or branch. Institutions must ensure third-party models receive independent validation and monitoring commensurate with their risk rating. Where a vendor is unwilling to provide documentation, the institution must assess the resulting uncertainty as a component of model risk.
This aligns with OSFI Guideline B-10 on third-party risk management, which governs outsourcing of business activities.
How E-23 Compares to SR 11-7 and SS1/23
For cross-border institutions operating in Canada, the US, and the UK, model risk governance teams now need to reconcile three frameworks.
SR 11-7 (US Federal Reserve, 2011)
SR 11-7 was published jointly by the Federal Reserve and OCC in April 2011. It is principles-based and applies to all banking organizations supervised by the Federal Reserve. It defines model risk as arising from incorrect model outputs or misuse of model outputs, and requires: model development standards, validation (conceptual soundness, ongoing monitoring, outcomes analysis), and governance, policies, and controls.
SR 11-7 remains the foundational US framework. Its weaknesses in the current AI environment are well-documented: it lacks risk-tiering, contains no AI/ML explainability expectations, and does not formally distinguish inherent from residual risk. As the Sudjianto comparison notes, SR 11-7 offers more detailed validation operationalization and concrete testing guidance, while E-23 adds formal risk rating frameworks, residual risk concepts, and deployment controls.
SS1/23 (Bank of England PRA, May 2023)
SS1/23 was published in May 2023 and became effective May 17, 2024. It applies specifically to UK firms with internal model (IM) approval for regulatory capital purposes — a narrower scope than E-23's all-FRFI, all-model approach.
SS1/23 sets out five principles: model identification and model risk classification; governance; model development, implementation and use; independent model validation; and model risk mitigants. The UK PRA's approach requires board-level accountability assigned to a Senior Management Function (SMF), typically the Chief Risk Officer.
Key differences from E-23:
| Dimension | OSFI E-23 (2025) | BoE SS1/23 (2023) |
|---|---|---|
| Entity scope | All FRFIs | IM-approved UK firms only |
| Model scope | All models with non-negligible risk | All models within IM-approved firms |
| AI/ML specifics | Explicit guidance throughout | Principles sufficient for AI/ML, less prescriptive |
| Proportionality | Risk-based, driven by model risk rating | Tiering by model complexity |
| Accountability | Senior management and board | Named SMF required |
| Effective date | May 1, 2027 | May 17, 2024 |
For institutions operating in both Canada and the UK, the practical implication is that SS1/23's model tiering framework and E-23's model risk rating framework are conceptually aligned but not identical in their mechanics. Governance frameworks that treat model risk rating and model tiering as the same construct will need to be reviewed.
The Transition to May 2027: An Action Plan
The transition period runs from September 11, 2025 to May 1, 2027, roughly 20 months. OSFI's backgrounder states that OSFI will provide support throughout the transition to help institutions apply the principles proportionately.
Here is a practical sequencing of the key workstreams:
Phase 1: Readiness Assessment (Q4 2025 — Q1 2026)
- Policy gap analysis: Map your current MRM framework against E-23's requirements. Focus on: (a) scope — are insurers and foreign branches now included? (b) model definition — does your current framework capture AI/ML and vendor models? (c) risk rating — do you have a formal, criteria-based model risk rating scale, or only a materiality classification?
- Model inventory audit: Confirm your model inventory includes all AI/ML models, vendor-supplied tools, and third-party models. Per E-23, only models with non-negligible inherent risk need to be on the inventory — but that determination must itself be documented.
- Governance mapping: Document who owns model risk at the board and senior management level. E-23 requires that senior management define roles and accountabilities and report model risk to the board of directors.
Phase 2: Framework Development (Q1 — Q3 2026)
- Model risk rating scale: Develop or revise your model risk rating methodology to meet E-23's requirements. The rating must be based on quantitative factors (portfolio size, financial impact) and qualitative factors (model complexity, autonomy, customer impact, regulatory risk). The rating drives frequency and scope of review, documentation requirements, and approval authority.
- AI/ML-specific controls: For each AI/ML model in inventory, assess explainability requirements relative to its intended use and autonomy. Document how the institution will communicate model outputs to affected stakeholders.
- Third-party vendor protocol: Establish a formal protocol for new vendor model onboarding: documentation requirements, validation expectations, monitoring cadence, and escalation procedures when a vendor is unwilling to provide adequate documentation.
- Self-learning model governance: For any model that can update its own parameters in production, define the internal criteria that constitute a material model modification requiring formal re-approval.
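As a toy illustration of how the quantitative and qualitative factors above might combine into a single rating, consider the sketch below. The factor set, buckets, and cutoffs are all assumptions for illustration; E-23 deliberately leaves the methodology to each institution, and a real scale would be calibrated and governed, not hard-coded.

```python
def model_risk_rating(financial_impact_cad: float,
                      complexity: int,       # 1 (simple rule) .. 5 (opaque ML)
                      autonomy: int,         # 1 (advisory) .. 5 (fully automated)
                      customer_impact: int,  # 1 (none) .. 5 (adverse decisions)
                      regulatory_use: bool) -> str:
    """Toy blend of quantitative and qualitative factors into one rating.

    Illustrative only: factor choices, buckets, and cutoffs are assumptions,
    not E-23 requirements.
    """
    # Quantitative axis: bucket the financial exposure.
    if financial_impact_cad >= 100e6:
        quant = 5
    elif financial_impact_cad >= 10e6:
        quant = 3
    else:
        quant = 1
    # Qualitative axis: worst single factor, bumped if the model
    # feeds regulatory capital or reporting.
    qual = max(complexity, autonomy, customer_impact) + (1 if regulatory_use else 0)
    score = max(quant, min(qual, 5))
    return {1: "low", 2: "low", 3: "medium", 4: "high", 5: "high"}[score]

rating = model_risk_rating(financial_impact_cad=50e6, complexity=4,
                           autonomy=2, customer_impact=5, regulatory_use=False)
print(rating)  # "high": customer impact dominates despite moderate exposure
```

Taking the worst factor rather than an average reflects the conservative posture most MRM frameworks adopt: one severe dimension is enough to escalate governance intensity.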
Phase 3: Implementation and Testing (Q3 2026 — Q1 2027)
- Validation program expansion: Extend independent validation to all models now in scope — including insurers' pricing and reserving models, and any third-party tools previously reviewed only by their vendors.
- Monitoring framework scaling: Upgrade model monitoring infrastructure for AI/ML-specific risks: drift detection, data quality checks, autonomous re-parametrization alerts.
- Training: Ensure model stakeholders — owners, developers, users, reviewers — understand E-23's requirements and the institution's updated framework. E-23 specifically calls for multi-disciplinary teams including legal and ethics professionals.
Phase 4: Readiness Validation (Q1 — Q2 2027)
- Internal audit review: Conduct an internal audit of MRM framework compliance before the May 1, 2027 effective date.
- Documentation: Ensure all framework components, risk ratings, validation reports, and governance decisions are documented and retrievable.
- OSFI engagement: Where significant uncertainties remain — particularly around novel AI/ML models — proactively engage OSFI rather than waiting for supervisory review.
E-23 Compliance Checklist for FRFIs
The following checklist is based on the requirements in the OSFI final E-23 guideline text and analysis from Torys LLP and Protiviti:
Governance
- [ ] Board and senior management have defined roles and accountabilities for MRM enterprise-wide
- [ ] Model risk is reported to the board through a defined reporting structure
- [ ] Multi-disciplinary teams (including legal and ethics) are engaged in model lifecycle decisions
- [ ] MRM framework is documented and situated within the broader governance framework
Model Inventory
- [ ] Comprehensive inventory covers all models with non-negligible inherent risk
- [ ] Inventory includes vendor and third-party models
- [ ] Inventory records model origin (internal vs. vendor), purpose, risk rating, validation status, and owner
- [ ] Inventory is maintained at the enterprise level and subject to robust controls
- [ ] Decommissioned models are retained in inventory for a defined period
Model Risk Rating
- [ ] Formal risk rating methodology based on quantitative and qualitative criteria
- [ ] Each model is assigned a documented risk rating
- [ ] Risk rating drives governance intensity: review frequency, documentation requirements, approval authority, monitoring scope
- [ ] Process exists for provisional risk ratings on new models, confirmed during model review
- [ ] Triggers defined for risk rating re-assessment (performance decrease, material change in use)
AI/ML-Specific
- [ ] Explainability requirements assessed for each AI/ML model relative to its intended use, autonomy, and customer impact
- [ ] Bias and data quality checks embedded in AI/ML development and monitoring
- [ ] Self-learning model governance: internal criteria defined for what constitutes a material model modification
- [ ] Black-box model controls documented for models where direct explainability is not feasible
Third-Party Models
- [ ] All vendor and third-party models identified in model inventory
- [ ] Third-party models assessed for risk rating on a standalone basis
- [ ] Validation and monitoring applied to third-party models commensurate with risk rating
- [ ] Vendor documentation requirements specified in procurement and vendor management processes
- [ ] Third-party model management aligned with OSFI Guideline B-10
Validation (Model Review)
- [ ] Independent validation conducted for all non-negligible risk models
- [ ] Validation scope driven by model risk rating
- [ ] Validation covers conceptual soundness, performance, data quality, explainability, and third-party components
- [ ] Validation triggers defined: new models, material modifications, performance breaches, data changes
Monitoring
- [ ] Continuous monitoring of model performance against defined metrics and thresholds
- [ ] AI/ML-specific drift detection in place
- [ ] Escalation processes defined for performance breaches and threshold violations
The Broader Canadian AI Governance Context
E-23 does not exist in isolation. As the Solytics Partners analysis notes, Canadian institutions now operate in a multi-jurisdictional regulatory landscape where E-23 is converging with Quebec's AMF guideline on model risk management (2025), the NIST AI Risk Management Framework, and the EU AI Act for institutions with European exposure.
For institutions with operations in multiple jurisdictions, the practical implication is that a single, principles-based MRM framework — designed to be proportionate by risk rating — can accommodate the requirements of E-23, SR 11-7, and SS1/23 simultaneously, provided the framework is robust enough at its core. The differences across these frameworks are real but navigable. The more significant challenge is building the operational infrastructure — the inventory systems, the risk rating processes, the validation capabilities, the monitoring tools — to execute that framework consistently across a large and diverse model portfolio.
May 1, 2027 is the compliance deadline. The time to build that infrastructure is now.
Sources: [OSFI Guideline E-23 (2027) full text](https://www.osfi-bsif.gc.ca/en/guidance/guidance-library/guideline-e-23-model-risk-management-2027) · [OSFI E-23 Backgrounder, Sep 11 2025](https://www.osfi-bsif.gc.ca/en/news/backgrounder-guideline-e-23-model-risk-management) · [OSFI E-23 Letter, Sep 11 2025](https://www.osfi-bsif.gc.ca/en/guidance/guidance-library/guideline-e-23-model-risk-management-2027-letter) · [OSFI E-23 2017 Guideline](https://www.osfi-bsif.gc.ca/en/guidance/guidance-library/enterprise-wide-model-risk-management-deposit-taking-institutions-guideline-2017) · [Federal Reserve SR 11-7](https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm) · [Bank of England PS6/23 – SS1/23](https://www.bankofengland.co.uk/prudential-regulation/publication/2023/may/model-risk-management-principles-for-banks) · [Blakes E-23 analysis](https://www.blakes.com/insights/osfi-releases-final-guideline-e-23-for-model-risk-management-and-ai-use-by-frfis/) · [BLG analysis](https://www.blg.com/en/insights/2025/11/osfi-responds-to-the-growing-use-of-ai-key-updates-to-guideline-e-23) · [Torys analysis](https://www.torys.com/our-latest-thinking/publications/2025/10/osfi-updates-and-expands-scope-of-guideline-e-23) · [Protiviti E-23 paper](https://www.protiviti.com/gl-en/insights-paper/strengthening-decision-making-with-osfi-e-23-model) · [Sudjianto E-23 vs SR 11-7 comparison](https://www.linkedin.com/posts/agus-sudjianto-76519619_e-23-fourteen-years-after-sr11-7-comparison-activity-7372102426042052608-Jfrj) · [ValidMind SS1/23 guide](https://validmind.com/blog/ss1-23-model-risk-management-compliance-guide/) · [Solytics Partners E-23 and agentic AI](https://www.solytics-partners.com/resources/blogs/ai-and-model-risk-governance-under-osfi-e-23-for-financial-institutions-with-agentic-ai-oversight-and-compliance-controls)