In brief

On 24 September 2024, following an in-depth consultation with industry participants, the Office of the Superintendent of Financial Institutions (OSFI) and the Financial Consumer Agency of Canada (FCAC)1 published their findings concerning the use and adoption of artificial intelligence (AI) by federally regulated financial institutions. The report highlighted that a significant majority of financial institutions expect to adopt AI by 2026, and also set out a number of key risks that AI usage creates for financial institutions. OSFI and FCAC emphasized the need for financial institutions to adopt a dynamic and responsive risk management system with respect to AI, and confirmed their commitment to work towards more specific best practices for industry participants.


In depth

On 24 September 2024, OSFI and FCAC published a report on AI uses and risks at federally regulated financial institutions (hereafter, the "AI Report"), which included findings from a voluntary questionnaire issued to financial institutions in December 2023 seeking feedback on their AI and quantum computing preparedness.

The results from the questionnaire revealed that the use of AI at financial institutions is increasing rapidly, with 70% of financial institutions expecting to use AI by 2026. In fact, the AI Report found that financial institutions are now using AI for more critical use cases, such as pricing, underwriting, claims management, trading, investment decisions and credit adjudication. In addition, financial institutions are facing competitive pressures to adopt AI, leading to further potential business or strategic risks. As such, according to the AI Report, it is critical that financial institutions be vigilant and maintain adaptable risk and control frameworks in order to address both internal and external risks from AI.

Amplified risks from AI adoption

The AI Report outlined key risks that arise for financial institutions from the use of AI, which can stem both from internal AI adoption and from the use of AI by external actors.

  1. Data governance risks were identified as a top concern about AI usage. The AI Report noted that addressing AI data governance is crucial, whether through general data governance frameworks, specific AI data governance, or model risk management frameworks.
  2. Model risk and explainability were identified as a key risk, as risks associated with AI models are elevated due to their complexity and opacity. The AI Report noted that financial institutions must ensure that all stakeholders – including users, developers and control functions – are involved in the design and implementation of AI models. In addition, financial institutions need to ensure that there is an appropriate level of explanation in order to inform internal users/customers and also for compliance and governance purposes.
  3. Legal, ethical and reputational risks are a challenge for financial institutions implementing AI systems. The AI Report recommended, among other things, that financial institutions take a comprehensive approach to managing the risks associated with AI, as narrow adherence to jurisdictional legal requirements could expose the financial institution to reputational risks. The report also noted that consumer privacy and consent should be prioritized.
  4. Third-party risks and reliance on third-party providers for AI models and systems were also noted to be formidable challenges, including the difficulty of ensuring that third parties comply with a financial institution's internal standards.
  5. Operational and cybersecurity risks can also be amplified through AI adoption. The AI Report noted that as financial institutions integrate AI into their processes, procedures and controls, operational risks will increase. In addition, cyber risks can stem from using AI tools internally, and can be elevated through complex relationships with third parties. Without proper security measures in place, the use of AI could increase the risk of cyber attacks. Accordingly, the AI Report warned that financial institutions must apply sufficiently robust safeguards around their AI systems to ensure resiliency.
  6. Business and financial risks were noted to include risks associated with financial and competitive pressures for financial institutions that do not adopt AI. Among other things, OSFI and FCAC warned that if AI begins to disrupt the financial industry, firms that lag in adopting AI may find it difficult to respond without having in-house AI expertise and knowledge.
  7. Emerging credit, market and liquidity risks were also identified. The AI Report noted that AI could have macroeconomic impacts on areas such as unemployment levels, which could in turn lead to credit losses. In addition, as adoption increases, AI models could have significant impacts on asset price volatility and the movement of deposits between financial institutions.

Recommendations for financial institutions on AI risk management

In response to the risks identified in the AI Report, OSFI and FCAC made a number of recommendations for financial institutions to manage or mitigate such risks within their organizations. The following recommendations were made:

  1. Financial institutions need to conduct risk identification and assessment in a rigorous manner, and establish multidisciplinary and diverse teams to deal with AI use within their organizations.
  2. Financial institutions must be open, honest and transparent in dealing with their customers when it comes to both AI and data.
  3. Financial institutions should plan and define an AI strategy, even if they do not plan to adopt AI in the short term.
  4. Because AI is a transversal risk, its adoption must be addressed comprehensively, with risk management standards that integrate all related risks. Boards and oversight bodies of financial institutions must be engaged to ensure that their organizations are properly prepared for AI outcomes, balancing both the benefits and the risks of AI adoption.

Next steps

In the AI Report, OSFI and FCAC highlighted their plans to respond dynamically and proactively to the evolving risk environment surrounding AI, as the uncertain impacts of AI represent a challenge for regulators as well. OSFI and FCAC, in partnership with other industry participants, will also aim to build upon prior work on AI to establish more specific best practices.

On 2 October 2024, shortly after issuing the AI Report, OSFI published a semi-annual update noting that, while the risks previously identified in its Annual Risk Outlook (Fiscal Year 2024-2025) persist, integrity and security risks continue to "intensify and multiply", due in part to the risks of artificial intelligence, which have "risen in significance since the release of the Annual Risk Outlook". While its assessment of how AI adoption affects and interrelates with the broader risk landscape remains ongoing, OSFI noted that it plans to strengthen existing guidelines to support the mitigation of AI-related risks. As a first step, it will issue an updated Model Risk Management guideline in the summer of 2025, which will provide greater clarity on expectations around AI models.

For more information on AI in financial services, please visit our landing page, participate in our events and talk with us.


1 OSFI and FCAC are federal regulators in the banking and financial services sector in Canada.

Author

Usman Sheikh is Chair of the Blockchain & Fintech Practice. He is a Transactional Partner in Baker McKenzie's Toronto office and is also a member of the Firm's Litigation and Government Enforcement Practice Group. A highly regarded thought leader on blockchain and distributed ledger technology, Usman has briefed the offices of several prime ministers, as well as ministers, on blockchain's disruptive power, and is regularly invited to speak to business leaders and at blockchain conferences throughout the world.
Usman was named as one of the "Top 25 Most Influential Lawyers" by Canadian Lawyer (2018) and as one of the top FinTech lawyers in Canada (Band 1) by Chambers for four consecutive years (2020-2023). Most recently, he was recognized in Toronto Life's The Influentials 2021 list, an annual feature that highlights Toronto's most influential people over the last 12 months.
Author of over 25 legal and academic publications, Usman is set to publish The Law of Blockchain Technology (Thomson Reuters) in 2023.
As a global thought leader on blockchain's disruptive power, Usman has lectured at the International Monetary Fund (IMF), the Bank for International Settlements (BIS), the Financial Stability Board (FSB) and the Monetary Authority of Singapore (MAS). He has also co-lectured with the heads of blockchain for Nasdaq and the TMX, and has also presented to the Investment Industry Regulatory Organization of Canada (IIROC), the Mutual Fund Dealers Association (MFDA), the Law Society of Ontario (LSO), the Royal Canadian Mounted Police (RCMP), the Chartered Professional Accountants of Canada (CPA), and several other regulatory organizations. Since 2019, Usman has also been serving as an Adjunct Professor with the University of Toronto (Faculty of Law), teaching a course entitled "Blockchain, Digital Assets, and the Law".

Author

Michael serves as the head of the Financial Services Regulatory Practice for Canada and is a Transactional Partner in Baker McKenzie's Toronto office. His practice focuses on financial regulation and compliance for fintechs, financial institutions and market participants and their business in Canada. When not acting for clients, Michael teaches corporate and securities law to students at the University of Montreal and coaches them in preparing for case competitions. He is a co-author of the Annotated Bank Act (2023 edition) and the JurisClasseur en valeurs mobilières, a leading publication on securities law. Michael is a chartered professional accountant and has worked as an inspector with the Autorité des marchés financiers (AMF) and an auditor with the Office of the Auditor General of Canada.