Dechert LLP

10/23/2024 | News release

Bank of England weighs in on AI governance

In a speech to central banks, regulators, financial firms and fintechs, the Bank of England ("BoE") outlined its position on the opportunities and risks of AI in the financial services sector. As an example of good practice, the BoE explained its own approach to AI governance in light of the current "step change" in the power of AI models.

In the speech at the Central Bank AI Conference, the BoE's Chief Data Officer, James Benford, emphasised the importance of integrating AI into operations while ensuring effective governance to foster innovation and mitigate risks. On the same day, the BoE announced the launch of the Artificial Intelligence Consortium ("AI Consortium"), a platform for public-private engagement to gather stakeholder input on AI capabilities, development and use in UK financial services.

What differentiates the latest wave of AI?

Mr. Benford described the long history of financial modelling used by the BoE, but highlighted three characteristics of recent AI developments that require governance efforts to be accelerated:

  1. There has been a substantial increase in the power and complexity of AI models, such that they "risk being the ultimate black box", making it very difficult to understand and explain their workings.
  2. AI enables the analysis of more diverse data sets than those traditionally used for financial modelling. Whilst financial modelling has previously focused on structured numerical data, the latest AI models can rapidly analyse data that has been more difficult to analyse - "not just numbers but unstructured text, images, sound, and video".
  3. Advances in large language models have democratised AI by allowing people without specific technical expertise to interact with AI solutions in natural language.

The Bank of England's AI Strategy

Mr. Benford stressed the need for AI governance to ensure the ethical, safe and effective use of AI. Underscoring the significance of data to AI solutions, he argued that the "new wave of AI solutions is a reason to double down on strengthening data foundations". In particular, the BoE is broadening the scope of its data governance framework to better address unstructured data. In addition to data governance, the BoE is working towards AI literacy for everyone at the BoE and AI fluency for its expert data professionals.

The BoE is developing its use of AI through a "series of targeted experiments" which it is using to build its AI strategy. Based on its use of AI so far, the BoE has developed the backronym 'TRUSTED' for its AI governance:

  • Targeted. AI initiatives should align with and be targeted at the BoE's strategic goals.
  • Reliable. AI systems should perform to high standards and be based on high-quality, relevant data.
  • Understood. AI literacy and fluency will be essential for its effective use. Models and underlying data need to be clear and comprehensible, with transparent design and defined decision flows.
  • Secure. AI systems can increase risks like data breaches and misuse. The BoE will proactively address potential threats with clear terms of use, robust security measures and privacy controls, including data storage and access by third-party providers.
  • [Stress-]Tested. As AI systems become more complex, it will be crucial to have processes that keep humans accountable and mitigate potential risks.
  • Ethical. The BoE is developing a 'Data, Analytics, and AI Ethics Framework' guided by principles of being beneficial, fair, transparent, secure and accountable. Clear responsibilities will be placed on users, with support to judge best practices and understand model strengths and limitations.
  • Durable. The BoE will aim to create sustainable AI systems with evolving data foundations and strategies to meet growing demands.

Mr. Benford highlighted the crucial role of human involvement in every aspect of AI, from development to deployment, operation and usage. Because of the significance of the latest AI developments, Mr. Benford also emphasised the need for humility and collaboration to learn and effectively govern the use of AI.

The Bank of England's AI Consortium

The BoE's AI Consortium aims to foster collaboration between the public and private sectors to explore AI's role in UK financial services. The AI Consortium will focus on identifying AI applications, discussing their associated benefits and risks, and informing the BoE's approach to AI adoption.

The aims of the AI Consortium will include:

  • Exploring AI use in UK financial services, considering new capabilities and technical developments.
  • Discussing benefits, risks and challenges for financial firms and the wider financial system.
  • Informing the BoE's approach to addressing risks and promoting safe AI adoption.

Chaired by Sarah Breeden, BoE Deputy Governor for Financial Stability, the AI Consortium will meet quarterly, although the frequency of meetings may vary if circumstances require. Its secretariat will produce a summary of discussions from meetings, which will be published on the BoE's website.

The AI Consortium will not evaluate the BoE's use of AI. It will have no decision-making authority, and the BoE will not be required to act on the Consortium's discussions or workshop outcomes.

Membership is by application, with applications open until 8 November 2024.

Comment

AI enables financial firms to use more diverse datasets to steer decisions. A June 2024 paper[1] published by the U.S. Senate Committee on Homeland Security & Governmental Affairs on hedge funds' use of AI explained: "Present-day algorithms leverage market activity and pricing data in forecasting, but implementing [machine learning/deep learning] techniques to analyze alternative data allows firms to consider additional factors like satellite imagery, weather predictions, social-media sentiment, company acquisitions, and movement of shipment containers to apply to financial models for analysis and decision making."

With data at the centre of AI training and use, and technology making alternative types of data increasingly useful, financial institutions need to ensure that the data they are using is reliable. Moreover, such data must be processed lawfully, including in accordance with data protection laws where personal data is involved.

Last year, SEC Chair Gary Gensler warned that a financial crash triggered by AI was "nearly unavoidable". Whilst not the focus of this speech, the BoE has also previously expressed concerns about how AI could reduce market stability and amplify shocks.[2]

As financial institutions increasingly look to exploit the value of AI, the BoE's approach to the use of AI in its own financial modelling, together with its AI Consortium, provides an opportunity for firms to benchmark their own approach to AI governance.