Bank Policy Institute

08/12/2024 | Press release

BPI Comments on the Uses, Opportunities and Risks of AI in Financial Services

Dear Mr. Kim:

The Bank Policy Institute[1] appreciates the opportunity to respond to the request for information from the U.S. Department of the Treasury relating to the uses, opportunities and risks of artificial intelligence in the financial services sector.

BPI and its member banks are strongly committed to promoting the responsible use of AI given the potential benefits for consumers and the future of financial products and services. The adoption of AI within the financial services industry varies by institution and will continue to evolve as we learn more about AI, including with respect to generative AI. To that end, BPI supports Treasury's efforts to gain more information on the use of AI in the financial services industry, including current and potential AI use cases and how financial institutions currently assess and manage, and expect to assess and manage, AI-related risks. BPI has endeavored to provide specific and detailed information, where possible and available, with respect to emerging AI technologies (such as generative AI) given the RFI's focus on this topic.[2]

BPI looks forward to further engaging with Treasury on the topics addressed in this response. The evolution of the capabilities of AI, and the corresponding evolution of the use cases adopted by the financial services industry, require an ongoing dialogue among banks, Treasury and other regulators and stakeholders on the issues presented in the RFI.

I. General Use of AI in Financial Services

Question 1: Is the definition of AI used in this RFI appropriate for financial institutions? Should the definition be broader or narrower, given the uses of AI by financial institutions in different contexts? To the extent possible, please provide specific suggestions on the definitions of AI used in this RFI.

The RFI adopts the definition of AI used in President Biden's Executive Order on Safe, Secure, and Trustworthy Development and Use of AI (the "AI Executive Order") and as set forth in 15 U.S.C. 9401(3):

The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

This definition, which was developed with input across jurisdictions, sectors and disciplines, generally captures the breadth of modern AI technologies, including AI tools that banks and other financial institutions have significant experience using in a regulated environment. Nevertheless, we expect the generally accepted industry definition of "AI" and related terms to continue to evolve as the underlying technologies change.

For purposes of this response, we have adopted the definition of "AI" in the AI Executive Order but note, where applicable, when the specific reference is to generative AI[3] (or other forms of AI) to highlight risks and other considerations specific to such forms of AI. However, while BPI appreciates the need for a consistent definition, we caution against a prescriptive, "one-size-fits-all" approach to policymaking that attempts to regulate all technologies that may fall under this definition without regard to the different activities for which such technologies could be used across the financial services industry, or the existing laws and regulations that already cover such activities. As discussed throughout this response, banking organizations' risk management frameworks, which take a comprehensive approach to risk management as required by the banking agencies, are risk-based and technology-neutral and, as such, can be and have been effectively applied to use cases leveraging AI. Moreover, the statutory and regulatory requirements governing fair lending and consumer protection, data and privacy protection, and model risk management ("MRM") apply to all bank processes, irrespective of their inclusion of AI technologies.

AI is one of many tools that may be considered and ultimately used by banks to solve business problems, enhance operations or improve their products and services. Regulations designed specifically for any and all AI (however defined) would be counterproductive given the variety of activities in which AI is used and the complexity in defining aspects of AI that are subject to rapid changes. Instead, BPI advocates for a regulatory approach that evaluates the potential risks and outcomes of using AI for any particular activity in the same way any other methods or mechanisms used to carry out such activity would be evaluated, namely under the broader risk management framework and regulations. This approach would enable banks to safely and responsibly use the solution (whether or not involving AI) that is best suited to address a particular use case. For example, AI models should not be subject to greater compliance and regulatory burdens than traditional models or other technologies merely because they are characterized as such without evaluating whether the AI model and the particular use case in fact introduce greater risk compared to a traditional model or other technologies. Similarly, regulation that takes a risk-based approach can be adaptable to new technologies as they emerge (including future enhancements to AI technology), which encourages responsible innovation and ensures that consumers are protected irrespective of the technology that is used.

The full comment letter is available for download on BPI's website.

[1] The Bank Policy Institute is a nonpartisan public policy, research and advocacy group that represents universal banks, regional banks and the major foreign banks doing business in the United States. The Institute produces academic research and analysis on regulatory and monetary policy topics, analyzes and comments on proposed regulations and represents the financial services industry with respect to cybersecurity, fraud and other information security issues.

[2] Given BPI's membership, this response focuses on the uses, opportunities and risks of AI within the banking sector, although some of the principles discussed in this response may be relevant to other types of financial institutions.

[3] As defined in the AI Executive Order, "generative AI" means "the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content." The RFI defines "generative AI" as a kind of AI "capable of generating new content such as code, images, music, text, simulations, 3D objects, and videos", noting that it is often used to describe algorithms (such as ChatGPT) that can be used to create new content.