Regulatory Perceptions of Artificial Intelligence and What They Mean for Community Banks

Oct. 30, 2023

Federal banking regulators have so far approached the use of artificial intelligence through the lens of existing regulations rather than creating a new set of guidelines. But based on various regulatory announcements, there are salient risks community banks should consider when using the technology.

Supervisory Approach to AI

AI Demystified: A Webinar Series

ICBA’s new AI Demystified webinar series, delivered in three parts, explores the market forces creating AI demand and the implications for community banks, including:

· The strengths and weaknesses of AI technology and its potential impact.

· How community banks are leveraging the technology to benefit customers and improve efficiency.

· The regulatory environment for banks adopting this new technology.


An interagency request for information identified various potential AI risks, such as:

· Cybersecurity risks related to the retention and processing of large amounts of customer data.

· Fair lending risks if using AI for underwriting.

· Risks related to oversight of third-party providers if banks purchase AI software from outside developers.

In general, banks seeking to use AI should review the FDIC’s Supervisory Guidance on Model Risk Management, which outlines the agency’s supervisory approach to all quantitative models, including those using AI.

Fair Lending and AI

Community bankers should also be familiar with the applicability of fair lending laws, including the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act.

ECOA prohibits discrimination based on race, color, religion, national origin, sex, marital status, or age. While ECOA plainly prohibits intentional discrimination (disparate treatment), it has also been applied to facially neutral practices that unintentionally have a discriminatory impact on protected classes (disparate impact).

Disparate impact liability presents the biggest challenge for lenders implementing AI-based underwriting: under a disparate impact theory, even a facially neutral algorithm that does not consider prohibited characteristics like race or sex can give rise to liability in the absence of discriminatory intent. Banks using AI in credit underwriting should understand that a model blind to an applicant’s race or sex may still violate fair lending laws if it produces a discriminatory effect.

Regulators Address Fair Lending Risks

Federal Reserve Vice Chair for Supervision Michael Barr recently gave a speech acknowledging both the potential and the risks of using AI in underwriting. In his remarks, Barr said alternative data sources “can provide a window into the creditworthiness” of certain individuals, and artificial intelligence techniques “have the potential to leverage these data at scale and at low cost to expand credit.” But he also cautioned that while “these technologies have enormous potential, they also carry risks of violating fair lending laws and perpetuating the very disparities that they have the potential to address.”

The Consumer Financial Protection Bureau has also warned that using chatbots to resolve customer service inquiries risks violating consumer financial protection laws if they provide customers with inaccurate or misleading information. In a report, the CFPB said poorly designed chatbots can cause widespread harm and undermine customer trust. Accordingly, the bureau said chatbots must comply with all applicable federal consumer financial laws, and entities may be liable for violating those laws when they fail to do so.

The Takeaway for Community Banks

Given the potential customer impact and the regulatory approach, community banks should continue to monitor the development of AI, mindful that any novel technology creates opportunities for greater efficiency along with compliance and reputational risks.

Fortunately, the community bank business model has always been predicated on the principle that customers are more than an account number, which is something that cannot be replicated by artificial intelligence. As such, community banks will continue to leverage technology where it makes sense while building on the foundations of relationship banking in ultimate service to customers.