Alert
CFPB Takes Adverse Action Against Machine Learning
The Consumer Financial Protection Bureau (CFPB) has been contemplating data, algorithms, and machine learning for years. In 2017, as part of a field hearing on alternative data, the CFPB issued a request for information in part to explore “whether reliance on some types of alternative data could result in discrimination, whether inadvertent or otherwise, against certain consumers.”
In 2020, the CFPB blogged that Regulation B’s flexibility can be compatible with AI algorithms, because “although a creditor must provide the specific reasons for an adverse action… a creditor need not describe how or why a disclosed factor adversely affected an application,” how a factor relates to creditworthiness, or use any particular list of adverse action reasons. It also encouraged creditors to use the CFPB’s No-Action Letter or Compliance Assistance Sandbox policies to reduce potential uncertainty.
Those options no longer exist, and the blog now comes with a warning label: “ECOA and Regulation B do not permit creditors to use technology for which they cannot provide accurate reasons for adverse actions. See CFPB Circular 2022-03 for more information.”
CFPB Circular 2022-03 is the third issuance in the CFPB’s recently announced program for providing guidance to other agencies on how it intends to enforce federal consumer financial law. In the Circular, the CFPB opined that “ECOA and Regulation B do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions.” ECOA requires creditors to disclose to denied applicants the principal reason(s) for the adverse action. Creditors who use complex algorithms, including artificial intelligence or machine learning, “sometimes referred to as uninterpretable or ‘black-box’ models,” may not be able to accurately identify the specific reasons for denying credit or taking other adverse actions, the CFPB said, which is “not a cognizable defense against liability.”
It may go without saying, but this is a good time for creditors to evaluate their adverse action notices and practices.
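One illustration, offered strictly as a sketch and not as a method the Circular endorses: a technique practitioners sometimes use to derive candidate “principal reasons” from a complex model is to rank each feature’s contribution to an individual applicant’s score, for example with SHAP values. The feature names, toy data, and deny/approve coding below are hypothetical, and the open-source shap and scikit-learn packages are assumed.

```python
# Hypothetical sketch: ranking per-applicant feature contributions (SHAP
# values) as candidate adverse action reasons. Illustrative only; not an
# assertion that this approach satisfies ECOA or Regulation B.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Invented credit attributes and toy data (class 1 = deny).
feature_names = ["utilization", "delinquencies", "inquiries", "age_of_file"]
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Attribute a single (hypothetically denied) applicant's score to features.
explainer = shap.TreeExplainer(model)
applicant = X[:1]
contributions = explainer.shap_values(applicant)[0]  # one value per feature

# Features pushing hardest toward denial become candidate principal reasons.
ranked = sorted(zip(feature_names, contributions), key=lambda t: -t[1])
for name, value in ranked[:2]:
    print(f"Candidate principal reason: {name} (contribution {value:+.3f})")
```

Whether an attribution method like this yields reasons that are “specific and accurate” in the Circular’s sense is precisely the question a creditor would need to answer for its own models.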
The CFPB regularly enforces fair lending laws (and supervises for compliance with them, writes rules under them, and so on), and adverse action disclosure requirements are not new. The CFPB cited adverse action violations in a September 2021 enforcement action and, in a December 2021 amicus brief and a May 2022 advisory opinion, argued that adverse action and other Regulation B and Equal Credit Opportunity Act protections apply to an “applicant” throughout the credit cycle, and have since 1974.
Although the Circular seems to address a technical fair lending requirement, the developments discussed below suggest that it is primarily intended to disrupt creditors’ use of algorithms.
Recent Developments
Casting a Wide Industry Net
The CFPB announced on March 16 that it will scrutinize discriminatory conduct that violates the federal prohibition against unfair practices in all consumer finance markets, including credit, servicing, collections, consumer reporting, payments, remittances, and deposits. It revised its Unfair, Deceptive, or Abusive Acts or Practices (UDAAP) Supervision and Examination Manual accordingly, in part directing examiners to document the models, algorithms, and decision-making processes used in connection with consumer financial products and services.
In a blog issued the same day, the CFPB said that “new manifestations of discrimination, embedded within systems and technologies, harm communities even where such acts are not visible,” and that the CFPB would “focus on the widespread and growing reliance on machine learning models throughout the financial industry and their potential for perpetuating biased outcomes.”
Note: Not only does the CFPB’s interpretation allow it to evaluate companies’ practices for potentially discriminatory conduct outside of the fair lending context, but it also provides a basis for finding conduct illegal when the CFPB evaluates a company’s algorithms and decision-making processes.
Waking the CFPB’s Dormant Authority
In comments regarding the Interagency Task Force’s report on Property Appraisal and Valuation Equity, CFPB Director Chopra stated:
“[We] will be working to implement a dormant authority in federal law to ensure that algorithmic valuations are fair and accurate. We have already begun to solicit input from small businesses in order to develop a proposed rule, and we are committed to addressing potential bias in these automated valuation models. …We will also be taking additional steps through our research, through our supervisory examinations of financial institutions and their service providers, and through law enforcement actions.”
The CFPB announced in April that it would utilize its (dormant) authority to examine nonbank financial companies that the CFPB has “reasonable cause to determine pose risks to consumers.” (“Reasonable cause” and “risks to consumers” are undefined terms.) The CFPB implemented this risk-based examination authority in a 2013 rule but “has now begun to invoke this authority,” which will allow it to supervise entities “outside the existing nonbank supervision program.”
Note: Based on the CFPB’s statement on appraisals quoted above, appraisal companies would appear to be on the CFPB’s radar, but this no-longer-dormant nonbank supervision authority conceivably covers fintechs, finance companies, and any other provider of a consumer financial product or service, as well as their service providers.
Whistleblowing and Redlining
In December 2021, the CFPB encouraged tech workers to blow the whistle: “data and technology, marketed as Artificial Intelligence (AI), have become commonplace in nearly every consumer financial market. These technologies can help intentional and unintentional discrimination burrow into our decision-making systems, and whistleblowers can help ensure that these technologies are applied in law-abiding ways.”
At an October 2021 joint press conference regarding an old-school redlining enforcement matter, Director Chopra said:
[W]e will also be closely watching for digital redlining, disguised through so-called neutral algorithms, that may reinforce the biases that have long existed. Technology companies and financial institutions are amassing massive amounts of data and using it to make more and more decisions about our lives, including loan underwriting and advertising. While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening.
In November 2021, the CFPB issued an advisory opinion stating that a consumer reporting company’s practice of matching consumer records solely through the matching of names is illegal under the Fair Credit Reporting Act. In accompanying remarks, Director Chopra stated: “When background screening companies and their algorithms carelessly assign a false identity to applicants for jobs and housing, they are breaking the law.”
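As a toy illustration of the matching failure the advisory opinion targets (the records and names below are invented, not any company’s actual system), matching on name alone can attach one consumer’s history to a different person, while requiring a second identifier avoids the false match:

```python
# Hypothetical records: two different consumers who share a name.
records = [
    {"name": "Jane Smith", "dob": "1980-01-02", "item": "eviction record"},
    {"name": "Jane Smith", "dob": "1994-07-21", "item": "no record"},
]
applicant = {"name": "Jane Smith", "dob": "1994-07-21"}

# Name-only matching returns both records, assigning a false identity.
name_only = [r for r in records if r["name"] == applicant["name"]]
print(len(name_only))  # 2 -- includes the other Jane Smith's history

# Adding a second identifier (date of birth) returns only the right record.
name_and_dob = [
    r for r in records
    if r["name"] == applicant["name"] and r["dob"] == applicant["dob"]
]
print(len(name_and_dob))  # 1
```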
Also in November, in comments to the CFPB’s Consumer Advisory Board, Deputy Director Martinez said:
We know one of the main risks currently emerging is that of Big Tech’s entry into consumer markets, including consumer reporting. While we all know technology can create innovative products that benefit consumers, we also know the dangers technology can foster, like black box algorithms perpetuating digital redlining and discrimination in mortgage underwriting.
Religious Discrimination and Fair Lending
In a January 2022 blog post regarding religious discrimination, the CFPB said:
We’re particularly concerned about how financial institutions might be making use of artificial intelligence and other algorithmic decision tools. For example, let’s say a lender uses third-party data to analyze geolocation data to power their credit decision tools. If the algorithm leads to an applicant getting penalized for attending religious services on a regular basis, this could lead to sanctions under fair lending laws.
In Director Chopra’s April 2022 Congressional testimony, he said: “The outsized influence of such dominant tech conglomerates over the financial services ecosystem comes with risks and raises a host of questions about privacy, fraud, discrimination, and more.”
The CFPB’s May 2022 annual Fair Lending Report to Congress addressed algorithms multiple times, including a comment from Assistant Director Ficklin, who is “skeptical of claims that advanced algorithms are the cure-all for bias in credit underwriting and pricing,” and these closing remarks:
Most importantly, the CFPB is looking ahead to the future of financial services markets, which will be increasingly shaped by predictive analytics, algorithms, and machine learning. While technology holds great promise, it can also reinforce historical biases that have excluded too many Americans from opportunities. In particular, the CFPB will be sharpening its focus on digital redlining and algorithmic bias. As more technology platforms, including Big Tech firms, influence the financial services marketplace, the CFPB will be working to identify emerging risks and to develop appropriate policy responses.
Summary
Addressing, and perhaps preventing, the impacts of algorithms seems to occupy a substantial part of the CFPB’s attention, or at least its creativity. In less than a year, the CFPB has interpreted the existing prohibition against unfairness to include discrimination (particularly algorithm-related discrimination), expanded its authority over new kinds of conduct, and revived a sidelined rule permitting it to examine companies outside of its normal supervisory authority. It has exhorted the public to come forward with rulemaking petitions or as tech whistleblowers, and it has encouraged other regulators to join its efforts. (See the CFPB’s recent interpretive rule describing states’ authorities and its new submission process for public rulemaking petitions.)
The CFPB recently terminated several company-friendly policies of the past, dismissing them as ineffective. Under the program it created last month, it issued the adverse action Circular, which addresses routine disclosure processes but, at the least, will prompt very careful consideration of the use of credit decision algorithms. It has also issued two advisory opinions and begun an algorithmic appraisal valuation rulemaking.
The CFPB’s algorithm-related changes may reach all segments of the market for consumer financial services, from lending and marketing to credit reporting, appraisal practices, appraisal companies, deposit taking, and so on. Entities within the CFPB’s jurisdiction should consider how these developments may affect their business practices.
Finally, the CFPB’s big data and discrimination concerns may overlap, but they are not identical. So while considering the fair lending and other implications of AI, algorithmic decision tools, machine learning, big data, and black-box models, consumer financial service providers also should keep traditional fair lending risks in mind.