The views expressed by contributors are their own and not the view of The Hill

Making equitable access to credit a reality in the age of algorithms


Last week we saw yet another reminder of the ways algorithms will perpetuate historical bias if left unchecked and unrestrained. The Department of Housing and Urban Development’s (HUD) proposed rule released Monday announced the intent to reduce key protections afforded to consumers under the Fair Housing Act. The new rule would revise the HUD interpretation of Title VIII of the Civil Rights Act of 1968 to eliminate the disparate impact standard, which prohibits policies or procedures that appear to be neutral but actually result in disproportionate adverse impact on protected groups.

The proposed rule also eliminates a key protection for consumers by shielding institutions that use algorithms resulting in bias, as long as the challenged model is produced, maintained, or distributed by a recognized third party. Many of the AI-based financial products used by institutions today are not created in-house, but rather by outside vendors. Thus, if a bank were to use an AI product to determine whether to award a loan, and it turned out that the product employed algorithms that resulted in race- and gender-based loan decisions, the bank would not be liable for its racist, sexist practices. This not only limits consumers’ ability to defend themselves from discriminatory practices, but also eliminates the incentive for institutions to investigate whether the algorithms are discriminatory in the first place.

Even before this rule change came to light, the ability of algorithms to limit opportunity had become increasingly clear on the national stage. Presidential hopefuls have unveiled plans to address wealth disparities between white and non-white Americans, as well as between men and women, with particular interest in boosting the disbursement of loans to underserved populations. The truth is that access to loans has long been a critical way for Americans to build wealth, yet too often patterns of loan allocation have reinforced, rather than rectified, patterns of inequality by neglecting underserved populations. Some candidates propose to respond to this challenge with new regulations on non-bank lenders and with credit scores that incorporate rent and phone bill payments. While comprehensive, these plans would be greatly enhanced if they took into account the role of artificial intelligence in financial services.

There’s no question: artificial intelligence is changing the landscape of financial services. Some changes are positive. By identifying patterns that humans would otherwise miss, algorithms can detect suspicious activity, optimize portfolios to minimize systemic risk, recommend strategic investments, and assess a borrower’s creditworthiness. These innovations can increase opportunity and expedite and expand the disbursement of loans to populations that have previously been “credit invisible.” But it’s now clear that AI can also further income and opportunity disparities. To avoid this fate, both industry and government need to take on two central challenges: (1) managing algorithms’ use of variables that are de facto proxies for protected characteristics; and (2) explaining the “black box” of algorithms to prospective borrowers.

First, financial institutions must be held accountable to ensure that their algorithms do not use factors that correlate closely with race or gender and thus become de facto proxies for those characteristics. HUD’s proposed rule announced Monday would make this goal harder to realize. In principle, race- and gender-based decisions are clearly and unequivocally illegal. The Fair Housing Act and Equal Credit Opportunity Act prohibit lenders from considering race or gender directly in loan decisions. Yet there are several other factors the financial sector may use that, while not explicitly equivalent to race or gender, correlate with those characteristics. Institutions may decide, for example, that unbanked individuals are less creditworthy. Seventeen percent of African Americans and 14 percent of Hispanic Americans are unbanked, compared to just 3 percent of white Americans. Fifteen percent of unmarried female-headed family households are also unbanked, so using banking status as a factor in loan decisions would unintentionally disadvantage people of color and women.

Financial institutions must be encouraged, if not required, to interrogate each and every factor an algorithm uses to make credit decisions, to ensure that none is a proxy for a protected characteristic. They should not be protected from liability behind the shield of a third party if we intend to root out unjust, biased determinations of who should benefit from opportunities such as home ownership.
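To illustrate what that interrogation could look like, here is a minimal sketch of a proxy screen, assuming a lender keeps protected attributes out of the model but retains them in a held-out testing dataset. The column names, data, and any cutoff for "too correlated" are hypothetical, not any institution’s actual validation process.

```python
# Minimal proxy-screening sketch (hypothetical column names and toy data).
# Idea: protected attributes stay out of the model but are retained for
# testing, and each candidate factor is checked for how strongly it tracks
# those attributes across groups.
import pandas as pd

def proxy_report(df: pd.DataFrame, feature: str, protected: str) -> pd.Series:
    """Average value of a candidate factor within each protected group."""
    return df.groupby(protected)[feature].mean()

# Toy data echoing the "unbanked" example from the text.
applicants = pd.DataFrame({
    "race":     ["white", "white", "black", "black", "hispanic", "hispanic"],
    "unbanked": [0,        0,       1,       0,       1,          0],
})

rates = proxy_report(applicants, feature="unbanked", protected="race")
print(rates)
# If unbanked rates differ sharply across groups, treating "unbanked" as a
# creditworthiness factor acts as a de facto proxy and needs scrutiny.
```

A screen like this does not by itself prove discrimination; it flags which factors deserve the justification and documentation that disparate impact analysis would otherwise demand.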

Second, financial institutions must make denial notices explainable to borrowers. When a bank denies a prospective borrower a loan, that individual is due a notice explaining why the decision was made (a requirement set by the Fair Credit Reporting Act). However, as algorithms increasingly take over the role of determining creditworthiness, it will grow more difficult for this standard to be met. Federal Reserve Governor Lael Brainard argued in 2018 that “Compliance with [FCRA] requirements implies finding a way to explain AI decisions,” yet those decisions are often quite opaque. Even if the algorithms an institution uses are devoid of explicit bias, the institution must be able to accurately describe how decisions were reached to ensure they aren’t rooted in undetected, illegal bias. Institutions should therefore be required to ensure their denial notices meet standards for explainability. Specifically, they should be required to list the individual factors used to make a decision and, if applicable, the factor or factors that were determinative. Both Bank of America and Capital One have committed themselves to improving explainability. More banks should be encouraged to follow their lead, whether through regulation, legislation, or public pressure.
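One way such a standard could be operationalized is sketched below: factor-level denial reasons for a simple linear scoring model, listing each factor’s contribution and flagging the determinative one. The feature names, weights, baseline, and approval threshold are hypothetical; real lenders rely on vetted adverse-action reason-code methodologies, and this is only an illustration of the concept.

```python
# Minimal sketch of factor-level denial reasons for a hypothetical linear
# credit-scoring model. Weights, features, and threshold are invented.
WEIGHTS = {"payment_history": 2.0, "utilization": -1.5, "inquiries": -0.8}
BASELINE = {"payment_history": 0.9, "utilization": 0.3, "inquiries": 1.0}
THRESHOLD = 1.0

def score(applicant: dict) -> float:
    """Weighted sum of the applicant's factors."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def denial_reasons(applicant: dict) -> list[tuple[str, float]]:
    """Rank factors by how much they pulled the score below a baseline applicant."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    # Most negative contribution first: the determinative factor.
    return sorted(contributions.items(), key=lambda kv: kv[1])

applicant = {"payment_history": 0.6, "utilization": 0.8, "inquiries": 4.0}
if score(applicant) < THRESHOLD:
    for factor, delta in denial_reasons(applicant):
        print(f"{factor}: contribution {delta:+.2f}")
    # The first factor printed is the determinative one in this sketch.
```

The point is not the arithmetic but the output: a notice that names every factor considered and identifies the one that drove the denial is far more useful to a borrower, and far easier to audit for bias, than an unexplained score.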

While we can hope that financial institutions will do the right thing and support as many people from as many backgrounds as possible, they cannot be expected to do more than the law and regulations require. We must accept that they are not in the business of public service. Private financial institutions must answer to their boards and shareholders, who hold them accountable for following the law, yes, but also for maximizing profit to the fullest extent possible. If we want to ensure that the ‘fullest extent’ does not mean gender- and/or race-based practices, then it is incumbent upon us to do everything in our power to solidify this principle. For most of us, that power rests in our wallets as consumers and in our voices as citizens who can demand that our representatives enact sufficient laws and regulations.

Government must pay close attention to the two challenges outlined above, particularly since the regulatory and legal regimes protecting prospective borrowers are increasingly fragile.

Congress should take three steps to bolster the legal avenues available to prospective borrowers. First, it should consider new legislation to clarify the scope of what can be considered a “legitimate business need.” Second, to safeguard against weakening support from the courts and the executive branch, it should enshrine into law that the doctrines of both disparate treatment and disparate impact can be invoked by prospective borrowers against banks that use algorithms, without providing the false shield of hiding behind a third-party vendor or other institution. Third, Congress should create a standard for explainability to cover denial of credit decisions. The opacity of AI must not render such denial notices as incomprehensible as GDPR cookie notices and acceptance pop-ups, or worse, let them become a shield for bias against historically targeted, protected classes.

As forums of discrimination shift from the physical to the digital, both governments and businesses must adapt. We cannot neglect the unique ways that digital tools, if left unfettered, will enable bias to fester and perpetuate, harming individuals, protected classes, and our economy as a whole as prospective borrowers become excluded from the American Dream.

Miriam Vogel is executive director of EqualAI.
