ECOA: Balancing Ethics and Efficiency in AI-Driven Underwriting

Shaked Igal

09/02/2023

With the explosive launch of OpenAI's ChatGPT, there is a growing consensus that Artificial Intelligence (AI) is positioned to become a true game-changer in most industries and professions, and the financial sector is certainly no exception. One interesting area is traditional loan and credit underwriting, which has been made much more efficient by integrating AI tools and automation. At the same time, however, this practice raises serious ethical and legal concerns that lenders ought to pay attention to; namely, the potential for algorithmic discrimination.

A History of Discriminatory Lending in the US

“Lending discrimination runs counter to fundamental promises of our economic system. When people are denied credit simply because of their race or national origin, their ability to share in our nation’s prosperity is all but eliminated.” – Attorney General Merrick B. Garland

A few decades ago, loan officers could simply deny applicants because they “did not look creditworthy.” The lender (an employee of the bank) had complete discretion over the underwriting and loan approval process. As a result, human biases played a huge role in perpetuating and reinforcing systems of inequality and harmful patterns of discriminatory lending in America, the legacy of which is still felt today.

To mitigate human bias in the loan approval process and create a more neutral, fact-based system, the lending industry moved toward credit scoring. The FICO scoring system, which scores a person’s credit history on a scale from 300 (poor) to 850 (strong), is used in more than 90% of credit decisions made in the US. It takes into account certain financial data points – payment history, current level of indebtedness, types of credit used, length of credit history, and new credit accounts – to determine an applicant’s creditworthiness.
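
To make the mechanics concrete, here is a minimal Python sketch of a weighted-factor score. The category weights reflect FICO’s publicly stated approximate breakdown; everything else (the normalized 0-to-1 inputs and the linear scaling to 300–850) is a simplified illustration, not the actual proprietary formula.

```python
# Illustrative sketch of a weighted-factor credit score. The category
# weights reflect FICO's published approximate breakdown; the 0-to-1
# factor scores and the linear 300-850 scaling are simplified
# stand-ins, not the actual proprietary model.

FACTOR_WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_credit_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}
SCORE_MIN, SCORE_MAX = 300, 850

def illustrative_score(factors):
    """Map normalized factor scores (0.0 = poor, 1.0 = strong)
    onto the familiar 300-850 range via a weighted average."""
    weighted = sum(FACTOR_WEIGHTS[name] * factors[name]
                   for name in FACTOR_WEIGHTS)
    return round(SCORE_MIN + weighted * (SCORE_MAX - SCORE_MIN))

# Example: strong payment history, heavy current indebtedness.
applicant = {
    "payment_history": 0.95,
    "amounts_owed": 0.40,
    "length_of_credit_history": 0.70,
    "new_credit": 0.80,
    "credit_mix": 0.60,
}
print(illustrative_score(applicant))  # -> 684
```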

But rather than eliminating discriminatory lending, many critics contend, the scoring system itself (developed in the 1950s by Bill Fair and Earl Isaac) is inherently biased, failing to recognize that many of the financial instruments being measured were historically inaccessible to communities of color. They argue that FICO does not consider other important financial data points, such as an applicant’s timely payment of rent and utilities, as indicators of their ability to consistently pay their bills. Data shows that Black and Hispanic consumers have consistently and disproportionately lower FICO scores.

Does Algorithmic Lending Mitigate or Perpetuate Human Biases?

“The data used in current credit scoring models are not neutral. It’s a mirror of inequalities from the past. By using this data, we’re amplifying those inequalities today. It has striking effects on people’s life chances.” – Frederick Wherry, Director of the Dignity and Debt Network at Princeton University

The role of underwriting has become increasingly important as loan portfolios grow larger and more complex and the competitive environment becomes more challenging. To meet the demands of this dynamic market, underwriters have become more specialized, leveraging innovative technologies and advanced data analytics to improve their accuracy and speed in assessing the creditworthiness of potential borrowers. This helps ensure that loans go to creditworthy borrowers, reducing the risk of defaults and financial losses for the bank. Similar to credit scoring, one of the goals of implementing these technologies in the underwriting process is to eliminate discrimination by removing human biases.

But what happens when the AI systems themselves are inherently biased?

While reducing human bias in credit and risk allocation has the potential to eliminate discriminatory lending practices, it could also go in the other direction and reinforce cycles of biased credit allocation. That’s because AI systems are only as good as the data they are trained on. If an AI system is trained on biased data (for example, historical decisions that tied an individual’s creditworthiness or risk level to factors such as race, religion, gender, or location), it may unfairly perpetuate those biases in its underwriting decisions. And because a model can recover those patterns from proxy variables, such as a zip code that correlates with race, simply excluding protected attributes from the inputs does not guarantee unbiased outcomes.
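
A toy experiment makes this feedback loop visible. The sketch below (assuming numpy and scikit-learn; all data is synthetic and every number is invented for illustration) trains a model on historical approvals that penalized one group, withholds the protected attribute entirely, and shows the disparity surviving through a correlated proxy feature.

```python
# A synthetic demonstration (invented data, not a real lending model) of
# how a model trained on biased historical decisions reproduces the bias
# through a proxy feature, even with the protected attribute excluded.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)               # protected attribute (never a model input)
zip_area = np.where(rng.random(n) < 0.8,    # proxy: matches group 80% of the time,
                    group, 1 - group)       # like a segregated zip code
income = rng.normal(50, 10, n)              # identically distributed across groups

# Biased historical labels: equally qualified group-1 applicants
# were approved less often in the past.
p_approve = 1 / (1 + np.exp(-(income - 50) / 5)) - 0.25 * group
past_approved = (rng.random(n) < np.clip(p_approve, 0, 1)).astype(int)

# Train only on income and the zip proxy; `group` is withheld.
X = np.column_stack([income, zip_area])
model = LogisticRegression(max_iter=1000).fit(X, past_approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g} predicted approval rate: {pred[group == g].mean():.1%}")
# The approval gap persists: the model recovers `group` from the proxy.
```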

How Should We Think About ECOA Violations in AI-Driven Underwriting?

The Equal Credit Opportunity Act (ECOA), enacted in 1974, prohibits creditors from discriminating against applicants on the basis of race, color, religion, national origin, sex, marital status, or age, because an applicant receives income from a public assistance program, or because an applicant has in good faith exercised any right under the Consumer Credit Protection Act. Lenders found in violation of ECOA can face class-action lawsuits and be held liable for the wronged party’s actual damages as well as punitive damages.

But detecting ECOA violations in AI-driven underwriting is a complex and highly specialized task that requires in-depth knowledge of the law, as well as of the technology and data analysis techniques used in the loan underwriting process. Loan underwriting involves analyzing large amounts of data, all of which must be collected, stored, and analyzed in a way that makes violations identifiable in the first place; the complexity of that data, and of the algorithms used to make credit decisions, is precisely what makes violations hard to detect.
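
As a concrete example of the kind of analysis involved, here is a minimal sketch of one common screening statistic, the adverse impact ratio. The group labels and numbers are invented, and the 80% benchmark is borrowed from EEOC employment guidance rather than ECOA itself; real fair-lending reviews combine several statistical tests with legal analysis.

```python
# A first-pass disparate-impact screen: the adverse impact ratio (AIR)
# compares each group's approval rate to the highest group's rate. The
# 80% benchmark is borrowed from EEOC employment guidance and is used
# here only as an illustrative flag, not a legal standard under ECOA.
from collections import defaultdict

def adverse_impact_ratio(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs.
    Returns each group's approval rate divided by the highest group's
    approval rate; a ratio below ~0.8 warrants closer review."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes pulled from an underwriting log:
log = ([("A", True)] * 720 + [("A", False)] * 280
       + [("B", True)] * 450 + [("B", False)] * 550)
print(adverse_impact_ratio(log))  # {'A': 1.0, 'B': 0.625} -> review group B
```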

Detecting these types of violations requires a good understanding of the algorithms, models, and data inputs used to make credit decisions. But because the underlying technologies evolve constantly, even very experienced professionals may find it challenging to stay on top of new developments in the field. Any change to credit scoring (e.g., the addition of alternative data sources) can impact ECOA compliance.
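
To illustrate that point, the sketch below re-runs the adverse_impact_ratio screen from the previous example against two hypothetical model versions, one with and one without an alternative data source whose coverage happens to differ across groups. The scorers, cutoff, and data are all invented stand-ins for a lender’s real pipeline.

```python
# Sketch of a fairness regression check to re-run whenever the model or
# its inputs change. Reuses adverse_impact_ratio from the sketch above;
# the scorers, cutoff, and data are hypothetical stand-ins.
import random

random.seed(1)
pool = []
for _ in range(10_000):
    grp = random.choice("AB")
    credit = random.uniform(0, 1)                      # traditional credit factor
    alt = random.uniform(0, 1 if grp == "A" else 0.6)  # alt data thinner for group B
    pool.append((grp, credit, alt))

def score_v1(credit, alt):   # baseline: traditional data only
    return credit

def score_v2(credit, alt):   # candidate: blends in an alternative data source
    return 0.7 * credit + 0.3 * alt

CUTOFF = 0.5
for name, scorer in (("baseline", score_v1), ("with alt data", score_v2)):
    decisions = [(g, scorer(c, a) >= CUTOFF) for g, c, a in pool]
    print(name, adverse_impact_ratio(decisions))
# If a group's ratio drops after the change, the new data source may be
# introducing disparate impact and the change needs compliance review.
```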

Of course, one must also be able to identify and prove that certain discriminatory patterns exist, which can be a huge challenge in detecting ECOA violations in AI-driven underwriting. If an underwriting model is suspected of being discriminatory, lawyers may then have to cross-reference evidence from a variety of sources and databases in order to prove that a legal violation has occurred. As a legal data specialist, detecting and understanding these types of violations and patterns is exactly what I’m focused on. In today’s world, where AI is becoming more and more a part of every industry, the legal world needs to keep pace with technological advancements or it will fail to protect our rights.