Lender Faces $2.5 Million Settlement for Algorithmic Discrimination

by Zachary Barlow

July 22, 2025

Algorithmic discrimination is a major risk of automated decision-making systems. We’ve seen these risks manifest internally with AI-driven hiring platforms. While HR remains a major risk area for algorithmic discrimination, the risk doesn’t stop there. Algorithmic discrimination can affect how companies serve their clients and, importantly, which clients companies choose to serve. This is particularly important for financial services firms, where access to services is highly regulated. Earnest Operations LLC recently agreed to a $2.5 million settlement with the Massachusetts Attorney General for allegedly using an artificial intelligence underwriting model that had a disparate impact on Black and Hispanic student loan borrowers, in part through its use of cohort default rate (“CDR”) data. A recent Debevoise & Plimpton memo discusses the details:

“CDR is produced by the U.S. Department of Education and describes the average rate of loan defaults associated with specific higher education institutions. The Massachusetts AG alleged that Earnest’s use of CDR in its underwriting model resulted in a disparate impact in approval rates and that loan terms with Black and Hispanic applicants were more likely to be penalized than White applicants.”

Companies should scrutinize how their AI algorithms make decisions, taking care to fully consider the downstream implications of each data point fed into the algorithm and how it is weighted. Disparate impacts are often unintended, and externalities must be fully thought through when designing algorithmic systems. It’s important to note that the AG’s allegations do not rest on a new AI-specific law. Instead, the settlement invokes traditional fair lending and consumer protection laws. This is another example of how new technologies can run afoul of laws that have been on the books for many years. Even if a company operates in a state that doesn’t specifically ban algorithmic discrimination, it may still face legal liability if its AI use produces discriminatory outcomes.
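
For illustration only, below is a minimal Python sketch of the kind of approval-rate comparison a company might run when auditing an underwriting model for disparate impact. The data, column names, and the choice of reference group are entirely hypothetical, and the 0.8 “four-fifths” benchmark noted in the comments is a rough rule of thumb borrowed from employment-discrimination guidance, not a legal standard for lending.

```python
# Sketch of a disparate impact check on approval rates, assuming a pandas
# DataFrame of hypothetical underwriting decisions with columns
# "group" (demographic group) and "approved" (True/False).
import pandas as pd

def approval_rate_disparity(decisions: pd.DataFrame,
                            reference_group: str = "White") -> pd.DataFrame:
    """Compare each group's approval rate to a reference group's rate.

    A ratio well below 1.0 (e.g., under the 0.8 "four-fifths" benchmark
    used as a rule of thumb in employment contexts) is a common red flag
    that warrants closer review of the model's inputs and weights.
    """
    rates = decisions.groupby("group")["approved"].mean()
    reference_rate = rates[reference_group]
    return pd.DataFrame({
        "approval_rate": rates,
        "ratio_vs_reference": rates / reference_rate,
    })

# Hypothetical example data; a real audit would use actual decision logs.
decisions = pd.DataFrame({
    "group":    ["White", "White", "Black", "Black", "Hispanic", "Hispanic"],
    "approved": [True,    True,    True,    False,   False,      True],
})
print(approval_rate_disparity(decisions))
```

A check like this only surfaces outcome gaps; understanding why they arise still requires tracing which inputs (such as institution-level default rates) drive the model’s decisions.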