
Algorithmic lending platforms promise objectivity through data-driven decisions, but a new study exposes how excluding protected characteristics can paradoxically create discriminatory outcomes. Researchers from Stanford and Harvard analyzed approximately 80,000 personal loans from a major U.S. fintech platform, introducing a novel profit-based measure of lending discrimination that reveals hidden biases in machine learning underwriting systems.

The findings, detailed in A Profit-Based Measure of Lending Discrimination, show that loans to male and Black borrowers yielded significantly lower profits than loans to other groups, indicating that these borrowers received unexpectedly favorable lending terms. The researchers trace this counterintuitive result directly to model miscalibration (a rough calibration check is sketched after the list below):

  • The platform's algorithm underestimated credit risk for Black borrowers by 11-17%
  • It simultaneously overestimated credit risk for women by 7-9%
  • The resulting interest rate disparities averaged 1.8 percentage points
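To see how miscalibration of this kind can be surfaced in practice, here is a minimal per-group calibration check. It is an illustrative sketch, not the authors' methodology; the DataFrame `loans` and the column names `group`, `pred_default_prob`, and `defaulted` are assumptions.

```python
# Rough per-group calibration check on a hypothetical loan-level DataFrame.
# Assumed columns: 'group' (borrower demographic), 'pred_default_prob'
# (model's predicted probability of default), 'defaulted' (0/1 outcome).
import pandas as pd

def calibration_by_group(loans: pd.DataFrame) -> pd.DataFrame:
    """Compare mean predicted default risk to the realized default rate per group."""
    summary = loans.groupby("group").agg(
        mean_predicted=("pred_default_prob", "mean"),
        realized_rate=("defaulted", "mean"),
        n_loans=("defaulted", "size"),
    )
    # Negative gaps mean the model *underestimates* risk for that group (the
    # pattern the study reports for Black borrowers); positive gaps mean risk
    # is overestimated (the pattern reported for women).
    summary["calibration_gap"] = summary["mean_predicted"] - summary["realized_rate"]
    summary["relative_gap_pct"] = 100 * summary["calibration_gap"] / summary["realized_rate"]
    return summary

# Example usage:
# print(calibration_by_group(loans).sort_values("relative_gap_pct"))
```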

"This creates a tension between competing notions of fairness," the authors state, highlighting a core dilemma: Explicitly including race and gender in models corrected the miscalibration but conflicts with fair lending regulations like the Equal Credit Opportunity Act that prohibit such variables.

The Regulatory-Technical Clash
The research demonstrates how a common technical remedy for bias (excluding protected attributes) can inadvertently harm the very groups it aims to protect. As fintech increasingly dominates consumer credit, the study exposes critical limitations in current compliance paradigms:

  • Blind attribute removal fails when proxy variables embed historical biases (see the leakage check sketched after this list)
  • Profit-based auditing provides concrete business incentives for fairness
  • Miscalibration stems from training data reflecting societal inequities
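The proxy problem in the first point can be probed with a standard leakage diagnostic: check how well the retained "neutral" features predict the excluded attribute. The sketch below is a generic test, not something prescribed by the study; the feature matrix `X_neutral` and the binary array `protected` are assumed inputs.

```python
# Proxy-leakage diagnostic: if the "neutral" features predict the excluded
# protected attribute well above chance, dropping the attribute alone does not
# remove its influence. AUC near 0.5 suggests little leakage; values well
# above 0.5 suggest proxies are encoding group membership.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def proxy_leakage_auc(X_neutral: np.ndarray, protected: np.ndarray) -> float:
    """Cross-validated AUC for predicting a binary protected attribute from neutral features."""
    clf = GradientBoostingClassifier()
    scores = cross_val_score(clf, X_neutral, protected, cv=5, scoring="roc_auc")
    return float(scores.mean())

# Example usage (hypothetical arrays):
# auc = proxy_leakage_auc(X_neutral, is_group_member)
# print(f"Proxy leakage AUC: {auc:.3f}")
```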

Industry Implications
For developers and fintech leaders, the study points to three priorities:
1. Audit redesign: Shift from demographic parity checks to profit/disparity impact metrics (a minimal example follows this list)
2. Causal modeling: Prioritize techniques that distinguish discriminatory proxies
3. Regulatory dialogue: Advocate for nuanced frameworks accommodating ML realities
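As a rough illustration of the first recommendation, the sketch below computes a demographic-parity style approval gap alongside a simple profit gap in the spirit of the paper's measure. The column names, the reference-group label, and the simplified profit comparison are all assumptions, not the authors' estimator.

```python
# Two audit metrics side by side on a hypothetical loan-level DataFrame:
# a demographic-parity style approval gap, and a profit gap loosely inspired by
# the paper's profit-based measure (simplified; not the authors' exact method).
# Assumed columns: 'group', 'approved' (0/1), 'profit' (realized profit on funded loans).
import pandas as pd

def audit_metrics(loans: pd.DataFrame, reference_group: str) -> pd.DataFrame:
    approval = loans.groupby("group")["approved"].mean()
    profit = loans[loans["approved"] == 1].groupby("group")["profit"].mean()
    out = pd.DataFrame({"approval_rate": approval, "mean_profit": profit})
    # Gaps relative to a chosen reference group. A significantly *lower* mean
    # profit for a group suggests it received unexpectedly favorable terms;
    # a higher mean profit suggests unexpectedly harsh terms.
    out["approval_gap"] = out["approval_rate"] - out.loc[reference_group, "approval_rate"]
    out["profit_gap"] = out["mean_profit"] - out.loc[reference_group, "mean_profit"]
    return out

# Example usage (group labels are hypothetical):
# print(audit_metrics(loans, reference_group="reference"))
```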

The findings underscore that truly fair algorithms require moving beyond simplistic fairness definitions toward systems that acknowledge and correct embedded societal biases—a challenge demanding equal parts technical innovation and policy evolution.

Source: Coots, M., Bartlett, R., Nyarko, J., & Goel, S. (2025). A Profit-Based Measure of Lending Discrimination. arXiv preprint arXiv:2512.20753.