4 Ways to Avoid Bias in AI Lending Models 

Takeaways from American Banker's Retail Banking Conference

For all its promise, artificial intelligence (AI) has its share of pitfalls.

One downside is the potential for AI algorithms to reinforce societal biases. Predictive models are only as good as the data they’re built on. And if that data reflects racial, ethnic or gender biases, so will the decisions or actions these models recommend.

This is especially relevant to lenders who use AI models to predict risk of default. Plenty of people have written about biased AI models in finance and other industries. But fewer have written about how to avoid bias when creating an AI model.

My firm, 2River Consulting Group, had the opportunity to help LiftFund, one of the nation's largest small business lenders, develop models that improved loan processing time by 10%. Nelly Rojas-Moreno, LiftFund's Chief Credit Officer, and I talked about avoiding bias in AI lending models at last month's American Banker's Retail Banking 2019 conference in Austin.

Here are the four messages we relayed at the conference:

1. Make sure your data set is diverse.

AI models are trained on historical data. A model is less likely to produce biased predictions if its training data contains a diverse range of lending cases.

A lender that typically serves high-income and established businesses and fewer low-to-moderate-income entrepreneurs may wind up with a model that excludes those entrepreneurs. If a significant number of applicants in this group are from a protected class, this can lead to bias. By the nature of its work, LiftFund, a Community Development Financial Institution (CDFI), has a diverse borrower base. But other lenders may need to sample cases carefully to ensure a diverse training set of borrowers for their AI models.
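As a concrete illustration, here is a minimal Python sketch of the kind of representation check and resampling this step implies. It is not LiftFund's pipeline; the pandas DataFrames and the "income_band" column are hypothetical stand-ins.

```python
# Hypothetical sketch: compare each group's share of the applicant pool with
# its share of the training set, then resample to close the gap.
import pandas as pd

def representation_gap(pool: pd.DataFrame, training: pd.DataFrame,
                       group_col: str) -> pd.DataFrame:
    """Groups whose training share falls well below their pool share
    are candidates for additional sampling."""
    pool_share = pool[group_col].value_counts(normalize=True).rename("pool_share")
    train_share = (training[group_col].value_counts(normalize=True)
                   .reindex(pool_share.index).fillna(0.0).rename("training_share"))
    return pd.concat([pool_share, train_share,
                      (pool_share - train_share).rename("gap")], axis=1)

def rebalance(training: pd.DataFrame, pool: pd.DataFrame,
              group_col: str, n: int, seed: int = 0) -> pd.DataFrame:
    """Resample the training set so each group's share matches the pool's.
    Groups with no historical cases are skipped and need new data collection."""
    target = pool[group_col].value_counts(normalize=True)
    parts = [training[training[group_col] == g]
             .sample(n=max(1, round(n * s)), replace=True, random_state=seed)
             for g, s in target.items()
             if (training[group_col] == g).any()]
    return pd.concat(parts, ignore_index=True)

# Toy demo: low-to-moderate-income applicants are under-represented in training.
pool = pd.DataFrame({"income_band": ["low_mod"] * 40 + ["high"] * 60})
training = pd.DataFrame({"income_band": ["low_mod"] * 5 + ["high"] * 95})
print(representation_gap(pool, training, "income_band"))
```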

2. Use multiple features for each case in your model. But make sure they’re the right ones.

LiftFund’s borrowers have an average FICO score below 600. Yet the lender’s repayment rate was a stunning 95% in 2017, and it increased to 97% in 2018 after the lender implemented the AI model. A model that relied only on credit scores would have denied most of these borrowers.

Adding a second variable, such as a third-party bankruptcy index, wouldn’t solve the problem. Credit score and bankruptcy risk provide important information about potential borrowers, but used alone they can act as a proxy for a protected class. In LiftFund’s model, we use nearly 80 additional variables measuring cash flow, credit, collateral and character to create a holistic decision model. And we review each decision to make sure no single variable is the key decision-driver.
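To make the proxy risk concrete, here is a minimal sketch of one common audit: if a handful of permitted features can predict a protected attribute with high accuracy, they are acting as a proxy for it even though it is never a model input. The synthetic data, feature construction, and scikit-learn model here are illustrative assumptions, not LiftFund's method.

```python
# Hypothetical proxy-variable check: the protected attribute is held for
# auditing only, never fed to the lending model itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def proxy_auc(X: np.ndarray, protected: np.ndarray, seed: int = 0) -> float:
    """AUC of a simple classifier predicting the protected attribute from
    candidate features; values well above 0.5 signal a proxy risk."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, protected, test_size=0.3, random_state=seed, stratify=protected)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

rng = np.random.default_rng(0)
protected = rng.integers(0, 2, 500)
# Toy features: the second one leaks the protected attribute.
X = np.column_stack([rng.normal(size=500),
                     protected + rng.normal(scale=0.5, size=500)])
print(f"proxy AUC = {proxy_auc(X, protected):.2f}")
```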

3. Make sure you can explain your model.

It is not enough to build a data set with diverse cases and dozens of features. If you cannot explain how each decision is made, you cannot be sure that the model is truly unbiased.

In a heavily regulated industry like lending, transparency in explaining denials is crucial. The more attributes you use, the more complicated your model becomes. But researchers have developed methods to explain the mathematical models used in machine learning. These techniques (e.g., LIME and game-theoretic approaches based on Shapley values) let you interrogate a machine learning model and identify the attributes that drive each of its outcomes, improving the transparency of decision models.
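As a sketch of what this looks like in practice, the snippet below applies the open-source shap library's TreeExplainer, a Shapley-value method, to a toy model. The data and model are synthetic stand-ins, not LiftFund's.

```python
# Hypothetical explainability sketch: per-feature Shapley-value contributions
# for individual decisions of a toy gradient-boosting classifier.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # contributions for 5 applicants

# Each row, together with the explainer's base value, sums to the model's
# margin for that case, so you can state which attributes drove a denial.
print(shap_values)
```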

4. Review and refresh your model every few months.

While AI has increased efficiency in many areas of lending, it is still no substitute for human judgment. LiftFund’s underwriting and loan officer team proves that. Remember, this team achieved 95% repayment from borrowers with average FICO scores below 600.

After creating the model, we made sure this team of experts reviewed the technology to ensure it reflected not only the data but also their judgment. We also planned to review the model every few months. Changes in the broader economy can affect the impact of some attributes. Keeping on top of those changes is important to ensure a sustainable, unbiased model.
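One common way to operationalize that periodic review is a population stability index (PSI) check on each attribute, which flags distributions that have drifted since training. The sketch below is illustrative, with made-up data and a rule-of-thumb threshold, rather than anything from LiftFund's process.

```python
# Hypothetical drift check: PSI between the training-era distribution of an
# attribute (e.g., cash flow) and the distribution among recent applicants.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one attribute."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train_cash_flow = rng.normal(loc=0.0, size=5000)
recent_cash_flow = rng.normal(loc=0.4, size=1000)   # the economy has shifted
print(f"PSI = {psi(train_cash_flow, recent_cash_flow):.2f}")
# Rule of thumb: PSI above ~0.25 suggests the attribute has drifted enough
# to warrant retraining or a manual review.
```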

If we learned anything from the experience, it’s that authentic human interaction remains key in this AI age. At the Retail Banking Conference last month, almost every panel and side discussion mentioned that banking customers – consumer or commercial, small or large – still want to deal with a person at all stages of the banking process.

Lending decisions affect people’s lives. People, with the help of AI, should make them wisely.
