
Combating AI bias in the financial sector

Using explainable AI models is critical for enterprises seeking to avoid bias in most sectors of the economy -- especially finance.

In the U.S., credit unions and banks that deny consumers credit cards, car loans or mortgages without a reasonable explanation can be subject to fines under the Fair Credit Reporting Act. Even so, AI bias remains pervasive in the finance industry.

It's a problem that some government agencies are trying to address, but there is no easy fix, said Moutusi Sau, an analyst at Gartner.

"Without the existence of common standards in the financial services industry, it becomes hard to measure what is treated as bias," Sau said. "The solution of the bias issue goes down to modeling and should start at pre-modeling level, taking it to modeling and then post-modeling measures of deviations."

Pre-modeling explainability can eliminate bias in the data set; explainable modeling lets users interpret complex models directly; and post-modeling explainability provides explanations for models that have already been developed, Sau wrote in a 2021 research paper.
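
As a concrete illustration of the pre-modeling stage, a training data set can be screened for bias before any model is fit. A minimal sketch in Python computes the disparate impact ratio, one common screening metric; the column names, toy data and threshold are illustrative, not taken from Sau's paper.

```python
# A minimal pre-modeling bias screen: disparate impact ratio on a toy
# loan data set. Column names and data are illustrative assumptions.
import pandas as pd

def disparate_impact_ratio(df, group_col, outcome_col, privileged, unprivileged):
    """Approval rate of the unprivileged group divided by that of the
    privileged group; values below ~0.8 are a common red flag."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[unprivileged] / rates[privileged]

loans = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1],
})

ratio = disparate_impact_ratio(loans, "group", "approved",
                               privileged="A", unprivileged="B")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67 here -- worth investigating
```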

Because government agencies, the financial sector and IT professionals have not reached consensus on how to create fair models, companies approach the problem in differing ways.

Zest AI

"Financial services are particularly problematic because of the history of bias practices," said Jay Budzik, CTO at Zest AI, during a panel discussion about equity at the ScaleUp:AI conference on April 7.

Zest AI is a financial services vendor that develops machine learning software for credit underwriting.

"We take the view that credit is broken -- that the math that was invented in the '50s and really sort of popularized FICO [the credit reporting score] was great at the time, but it also reflected a certain set of values and social norms," Budzik said in an interview.

The vendor, based in Burbank, Calif., provides software and services that enable banks to take advantage of a machine learning model's predictive power to create a scoring model that is less racially biased and more accurate.

Its platform uses game theory, a branch of applied mathematics that analyzes situations in which players make interdependent decisions. Zest AI applies the method to analyze how machine learning models reach lending decisions, in support of fair lending.

"For fair lending and race discrimination, that's really important too because you want to make sure that your model isn't penalizing people ... on the basis of something improper," Budzik said in the interview.

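The article does not say which game-theoretic technique Zest AI uses; the best-known one for explaining model decisions is the Shapley value, implemented in the open source shap library. A minimal sketch on synthetic data, with illustrative feature meanings:

```python
# A sketch of game-theory-based explanation using Shapley values via the
# `shap` library (an assumption -- the article does not name Zest AI's tool).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic underwriting features; think credit score, income, debt ratio.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Shapley values from cooperative game theory attribute each decision to
# the input features, so a lender can see why an applicant was scored the
# way they were -- and check that improper factors played no role.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # per-feature contribution to the first five decisions
```
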
In addition to using game theory, the vendor trains models to focus not only on accuracy but also on fairness -- a method it calls "adversarial debiasing."

This enables Zest AI to inject the notion of fairness into its model-training process, so that each cycle of data the model sees is evaluated not only on accuracy but also on fairness toward protected groups, including Black and Hispanic people, immigrants and others. The model then receives feedback from a second, or "helper," model, which tells it whether it is being fair.

"This method ... makes use of all the power of machine learning and the fact that it can explore billions of alternatives in order to find the one that achieves a fair outcome, but still provides that high level of accuracy," Budzik said.

But adversarial debiasing is not foolproof, he noted.

"Sometimes we're not able to find a model that's fairer that is just as accurate," he said. This leads to a compromise approach in which a significant amount of accuracy or even a small amount of accuracy is traded for fairness.

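A minimal sketch of the adversarial debiasing idea, in PyTorch on toy data (this is not Zest AI's code): a predictor is trained for accuracy while a second "helper" model tries to recover protected-group membership from its scores, and the predictor is also trained to defeat it.

```python
# A sketch of adversarial debiasing (PyTorch; all names and the toy data
# are illustrative assumptions, not Zest AI's actual implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 1000
X = torch.randn(n, 4)                      # non-PII applicant features
protected = (torch.rand(n) < 0.5).float()  # protected-group indicator
# Toy labels correlated with both the features and group membership.
y = ((X[:, 0] + 0.3 * protected + 0.1 * torch.randn(n)) > 0).float()

predictor = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness weight: higher trades more accuracy for fairness

for epoch in range(200):
    # 1) Train the adversary ("helper" model) to recover group membership
    #    from the predictor's scores -- its success signals unfairness.
    scores = predictor(X).detach()
    opt_a.zero_grad()
    bce(adversary(scores).squeeze(1), protected).backward()
    opt_a.step()

    # 2) Train the predictor for accuracy while fooling the adversary,
    #    so its scores carry less information about the protected group.
    opt_p.zero_grad()
    scores = predictor(X)
    task_loss = bce(scores.squeeze(1), y)
    adv_loss = bce(adversary(scores).squeeze(1), protected)
    (task_loss - lam * adv_loss).backward()
    opt_p.step()
```

The lam weight makes the compromise Budzik describes explicit: raising it surrenders more accuracy in exchange for fairness.
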
Another approach to avoiding AI bias in finance

Credit Karma, a brand of Intuit, tries to eliminate bias by not using personally identifiable information (PII), said Supriya Gupta, general manager for recommendations at the personal finance company.

Credit Karma partners with financial institutions that adhere to fair lending practices, Gupta said. Instead of using personal identifiers such as gender and race, the company uses other attributes to provide financial recommendations for the more than 120 million consumers it works with.

The attributes include a person's credit score, transactions, assets, liabilities, loans, income and how the person pays bills.

Credit Karma runs deep learning models over these attributes to generate 35 billion model predictions a day, according to Gupta. Those predictions power the AI engine that estimates whether members will be approved for the offers they see on Credit Karma; the recommendations also give members insight into ways they might improve their personal finances.
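
A sketch of what an approval-odds model built only on such non-PII attributes might look like; the feature names, data and model choice are hypothetical stand-ins, not Credit Karma's system.

```python
# A sketch of an approval-odds model built only on non-PII attributes
# (features and data are illustrative; this is not Credit Karma's code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000

# Non-PII features in the spirit of the article: credit score, income,
# debt load and bill-payment behavior. No gender or race inputs.
features = np.column_stack([
    rng.normal(680, 60, n),         # credit_score
    rng.normal(55_000, 15_000, n),  # annual_income
    rng.normal(0.3, 0.1, n),        # debt_to_income
    rng.integers(0, 2, n),          # pays_bills_on_time
])
approved = (features[:, 0] > 650).astype(int)  # toy label

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(features, approved)

# Estimated approval odds for a new member; scores like these would drive
# which offers the member sees.
new_member = np.array([[710, 62_000, 0.25, 1]])
print(f"approval odds: {model.predict_proba(new_member)[0, 1]:.0%}")
```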

"That's really the power of AI," Gupta said.

© 2022 LeackStat.com