Unintentional Biases in Big Data: The Case of Bank Loans and Insurance

By John Iwuozor
Underwriting, lending and insurance premium calculations are all areas where the financial and insurance industries need to keep a watchful eye out for data bias.

Big data promises a lot, but one of its major pitfalls is that it can unintentionally perpetuate unfair biases.

Let’s say a bank has been collecting customer data for decades and then decides to use that data to build a model that determines who qualifies for loans. If that data reflects historical bias, the model will learn and apply those same biases.
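
To make the mechanism concrete, here is a minimal sketch, in Python with entirely synthetic data and an invented “neighborhood” proxy feature, of how a model trained on historically biased approval decisions reproduces the old disparity even though the protected attribute is never a model input:

```python
# Minimal sketch: all data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
# A proxy feature (say, a neighborhood score) correlated with group:
neighborhood = rng.normal(0.0, 1.0, n) + 1.5 * (group == 0)
# Historical decisions tracked the proxy, not actual repayment behavior:
historical_approved = neighborhood + rng.normal(0.0, 0.5, n) > 0.5

# Train only on the proxy -- the protected attribute is never a feature.
model = LogisticRegression().fit(neighborhood.reshape(-1, 1), historical_approved)
pred = model.predict(neighborhood.reshape(-1, 1))

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: predicted approval rate {pred[group == g].mean():.2f}")
# Group B's approval rate stays far below group A's: the model has
# faithfully learned the bias baked into its training labels.
```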

Because of this, qualified applicants may be wrongly denied loans or insurance coverage, depriving them of opportunities to protect their assets or grow their businesses. In this article, we’ll explore how biases make their way into big data models for loans and insurance, look at real-world examples, and see what can be done to fix them.

Automation Does Not Rule Out Discrimination


The first thing to understand is that automation is not a magic wand that solves and rationalizes every use case. This is especially true for decision-makers in the banking and insurance industries.

Here are some cases:

Underwriting


Automated underwriting is a process used in the mortgage and insurance industries to evaluate risk and determine whether to approve or deny an application. It uses artificial intelligence (AI) and machine learning (ML) to make decisions based on data such as credit scores, leverage and other observable applicant-risk factors. While AI/ML offer a host of benefits, bias can affect these systems, leading to racial disparities in approval rates. For example, research has shown that minority applicants are less likely to receive algorithmic approval from race-blind government automated underwriting systems (AUS) because they tend to have lower credit scores and higher leverage.
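
As a concrete illustration, the decision logic of such a system might look like the simplified sketch below. The thresholds and field names are hypothetical, not those of any real AUS; the point is that a rule can be race-blind on its face yet still produce disparate outcomes when its inputs reflect historical inequities:

```python
from dataclasses import dataclass

@dataclass
class Application:
    credit_score: int   # e.g., a FICO-style score
    leverage: float     # debt-to-income ratio

def aus_decision(app: Application) -> str:
    """Race-blind rule: only score and leverage are inspected.
    Thresholds are illustrative, not any real system's criteria."""
    if app.credit_score >= 660 and app.leverage <= 0.43:
        return "approve"
    if app.credit_score >= 620 and app.leverage <= 0.43:
        return "refer"   # route to manual underwriting
    return "deny"

print(aus_decision(Application(credit_score=700, leverage=0.30)))  # approve
print(aus_decision(Application(credit_score=640, leverage=0.40)))  # refer
print(aus_decision(Application(credit_score=600, leverage=0.50)))  # deny
```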

Lending


Discrimination in lending has been a persistent problem, with certain groups unfairly excluded from access to credit and financial services. According to HBR, AI can be used to reverse past discrimination in lending, but it can also perpetuate existing biases if not designed and tested carefully. These models can exhibit biases that stem from their programming and data sources, resulting in discriminatory practices and unfair treatment of certain groups of people.

Fred Winchar, CEO at MaxCash, observed that women have long been victims of data bias through unconscious lending decisions. For example, a study by Astrid Kunze and Amalia R. Miller showed that women are 2.9% less likely to be promoted than men. What makes this study notable is that it was carried out on data from Norway, one of the most gender-equal nations in the world.

Insurance Premium Calculations


According to a report, Black consumers often pay higher premiums because of historically biased insurance rules, such as race-based premiums and subtle proxies for unfair discrimination. A nationwide ProPublica study found that predominantly African-American neighborhoods pay, on average, 70% more for car insurance than other areas. This shows how historical discrimination, the use of proxy variables and a lack of diversity in the tech field can lead to unintentional racial biases in the big data and algorithms insurance companies use.

How to Address These Biases

Regular Auditing


Insurance companies should regularly audit and validate their algorithms for emerging biases. This includes monitoring for disparate impact, that is, unequal outcomes across groups.
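
One widely used screen is the “four-fifths rule”: flag any group whose approval rate falls below 80% of the best-treated group’s rate. Here is a minimal sketch of such an audit, using hypothetical decision-log data:

```python
from collections import defaultdict

# Hypothetical (group, approved) pairs pulled from a decision log.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])   # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: a / t for g, (a, t) in counts.items()}
best = max(rates.values())
for g, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "  <-- below the four-fifths threshold" if ratio < 0.8 else ""
    print(f"group {g}: approval rate {rate:.2f}, ratio {ratio:.2f}{flag}")
```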

Fairness and Transparency


Developers must consider fairness and ethical issues from the very beginning of building AI models. This means taking into account factors such as race, gender and socioeconomic status, which can all be used to unfairly discriminate against certain groups of people.

If a group was underserved historically due to societal factors, the model may see them as “riskier” borrowers just due to a lack of data. This creates a vicious cycle where the group continues to be underserved.

According to Jack Prenter, CEO of DollarWise:

“In order to give fairness and transparency, insurers need to have a complete understanding of why they are pricing risks a certain way. That’s going to become harder going forward as more decisions are made using AI / ML models. Until a massive breakthrough in the ML space allows us to reverse engineer the decisions of models like Deep Neural Nets, transparency is not going to be possible. And without transparency, it’s hard to ensure fairness. Insurers need to take massive steps to ensure that they “clean” the data they are inputting as thoroughly as possible to remove bias, such that they can be confident that the output of these models is as fair as possible.”

Diverse and Representative Data


It’s important to ensure that the data used for training is diverse and represents all groups fairly. The features or attributes used in the model should be carefully selected, with attention to how different groups may be affected, and features closely correlated with protected attributes should be avoided. The choice of algorithm also matters: opaque or “black box” algorithms make bias harder to detect and address.
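
A practical first screen for proxy features is to measure how strongly each candidate feature correlates with a protected attribute before it ever reaches the model. Here is a sketch with hypothetical column names and data; the 0.5 cutoff is an arbitrary illustration, not an established standard:

```python
import pandas as pd

df = pd.DataFrame({
    "protected": [0, 0, 0, 0, 1, 1, 1, 1],
    "zip_risk":  [0.1, 0.2, 0.1, 0.2, 0.9, 0.8, 0.9, 0.7],  # likely proxy
    "income":    [55, 62, 58, 61, 54, 60, 57, 59],
})

for col in ("zip_risk", "income"):
    corr = df[col].corr(df["protected"])
    note = "  <-- review as a possible proxy" if abs(corr) > 0.5 else ""
    print(f"{col}: correlation with protected attribute {corr:+.2f}{note}")
```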

The Role of AI/ML Software and Algorithms


The major advantages of AI systems are speed and efficiency. They can evaluate loan applications and assess an applicant’s creditworthiness in real time, which helps with loan approvals and fraud detection while saving time and effort.

However, Prenter believes it’s essential to monitor these systems to prevent discrimination and ensure accuracy. Many of these ML models are “black box” models: it’s hard to know why they produce a particular output, which can make removing unwanted biases very challenging.
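
Even without a breakthrough in explainability, simple monitoring tools can reveal what a black-box model leans on. One option is permutation importance, sketched below on synthetic data; the random forest simply stands in for any opaque scorer:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))   # columns: score, leverage, noise
y = X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=1000) > 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(("score", "leverage", "noise"), result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
# If a feature you expected to be irrelevant (or one you suspect is a
# proxy for a protected attribute) ranks high, investigate before
# relying on the model's outputs.
```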

How Progress Corticon Can Help


Progress Corticon Business Rules Management System (BRMS) offers significant benefits to individuals involved in loan and insurance decisions. In the insurance industry, where decisions represent a large portion of operational tasks, Corticon enables companies to automate recurring operational decisions efficiently. By automating the rules that drive decisions and deploying them as services, Corticon improves response times, increases decision consistency, and reduces costs by eliminating unnecessary manual steps. It also allows IT teams to focus on innovation rather than coding updates.

Leading insurance providers have already experienced the benefits of Corticon, including reduced customer acquisition costs, streamlined underwriting tasks and the transition of decision modeling and maintenance duties to the business team.

See how Progress Corticon accelerates insurance processes.
