
How the credit sector is tackling AI bias

Artificial intelligence (AI) is becoming an increasingly important tool within the credit industry, writes Chirag Shah, founding partner at Nucleus Commercial Finance. But new risks are building as a result.

Image: Pexels/Cottonbro Studio

Artificial intelligence (AI) is an extremely powerful tool that, if used correctly, has the potential to provide huge business value. Such is the speed of its uptake that almost 100 per cent of organisations will have adopted it by 2025, according to research by Forrester.

But despite its potential as a force for good, AI can also interpret data incorrectly and unfairly, often favouring one group of consumers over another.

This bias is perpetuated because humans choose both the data an algorithm will use and how its results will be applied; the AI system then replicates those biased models in a continuous vicious circle. In business lending, this bias manifests itself in the form of company age, sector and geography.

A Gartner report published in 2018 predicted that through 2022, 85 per cent of AI projects would produce erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them. The fact is that, several years later, despite new advances, AI bias is still a big problem that companies are struggling to get to grips with.

How AI bias manifests itself

There are three key ways that bias can manifest itself in AI. The first is in input data: any bias already present, whether gender, racial or ideological, as well as incomplete or unrepresentative datasets, will limit AI’s ability to be objective. 

And because some methods of AI training may obscure how data is used in decisions, this creates the potential for discrimination.
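To make this concrete, below is a minimal sketch of the kind of representation check that can catch an unrepresentative dataset before training. The DataFrame, column name, benchmark proportions and 0.8 tolerance are all illustrative assumptions, not any lender's actual configuration.

```python
# Illustrative sketch: flag groups under-represented in a training set
# relative to an external benchmark. Column names, benchmark figures and
# the 0.8 tolerance are assumptions for demonstration only.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str,
                          benchmarks: dict, tolerance: float = 0.8) -> dict:
    """Return {group: True} where the group's dataset share falls below
    tolerance * its benchmark share in the wider population."""
    shares = df[column].value_counts(normalize=True)
    return {group: shares.get(group, 0.0) < tolerance * expected
            for group, expected in benchmarks.items()}

applications = pd.DataFrame({"gender": ["F", "M", "M", "M", "M"]})
# Assumed benchmark: a roughly even split in the applicant population.
print(representation_report(applications, "gender", {"F": 0.5, "M": 0.5}))
# {'F': True, 'M': False} -> female applicants are under-represented
```

A check like this only catches what it is told to look for, which is why diverse teams and external audits matter alongside it.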

The second way is in development. Many AI systems continue to be trained on the same data, creating an ongoing cycle of bias, while subconscious bias or a lack of diversity among development teams may influence how AI is trained, perpetuating bias within the model.

The third way is in post-training, where continuous learning can drift towards discrimination. As AI systems learn and self-improve, they may acquire new behaviours with unintended consequences, such as an online lending platform suddenly rejecting loan applications from ethnic minorities or women more than from any other group.
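A hypothetical monitoring loop for that failure mode might compare rejection rates across groups in each batch of decisions and raise an alert when the gap widens. The group labels, toy decision log and 10-percentage-point threshold below are assumptions for illustration, not an industry standard.

```python
# Illustrative sketch: detect post-training drift by comparing rejection
# rates across groups within a monitoring window. The threshold and the
# toy decision log are assumptions for demonstration only.
from collections import defaultdict

def rejection_rates(decisions):
    """decisions: iterable of (group, was_rejected) pairs for one window."""
    totals, rejections = defaultdict(int), defaultdict(int)
    for group, was_rejected in decisions:
        totals[group] += 1
        rejections[group] += int(was_rejected)
    return {g: rejections[g] / totals[g] for g in totals}

def drift_alert(decisions, max_gap=0.10):
    """True if the spread between best- and worst-treated groups exceeds max_gap."""
    rates = rejection_rates(decisions)
    return max(rates.values()) - min(rates.values()) > max_gap

window = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", False), ("group_b", False), ("group_b", True)]
print(rejection_rates(window))  # group_a ~0.67 vs group_b ~0.33
print(drift_alert(window))      # True -> escalate for human review
```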

AI can also make its decisions harder to explain, compounding the impact of potential discrimination by making it more difficult to establish safeguards against the issue. 

The problem is worsened by the fact that regulators often lack the necessary technical expertise, time and resources to inspect algorithms, especially if their development isn’t properly documented or there are persistent, system-wide gaps in governance.

Credit industry

No industry, it seems, is immune from AI bias either. Over time it has been exposed everywhere from racism in the American healthcare system to systems depicting CEOs as exclusively male, and even Amazon’s hiring algorithm, which favoured male candidates.

But one sector which has made big strides in tackling the problem in recent years is the credit industry. 

AI bias can manifest itself in many forms within the sector, ultimately resulting in certain customers being discriminated against and having their credit applications rejected on that basis, whether they are trying to secure a mortgage, a car loan or something else.

The industry has taken several steps to mitigate the problem, providing a blueprint for others to follow. The first was organisations investing in diverse data and decision science teams to safeguard against biases resulting from a homogenous workforce.

As AI models continue to develop, it is becoming increasingly important that a human checks for bias. While bias is hard to eliminate completely, this approach has been found to improve models and reduce it.

In the same vein, banks and financial institutions have been focused on establishing multidisciplinary teams to work on AI initiatives. 

These include not only developers and business managers but also human resources and legal professionals, to name but a few roles.

Greater governance

They have also been tightening up their governance of AI to minimise the risk of bias and discrimination. 

This has involved creating multidisciplinary internal teams or contracting third parties to audit AI models and analyse the data. It also means establishing a fully transparent policy for developing both algorithms and metrics for measuring bias, while keeping up with regulations and best practice.
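As an example of what such a bias metric can look like, here is a minimal sketch of the disparate impact ratio, using the four-fifths (0.8) rule of thumb as a threshold. The approval figures are invented, and nothing here is presented as any particular institution's actual metric.

```python
# Illustrative sketch: disparate impact ratio (each group's approval rate
# divided by the most favoured group's rate). The four-fifths (0.8)
# threshold is a common rule of thumb; the figures are invented.

def disparate_impact(approved: dict, total: dict) -> dict:
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = disparate_impact(approved={"group_a": 80, "group_b": 55},
                          total={"group_a": 100, "group_b": 100})
for group, ratio in ratios.items():
    status = "OK" if ratio >= 0.8 else "REVIEW"  # four-fifths rule
    print(f"{group}: {ratio:.2f} ({status})")
# group_a: 1.00 (OK)
# group_b: 0.69 (REVIEW)
```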

In addition, organisations have continued to build diverse data sets and harness unstructured data from internal and external sources to ensure greater inclusivity. 

They are also constantly checking for skewed or biased data through the different stages of the model’s development.

Another area of focus is monitoring AI and machine learning models for data and concept drift. 

By scanning training and testing data, they can determine if protected characteristics and/or attributes are underrepresented, and retrain models when issues are detected.
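One common way to implement such a scan is a population stability index (PSI) comparison between a feature's training-time and live distributions. The sketch below uses synthetic credit-score data, and the rule-of-thumb thresholds in the comment are conventions rather than a fixed standard.

```python
# Illustrative sketch: population stability index (PSI) for detecting data
# drift between training-time and live distributions of one feature.
# The synthetic data and rule-of-thumb thresholds are assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI of `actual` against `expected`, binned on the training range."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(600, 50, 10_000)  # e.g. credit scores at training time
live = rng.normal(630, 60, 10_000)   # live population has shifted
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 retrain.
print(f"PSI = {psi(train, live):.3f}")
```

The same comparison can be run on group membership columns to spot the underrepresentation of protected characteristics described above.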

Trust is also a vital component. Companies need to trust in their AI output and data integrity, and in their ability to understand, implement and control that output.

By understanding how algorithms make decisions, and closely monitoring the data they use to swiftly detect and remove any issues, the credit sector is leading the way when it comes to tackling AI bias.

But it still needs to go further to ensure that certain groups of customers are not unfairly discriminated against when it comes to key lending decisions.

The views and opinions expressed are not necessarily those of AltFi.
