Generative AI: The Next Wave in Credit Assessment?
Banks must now consider how the latest advancement in AI can help risk analysts measure consumer and corporate credit.
Friday, October 13, 2023
By Tony Hughes
How might financial institutions use generative AI for both retail and corporate credit risk assessment? Potentially, this innovative technology could be used for everything from credit scoring, reporting and processing to group communication, credit rating reviews and the determination of management overlays.
But how much of this is realistic?
Certainly, credit risk analysts can imagine future interactions with generative AI chatbots like ChatGPT. For example, on the consumer side, if you receive a credit card application, you could ask the bot, simply, "should I accept this application?" It may then offer a response that reads something like the following: "An initial credit limit of $1000 would likely be profitable and consistent with your stated risk appetite. Here's how I reached this conclusion…"
That's all well and good, but it's not that far removed from the level of pertinent interaction you get from a simple credit score, and we don't need much of a dialogue on this matter.
It's easy, of course, to imagine a traditional scoring model being replaced by an AI variant. However, while AI could be game-changing from the perspective of predictive accuracy, adding the generative modifier – the part that can understand natural language and provide responses that accurately mimic a human style – doesn't add much to the forecasting equation.
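To make the distinction concrete: whether the model is a traditional scorecard or an AI variant, the scoring interface itself is the same, a function from applicant attributes to a default probability. The sketch below is a minimal, purely illustrative logistic scorecard; all feature names, weights and the intercept are hypothetical, not taken from any real lender's model.

```python
import math

# Hypothetical log-odds weights a lender might fit via logistic
# regression. Positive weight = attribute increases default risk.
WEIGHTS = {
    "utilization": 2.0,      # high revolving utilization raises risk
    "delinquencies": 0.8,    # past missed payments raise risk
    "years_on_file": -0.15,  # longer credit history lowers risk
}
INTERCEPT = -3.0

def default_probability(applicant: dict) -> float:
    """Probability of default from a simple logistic model."""
    z = INTERCEPT + sum(w * applicant[f] for f, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

# A riskier profile should score a higher probability of default
# than a cleaner one, regardless of how the weights were learned.
risky = default_probability({"utilization": 0.9, "delinquencies": 2, "years_on_file": 5})
safe = default_probability({"utilization": 0.1, "delinquencies": 0, "years_on_file": 10})
```

Swapping in a gradient-boosted or neural model changes how the weights are learned and may improve accuracy, but the generative, natural-language layer sits outside this function entirely.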
One place where these tools could be useful is in the interface between technical credit scores and credit analysts. It is quite common for borderline ("line-ball") applications to be referred to human analysts for a final up-or-down decision. If generative AI can help the analyst better understand the reasoning of the black-box algorithm, it will be a boon to the process.
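For a linear scorecard, at least, that reasoning can be surfaced directly: each feature's contribution to the default log-odds is just its weight times its value, which is essentially how adverse-action reason codes are ranked today. The sketch below uses the same hypothetical weights as a standalone example; a generative layer would then narrate this ranking for the analyst rather than compute it.

```python
# Hypothetical linear scorecard; names and weights are illustrative.
WEIGHTS = {"utilization": 2.0, "delinquencies": 0.8, "years_on_file": -0.15}

def explain(applicant: dict) -> list[tuple[str, float]]:
    """Rank features by their contribution to the default log-odds,
    largest risk driver first. This is the raw material a generative
    model could turn into a plain-language narrative for an analyst."""
    contribs = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    return sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)

drivers = explain({"utilization": 0.9, "delinquencies": 2, "years_on_file": 5})
# drivers[0] is the single largest contributor to predicted risk
```

For genuinely black-box models the attribution step is harder (it requires techniques such as SHAP), but the hand-off to a language model is the same.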
The other element that could be game changing is the treatment of people who are denied credit. Instead of dry reason codes, generative AI could be used to provide rejected loan applicants with fully fledged, humanistic decision statements, better conveying the reasons for the rejection.
This is a very sensitive topic, though, and I doubt the technology is empathetic enough to handle this type of interaction right now.
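Mechanically, the cautious version of this idea is template expansion over the existing reason codes, with the generative model confined to rephrasing a draft that a human reviews. The sketch below uses entirely hypothetical reason codes and wording, and makes no live model call.

```python
# Hypothetical mapping from dry adverse-action reason codes to
# plain-language fragments; codes and wording are illustrative only.
REASON_TEXT = {
    "R01": "your revolving balances are high relative to your limits",
    "R02": "your file shows recent missed payments",
    "R03": "your credit history is relatively short",
}

def decision_statement(codes: list[str]) -> str:
    """Assemble a humane rejection statement from reason codes.
    In practice, a generative model might rephrase this draft and a
    human would review it before it reaches the applicant."""
    reasons = "; ".join(REASON_TEXT[c] for c in codes)
    return (f"We were unable to approve your application because {reasons}. "
            "You are welcome to reapply once these factors improve.")

statement = decision_statement(["R01", "R03"])
```

Keeping the generative model downstream of a fixed code-to-text mapping limits the risk of it inventing a reason the underlying model never used.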
On the corporate credit side, the application of generative AI is likely to be more widespread.
For one thing, public companies make many financial disclosures, releasing a trove of information related to operations, communications with clients and information about products and services being offered. The traditional role of a credit analyst was to sift through this information, looking for clues as to the company's creditworthiness and future direction. It's now easy to imagine this work being done – very effectively – by a well-trained generative AI.
Beyond this, I'm trying to imagine a bot like ChatGPT actively participating in credit rating review committees. The tool should be able to produce reports helpful to human participants right now, but could it, in future, also participate actively in the proceedings? How high up the credit food chain are jobs at risk of future replacement?
Currently the tool seems to be designed for one-on-one interactions, but it would be extremely cool if generative AI could navigate the vagaries of a conference room filled with smart, opinionated people. If you're talking about mastery of human interaction, such situations would be very close to the apex.
One can also imagine generative AI forming prescient views of likely credit performance. Could these views, one day, make a positive contribution to the determination of, for example, a management overlay? Given the success of AI in other fields, it can't be that far away.
Conversing with ChatGPT
The best way to better understand the functionality of generative AI is to interact with the technology. So, after reading Marco Folpmers’ excellent piece on generative AI a few weeks ago, I decided to spend a couple of hours in the company of ChatGPT. I asked it – the bot’s preferred pronoun – many of the questions I have addressed in Risk Weighted columns over the past few years.
Its answers were realistic and generally well informed. I did find the bot to be more equivocal than I tend to be and more willing to accept arguments from authority.
For example, I asked, “Is scenario analysis a valid form of scientific inquiry?” It correctly informed me that it was not, but it went on to explain that, nevertheless, many institutions find the technique very useful in forming strategy. That's probably true, but not especially relevant to the question that was asked. ChatGPT seems to want to make sure that it covers both sides of any contentious question it is asked.
Anyway, I digress. The technology is clearly impressive.
My more technical colleagues will chide me if I don’t point out that AI techniques have been a core element of the statistical curriculum for many decades. Indeed, we've known for several years that AI tools have been getting pretty good at solving credit problems – and the success of ChatGPT is a testament to the collective creative efforts of past researchers in the field.
In the world of banking, black-box solutions are generally frowned upon – mostly by regulators and, by extension, banks and their managers. However, generative AI brings a new dimension to the table, and the rise of ChatGPT should have several practical and positive implications for banks.
Perhaps the biggest impact, though, will be improved public relations for AI. The benefits of the tools, well known to those with technical expertise, are often somewhat hidden from the average layperson. Literally anyone can grab a virtual coffee with ChatGPT, though, and instantly see that the technology is pretty amazing.
The success of generative AI is a sign of advances in AI generally. It might be time for bankers to take yet another look at the technology.
Tony Hughes is an expert risk modeler. He has more than 20 years of experience as a senior risk professional in North America, Europe and Australia, specializing in model risk management, model build/validation and quantitative climate risk solutions.