The growth of artificial intelligence in the financial services industry has been somewhat kept in check by regulators, but risk managers still make effective use of this next-generation technology. Where does AI stand today, and how will it evolve in the coming years?
Friday, January 20, 2023
By Tony Hughes
It’s easy to get the impression that artificial intelligence is a 21st century phenomenon, but the idea is actually rather ancient. Neural networks, a central idea in the field today, were first theorized in the late 1800s – and first proposed as a statistical tool as early as the 1940s. Of course, back then, computers were too weak to put the theory into worthwhile practice.
Over the past 60 years, the growth of computational power has been far smoother than the development of AI. Moore’s Law – roughly, that computing power doubles every two years – has, if anything, proved too pessimistic: since it was first proposed in the 1960s, computing power has grown somewhat faster than predicted.
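To put the doubling claim in perspective, a quick back-of-the-envelope sketch (the function name and the fixed two-year doubling period are illustrative assumptions, not anything from the column) shows what the law implies over the six decades since it was proposed:

```python
# Illustrative only: Moore's Law as stated here -- computing power
# doubling every two years -- implies growth by a factor of
# 2 ** (years / 2) over a given span of years.
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Projected growth factor after `years` under a fixed doubling period."""
    return 2 ** (years / doubling_period)

# Over roughly 60 years, the naive projection is 2**30 -- about a
# billion-fold increase in computing power.
print(f"{moores_law_factor(60):,.0f}")  # prints 1,073,741,824
```

Actual hardware has, as the column notes, outpaced even this projection.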
The development of AI, meanwhile, has happened in fits and starts, with periods of intense optimism followed by disappointment, despair and a withdrawal of research funding. These periods, which were common in the late 20th century, were known in the industry as “AI winters.”
Now it is 2023 and AI is a daily reality for anyone who owns a smartphone. An AI maximalist could, however, point to many aspects of our lives that one day may be further enhanced by the use of AI-type techniques. AI research will never wrap up, as such.
The question is whether the development of new techniques will now be steadier – perhaps even exponential – or whether another AI winter is a possibility. Failures of AI are always easy to find, so the seeds of pessimism are ever present.
In the banking industry, diffusion of AI technology has been rather more constrained. While it has certainly delivered value for risk managers in areas like anti-money laundering (AML) and fraud detection, questions about AI’s explainability and interpretability have limited its reach.
If one were to imagine a completely laissez-faire banking sector – free from rules regarding capital adequacy, sound governance and the fair treatment of all customers – advanced computational methods would now be far more widespread. In an unregulated, profit-centric, data-rich industry, “Moneyball” techniques would usually outperform all other approaches.
So, regulation has forced the development of banking AI to be more circumspect than in most other industries. Many bankers may have been susceptible to hype regarding the prospects of new AI developments, but were prevented from acting on their predilections by their regulatory masters. By breaking the hype cycle, regulation has made an AI winter far less likely in banking applications than in the broader research community.
A Regulator’s Perspective
As regulatory pressures are a key determinant of the speed of AI development in the financial sector, statements of intent by regulators are of particular interest.
Recently, Jessica Rusu – the chief data, information and intelligence officer at the UK’s Financial Conduct Authority (FCA) – gave a speech on AI, optimistically titled: “AI: Moving from fear to trust.” It presented the results of an important AI survey of industry insiders, and also expressed the FCA’s perspective on the same issues.
The survey itself was not particularly revealing: UK financial firms are increasing their use of machine-learning (ML) techniques, while their main concerns are customer fairness and model interpretability. These problems have been discussed at length in past Risk Weighted columns and many other places.
Seventy-two percent of UK banks and insurance companies currently use ML techniques in their businesses, which sounds impressive until you realize that nearly a third of firms don't use the methods at all. Usage was found to be largely confined to AML, fraud detection and operational efficiency enhancements – areas with relatively low risk to the reputation and financial standing of the organizations surveyed.
Rusu’s comments were also fairly commonplace – she clearly communicated both the benefits and risks of AI in financial applications. The importance of governance and oversight was stressed, and Rusu outlined a number of ways the FCA is using AI techniques in its monitoring processes.
The critical point is that the UK regulator is optimistic about the future of AI, which should ensure that it continues to grow in prominence over the next decade. That sentiment should give banks and insurers license to pursue new projects.
Risk managers working at banks have by this point become quite familiar with the pros and cons of AI.
Beyond the banking sector, my general feeling is that we have entered something of an AI autumn. The primary data point that leads me to this view is the recent lack of progress toward the utopian world of driverless vehicles. Five years ago, AI-enabled truck convoys and the widespread use of robo-taxis both seemed imminent, but neither has come to fruition beyond a handful of localized beta trials.
The current chill, however, will not presage a deeper winter, chiefly because computing resources in 2023 are far cheaper and more widely available than they were in 1999.
In banking and finance specifically, regulation tends to cool the hype surrounding new technology, so we are less likely to experience the disappointment of unfulfilled promises. Indeed, with the acquiescence of regulators, we may even enter a golden era of AI research in the banking industry. The enthusiasm for such a push seems to be there.
Tony Hughes is an expert risk modeler. He has more than 20 years of experience as a senior risk professional in North America, Europe and Australia, specializing in model risk management, model build/validation and quantitative climate risk solutions.