Risk modelers across the financial services industry are grappling with the question of how to handle pandemic-era data. As is well known, the vast majority of pre-COVID models dramatically overshot the mark in 2020/21, predicting that high, headline-grabbing unemployment and rapidly declining GDP would translate into a substantial increase in credit stress around the world. For a variety of reasons, this did not happen.
Here, we want to look forward, rather than backward. When the dust finally settles, and we've fully digested our pandemic experiences, will we be left with a better set of industry risk models?
For me, the answer to this question is a categorical “yes!” Many in the industry, however, including key regulators, seem to be weighing the merits of expunging pandemic-era data from models used for stress testing and capital adequacy calculations. This suggestion is a crime against sound empirical logic, and the only benefit of such an approach is effort minimization.
Prior to 2020, if asked how the financial system would perform during a global pandemic, the only honest answer would have been, “I don’t know.” In 2022, we do know. Since we now have more information, our models should do a better job.
Rather than simply rerunning the global financial crisis (GFC), risk models should be able to handle a variety of situations. Some day (hopefully, many decades in the future), our data will cover scores of recessions – all of which are likely to be unique in their dynamic properties. At such a point, our models would ideally be able to explain what happened in every one of them.
Previous Lessons Learned
One of the key features of the GFC, relevant for the current discussion, is that pre-GFC risk models were similarly found to be inadequate. Back in 2007, most models predicted that subprime mortgage losses would remain low, despite reports of severe problems emerging across the industry. As the recession developed, it quickly became clear that most prior predictions were far too optimistic.
Before 2008, risk managers would commonly use models that did not incorporate macroeconomic data. On the retail side, techniques like roll-rates were common, and logistic PD models – also in widespread use – typically relied only on loan-level characteristics.
In the corporate space, Merton models were widely used, but it was rare for these to be augmented with macro business cycle adjustments. Moreover, vintage effects, which are crucial for a proper understanding of the tsunami of losses seen during the GFC, were rarely used in either retail or wholesale settings.
The GFC and the subsequent demand for stress tests transformed the risk modeling process, unquestionably for the better.
As we look ahead, it would be foolish to think that the pandemic could be equally powerful in driving risk model reform. The impetus is simply not there: remember, strictly from a financial standpoint, the pandemic was a non-crisis, because presumed deep losses never materialized.
As Winston Churchill might have said, though, we should not let a good non-crisis go to waste. Indeed, it’s an ideal time to take stock and identify and fix model weaknesses that led us astray in the early days of the pandemic.
Issues to Consider
One thing that has to change is the belief that marginal effects of macro variables on credit outcomes are constant over time. This is the notion that an increase in the unemployment rate – from, say, 5% to 10% – will always yield the same level of stress for bank portfolios. History instead tells us that recessions of similar intensity will yield a wide range of different credit outcomes.
One key driver of these differences is vintage effects. If your bank originates a truly terrible cohort of loans, it will be susceptible to even minor economic disruptions. High-quality cohorts, by contrast, tend to be more robust, even in the face of a severe downturn.
The population of loans on the eve of the GFC, especially in mortgage, was pure vinegar, and a systemic failure of the banking system was inevitable. In contrast, the loan cohort on the eve of the pandemic was basically sound, and subsequent losses were found to be low.
Modeling with vintage effects is well known across the industry, and the resultant models are highly intuitive. There is no good reason why these approaches are not in wider use nearly 15 years after the GFC.
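The two ideas above – non-constant macro marginal effects and vintage effects – can be illustrated with a minimal sketch. The code below is purely hypothetical: the coefficients, the `vintage_quality` variable and the logistic functional form are illustrative assumptions, not a description of any production model. It shows how a vintage shift in a logistic PD model makes the same unemployment move (5% to 10%) produce very different default-rate impacts for weak versus strong cohorts.

```python
import math

def pd_logistic(unemployment, vintage_quality,
                intercept=-4.0, beta_macro=0.35, beta_vintage=-1.5):
    """Hypothetical logistic PD model.

    vintage_quality shifts the linear index, so identical macro shocks
    translate into different PD changes across cohorts - the marginal
    effect in probability space is not constant.
    """
    z = intercept + beta_macro * unemployment + beta_vintage * vintage_quality
    return 1.0 / (1.0 + math.exp(-z))

# Same unemployment shock for a weak (quality=0) and a strong (quality=1) cohort
for quality, label in [(0.0, "weak cohort"), (1.0, "strong cohort")]:
    pd_calm = pd_logistic(5.0, quality)
    pd_stress = pd_logistic(10.0, quality)
    print(f"{label}: PD {pd_calm:.1%} -> {pd_stress:.1%} "
          f"(increase of {pd_stress - pd_calm:.1%})")
```

Under these illustrative parameters, the weak cohort's PD rises by far more percentage points than the strong cohort's for the identical macro move – exactly the pattern that constant-marginal-effect models fail to capture.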
The second major point of difference between the GFC and the pandemic pertains to uncertainty about the timing of recessions.
For nearly two years, from late 2006 until the failure of Lehman in September 2008, the industry endlessly debated whether a recession would even happen. In hindsight, it's fair to say that the recession was inevitable all along. The COVID-induced recession, on the other hand, could be dated, in real time, precisely to the moment the first local lockdowns were announced. It was also clear from the outset that the development of a vaccine would provide a way out of the morass.
People often attribute the low losses of the pandemic to government support programs, but such programs would have been far harder to design without certainty about the recession's timing. Moreover, I suspect that consumers and businesses, if given their druthers, would prefer to rip the recession band-aid off quickly rather than slowly. Recession uncertainty stretches the whole process out, exerting sustained upward pressure on credit losses.
The bottom line is that two fairly deep recessions with very different dynamic features yielded remarkably different outcomes for banks. Neither event can be dismissed as an aberration.
Recessions, by definition, are disruptions to the normal economic order, and are always a bit weird. Our task as risk managers is to understand these oddities, so that we can better plan for the future.
If we hold our nerve and do the required research, we should be able to understand the features of the pandemic that made it so financially benign. If these features can be captured in risk models, there is no question that the industry's analytical assets will be more valuable than those they replace.
Calls to expunge pandemic-era data should therefore be resisted forcefully.
Tony Hughes is an expert risk modeler for Grant Thornton in London, UK. His team specializes in model risk management, model build/validation and quantitative climate risk solutions. He has extensive experience as a senior risk professional in North America, Europe and Australia.