Risk Weighted

The Limitations of Models and the Need for Simpler Scenarios

Risk models today are asked to perform statistical miracles, covering a multitude of far-flung scenarios that cannot be validated. This approach simply isn’t realistic, and modelers who want to improve the accuracy of their projections would be better off focusing on fewer and more basic scenarios that closely reflect recent events.

Friday, June 14, 2024

By Tony Hughes

The range of potential threats faced by banks today is limitless, and scenarios are therefore becoming more and more complex. But is this a good trend?  

When we take a closer look at the evolution of scenario analysis and models, and when we consider the current flaws in modeling and validation, there’s a strong argument that we should actually be simplifying scenarios.

In the early days of stress testing, the scenarios represented a repeat performance of the Great Recession of 2008/09. Virtually everyone in the industry was intimately familiar with these events, and risk models were calibrated using recent data. The demands were straightforward, and users could have confidence that the projections were accurate.


But as risk management has become more conjectural, the nature of the required scenarios has broadened. Risk modelers must now develop idiosyncratic scenarios and model what-if situations in which liquidity is threatened. Moreover, they must consider the effect of short- and long-term climate scenarios.

Scenario builders, in short, could now be asked to model pretty much anything. But models have limits.

Model Shortcomings

There’s an adage in academic circles, which is “one idea, one paper.” In statistics, this advice is priceless, because it means that you can direct data collection and modeling efforts to launch a single mission and build only one model – or a small handful of similar models. While you still need to include appropriate controls, these will usually be well defined in the literature and easy to access. The models developed will be parsimonious and therefore consistent with core statistical principles.

Models developed in industry these days, by way of contrast, must capture a range of disparate scenarios – many ideas, one model. Managers do not want to commission separate specifications to cover every element, because doing so triggers a long list of tasks that must be checked off. Consequently, we end up with complex models that are asked to perform several statistical miracles.

This is dangerous because there’s a tendency for less-technical business managers to believe, without question, every set of projections thrown on their desks. In reality, of course, even the best models will be hampered by the statistical uncertainty inherent in the data, in addition to the model risk that will be ever present. Even worse, the level of uncertainty created by the use of expert judgment cannot be determined.

The degree of uncertainty for scenario projections is rarely confessed – but it is always there. Model risk invariably increases as the number of demands placed on a given specification proliferates.

Validation Challenges

I’ve written many times about the difficulties we have in properly validating – or invalidating – scenarios. In a nutshell, the scenarios created by risk modelers can never happen, so we can never see how well we’re doing, either in prospect or in hindsight.

This problem becomes more acute as narratives become more complex. The more we layer additional elements into the hypothesized events, the less likely it becomes that something similar might occur. Indeed, complex scenarios tend to become increasingly remote from our lived experience.

Take, for example, the global financial crisis (GFC). Following the GFC, early stress tests considered a carbon copy of recent events – and, accordingly, the projections produced in the aftermath of that disaster were straightforward and reliable. But what if other firms had been allowed to fail alongside Lehman Brothers, the Troubled Asset Relief Program had never been implemented and President Obama had chosen not to rescue General Motors from bankruptcy?

Under those hypothetical conditions, would your post-GFC stress projections have been as reliable?

The Case for Simple Scenarios

Sticking with scenarios that closely resemble recent events is admittedly rather unsatisfying, even though it gives us our best chance of producing accurate stressed projections.

We need statistics to push back the curtain and inform us about stuff that hasn’t happened yet. It should be possible to come up with a set of events that keep the requirements of the models to a minimum – while still allowing us to address the questions we want answered.

Consider the following scenarios:

  1. The Fed cuts rates before the end of the year.
  2. The Fed does not cut rates.

These are obviously pertinent to current concerns and are clearly very straightforward in their conception. They are also exhaustive and mutually exclusive.

By specifying the scenarios using the smallest possible number of conditions, we are approaching as closely as possible the practice of loss forecasting, which is used across the industry and has a well-established set of protocols for model validation and assessment.

Let’s say that you have a modeling team responsible for projecting credit losses. If the team predicts portfolio credit losses of $12 million in Q3, the challenger model predicts $14.5 million and actual losses turn out to be $14 million, then the challenger is the clear victor. If a similar result occurs regularly, it will become increasingly clear that the challenger is the superior model for the stated task.
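The champion-versus-challenger comparison above amounts to scoring each model's absolute forecast error. A minimal sketch, using the hypothetical figures from the example (variable names are illustrative, not part of any real system):

```python
# Hypothetical Q3 figures from the example above, in $ millions.
champion_forecast = 12.0    # incumbent model's projected credit losses
challenger_forecast = 14.5  # challenger model's projected credit losses
actual_losses = 14.0        # realized portfolio credit losses

# Score each model by the absolute error of its projection.
champion_error = abs(champion_forecast - actual_losses)      # 2.0
challenger_error = abs(challenger_forecast - actual_losses)  # 0.5

winner = "challenger" if challenger_error < champion_error else "champion"
print(winner)  # challenger
```

The same scoring rule, applied period after period, is what lets a pattern of wins accumulate into evidence that one model is genuinely better.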

Since the simple proposed scenarios are exhaustive, one of our projections will remain relevant to the business for the remainder of the year and could be assessed for accuracy in real time. If the same set of simple scenarios is used repeatedly, a track record can be established, indicating whether the models and/or analysts are doing a good job.
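Building that track record is mechanically simple: each period, keep the projection made under whichever scenario actually occurred, compare it with realized losses, and accumulate an error statistic. A sketch with entirely hypothetical numbers:

```python
# Illustrative track record: for each quarter, the projection made under
# the scenario that actually materialized, alongside realized losses ($MM).
# All figures are hypothetical.
history = [
    (12.0, 14.0),   # Q1: projected 12.0, actual 14.0
    (13.5, 13.0),   # Q2: projected 13.5, actual 13.0
    (11.0, 12.5),   # Q3: projected 11.0, actual 12.5
]

# Mean absolute error across the quarters observed so far.
mean_abs_error = sum(abs(proj - actual) for proj, actual in history) / len(history)
print(round(mean_abs_error, 2))
```

As quarters accumulate, a rising or falling error statistic indicates whether the models and analysts are improving at the stated task.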

Of course, if we only consider conditions such as whether the Fed does or does not cut rates, only half the scenarios would be relevant in any given period. Nonetheless, this is still better than holding a raft of scenarios of dubious or indeterminate quality. Since we can validate some of the projections, we will be able to see what works, and then hone the methodology for future use. This is impossible with complex scenarios.

This same simplified approach could be taken with other problems common across the industry – even for more extreme scenarios. For example, in the climate space, you could differentiate the scenarios on the basis of whether the Paris Accord targets are achieved or missed. A mortgage modeler, meanwhile, could simulate the distribution of credit losses under the assumption that house prices fall over the next two years, and then compare it to the distribution under assumed price gains.

You could even combine conditions, though I’d be careful not to take this too far. For example, you could mingle the simple rate cut scenarios with the house price scenarios, and then project expected losses. This will give you four potential states of the world – one of which will actually be pertinent.
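The combinatorics above can be made concrete: crossing two binary conditions yields four exhaustive, mutually exclusive states of the world, exactly one of which will turn out to be relevant. A minimal sketch (scenario labels are illustrative):

```python
from itertools import product

# Two simple binary conditions, as described in the text.
rate_scenarios = ["Fed cuts rates", "Fed holds rates"]
house_price_scenarios = ["house prices fall", "house prices rise"]

# The Cartesian product gives every combined state of the world.
states = list(product(rate_scenarios, house_price_scenarios))
print(len(states))  # 4
for rates, prices in states:
    print(f"{rates} and {prices}")
```

Each added binary condition doubles the number of states, which is why combining more than two or three quickly erodes the validation advantage of keeping scenarios simple.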

Parting Thoughts

The approach suggested is obviously, but deliberately, very similar to sensitivity analysis. As a statistician, my instincts tell me to begin with a tried-and-true method, and then take baby steps to adapt it for a new application.

The origin of scenario analysis was suitably simple, but subsequent expectations for the technique have been too grand. People think they can reliably project anything, but this is a statistical fantasy.

We must get back to using simpler techniques that can be properly validated. We should remain humble, understand the limits of the data and try to incrementally improve our understanding of the vulnerabilities present in the banking system.

Shooting for the analytical stars sounds like a wonderful philosophy. But I’m sorry, it just isn’t plausible.


Tony Hughes is an expert risk modeler. He has more than 20 years of experience as a senior risk professional in North America, Europe and Australia, specializing in model risk management, model build/validation and quantitative climate risk solutions. He writes regularly on climate-related risk management issues.


