Risk Weighted

A Modest Suggestion to Improve Stress Testing

The current process for ensuring that banks have enough capital to cover their potential losses is too reliant on conditional scenario analysis. Replacing flawed baseline scenarios with unconditional scenarios would enable validation and hold banks more accountable for their portfolio forecasting, ultimately resulting in better stress tests.

Friday, April 26, 2024

By Tony Hughes


Over the last 15 years, every stress-testing issue that has cropped up has prompted precisely the same response from regulators: scenario analysis is apparently the right tool for the job – from climate risk to concerns about bank liquidity, from questions about capital adequacy to the calculation of loan loss reserves.

However, under the current setup, the validation of scenarios is next to impossible.

While we can assess the underlying stress-testing models indirectly, by pretending that they are destined for use in structural analysis or for developing baseline forecasts, we can't directly assess the scenario projections for accuracy. The paths described by regulators never precisely occur and, even if we get a close approximation, we will still lack the repeat experiments required to demonstrate consistent performance.


Given that these projections are fundamental to so many critical risk management functions, this lack of affirmation is discomforting to say the least. But is there a way to introduce a new element to the stress testing process that gives us something we can fully validate?

While there is no panacea for addressing all the critical weaknesses of validating stressed projections, there is a simple, cheap and very modest suggestion that may nonetheless prove to be very powerful.

Before we dive into this proposed solution, it’s important to provide context about the flaws in existing scenario analysis.

Baseline Inefficacy

When a set of scenarios is constructed, a routine element is the inclusion of a baseline scenario. The pathways for this exercise are defined by the regulator – just like the severely adverse scenario and the other, more extreme scenario pathways under consideration.

The point of the baseline is to establish a benchmark against which the other scenarios can be compared. Few users ever pay much attention to this scenario, but it is a routine element of stress tests conducted around the world.

Since the baseline portfolio projections are conditional, and assume that the regulator's stated path will actually be traversed, they are not traditional, no-holds-barred predictions of future performance. As such, many of the same validation issues discussed earlier apply to their calculation.

When considering severe scenarios, one difficulty is that the events described are far removed from anything we have previously witnessed. We simply have no way of determining whether the models we have developed can cope with such conditions.

One difference with the baseline is that we will presumably model it better, simply because, by definition, it will not represent a major deviation from recent behavior. But the baseline is still conditional – e.g., it requires banks to forecast portfolio losses (among other things) based on projected changes in interest rates, inflation and other economic factors over a specific period of time.

Consequently, if the projections for the bank’s portfolio are subsequently found to be off base, the responsible modeler could argue that the errors were caused by the deviation of the regulator’s prescribed pathway from reality. In other words, my model was right, but your economic baseline missed the mark.
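The distinction between the two kinds of forecast can be sketched in a toy example. (The model form, coefficients and macro figures below are all invented for illustration; real loss models are far richer.)

```python
def loss_model(rate, inflation):
    """Toy portfolio loss model: losses rise with rates and inflation."""
    return 50 + 8.0 * rate + 5.0 * inflation

# Conditional baseline: the macro path is handed down by the regulator,
# so the bank only answers "what losses occur IF this path is traversed?"
regulator_path = [(3.0, 2.0), (3.5, 2.5), (4.0, 2.5)]   # (rate, inflation) per quarter
conditional = [loss_model(r, i) for r, i in regulator_path]

# Unconditional forecast: the bank must also predict the macro drivers
# itself, so a subsequent miss cannot be blamed on the regulator's path.
own_macro_view = [(3.2, 2.1), (3.4, 2.3), (3.6, 2.4)]
unconditional = [loss_model(r, i) for r, i in own_macro_view]
```

Under the conditional setup, any error in `regulator_path` propagates into the loss projection through no fault of the bank's model; under the unconditional setup, the entire error belongs to the bank.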

A Straightforward Solution

My ridiculously simple suggestion is to make the baseline an unconditional forecast. No rules, no expectations – just give me the best possible prediction for your portfolio.

This could be an addition to the standard conditional approach, or it could be a replacement. In almost all circumstances, its use would not impact the interpretation of the stressed scenarios. The bank’s own baseline would normally be very similar to the regulator-mandated conditional projection that it already constructs.

Why bother with such a minor change?

The critical point is that it gives us something we can validate. Ideally, the unconditional forecast would be updated on a quarterly basis, allowing a bank’s prognostication prowess to be frequently measured by regulators and, potentially, the broader public. A particular bank may miss in any given quarter, but over time observers will be able to develop a view of how well they are performing this seemingly simple analytical task.

If one bank regularly predicts portfolio losses to within 2% while a peer struggles to get within 5%, that is a signal that the first bank is producing better analytics. If a bank's forecast errors widen from 5% to 10% while its peers remain stable, it may suggest a declining culture of modeling excellence at the institution in question. What's more, if the regulator produces competing forecasts and regularly beats a particular bank for accuracy, that would indicate some sort of risk management problem.
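The kind of scorekeeping described above is mechanically simple. A minimal sketch, using mean absolute percentage error (MAPE) as the accuracy yardstick and entirely invented bank names and loss figures:

```python
def mape(forecasts, actuals):
    """Mean absolute percentage error across quarters."""
    errors = [abs(f - a) / abs(a) for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

# Realized quarterly portfolio losses ($mm) over eight quarters (hypothetical)
actual = [100, 110, 105, 120, 115, 130, 125, 140]

# Two banks' unconditional forecasts for the same quarters (hypothetical)
bank_a = [98, 112, 103, 118, 117, 128, 127, 138]   # consistently close
bank_b = [90, 120, 95, 132, 105, 142, 112, 155]    # larger misses

for name, fcst in [("Bank A", bank_a), ("Bank B", bank_b)]:
    print(f"{name}: MAPE = {mape(fcst, actual):.1%}")
# prints:
# Bank A: MAPE = 1.7%
# Bank B: MAPE = 9.7%
```

Any individual quarter is noisy, but a track record of this kind, accumulated and published over many quarters, is exactly what the conditional baseline can never provide.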

Every bank should be able to forecast itself better than an outsider, simply because it has the inside scoop.

Parting Thoughts

The picture I’m painting suggests that this unconditional approach would be all nice and clean, but, of course, it won’t be when put into practice. The data will be noisy, and it will take a long time for patterns to emerge. Complicating matters further, banks are heterogeneous, so finding relevant peers with whom to draw meaningful comparisons will be very challenging.

But at present we get nothing from the conditional baseline forecasts currently incorporated in stress testing protocols. If we were to replace them with unconditional projections, they would not be missed. They are nothing more than a starting point for the analysis of the stress scenarios – a role that can be fulfilled using an unconditional set of forecasts.

Perhaps supervisors can accurately judge the internal practices of banks using subjective methods, but seemingly sloppy teams can sometimes produce accurate work. When only conditional scenarios are constructed, neither the supervisors nor the banks can tell whether the predicted paths are truly sound. Indeed, under this approach, whether banks are performing better as time goes by will remain unclear.

In contrast, nothing sharpens the analytical reflexes quite like an unconditional forecasting cage-match held in the public square.

There’s nowhere to hide. Poor-quality analytics will eventually be identified.


Tony Hughes is an expert risk modeler. He has more than 20 years of experience as a senior risk professional in North America, Europe and Australia, specializing in model risk management, model build/validation and quantitative climate risk solutions. He writes regularly on climate-related risk management issues at


© 2024 Global Association of Risk Professionals