Model Risk Management Lessons Learned: Tracing Issues from the Pandemic to the Great Recession

Risk models have been strongly criticized for projecting high credit losses that never materialized during the pandemic, but these methodologies truly started going down the wrong path in the aftermath of the global financial crisis of the late 2000s. The problem is that by the time the pandemic rolled around, the pendulum had swung too far in favor of checklist-driven models that did not place great value on customizability and adaptability.

Friday, July 8, 2022

By Deniz Tudor

Even before the pandemic, model risk management (MRM) departments seemed to be on shaky ground. Indeed, at many financial institutions, credit risk modeling had become largely a check-the-box exercise that relied too much on outdated data.

Then, COVID-19 hit and MRM spun completely out of control, partly because model risk managers did not understand the impact of government stimulus programs (which minimized defaults) and partly because at least some of these risk managers leaned on data from a completely dissimilar crisis – the Great Recession of the late 2000s – to forecast potential losses from the pandemic.

Complicating matters further, MRM staff at financial institutions were generally trained in technical aspects of modeling (such as textbook statistical tests), but lacked an in-depth understanding of the fundamentals of their employers’ businesses. They were also critical of managerial overlays, because they didn’t understand the impact of the pandemic economy on models, and therefore couldn’t properly determine whether the COVID-19 methodologies that were being used were fit for purpose.
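Mechanically, a managerial overlay is simply a judgmental adjustment applied on top of a model's output. The following minimal sketch is hypothetical – the function name, the multiplicative form, and the numbers are invented for illustration – but it shows the kind of adjustment MRM staff were reviewing:

```python
# Hypothetical sketch of a managerial overlay. The multiplicative form
# and the example numbers are invented for illustration only.
def apply_overlay(model_pd: float, adjustment_factor: float) -> float:
    """Scale a model's probability-of-default forecast by a judgmental
    factor (e.g., to reflect stimulus support the model cannot see),
    keeping the result a valid probability."""
    return min(max(model_pd * adjustment_factor, 0.0), 1.0)

# A model trained on Great Recession data projects 8% defaults; management
# judges that stimulus programs will roughly halve realized defaults.
adjusted = apply_overlay(0.08, 0.5)
```

Rejecting such an overlay purely because it fails a textbook statistical test misses the point: the overlay exists precisely because the historical data behind the model no longer describes the current environment.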


What’s more, causality was not taken properly into account. In fact, amid the pandemic, monitoring and maintenance (M&M) reports were used as key evidence to decide whether a model was still performing, ignoring the possible length and shape of the new crisis that was unfolding right before risk modelers' eyes.

Problems were also sparked by confusing regulatory guidance. Clear instructions were not provided, for example, on how pandemic-era tests should be performed, including the data that should be included and omitted.

Models, in short, were reviewed during the pandemic based largely on statistical tests that were not suitable for every circumstance and every time period, rather than on common sense and on forecasting that considers the unique circumstances of a specific time period.
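A concrete example of the problem: drift statistics such as the Population Stability Index (PSI) are often checked against a fixed rule-of-thumb threshold (commonly 0.25). The sketch below is a hypothetical illustration, not any bank's actual test, but it shows how a stimulus-driven shift in model inputs mechanically trips such a threshold even when the model is not "broken" – the economy simply moved:

```python
import numpy as np

# Hypothetical illustration: the Population Stability Index (PSI), a common
# drift statistic, checked against the rule-of-thumb threshold of 0.25.
def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a recent sample."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)           # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # stand-in for pre-pandemic inputs
shifted = rng.normal(1.0, 1.0, 10_000)    # stand-in for stimulus-era inputs

stable_psi = psi(baseline, baseline)      # near zero: test passes
drift_psi = psi(baseline, shifted)        # well above 0.25: model "fails"
```

A checklist reads the second result as a model failure; a thoughtful reviewer asks whether the shift reflects a genuine model weakness or a policy intervention the model was never built to see.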

MRM departments that were so used to thinking in black-and-white and completing checklists couldn’t grasp the impact of government intervention – particularly during a pandemic that did not have anywhere near the same economic undercurrents as the Great Recession. But what happened in the aftermath of that crisis that sent models in the wrong direction?

MRM Evolution: Ramifications of the Great Recession

Following the global economic downturn of the late 2000s, there was a need for a strong second-line-of-defense function to challenge the way the first line of defense ran the business at banks. Eventually, this gave rise to the need to develop MRM departments within banks – most of which had been bereft of this type of modeling “team.”

Prior to the Great Recession, models commonly used in banks largely went unchallenged, causing a lot of issues for both banks and consumers. When this became obvious, banks invested heavily in enterprise risk management, and MRM units were created to scrutinize first-line-of-defense models. Moreover, various regulators encouraged and welcomed this development, issuing guidance that gave banks ground rules for an effective challenge.

In the years after the Great Recession, many of these MRM departments received praise from regulators and upper management for helping to restore banks’ reputations. MRM departments followed regulators’ guidance and interpreted it internally, within the limits of their understanding and the economic conditions of the broad post-recession expansionary period.

This approach worked just fine while everything was more or less stable. Not much had to be rethought or reinterpreted, and most MRM departments started blindly checking off boxes, using regulatory guidance as a checklist.

If the guidelines mentioned a test, for instance, the test was requested regardless of whether it was relevant to the model in question. It was easier for MRM departments to ask for everything and evaluate later than to think ahead and identify what was relevant.

Asking questions without much thought became the norm for model risk managers – and the more they asked, the more they got applauded. The pendulum had swung too far.

During this period, many MRM departments also lacked the discipline to organize their questions and reports, as regulators increasingly praised evidence of more work – instead of relevant and effective work. For instance, the higher the number of findings, the better the MRM departments thought they were doing. Consequently, the same or similar questions were often repeated, without much added value.

Embedded questions and never-ending request/response cycles were deemed extremely valuable for MRM departments (effectively justifying their existence), but no one dared bring up the poor return on investment this strategy typically yielded for banks.

The lack of written industry best practices, especially when it came to concepts like performance thresholds, also fueled these never-ending model validation cycles. Indeed, a Russian-doll approach to challenging the first line of defense seemed to legitimize the presence of MRM units.

A lot of MRM departments therefore started losing credibility within their banks, and some were eventually even considered a burden that business units should try to avoid at all costs.

Parting Thoughts

We are at a crossroads in MRM practices, where teams should reevaluate the validity and relevance of their checklists and start using an approach that is more customized than simply asking as many questions as possible and writing extremely long reports.

A thoughtless challenge to models helps no one but those who are getting praised for the wrong reasons. As some great compliance professionals know, “Proof of work does not show proof of effectiveness.”

The best approach for overhauling MRM is to start with the regulators’ guidance and then build internal checklists (in a customized and thoughtful manner) based on that advice. Today, MRM teams need to think long and hard about which methods and tests would be most appropriate in the aftermath of COVID-19.

If evaluated properly, the pandemic can offer great lessons for thoughtful model risk managers who are responsible for challenging banks’ business models. Modelers who think outside of the box can save their employers from undesired risks that can wreak economic and reputational havoc.


Deniz Tudor works in the banking industry as a lead model developer and risk strategist. She specializes in econometrics, statistics and enterprise risk management. The views expressed in this article are her own.



© 2024 Global Association of Risk Professionals