
Credit Risk Measurement: Alternatives for PD-LGD-EAD on the Horizon?

Written by Marco Folpmers | February 7, 2025

Probability of default (PD), loss given default (LGD) and exposure at default (EAD) have been the go-to methodologies for credit risk measurement for the past four decades. However, they are imperfect, and there seems to be room for the emergence of a new, more powerful and more accurate framework for assessing default risk.

What might this alternative credit risk system look like? One strong possibility is a cash-flow-based model that uses machine-learning algorithms. That type of framework would have more computational power, and could potentially be better at predicting defaults than traditional tools. Before we further examine this alternative, it’s important for us to understand the current uses of the PD-LGD-EAD framework, as well as how we have arrived at this stage.

The PD-LGD-EAD risk parameters are crucial factors in the calculation of both expected loss (per exposure) and, after summation, portfolio expected loss. They also provide the raw material for calculating risk-weighted assets (RWA). Though the RWA formula is more involved than a simple multiplication, it is still easily implementable in computer code or even via a spreadsheet application. Just like expected loss, moreover, the RWA per exposure is additive from the individual exposure to the portfolio level.
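To illustrate how readily the RWA formula can be implemented in code, here is a minimal Python sketch of the Basel II IRB risk-weight function for corporate exposures. The maturity adjustment is omitted for brevity, and the PD, LGD and EAD values in the example portfolio are purely illustrative:

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def irb_capital(pd_, lgd):
    """Capital requirement K per unit of exposure under the Basel II IRB
    corporate formula (maturity adjustment omitted for brevity)."""
    # supervisory asset correlation, interpolated between 0.12 and 0.24
    w = (1 - math.exp(-50 * pd_)) / (1 - math.exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # conditional PD at the 99.9% confidence level
    cond_pd = N.cdf((N.inv_cdf(pd_) + math.sqrt(r) * N.inv_cdf(0.999))
                    / math.sqrt(1 - r))
    return lgd * (cond_pd - pd_)  # unexpected loss component

def rwa(pd_, lgd, ead):
    return irb_capital(pd_, lgd) * 12.5 * ead

# RWA is additive from the exposure level to the portfolio level,
# just like expected loss
portfolio = [(0.01, 0.45, 1_000_000), (0.03, 0.40, 500_000)]
total_rwa = sum(rwa(p, l, e) for p, l, e in portfolio)
```

The closed-form character of the formula is what makes it spreadsheet-friendly: no simulation is needed, only a normal CDF and its inverse.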

40 Years of PD-LGD-EAD

It is difficult to pinpoint an exact start date for PD-LGD-EAD, but it is fair to say this methodology emerged in the mid-1980s – roughly four decades ago.

Before the 1980s, credit risk assessment was primarily qualitative, and the credit risk officer was responsible for examining the metrics of financial strength of an obligor. During that decade, however, the idea arose that credit risk could be separated into a PD component, a loss (LGD) component and an exposure (EAD) component.


The dawn of a new era broke with the predecessor to modern PD: Altman’s z-score, a statistical bankruptcy prediction model. By the debut of the Basel I international banking regulations in 1988, PD had not yet been officially adopted – but it gained traction in the early 1990s. The term “LGD” then arose (it was previously called “recovery rates”), and the PD-LGD-EAD trio was ultimately encoded in the 2004 Basel II framework. Basel II also prescribed the closed-form formula for RWA, with standard values for the asset correlation.

The rest is history. Since the 2004 debut of Basel II, all credit risk measurement starts with PD, LGD and EAD. Some adaptations have been implemented (e.g., a new definition of default has been adopted, and input and output floors have been added), but overall the framework remains robust and dominant. Indeed, a challenger model to PD-LGD-EAD is not expected soon, partly because of regulatory inertia but also because banks’ risk systems are largely based on these basic building blocks.

Just because PD-LGD-EAD remains dominant, however, does not mean that other approaches to credit risk are unthinkable.

Cash Flow: The Heart of an Alternative Approach

If one were to construct a framework for credit risk from scratch today, what would it look like? There are no immediate challenger models, so this is a difficult question to answer. But let’s try.

An alternative model, instead of starting with an event (à la PD-LGD-EAD), could begin with the cash flow. The value of the loan, under this approach, would be the sum of the discounted cash flows. It’s not just the cash flows themselves that are important but also their timing and their dependence on expected future developments of the obligor, as well as his or her broader environment.
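The valuation idea behind this approach can be sketched in a few lines of Python; the cash flows and discount rate below are illustrative:

```python
def loan_value(cash_flows, rate):
    """Present value of a loan: the sum of its discounted cash flows.
    cash_flows[t] is the amount received at the end of year t+1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# e.g., a 10-year bullet loan of 100,000 at 5% yearly interest:
cfs = [5_000] * 9 + [105_000]
value = loan_value(cfs, 0.05)  # discounting at the contract rate gives par
```

Timing matters here: shifting the same cash amounts to later years, or discounting at a higher rate, lowers the value of the loan.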

For the development of a predictive model, the cash flow behavior of previous cohorts of loans could be captured and connected, as model inputs, to the obligor environment and to macro data. These model inputs can then be immediately linked to the loan’s cash flow performance – the model output. Though the link function is complex, it can be established through machine learning (ML) – e.g., a neural network.

This alternative model would need a lot of data and processing power, but, with current technology, this is not a big problem. ML models are now readily available to pinpoint trends in cash-flow dependency, and can identify and analyze everything from changing obligor characteristics (e.g., a divorce or specific movements on the client’s current account) to macro drivers (e.g., house prices), including the lagged impact of these properties.
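A toy sketch of the lagged-driver idea follows. All data are simulated, the assumed relation between the macro driver and the cash-flow shortfall is invented for illustration, and a two-variable least-squares fit stands in for the neural network link function:

```python
import random

random.seed(0)
T = 200
# hypothetical macro driver: yearly house-price growth
hp = [random.gauss(0.02, 0.05) for _ in range(T)]
# simulated cash-flow shortfall of a loan cohort, driven by the LAGGED
# macro value plus noise (assumed relation, for illustration only)
y = [-0.5 * hp[t - 1] + random.gauss(0.0, 0.01) for t in range(1, T)]
X = [(hp[t], hp[t - 1]) for t in range(1, T)]  # current and lagged driver

def fit2(X, y):
    """Two-variable least squares via the normal equations; a stand-in
    for the ML link function an actual model would use."""
    sxx = sum(a * a for a, _ in X); sxz = sum(a * b for a, b in X)
    szz = sum(b * b for _, b in X)
    sxy = sum(a * yi for (a, _), yi in zip(X, y))
    szy = sum(b * yi for (_, b), yi in zip(X, y))
    det = sxx * szz - sxz * sxz
    return (sxy * szz - szy * sxz) / det, (szy * sxx - sxy * sxz) / det

b_now, b_lag = fit2(X, y)  # the fit recovers the lagged effect near -0.5
```

A neural network would replace `fit2` in practice, but the input construction — pairing each observed cash-flow outcome with current and lagged drivers — is the same.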

Model Use: Processing Many “States of the World”

Once the cash-flow-driven model has been fitted, it can be used to make predictions for the current portfolio. To determine expected loss or regulatory capital or the IFRS 9 provisions of a current portfolio, for example, such a model can go through many scenarios per loan.

A simple example is presented in Figure 1, which depicts a bullet loan of €100,000, with 5% yearly interest, for a maturity of 10 years.

Figure 1: Cash Flow Scenarios


Scenario 1 in Figure 1 depicts a standard cash-flow pattern for the aforementioned loan, showing the yearly interest payments and the repayment of the principal (plus the final interest payment) in year 10. However, this pattern is only one “state of the world.”

In another hypothetical situation (Scenario 2 in Figure 1), the obligor gets into financial trouble and forfeits the yearly interest payments, which are added to his or her debt. This debt is paid off in its entirety in year 10, possibly by refinancing the loan elsewhere. In yet another scenario (Scenario 3 in Figure 1), the financial troubles become so severe that the customer does not pay back anything and, consequently, there is no cash flow at all.

These three scenarios, of course, are only a very small percentage of the large universe of possible future “states of the world.” With enhanced computing power, it will be possible to go through many possibilities per loan, to apply probability weighting and to arrive at an expected loss and a 99th percentile loss.
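The probability weighting over the three scenarios of Figure 1 can be sketched in a few lines; the scenario probabilities below are purely illustrative assumptions:

```python
def pv(cash_flows, rate=0.05):
    """Present value; cash_flows[t] arrives at the end of year t+1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# the three "states of the world" of Figure 1 for the 10-year bullet loan
# of 100,000 at 5%; scenario probabilities are illustrative assumptions
scenarios = [
    ([5_000] * 9 + [105_000], 0.95),           # 1: contractual payments
    ([0] * 9 + [100_000 * 1.05 ** 10], 0.04),  # 2: interest rolled up,
                                               #    everything repaid in year 10
    ([0] * 10, 0.01),                          # 3: no cash flow at all
]
contractual_value = pv(scenarios[0][0])
expected_loss = sum(p * (contractual_value - pv(cfs)) for cfs, p in scenarios)
# discounted at the contract rate, only scenario 3 destroys value here
```

A full implementation would run thousands of such states per loan and also read off a 99th percentile loss from the resulting loss distribution.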

The values for intermediate variables, like the financial health of countries or customer groups, must be derived first. Subsequently, these variables can be used (partly) to determine the scenarios at the obligor level, so that co-occurrences of unpaid cash flows can be considered, factoring in the collective dependency on common drivers.

Making use of ML that has been trained on many years of cash flow data of the historical portfolio, cash-flow-driven models can have many intermediate layers, yielding a predictive accuracy that is unseen today.

One advantage of such a system is that it allows for a direct estimate of the loan’s losses (missed cash flows). It also follows the behavior of the loan’s cash flows over time, instead of focusing just on a one-year PD.

This approach can also better address specific phenomena, like PD-LGD correlation. Indeed, under a cash-flow-driven model, if the default probability and the loss rate strongly correlate (e.g., due to a combined dependency on a macro driver), the system will pick this up automatically, because the combined impact is captured in the cash-flow pattern.

By contrast, separate PD and LGD models work differently: there is an indirect estimation of the potential loss (through a separate event probability and loss percentage), which means that wrong-way risk and PD-LGD correlation can easily be overlooked. Under such a scenario, the expected loss is not accurately calculated, because the covariance term is neglected and the simple multiplication of PD, LGD and EAD no longer works.
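A small Monte Carlo sketch, with illustrative parameters, shows how neglecting the PD-LGD covariance understates expected loss when both parameters load on a common macro driver:

```python
import random

random.seed(1)

def simulated_el(n=100_000, ead=1.0):
    """Monte Carlo EL when PD and LGD share a macro driver (illustrative)."""
    total = 0.0
    for _ in range(n):
        z = random.gauss(0.0, 1.0)                  # common macro state
        pd_ = min(1.0, max(0.0, 0.02 + 0.015 * z))  # PD rises with z ...
        lgd = min(1.0, max(0.0, 0.40 + 0.20 * z))   # ... and so does LGD
        if random.random() < pd_:
            total += lgd * ead
    return total / n

el_with_correlation = simulated_el()
el_naive = 0.02 * 0.40 * 1.0  # simple PD x LGD x EAD, covariance neglected
# el_with_correlation exceeds el_naive: defaults cluster in the states
# of the world where losses per default are also high
```

A cash-flow-driven model trained on realized losses would absorb this joint behavior directly, without any explicit covariance correction.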

In the table below, we have summarized some key differences between the traditional PD-LGD-EAD framework and an alternative cash-flow-based system.

Table 1: PD-LGD-EAD Models Vs. Cash-Flow-Based Systems

| Item | PD-LGD-EAD Framework | Cash-Flow-Based System |
| --- | --- | --- |
| First building block | Default event | Cash flow |
| Direct vs. indirect estimation | Indirect: three components and multiple subcomponents | Direct estimation |
| Link function | Traditional statistics (logistic regression) or ML | Only ML |
| Integrating macro factors | Transforming TTC to PIT for IFRS 9, subsequently assessing dependency on macro drivers | Macro factors are a layer of the neural network |
| Testing | Traditional testing | Enhanced testing for ML models, especially for overfitting |
| Explainability | High | Low |
| (Expected) predictive power | Moderate-to-high | High |
| Correlation of risk drivers and wrong-way risk | Difficult to capture in structural models | Inherently captured in a direct estimation system with intermediate layers |

Although it may sound strange, advanced cash-flow-based models may be easier to estimate than the current PD-LGD-EAD framework. The reason is that, once we have the historical cash flows of each loan and some additional macro information, the model is easily fitted. Architectural design of the model landscape is therefore unnecessary.

What’s more, the maintenance of a credit risk framework with multiple components (such as PD-LGD-EAD) is more cumbersome. Indeed, under this traditional approach, tests typically need to be applied to reconcile the input data sets for PD and LGD – to check whether the number of defaults in the PD data matches the total number of observations in the LGD modeling set. In a direct estimation methodology (like the proposed cash-flow-driven model), such tests are not necessary.
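A toy version of such a reconciliation test, on hypothetical data, is only a few lines:

```python
# hypothetical extracts of the two modeling data sets
pd_data = [                # one row per obligor-year: (id, year, defaulted)
    ("A", 2022, False), ("B", 2022, True), ("C", 2022, True),
]
lgd_data = [               # one row per observed default: (id, year, lgd)
    ("B", 2022, 0.35), ("C", 2022, 0.80),
]
# every default in the PD data should appear as an LGD observation
n_defaults = sum(1 for _, _, defaulted in pd_data if defaulted)
assert n_defaults == len(lgd_data), "PD and LGD data sets do not reconcile"
```

The test itself is trivial; the maintenance burden comes from having to run and govern such cross-model checks at every data refresh.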

Do You Trust a Neural Network?

On the other side of the spectrum, neural networks may have a trust issue. Even if back-tests are applied and neural networks are further tested for overfitting, we will never be able to fully understand a cash-flow-driven model at the same level as we understand a logistic regression or Altman’s z-score. We can, however, compare outcomes for different credit risk measurement approaches, while also applying overall sanity checks on the output.

It’s also true that while older generations of risk practitioners may have been trained in groundwork statistics (like logistic regression), younger FRMs can and do make use of more advanced model-fitting routines, including ML-driven, fully-fledged Python suites. What’s more, generative AI tools, such as ChatGPT and Copilot, can help FRMs navigate the statistical environment. Consequently, model coding is no longer needed; rather, only a functional knowledge of the procedures is required.

Current Uses of PD-LGD-EAD

As we consider an alternative approach to credit risk measurement, we also need to be careful not to forget the benefits of the existing approach.

The strength of the PD-LGD-EAD framework is that the three risk parameters are not only clearly defined but also linked to each other. PD is the probability (percentage) of default within a calendar year, while LGD is the percentage loss that arises if there is a default. The multiplication of PD and LGD is the expected loss as a percentage of the exposure. When PD and LGD are multiplied by the EAD, the percentage is converted to an amount in the bank’s preferred currency.
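In code, this chain of definitions is a one-liner; the parameter values below are illustrative:

```python
def expected_loss(pd_, lgd, ead):
    """Expected loss in currency units: PD x LGD x EAD."""
    return pd_ * lgd * ead

# illustrative values: 2% PD, 45% LGD, EAD of 250,000
el_amount = expected_loss(0.02, 0.45, 250_000)  # ~2,250 in currency units
el_pct = 0.02 * 0.45                            # ~0.9% of the exposure
```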

The PD-LGD-EAD approach also links directly to parallel actuarial approaches, where risk management is categorized by (1) the definition of an event (for credit risk: a default); (2) the probability that this event occurs within a year (the PD); and (3) the accompanying loss amount (PD times LGD times EAD).

These three risk parameters are also input for the calculation of expected credit loss (ECL) under IFRS 9 – an advanced, forward-looking financial accounting standard. IFRS 9 works with PD, LGD and EAD curves across time, factoring in macro developments.

However, under IFRS 9, the definitions of these risk parameters are slightly different – e.g., PD is considered a best estimate, rather than a conservative estimate. This means that, if a bank works with an IFRS 9 overlay model, it first needs to adapt the input parameters before proceeding with the ECL calculations. That said, one thing remains clear: the PD-LGD-EAD methodology is still the basis for IFRS 9.

PD-LGD-EAD is also the framework du jour for all other credit risk calculations, such as pricing models, affordability analysis and early warning systems. Its robustness is further reinforced by its encoding in the Basel standard (see section “CRE”), which has also been adopted in local regulation (e.g., in the Code of Federal Regulations in the U.S. and in CRR3 in the EU).

Parting Thoughts: What Does the Future Hold?

Cash-flow-based models for credit risk measurement would take advantage of superior data and computing power to produce more accurate predictions. It is therefore very unlikely that the traditional PD-LGD-EAD framework will continue for another 40 years.

This transition will not take place in the short run. PD-LGD-EAD, after all, is a legacy framework with a large buy-in from financial institutions. Regulatory inertia, moreover, will also slow things down. The shift to a more powerful and more accurate cash-flow-based model for credit risk may therefore take between five and 10 years.


Dr. Marco Folpmers (FRM) is a partner for Financial Risk Management at Deloitte the Netherlands.