Market Turbulence Raises Questions About Future of Risk Models

The race to create more sophisticated risk models has been radically altered by the financial crisis.

Monday, October 05, 2009, by Ludovic Lelégard


One of the strengths of risk models is supposed to be their ability to use a limited range of parameters to predict events. However, the reliability of these mathematically-driven models has been called into question by the subprime turmoil, and regulators are now eager to rationalize models and to develop stronger risk management policies.

Risk models depend partly on financial research, which must continually be updated to keep in step with the dynamics of ever-changing markets. This constant, rapid evolution heightens the challenge of developing reliable models.

The lack of data available on extreme events also makes it difficult to validate models. Nicole El Karoui, a professor at l'École Polytechnique in Paris, uses an example from the auto industry to illustrate why models have proven insufficient during periods of extreme market stress. When a car is built to drive at 70 miles per hour (mph) and is driven consistently at 110 mph, she notes, breakdowns can occur.

Another example of a model failure connected to insufficient extreme events data can be found in the aerospace industry. On June 4, 1996, Ariane 5, an unmanned rocket launched by the European Space Agency, exploded roughly 40 seconds after lift-off.

Unfortunately, Ariane 5 reused inertial guidance software developed for its predecessor (Ariane 4), and the reused code could not cope with the much larger horizontal velocity Ariane 5 experienced: the value overflowed a variable sized for Ariane 4's flight profile. Engineers did not forecast this extreme event, even after 10 years of extensive engineering and reliability tests.
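The failure mode is well documented: a 64-bit floating-point horizontal-velocity value was converted into a 16-bit signed integer, a conversion that always succeeded on Ariane 4 but overflowed on Ariane 5's steeper trajectory. A minimal sketch of the mechanism (the numeric readings below are illustrative, not flight data):

```python
def to_int16_checked(x: float) -> int:
    """Convert to a 16-bit signed integer, raising on overflow.
    The reused Ariane 4 code assumed the value would always fit."""
    n = int(x)
    if not -32768 <= n <= 32767:
        raise OverflowError(f"{x} does not fit in 16 bits")
    return n

# A velocity reading within the Ariane 4 envelope converts fine...
print(to_int16_checked(20_000.0))
# ...but a larger Ariane 5 value exceeds the 16-bit range.
try:
    to_int16_checked(40_000.0)
except OverflowError as e:
    print("overflow:", e)
```

The point is not the arithmetic itself but the unvalidated assumption carried over from the old flight profile, exactly analogous to a risk model calibrated on one market regime and run in another.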

In finance, model calibration faces the bias-variance dilemma when forecasting the evolution of markets: a model simple enough to be estimated reliably from the available data misses important behavior patterns (high bias), while a model rich enough to capture those patterns cannot be calibrated reliably because the data covering different market regimes is insufficient (high variance).
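The dilemma can be illustrated with a toy calibration: on a scarce, noisy sample, a low-degree polynomial underfits the underlying pattern, while a high-degree one fits the sample almost perfectly but generalizes poorly. All data below is synthetic; the sine curve simply stands in for an unknown market dynamic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)                                  # scarce sample
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)     # noisy observations

x_test = np.linspace(0, 1, 200)                            # dense out-of-sample grid
y_true = np.sin(2 * np.pi * x_test)                        # the "true" dynamic

results = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)                      # calibrate on scarce data
    train_mse = float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_true) ** 2))
    results[degree] = (train_mse, test_mse)
    print(f"degree {degree}: in-sample MSE {train_mse:.3f}, "
          f"out-of-sample MSE {test_mse:.3f}")
```

In-sample error always falls as the model gets richer, but out-of-sample error does not: with only 12 observations there is not enough data to pin down the flexible model's extra parameters.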

Generally speaking, during the crisis, there has been too much trust in risk models developed by external data providers, such as rating agencies. But internal models also share part of the blame for the credit crunch.

For example, Gary Gorton, a model consultant for American International Group, expressed his full confidence in his firm's credit-default-swap models and highlighted their independence from external data during an AIG investor meeting in December 2007. His confidence was not necessarily misplaced, but the models he cited were not designed to manage collateral margins resulting from huge drops in value for insured credit assets.

Ignoring Human Behavior

Financial laws are very different from the laws of physics because they rely on human reactions, which are difficult to predict. People react differently when faced with unexpected risks, such as the terrorist attacks of September 2001, and such events can trigger panic that amplifies the original shock.

Whereas in physics, models are used to predict the future fairly efficiently (e.g., weather forecasts), the forecasts of financial models are inherently biased. Consequently, to make good decisions, one must consider uncertainty and go beyond numbers.

There is no denying that computations are essential in any decision-making process, but risk management could take some pointers from fields such as surgery, which rely heavily on human decision making. Lest we forget the power of human judgment, we should remember the pilot who manually (and safely) landed a commercial aircraft on the Hudson River in New York City in January 2009.

The most sophisticated quantitative forecasting tools are not necessarily the most reliable. Whether a model is deterministic or stochastic, it may not be very realistic, and its effectiveness may depend heavily on market conditions. The well-known proverb "no risk, no reward" is certainly true, but in a risk-sensitive environment, potential profit should outweigh potential loss.

Market Risk Measurement Biases

Market risk measurement is based on value-at-risk (VaR), a measure that has recently been the subject of much criticism and that is defined as the potential loss at a specific confidence level over a certain time horizon. In most cases, VaR is computed at either the 95% or the 99% confidence level. For the latter, applied to daily losses, we expect the VaR to be exceeded roughly five times over a two-year period (about 1% of some 500 trading days). Financial institutions design their own VaR models to reflect their constraints; the approach can be historical simulation, Monte Carlo simulation or parametric.
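In the historical-simulation variant, the 99% one-day VaR is simply a low quantile of past daily P&L. A sketch (the P&L series is synthetic, drawn here from a normal distribution purely for illustration):

```python
import numpy as np

def historical_var(pnl: np.ndarray, confidence: float = 0.99) -> float:
    """One-day historical-simulation VaR: the loss that past daily P&L
    exceeded only (1 - confidence) of the time, reported as a positive number."""
    return -float(np.quantile(pnl, 1.0 - confidence))

# Synthetic daily P&L for roughly two years (~500 trading days).
rng = np.random.default_rng(42)
pnl = rng.normal(0.0, 1_000.0, 500)

var99 = historical_var(pnl, 0.99)
exceedances = int((pnl < -var99).sum())
print(f"99% VaR: {var99:,.0f}; days exceeding it: {exceedances} of {len(pnl)}")
```

By construction, about 1% of the 500 days (around five) show losses beyond the 99% VaR, matching the expectation stated above, which is precisely why VaR says nothing about how bad those five days can be.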

Nassim Nicholas Taleb, a financial derivatives specialist and former trader, has vigorously criticized VaR, describing it as "the great intellectual fraud," partly because of its inability to measure accurately "fat tail" events (also known as Black Swans).

It is difficult for any measurement tool to assign a probability to these rare events, but the credit crisis has revealed another VaR flaw: the measurement of liquidity risk. VaR assumes normal market conditions, in which assets are liquid enough for price history to be meaningful, so it has recently proven ineffective for illiquid instruments such as credit derivatives.

What About International Regulations?

In January 2009, fueled in part by numerous bankruptcy filings in the financial services industry (which proved that some financial institutions lacked the capital needed to cover themselves in very illiquid markets), the Basel Committee on Banking Supervision (BCBS) published consultative papers to amend the Basel II capital accord. The amendments call for an increase in capital charges and a decrease in the amount of leverage financial institutions can hold.

Basel II, of course, was initially designed to improve risk management and to enable banks to better align their risks with their regulatory capital. The amendments show that the BCBS recognizes that capital has become a scarce and precious resource, and also demonstrate that securitization will no longer be recognized as a risk mitigation tool.

Back-testing and stress testing are among the other important requirements of the enhanced Basel II accord. The validity of the VaR assumptions can be assessed through back-testing, which counts the occasions on which realized losses exceeded the reported VaR.
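The Basel back-testing framework formalizes this count with a "traffic light" scheme: over 250 trading days at the 99% level, up to four exceedances is the green zone, five to nine is yellow, and ten or more is red. A sketch, with an illustrative VaR series and P&L:

```python
def count_exceedances(pnl, var):
    """Days on which the realized loss was worse than that day's reported VaR."""
    return sum(1 for p, v in zip(pnl, var) if p < -v)

def basel_zone(exceedances: int) -> str:
    """Basel traffic-light zones for a 250-day back-test of 99% VaR."""
    if exceedances <= 4:
        return "green"
    if exceedances <= 9:
        return "yellow"
    return "red"

daily_var = [100.0] * 250                    # reported one-day VaR (illustrative)
pnl = [-120.0] * 3 + [50.0] * 247            # three days breach the VaR
n = count_exceedances(pnl, daily_var)
print(n, basel_zone(n))                      # prints "3 green"
```

A yellow or red result increases the multiplier applied to the bank's VaR-based capital charge, so the count feeds directly into regulatory capital.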

The stress testing requirement, meanwhile, underlines one of the shortcomings of VaR -- its inability to measure exceptional market moves beyond normal conditions. Stress tests play a significant role in the quantification of downside risk versus business opportunities; they make use of historical and hypothetical scenarios that not only assess risk but also facilitate risk mitigation.
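A stress test can be expressed as a set of risk-factor shocks under which the portfolio is revalued. The sketch below uses a simple linear revaluation; the sensitivities, scenario names and shock sizes are all hypothetical:

```python
# Hypothetical portfolio sensitivities: P&L per one-point move in each factor.
sensitivities = {"equities": 50_000.0, "credit_spreads": -80_000.0, "rates": 20_000.0}

# Scenarios as risk-factor moves (historical-style and hypothetical, all illustrative).
scenarios = {
    "1987-style equity crash": {"equities": -20.0},
    "2008-style credit event": {"equities": -10.0, "credit_spreads": 8.0, "rates": -1.5},
}

def stressed_pnl(sens, shocks):
    """Linear revaluation: sum of sensitivity times shock over the shocked factors."""
    return sum(sens[factor] * move for factor, move in shocks.items())

for name, shocks in scenarios.items():
    print(f"{name}: {stressed_pnl(sensitivities, shocks):,.0f}")
```

Unlike VaR, no probability is attached to either scenario; the output is a conditional loss that can be compared directly against capital and used to set limits or hedges.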

Closing Thoughts

Overall, in response to the crisis, risk managers have improved risk analysis processes and general policies, and regulators have taken some steps to enhance risk management practices.

While quantitative risk models definitely enhance the decision-making process (as long as a model's assumptions are clearly defined), their inherent uncertainty should encourage financial institutions to use them more cautiously, to supplement their use with qualitative analysis and to be more sensitive to their risks.

Ludovic Lelégard (FRM) is a risk manager at HSBC France, Global Banking and Markets. This article expresses the personal opinions of the author, which are not necessarily shared by his employer or any other entity.
