Beyond Volatility: The Case for Measuring Market Fragility

May 8, 2026 | 4 minutes reading time | By Bhavesh Kamdar

Why traditional risk metrics may miss the more fundamental question – and what a structurally grounded approach might look like.

When Lehman Brothers collapsed in September 2008, most volatility-based risk models failed to anticipate the cascading losses that followed – not because they were poorly built, but because they were answering the wrong question. They measured how much prices were moving, not how brittle the underlying system had become.

This distinction matters more than it may initially appear. Volatility is a fluctuation metric; it captures the amplitude of price changes over a recent window. Fragility is a structural property; it measures the accumulation of vulnerabilities that reduce a market’s capacity to absorb shocks – even when those vulnerabilities have not yet produced visible price dislocations.

A Recurring Pattern

Historical crises share a common pre-crisis signature. In the months before the Global Financial Crisis, before the 2011 eurozone sovereign stress episode, and before the COVID-19 market shock of March 2020, markets became structurally brittle well before headline volatility spiked or prices collapsed. Balance sheets were impaired. Correlations were quietly rising. Liquidity was thinning at the margins.

These were not random fluctuations; they were structural conditions.

Standard risk metrics, calibrated to recent realized returns, were not designed to capture these structural conditions. The implication is that volatility-based tools such as value-at-risk (VaR), useful as they are for daily risk management, share a structural limitation: They are inherently backward-looking, responding to dislocations that have already occurred rather than to vulnerabilities that are accumulating. Risk management theory has long acknowledged this gap, but practice has been slower to adapt.

Measuring Structural Fragility

If fragility is genuinely distinct from volatility, a principled framework for measuring it would need to address several separate failure mechanisms, not just price variability.

Drawdown stress deserves independent treatment. Markets that have already absorbed sustained losses are structurally weaker even when prices appear to stabilize. Prior drawdowns reflect capital erosion, impaired balance sheets, and reduced risk appetite – conditions that persist beneath surface-level price recovery. Historical crises consistently show embedded drawdowns as a primary damage channel.
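As an illustration of this damage channel, embedded drawdown can be read directly off the price path. The sketch below is a minimal Python example with illustrative function names, not drawn from any specific framework; it computes both the deepest historical peak-to-trough loss and the current distance from peak, which can remain large even after prices appear to stabilize.

```python
import numpy as np

def max_drawdown(prices):
    """Deepest peak-to-trough decline over the price path, as a (negative) fraction."""
    p = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(p)      # highest price seen so far at each step
    drawdowns = (p - running_peak) / running_peak
    return drawdowns.min()

def current_drawdown(prices):
    """Distance of the latest price from its all-time peak: the embedded drawdown."""
    p = np.asarray(prices, dtype=float)
    peak = p.max()
    return (p[-1] - peak) / peak
```

A market that has fallen 25 percent from its peak and partially recovered still carries a nonzero embedded drawdown, which is exactly the persistence the article describes.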

Tail risk as distinct from variance warrants its own signal. Conditional value-at-risk (CVaR), or expected shortfall, measures not just the threshold of extreme losses, but their expected severity once that threshold is breached. The feedback loops and forced-selling dynamics that overwhelmed risk controls during the Long-Term Capital Management crisis in 1998, and again during the Lehman failure a decade later, were not adequately foreshadowed by variance-based measures alone.
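The distinction between the threshold and the severity beyond it can be made concrete with a historical-simulation sketch. The tail-size rounding below is one of several conventions in use, and the confidence level is an illustrative default, not a recommendation.

```python
import numpy as np

def var(returns, alpha=0.95):
    """Value-at-risk: the loss threshold at the alpha confidence level."""
    r = np.sort(np.asarray(returns, dtype=float))        # ascending: worst losses first
    n_tail = max(int(round((1 - alpha) * len(r))), 1)    # tail size, at least one point
    return -r[n_tail - 1]                                # reported as a positive loss

def cvar(returns, alpha=0.95):
    """Conditional VaR (expected shortfall): average loss beyond the VaR threshold."""
    r = np.sort(np.asarray(returns, dtype=float))
    n_tail = max(int(round((1 - alpha) * len(r))), 1)
    return -r[:n_tail].mean()
```

Two return distributions can share the same variance while one hides a far fatter tail; only the CVaR figure would distinguish them.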

Transmission Risk and Contagion

Transmission risk – how easily a local stress event becomes systemic – requires dedicated measurement. In stable regimes, asset classes exhibit heterogeneous behavior. During fragile periods, correlations rise as diversification breaks down and shocks propagate across markets. This pattern was visible during the 2011 eurozone crisis and the cross-asset selloff of March 2020. A framework that does not capture cross-market correlation dynamics will underestimate contagion risk.
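One simple proxy for this dynamic, offered as an illustration rather than a prescribed methodology, is the average pairwise correlation across a panel of asset returns; a rising value over a rolling window signals that diversification is breaking down.

```python
import numpy as np

def mean_pairwise_correlation(returns):
    """Average off-diagonal correlation across a (T observations x N assets) panel."""
    corr = np.corrcoef(np.asarray(returns, dtype=float), rowvar=False)
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]   # drop the diagonal of ones
    return off_diag.mean()
```

Computed over successive rolling windows, this statistic drifting toward 1.0 is the quantitative face of the "correlations quietly rising" pattern described above.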

Beyond these primary dimensions, realized volatility, trend deviation from long-run price levels, and abnormal trading volumes all carry meaningful amplifying information. Elevated volatility tightens risk limits and accelerates margin calls. Persistent deviation from a structural price trend signals decay that short-term noise cannot explain. Abnormal volumes, whether sudden spikes or conspicuous withdrawals, indicate forced activity or market-making disengagement, both of which increase the price impact of any subsequent shock.

The conceptual challenge is aggregation. These dimensions are not independent, and no single measure is sufficient to characterize fragility on its own. It is their joint elevation – across multiple structural signals simultaneously – that constitutes genuine brittleness.

Design Principles

Methodological choices in any fragility framework carry significant consequences for institutional credibility.

One critical choice concerns normalization. Mean-based normalization is particularly ill-suited to stress measurement: during a crisis, extreme observations pull the reference level toward the tail, weakening signals at precisely the moment they should be strongest. Normalization anchored to the median and median absolute deviation (MAD) preserves sensitivity to genuine fragility relative to typical historical behavior, rather than distorting the baseline around past episodes of stress.
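In code, the robust alternative is straightforward. This sketch uses the standard 1.4826 consistency factor, which rescales the MAD so it is comparable to a standard deviation for normally distributed data; a single crisis-sized outlier shifts a mean-based baseline substantially but leaves the median and MAD essentially untouched.

```python
import numpy as np

def robust_zscore(series):
    """Normalize against median and MAD so crisis outliers do not drag the baseline."""
    x = np.asarray(series, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    scale = 1.4826 * mad   # consistency factor: MAD matches sigma under normality
    return (x - med) / scale
```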

A second choice concerns adaptivity. Machine-learned or dynamically recalibrated weights may improve in-sample fit but introduce overfitting risk and reduce transparency. Fixed, ex-ante pillar weights are more auditable, more defensible to governance committees and regulators, and more likely to generalize across regimes not observed in the calibration window.
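A fixed-weight composite along these lines might look as follows. The pillar names and weights here are purely hypothetical placeholders, not the calibration of any production framework; the point is that the weights are set ex ante, sum to one, and are trivially auditable.

```python
# Hypothetical pillar weights, fixed ex ante; any governance committee can read
# them directly rather than reverse-engineering a learned model.
PILLAR_WEIGHTS = {
    "drawdown": 0.30,
    "tail_risk": 0.30,
    "transmission": 0.25,
    "amplifiers": 0.15,
}

def fragility_score(pillar_scores):
    """Weighted sum of already-normalized pillar scores under fixed ex-ante weights."""
    assert abs(sum(PILLAR_WEIGHTS.values()) - 1.0) < 1e-9   # weights must sum to one
    return sum(PILLAR_WEIGHTS[name] * pillar_scores[name] for name in PILLAR_WEIGHTS)
```

Because the inputs are robustly normalized, a high composite reading requires joint elevation across pillars, which is the notion of brittleness argued for above.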

A third consideration is the separation of interpretive tools from core measurement. Regime-classification models and network-based correlation diagnostics – such as hidden Markov models or minimum spanning tree analysis – are valuable for interpretation and communication. But their inclusion in composite scores risks double-counting and reduces methodological clarity. Maintaining a clean boundary between measurement and interpretation is a discipline that practitioners would benefit from applying more consistently.

Why This Matters Now

The financial system has grown substantially more interconnected since the Global Financial Crisis. Cross-market transmission risks that were episodic in earlier decades have become structural features of the landscape. As central banks have operated at the limits of conventional policy frameworks, the buffer between elevated structural vulnerability and realized stress has become thinner.

Practitioners who rely exclusively on volatility-based metrics may be measuring the right quantity for short-term risk management, while missing the structural conditions that determine how severely a shock, once arrived, will propagate.

The distinction between a market that is noisy and a market that is brittle is not merely academic. It is the difference between a system with residual shock-absorption capacity and one that is primed to amplify.

A useful analogy: A thermometer is not judged by whether it predicts illness, but by whether it reliably shows elevated temperature before or during sickness. The same evaluative standard could reasonably be applied to structural fragility measures: not whether they predict the precise date of a crisis, but whether they reliably signal elevated vulnerability before markets experience realized stress. This is a lower bar than prediction – and a more honest one.

Risk managers operating in today’s interconnected markets need tools that are not only analytically sound but also explainable to boards, regulators, and investment committees. Structural fragility measurement, built on transparent and auditable principles, offers one approach to filling a gap that purely fluctuation-based metrics leave open.

Bhavesh Kamdar is an experienced investment and risk professional, currently Senior Investment Risk Manager at AIA Investment Management (AIAIM). He is the developer of the Institutional Fragility Monitor, a multi-asset framework for measuring structural market vulnerability across liquid asset classes. The views expressed in this article and in the work he describes are his own and do not represent those of his organization.

Topics: Metrics, Data
