Tech Perspectives
Friday, August 23, 2024
By Aaron Brown
A rapid unwind of carry trades on August 5 was one of the biggest financial events of the last month. Although there seems to be no lasting damage from the market turmoil, the mechanism by which it was transmitted — exchange-traded funds (ETFs) — should give risk managers pause.
The event was blamed on the Bank of Japan’s July 31 decision to hike interest rates to 0.25%, and the largest affected ETF was the WisdomTree Japan Hedged Equity Fund (DXJ), with $4.1 billion in assets. Worryingly, a similar sell-off in corporate bonds could hit far larger ETFs, like Vanguard’s $320 billion Total Bond Market (BND). Corporate bonds are far less liquid than yen futures, and ETFs in even less liquid asset classes – like private equity, private credit, venture capital and crypto – have been growing rapidly.
My main concern, though, is not recent events, nor possible future ones. Rather, the carry-trade turmoil serves as a useful illustration of some important technology risk management principles.
The first principle is that the reforms adopted after a crisis are often the cause of the next crisis, as well as the source of the next great opportunities.
Younger readers may not remember that ETFs were invented by the Securities and Exchange Commission in reaction to the stock market crash of October 19, 1987. They may not seem high-tech, but they were a major innovation and a very clever idea for their time.
Back on Black Monday in 1987, stocks began to fall rapidly — eventually dropping nearly 25% — without any major news. This prompted many investors to want to reduce their equity exposure. Some people blamed the strategy of “portfolio insurance,” which recommended replicating a put option by selling as prices declined to limit losses. But investors have never needed a mathematical or financial theory to dump assets falling in price.
The quickest and cheapest way to reduce equity exposure in those days was to go short S&P 500 futures contracts in Chicago. Dealers took the opposite side of those trades and instructed their floor traders in New York to sell the individual S&P 500 stocks to hedge their positions.
Since each futures trade required 500 stock trades to hedge, and because NYSE technology was antiquated by modern standards, there were constant trading halts and delays, with orders waiting up to an hour to be filled and reporting of trades lagging by up to 90 minutes.
Communication between Chicago and New York was often slow, and it didn’t help that Chicago traded for half an hour after New York closed. Fund transfers were hampered by delays and shutdowns in Fedwire and NYSE SuperDOT. All of this exacerbated the confusion and panic.
The problems got worse in the evening and the following day. The investors who had entered into short futures contracts had reduced their economic exposure to equities, as they wished, but did not have any cash from the reduction. Dealers had large losses in Chicago and large gains in New York. But the New York gains would not settle for five business days, while cash margin for Chicago futures was required immediately.
The SEC wanted a way to trade the S&P 500 in a single trade that priced and settled in the same systems as the underlying stocks, and that would allow people who reduced exposure to collect cash — something essential during crises. It also wanted to concentrate the liquidity of investors trying to change their overall equity exposure, and to segregate it from that of investors betting on individual stocks.
The tricky part — high-tech for the time — was to design mechanisms to keep the S&P 500 security price aligned with the individual stock prices. Eventually that was solved, and in 1993 the first ETF was issued.
The second technology risk management principle is that failures in complex systems designed by humans are usually the result of unexpected interactions among components that seemed well-designed individually.
It is important to recall that it took about a decade for ETFs to really take off, and it was not because of their risk management advantages. Rather than seeing them as hedging instruments for portfolios of individual stocks, or as a way to rapidly change exposure in a crisis, investors simply preferred ETFs to public mutual funds.
ETFs have distinguished themselves a few times in ways consistent with their original design, but for the most part the issues that inspired them were solved in other ways. Improved system design and communication technology, for example, made handling much larger trading volumes easy and kept global systems in close sync. Improved settlement, clearing, margin and cash technology, meanwhile, allowed efficient cross-market clearing.
What’s more, once investors embraced equity ETFs, the vehicles mutated in ways not envisioned by their inventors. Today’s nearly 10,000 global ETFs no longer concentrate liquidity – but, instead, fragment it. Some of the largest and fastest-growing ETFs have underlyings much less liquid than S&P 500 stocks — and are, indeed, sometimes entirely illiquid. And while many ETFs are actively managed, some do not even reveal their positions.
The original S&P 500 SPDRs, or “Spiders,” had a simple price synchronization mechanism. “Authorized participants” could either exchange their SPDRs for the underlying stocks, or deliver the underlying stocks and receive SPDRs in return.
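The arbitrage behind that mechanism is simple enough to sketch in a few lines of Python. This is a stylized illustration; the tickers, basket and cost threshold are hypothetical, not the actual SPDR specifications.

```python
# Stylized authorized-participant (AP) arbitrage that keeps an ETF's market
# price near the value of its underlying basket (NAV). All tickers, prices
# and thresholds are hypothetical illustrations.

def nav_per_share(stock_prices, basket, etf_shares_per_unit):
    """Value of the deliverable basket, expressed per ETF share."""
    basket_value = sum(stock_prices[t] * n for t, n in basket.items())
    return basket_value / etf_shares_per_unit

def ap_action(etf_price, nav, cost=0.02):
    """The arbitrage an AP would do, if the gap exceeds trading costs."""
    if etf_price > nav + cost:
        # ETF rich: buy the stocks, deliver them to create ETF shares, sell those.
        return "create"
    if etf_price < nav - cost:
        # ETF cheap: buy ETF shares, redeem them for stocks, sell the stocks.
        return "redeem"
    return "hold"  # gap too small to cover costs; prices effectively in line

prices = {"AAA": 100.0, "BBB": 50.0}   # hypothetical component prices
basket = {"AAA": 2, "BBB": 4}          # shares delivered per creation unit
nav = nav_per_share(prices, basket, etf_shares_per_unit=4)   # -> 100.0
print(ap_action(101.0, nav))   # -> create
print(ap_action(99.0, nav))    # -> redeem
```

Either action is profitable whenever the gap exceeds trading costs, and the resulting buying and selling is what pulls the ETF price back to the value of its basket.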
New ETFs, on the other hand, require more complex and less reliable ways to keep ETF prices in line with underlyings.
The third technology risk management principle is that many people, especially those trained in mathematical probability and academic statistics, are apt to have a narrow view of risk — what Nassim Taleb calls the “ludic fallacy.”
The mathematical theory of probability began with consideration of gambling games. These games use manufactured randomness — devices like dice or cards. This type of randomness does not exist: it is an illusion, like a pseudorandom number generator. Cards and dice follow the same laws of physics as other material objects; we treat them as random, but, by design, they have difficult-to-predict individual outcomes and easily predictable long-term average outcomes.
Although the advance of science has turned up several types of natural randomness, many people ignore them in favor of the easy-to-work-with ludic probabilities. The particular natural randomness relevant to this discussion is statistical thermodynamics.
By the end of the 19th century, physicists understood gases to consist of molecules racing around, bouncing off each other and the walls of their container. Despite the huge number of independent particles acting without coordination, this system obeyed macroscopic rules such as Amontons’s Law, which states that the pressure of a gas is directly proportional to its temperature (on the Kelvin scale) when the volume of the gas is held constant.
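In symbols, with volume held constant:

\[
\frac{P_1}{T_1} = \frac{P_2}{T_2} \quad\Longrightarrow\quad P_2 = P_1\,\frac{T_2}{T_1}
\]

Heating a sealed container from 300 K to 330 K, for example, raises its pressure by 10%.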
But how do the individual molecules know to speed up or slow down due to changes at the faraway borders of the container? To a proton zipping around inside a container the size of a 12-ounce soda can, the can is as vast as a sphere up to 50 times the orbit of Pluto would be to a human.
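For readers who want to check that scaling, here is a back-of-the-envelope calculation (all sizes are rough, assumed round numbers):

```python
# Back-of-the-envelope check of the soda-can analogy. All sizes are rough,
# assumed round numbers, for illustration only.
proton_size        = 8.4e-16  # meters (proton charge radius, ~0.84 fm)
can_size           = 0.12     # meters (height of a 12-ounce soda can)
human_size         = 1.8      # meters
pluto_orbit_radius = 5.9e12   # meters (Pluto's mean distance from the Sun)

scale  = can_size / proton_size    # the can, measured in proton sizes
sphere = scale * human_size        # the same ratio applied to a human
print(f"{sphere / pluto_orbit_radius:.0f}x Pluto's orbit")
# -> roughly 44x, consistent with the "up to 50 times" figure above
```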
Many readers, I’m sure, have studied physics and can explain this. But my concern is a similar question in finance rather than physics.
A simple picture of finance is that each security has a price that is affected by news about that security. The value of indices like the S&P 500, or aggregations like interest rates, is simply the result of averaging individual security prices. But if that’s true, how are aggregate relationships maintained?
For example, suppose there’s some market-moving event — e.g., an earthquake in Japan, an Iranian attack on an oil tanker in the Persian Gulf, or the successful clinical trial of a blockbuster drug. After such an event, major indices tend to move first, but the news also has to be transmitted to the prices of individual securities to keep markets in sync.
This process is neither smooth nor particularly rational. It’s complex, but a simple version is that if the index goes down, traders in individual components ignore good news about individual securities and exaggerate bad news, until individual prices line up with the index. This is followed by a slower process of sorting out the actual effects of the move.
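A toy caricature of that two-phase process, with an assumed adjustment rule and made-up numbers (this illustrates the described dynamic; it is not a market model):

```python
# Toy caricature of the syncing dynamic described above. The adjustment
# rule and all numbers are assumptions for illustration only.
index_shock = -0.03                                 # index falls 3% on the event
own_news    = {"A": +0.01, "B": 0.00, "C": -0.02}   # security-specific news
prices      = {k: 100.0 for k in own_news}

# Fast phase: traders pull each component toward the index-implied level,
# ignoring security-specific news, until the cross-section lines up.
target = 100.0 * (1 + index_shock)
pull   = 0.5                 # fraction of the gap closed per round (assumed)
for _ in range(10):
    for k in prices:
        prices[k] += pull * (target - prices[k])

# Slow phase: the actual effects get sorted out security by security.
for k in prices:
    prices[k] *= 1 + own_news[k]

print({k: round(v, 2) for k, v in prices.items()})
# -> {'A': 97.97, 'B': 97.0, 'C': 95.06}: all dragged down with the index
#    first, individual differences restored only afterward
```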
The syncing of indices and individual securities after market-moving events changed radically with the introduction of public financial futures and options in 1973, and then again with the launch of ETFs. Today, ETFs are mostly responsible for this syncing, even in less liquid markets.
No one designed this, and we can only guess how it might play out in a crisis. Consider, for example, a bond sell-off.
In the 1980s, institutions that wanted to shed fixed-income risk quickly either sold the more liquid Treasury issues, took a short position in interest-rate futures or entered into over-the-counter forwards or swaps. These were all liquid markets that could adjust quickly, and they were tightly linked to keep prices in sync.
Prices on the vast majority of bonds, which had limited liquidity, fell into line only slowly. What trading there was had large bid/ask spreads.
Today, few institutions make extensive use of fixed-income ETFs. That’s partly because it’s cheaper to buy bonds at issue and hold them to maturity than to hold fixed-income ETFs – and the expected returns over inflation, from near 0% to about 0.50%, are too low to tolerate even moderate fees.
Equity ETFs, in contrast, are an efficient way for institutions to hold stocks, because fees for the major indices are low enough to compete with direct holdings of stocks. (The current expected return of equities over inflation, say 6% or so, is far higher than fees.)
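The arithmetic is easy to check. In the sketch below, the expected real returns come from the text; the five-basis-point fee is an assumed, illustrative level:

```python
# Fee drag as a share of expected real (over-inflation) return.
# Real-return figures are from the text; the fee level is an assumption.
cases = {
    "bond ETF":   {"real_return": 0.0035, "fee": 0.0005},  # ~0.35% real return
    "equity ETF": {"real_return": 0.06,   "fee": 0.0005},  # ~6% real return
}
for name, c in cases.items():
    share = c["fee"] / c["real_return"]
    print(f"{name}: a {c['fee']:.2%} fee consumes {share:.0%} of the real return")
# -> the same 0.05% fee eats about 14% of the bond ETF's expected real
#    return, but only about 1% of the equity ETF's
```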
Most fixed-income ETFs are held by retail investors and active traders like hedge funds. In a bond sell-off, institutions can take advantage of the recent innovation of portfolio trading. They can put together large packages of their bonds and sell them at lower cost than executing ETF trades. Furthermore, AI pricing tools — like Bloomberg’s BVAL — allow both institutions and dealers to ascertain accurate prices.
The dealers who buy these portfolios have powerful optimization software that can combine them with the firm’s long and short inventory, and assemble the result into pools deliverable into the 2,500 available fixed-income ETFs. (To further zero out their exposure, dealers can also buy or sell some individual bonds.)
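One small piece of that optimization problem can be sketched as follows. The tickers, basket definitions and greedy fill rule are all hypothetical; real dealer systems optimize jointly across thousands of ETFs, short positions and derivative hedges.

```python
# Toy version of the basket-assembly problem: given bond inventory (par
# amounts) and each ETF's creation-basket requirements, how many creation
# units can be delivered? All names and numbers are hypothetical.
inventory = {"bondA": 900, "bondB": 400, "bondC": 250}

baskets = {  # par of each bond required per creation unit (assumed)
    "ETF1": {"bondA": 100, "bondB": 50},
    "ETF2": {"bondB": 40, "bondC": 60},
}

def max_units(basket, inv):
    """Creation units the current inventory can support for one ETF."""
    return min(inv[bond] // amount for bond, amount in basket.items())

# Greedy pass: fill ETF1 first, then ETF2 from what remains. A real
# optimizer would choose the fill order (and partial fills) jointly.
for etf, basket in baskets.items():
    units = max_units(basket, inventory)
    for bond, amount in basket.items():
        inventory[bond] -= units * amount
    print(f"{etf}: deliver {units} creation units")
print(f"residual inventory to hedge or sell: {inventory}")
```

The residual inventory at the end is where the individual-bond trades mentioned above come in.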
Instead of ETFs providing their original function of turning 500 equity trades into one ETF trade, today one portfolio trade in fixed income can lead to hundreds of ETF and individual bond trades — plus perhaps some futures and other derivatives for additional hedging. In principle, each of those trades could touch off hundreds more. Meanwhile, hedge funds and dealers are making arbitrage trades to keep ETF prices in line with each other and with individual bond prices.
Thanks to modern technology, the sheer number of trades is not likely to cause problems. Rather, it’s the number of entities and systems touching the markets that is worrisome.
It’s hard to predict the overall market effect if one of the 2,500 fixed-income ETFs screws up its rules for deliverable portfolios – or if one dealer or hedge fund or prime broker miscalculates its risk, or if there is a failure of one entity with long and short positions in thousands of ETFs and bonds. What’s more, if BVAL breaks down in rapidly moving markets without much individual name liquidity, the consequences could be dire.
Lastly, while clearing and settlement systems are far more robust than they were in the 1980s, they basically rely on VaR margining and risk management — or similar tools — to monitor portfolio risks and ensure sufficient cash cushions. If these calculations fail, especially in systematic ways, there may not be cash available to cover losses.
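The flavor of that calculation, and of its failure mode, can be seen in a minimal historical-simulation VaR sketch. Everything here is illustrative; the P&L series is randomly generated.

```python
# Minimal historical-simulation VaR: size the margin cushion so losses
# exceed it on only ~1% of days *like those in the lookback sample*.
import random

random.seed(7)
pnl = [random.gauss(0.0, 1.0) for _ in range(500)]  # 500 daily P&Ls, $mm

def historical_var(pnl, confidence=0.99):
    """Loss level exceeded on roughly (1 - confidence) of sample days."""
    losses = sorted(-x for x in pnl)   # losses as positive numbers, ascending
    return losses[int(confidence * len(losses))]

print(f"99% one-day VaR: ${historical_var(pnl):.2f}mm")
# The systematic failure mode: a day unlike anything in the 500-day
# sample produces a loss the cushion was never sized for.
```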
I don’t say this is likely to happen, only that thinking about ETFs with 1980s assumptions misses the dangers and opportunities that have been created by modern technology.
One lesson risk professionals should take away from these technology risk management principles is that the past must not be confused with the future.
On the one hand, if a market-altering event has happened in the past, it could happen again, however confident people are that the problem has been solved forever. On the other hand, the assumption that the future will be like the past is among the most dangerous one can make. However much better the new dam is than the one that failed, it might still fail; and the path of the flood, and the damage it causes, cannot be predicted from the last flood.
Despite all the risk management progress that has been made since Black Monday, the chance of a sell-off in a major market cascading into panic and a liquidity crisis due to market infrastructure failures is not zero. However, the path such a disaster would take today would likely be quite different.
Technology risk managers, in short, need long memories, but must also be careful not to place too much emphasis on history.
Aaron Brown has worked on Wall Street since the early 1980s as a trader, portfolio manager, head of mortgage securities and risk manager for several global financial institutions. Most recently he served for 10 years as chief risk officer of the large hedge fund AQR Capital Management. He was named the 2011 GARP Risk Manager of the Year. His books on risk management include The Poker Face of Wall Street, Red-Blooded Risk, Financial Risk Management for Dummies and A World of Chance (with Reuven and Gabriel Brenner). He currently teaches finance and mathematics as an adjunct and writes columns for Bloomberg.