
What Risk Managers Can Learn from the Roots of Modern Technology

July 26, 2024 | By Aaron Brown

Technology is ubiquitous in risk management today, and its uses and potential dangers are too numerous to list. Risk practitioners must comprehend the how and why of all the different tools at their disposal to leverage them fully and to be better prepared for vulnerabilities like the CrowdStrike-caused IT outage.

Risk managers who want to take advantage of technology need more than mere access to it. Whether we’re talking about, say, cybersecurity or machine learning or blockchain or digital currencies, to truly benefit from technology, you must understand how and why it was developed.

By now, we all realize the interconnectedness and the effects – both good and bad – of the global technology ecosystem. The latter was on full display with last week’s faulty CrowdStrike cybersecurity update to the Microsoft Windows operating system, which caused a worldwide IT outage that impacted some large banks, among many other businesses.

This event drove home the need for risk managers to better understand the “how and why” of modern technology. Knowledge of history and of the evolution of innovation can, of course, help with this pursuit – but risk managers who are fortunate also learn valuable lessons from advisers. It was almost exactly 50 years ago that I met the first of four people I consider my mentors: the mathematical sociologist Harrison White, who recently died.

The Importance of Mentors and of Analytical Thinking

It was under White’s tutelage that I first realized the significance of comprehending the roots of current risk management technology. We tend to focus on the latest buds and flowers on the technology tree, taking the branches, trunks and roots for granted, but it’s vital to understand the whole tree.

Modern financial risk management was developed in the late 1980s and early 1990s by members of my generation: the Baby Boomers. The main technologies involved were personal computers and early networks — e.g., DARPANet, UseNet and dial-up bulletin boards.


The underpinning mathematical and financial theory, and underlying technologies, were the product of the previous generation, known as the Silent Generation (born from 1928 to 1945), who grew up during the Great Depression and WWII. As the name implies, these people are often overshadowed by the more dramatic Greatest Generation and by my Baby Boomers.

The Silent Generation was too young to fight in WWII. They graduated from college from the late 1940s to the early 1960s — often rubbing shoulders with older veterans. The ones I’m concerned with here generally took PhDs in physics or applied mathematics, regardless of what field they would eventually pursue.

Harrison White was typical of this group — born in 1930, he earned a PhD in physics from MIT in 1955 and a second PhD in sociology from Princeton in 1960. Like many others, he worked on whatever problems interested him in different fields, with both academic and private-sector sponsors. He came to my attention when I was in high school, with his brilliant paper applying matrix theory to aboriginal Australian incest taboos. I was (and am) fascinated by the idea of discovering the underlying mathematics of human behavior.

I entered Harvard as an undergraduate in 1974 and had great trouble selecting courses among all the famous professors whose work had intrigued me. When I finally made my choices, my assigned faculty advisor refused to sign my card. I had too many courses, which were too advanced and which didn’t lead to any major’s requirements.

So, I read the rules and discovered I didn’t need my advisor’s signature; any faculty member could sign. Subsequently, I went to Harrison White’s office and explained the situation. He told me he’d never been asked to sign an undergraduate program card – but that, of course, I should take the courses in which I was most interested.

More specifically, he signed me into his graduate seminar in mathematical sociology, which I attended for all four years I spent at Harvard. The seminar was a weekly three-hour freewheeling discussion of an extraordinary variety of research projects being pursued by a dozen or so students, many of whom went on to fame and fortune in a variety of fields.

White brought his famous network theories to all sorts of applications — how art is valued (look at who the artist knew, not what’s on the canvas), what makes a good leader (look to connections to followers, not personal characteristics), and how market prices move (examine information flow networks, not cash flows or interest rates).

The common denominator of hundreds of projects discussed in the seminar, both White’s work and student work, was to start from mistakes. If you want to understand human vision, he reasoned, it’s a mistake to start by assuming there is a three-dimensional world out there that the eye and brain somehow “see.” Think, instead, about how you might try to reconstruct such a world from the data the eye actually receives.

Study optical illusions and misperceptions and blind spots, White further counseled, to figure out the underlying engineering. If you assume a system is perfect, then it just is, and there’s no way to understand it. It works that way because it’s perfect, and it’s perfect because it works that way. Mistakes are the clues to mechanisms.

Similarly, if you imagine people thinking in language, and then speaking the words they think, you won’t understand either language or thinking. To understand any field you’re studying, you must consider the errors: the differences between what we think and what we say, the grammatical mistakes, the speech of people with various impairments, and how people who speak different languages communicate.

If you want to understand government, for example, throw away your civics text and forget the stuff that works well; focus instead on the perverse results, the inefficiencies and the corruptions. If you want to understand evolution, look at the design errors in living things, the waste and the extinctions.

Black Monday’s Transcendent Risk Management Lessons

In 1987, I was a card-carrying member of the first generation of Wall Street quants, and we thought we had financial markets pretty well figured out. Then, on October 19, markets did things that were supposed to be impossible.

It wasn’t just Black Monday’s unprecedented decline in equity prices that shocked us – it was a hundred other violations of established principles and a complete reintegration of markets, all of which happened overnight.

To people trained in the post-WWII applied mathematics revolution, this massive mistake was like an X-ray into markets that taught us more than 100 years of price data could. It took almost a decade, with hundreds of major contributors, to codify this into modern financial risk management.

But we didn’t waste any time trying to explain away events, or fix things to prevent recurrence, or integrate October 19 with previous understanding. Rather, we reset all assumptions and started from scratch. And with personal computers and primitive network connections, we had the technology we needed for the effort.

The role of the risk manager is not just to identify and fix the occasional deviations. Instead, he or she must keep in mind that organizations are evolved entities, consisting mainly of networks, that can handle daily tasks and moderate disruptions – but that might break down or act perversely if conditions change.

Your clues for how things work are the mistakes: the losses, the things everyone complains about but no one seems able to fix, the things that happen contrary to rule or assumption, the unusual. It’s not the size of these mistakes that matters; it’s what you can learn from them. The more you understand about how an organization has evolved to deal with problems, the better you can anticipate the problems it will respond to badly.

The Rise of Technology, and How to Use It

Modern technology gives us many tools for problem-solving. Many of these were unavailable when I was in college, or even in 1987. One example I have always found useful is the minimal spanning tree.

You start with a list of everyone in the organization, and you find the pair that is most closely connected — perhaps defined as the employees who exchanged the most messages over the past week. You connect them, then look for the second-closest pair. You continue doing this, with one exception: you never make a second connection between two people who are already in the same group.

In other words, if A is connected to B and B is connected to C, and C’s closest connection is A, you skip that link and look for the next-closest individual to C. You end up with a diagram of everyone in the organization connected to everyone else, with no loops. There are fast computer algorithms for constructing and displaying these trees.
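For readers who want to see the mechanics, here is a minimal sketch of the greedy procedure just described, written in Python. The employee names and weekly message counts are invented purely for illustration; in practice the pairwise weights would come from email or chat logs.

```python
# Minimal sketch of the spanning-tree construction described above.
# The names and message counts below are hypothetical.

# Hypothetical weekly message counts between pairs of employees.
message_counts = {
    ("Ana", "Bob"): 42, ("Bob", "Cleo"): 35, ("Ana", "Cleo"): 30,
    ("Cleo", "Dev"): 12, ("Dev", "Eve"): 9, ("Ana", "Eve"): 3,
}

employees = sorted({name for pair in message_counts for name in pair})

# Union-find bookkeeping: parent[x] tracks which group each employee belongs to,
# so we can refuse a second connection within a group (which would form a loop).
parent = {name: name for name in employees}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

tree = []
# Consider the most closely connected pairs first.
for (a, b), count in sorted(message_counts.items(), key=lambda kv: -kv[1]):
    if find(a) != find(b):           # only connect people not already in the same group
        parent[find(a)] = find(b)    # merge the two groups
        tree.append((a, b, count))

print(tree)
# [('Ana', 'Bob', 42), ('Bob', 'Cleo', 35), ('Cleo', 'Dev', 12), ('Dev', 'Eve', 9)]
```

Because a spanning tree over N people has only N-1 connections, even a large organization’s tree is compact enough to draw and inspect, which is what makes the hubs, silos and odd proximities described below visible.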

Minimal spanning trees never look remotely like organization charts, nor any other mental picture people have of an organization. You will find there are obscure low-level and mid-level people who are major information hubs in the organization, and highly visible senior people who are literally out on a limb.

You will find silos, but not the silos everyone thought you would find. You will find thick connections among people who, for compliance or security reasons, should not be too close to each other. Moreover, you will find people who seem to outrank their bosses.

Parting Thoughts

It is the work of Harrison White and other Silent Generation applied mathematicians that allows modern risk managers to apply mathematics, computers and networks to their work. Knowing how and why a technology was developed is the key to ultimately using it to your benefit.

 

Aaron Brown has worked on Wall Street since the early 1980s as a trader, portfolio manager, head of mortgage securities and risk manager for several global financial institutions. Most recently he served for 10 years as chief risk officer of the large hedge fund AQR Capital Management. He was named the 2011 GARP Risk Manager of the Year. His books on risk management include The Poker Face of Wall Street, Red-Blooded Risk, Financial Risk Management for Dummies and A World of Chance (with Reuven and Gabriel Brenner). He currently teaches finance and mathematics as an adjunct and writes columns for Bloomberg.

Topics: Data, Innovation
