
Modeling Risk

The Joys and Perils of Spreadsheet Models

Excel-driven financial risk methodologies are convenient and fast – but also vulnerable to human errors. Financial institutions can mitigate the operational risk of these models, and maximize their benefits, through the adoption of best practices.

Friday, February 4, 2022

By Cristian deRitis


Recently, I was surprised to learn that revenues from competitive esports exceeded $1 billion in 2021, an increase of nearly 15% from 2020. I don’t understand the appeal of watching somebody else play a video game, but then I learned that these competitions have been extended to spreadsheet modeling. Now we’re talking!

The Financial Modeling World Cup (FMWC) challenges competitors from around the world to use Microsoft Excel to solve complex problems in pursuit of a $10,000 cash prize and the respect of their peers. The competition is complete with live streaming, countdown clocks, scoring algorithms and color commentary.


While this might sound boring to some, the competition is particularly enthralling for those of us who use spreadsheets daily. For all its drawbacks, Excel remains a primary tool for economists, risk modelers, finance professionals and a wide range of other occupations. Unlike other analytical tools, spreadsheets are fast, eye-catching, easily accessible and ubiquitous.

However, from a model governance perspective, the flexibility that makes spreadsheets so powerful also makes them especially vulnerable. A Google search for “famous spreadsheet errors” turns up dozens of high-profile examples – ranging from financial reporting to academic research to policy analysis.

As every spreadsheet user knows, it’s far too easy to point to the wrong cell in a formula or to forget to update a reference, leading to a miscalculation. Consequently, the use of spreadsheets for accounting applications immediately raises red flags for auditors. Suspicion runs so high, in fact, that spreadsheets used in regular business processes may receive the dreaded “end-user computing” designation, with onerous audit and validation requirements designed to disincentivize their use.

Given the heightened concerns of model governance and auditing teams, why do financial institutions (among other organizations) continue to use spreadsheets?

One reason is convenience. Spreadsheets allow us to (1) calculate period-to-period differences or growth rates in a flash; (2) generate simple forecasts just by dragging across a few cells; and (3) visualize data in a chart with just a couple of clicks.

Most importantly, sharing our work with others is effortless, with no special software required. Our colleagues can review our results, easily modify assumptions, and/or extend the analysis. Indeed, everyone from the CEO to the most junior analyst can easily collaborate on the same analysis.

In short: spreadsheets democratize data analysis. For this reason, they remain a primary tool for researchers and analysts, despite the availability of other (more powerful) programming languages and applications.

Managing Data Risk: Searching for the Middle Road

Organizations take differing approaches toward managing data analysis. Some allow staff members to choose whatever tools they like to generate analysis and to support decision making. Others are so concerned about controls that they force everyone onto a single platform – or at least try hard to move everyone in this direction.

But companies sometimes take things too far. For example, I once worked for an organization where the governance team proposed banning the use of spreadsheets altogether to minimize operational risk. Unsurprisingly, that did not go over well.

As usual, the best approach for managing the risk lies somewhere in the middle. Some applications, such as financial reporting, absolutely require tight controls around data and reproducibility. A spreadsheet alone is therefore insufficient for such applications: it cannot manage complex calculations in a way that is easy to audit.

For many other applications, however, spreadsheet modeling may be a productive starting point for analysts to explore and visualize data – before committing the overhead of writing code that ingests, summarizes, models and forecasts data in a controlled setting. A quick review with a spreadsheet may catch obvious errors, anomalies or other data issues before a modeling project begins, saving both time and effort.

Recently, I’ve been exploring ways to leverage all the advantages of spreadsheets while making them more robust and less prone to “London Whale”-type errors. Suggestions include:

  1. Store raw data on its own separate tab.

This accomplishes a couple of things. First, it clearly delineates input data from any derived or calculated data. Second, it’s much easier to update the spreadsheet with new data from a single source than to manually input the data in individual fields and formulas.

  2. Leverage application programming interfaces (APIs) as much as possible.

Related to item 1, to reduce operator error, we should strive to minimize manual cutting and pasting.

Fortunately, most data providers now offer APIs that allow users to access and refresh data feeds directly. For example, my company, Moody's Analytics, permits users to link their Excel spreadsheets directly to our databases, so that they can load in the latest economic indicators and forecasts without having to copy them manually or reference another file. This greatly reduces the chances of copy-and-paste errors.
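To illustrate the pattern, here is a minimal R sketch of an API-driven refresh; the endpoint URL, response fields and workbook name are hypothetical placeholders rather than any particular vendor’s interface. The script pulls the latest observations over HTTP and writes them to a dedicated raw-data tab, combining suggestions 1 and 2.

```r
# Minimal sketch of an API-driven data refresh. The endpoint, response
# fields and workbook name are hypothetical placeholders; substitute your
# data provider's documented API.
library(httr)      # HTTP requests
library(jsonlite)  # JSON parsing
library(openxlsx)  # Excel workbook I/O

# 1. Pull the latest observations from the provider's API.
resp <- GET("https://api.example-provider.com/v1/series/unemployment-rate")
stop_for_status(resp)  # fail loudly rather than load a bad response
series <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

# 2. Overwrite the dedicated "Data" tab, leaving analysis tabs untouched.
wb <- loadWorkbook("credit_analysis.xlsx")
writeData(wb, sheet = "Data", x = series$observations,
          startCol = 1, startRow = 1)
saveWorkbook(wb, "credit_analysis.xlsx", overwrite = TRUE)
```

Because the analysis tabs reference the “Data” tab rather than hard-coded values, a single scripted refresh replaces dozens of error-prone copy-and-paste steps.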

  3. Integrate spreadsheets with other programming tools for more complex calculations.

This approach offers the best of both worlds in terms of extending capabilities to a broader audience in a controlled fashion, without requiring users to acquire new skills.

To illustrate, suppose you have a credit default model developed in the R programming language that takes as input not only borrower and loan characteristics but also forecasts for several economic indicators, such as GDP, interest rates and the employment to population ratio.

Coding that model in R would provide all the advantages of reproducibility, process control and auditability that a model governance policy requires – but it would also force users to have a working knowledge of R to operate the model. Alternatively, users can avoid learning R by leveraging an Excel add-in such as the Basic Excel R Toolkit (BERT). Through this add-in, users specify the required inputs in a spreadsheet, the add-in passes them to the R code in the background, and the credit model’s results are delivered back to the spreadsheet – without users ever having to leave Excel.
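As a rough sketch of what the R side of such a setup might look like, the function below computes a toy default probability from a handful of borrower, loan and macroeconomic inputs. The logistic form and every coefficient are illustrative placeholders, not an actual credit model.

```r
# Toy probability-of-default function, for illustration only. The logistic
# specification and all coefficients are made-up placeholders, not a real
# credit model. Loaded through BERT, it becomes callable from Excel.
PredictPD <- function(fico, ltv, gdp_growth, unemployment_rate) {
  # Linear index combining borrower, loan and macroeconomic inputs
  z <- -2.5 -
    0.010 * (fico - 700) +     # stronger credit score lowers risk
    0.030 * (ltv - 80) -       # higher leverage raises risk
    0.200 * gdp_growth +       # stronger GDP growth lowers risk
    0.150 * unemployment_rate  # weaker labor market raises risk
  1 / (1 + exp(-z))            # logistic link maps z to a probability
}
```

BERT exposes top-level R functions as Excel worksheet functions, so once this code is loaded, an analyst could enter something along the lines of =R.PredictPD(B2, B3, B4, B5) in a cell. The auditable model logic lives in version-controlled R code, while the inputs and outputs stay in the familiar spreadsheet.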

Spreadsheets will undoubtedly continue to evolve – to the point that they can learn from users and generate auditable code directly in the background. Someday, we may be building complex regression models or running machine-learning algorithms in Excel that are automatically back-tested and documented, further reducing their operational risk and extending capabilities to an even larger audience of non-programmers. Until that day, we can dramatically lower the operational risk of using Excel with a few best practices.

Parting Thoughts

My own software journey started with the spreadsheet program Lotus 1-2-3. This was an amazing piece of software that improved the quality and productivity of analysts’ work; previously, they had to perform calculations with paper, pencil and a hand calculator (if they were lucky).

Then Excel came and unlocked even greater capabilities with its graphical point-and-click environment. Suddenly, non-programmers could leverage a tool that permitted them to update numbers in real time and quickly run what-if scenarios, without memorizing a long string of commands. They could even produce professional graphs to visualize the data and communicate their ideas to a broader audience.

From a model risk management and governance perspective, spreadsheets do pose a risk. But it would be a mistake to mitigate this risk by banning their use. They are the ideal tool for applications where not only the answer, but the communication of that answer, is important, as it’s relatively easy to “show your work” in a spreadsheet.

Excel is also a great first step for more complex analysis. It’s the first place I went to understand trends in COVID-19 infections across countries, before transferring that work to a programming environment. In fact, as the Financial Modeling World Cup demonstrates, spreadsheets can be used to solve a wide variety of problems in creative and innovative ways.

Rather than bemoan them, risk managers and auditors need to acknowledge the place that spreadsheets have in a well-functioning risk organization. Best practices, along with new capabilities, can significantly reduce spreadsheets’ operational risk, while allowing an organization to leverage the expanded analytical value they bring.

 

Cristian deRitis is the Deputy Chief Economist at Moody's Analytics. As the head of model research and development, he specializes in the analysis of current and future economic conditions, consumer credit markets and housing. Before joining Moody's Analytics, he worked for Fannie Mae. In addition to his published research, Cristian is named on two US patents for credit modeling techniques. He can be reached at cristian.deritis@moodys.com.

P.S. If you think your spreadsheet skills are ready for the big league, check out some of the sample cases competitors had to complete at this year’s Financial Modeling World Cup, along with videos from the competition.



