Costly, impactful operational risk events – ranging from fraud, money laundering and record-keeping failures to geopolitical risk and Libor manipulation – grabbed many headlines in 2022. The businesses at the heart of several scandals, including Deutsche Bank, Goldman Sachs, JPMorgan Chase, Charles Schwab, UBS and NatWest, read like a who’s who of financial institutions.
What’s more, these types of operational risk disasters stretch back a decade. (Think Wells Fargo and Citigroup, among many other firms.) So, it’s fair to ask whether financial institutions need a systems engineering solution – similar to NASA’s renowned approach – to their seemingly never-ending operational problems.
For years I’ve been fixated on finding explanations for these headline-grabbing operational risk events at large, complex banks. We hear blame assigned to everything from “too much regulation” to the “too big to fail” concept. There is some truth in those rationales, but I believe there are other factors lying below the surface of banking operations that might explain these episodic events.
In a minute, we’ll discuss one flexible approach that has been adopted by banks as part of an effort to mitigate operational risk, and then examine the case for a systems engineering solution. But let’s first take a quick look at how we arrived at this stage.
Too Many Failures
Banks, of course, were forced to overhaul their creaky systems and processes in the aftermath of the 2008 global financial crisis. During the mortgage boom that preceded that crisis, and the bust that followed, it became clear that banks did not have the proper processes and controls in place to ward off massive loan manufacturing and servicing defects.
Operational risk eventually became the watchword of that post-crisis era, ushering in an environment focused on process and control.
Lamentably, as we now understand, that certainly did not resolve all problems. Banks tend to be short-sighted organizations that historically have not been particularly effective at tackling large, complex systems over a long period of time.
Wells Fargo typifies the operational weaknesses that often manifest across systemically important financial institutions. Citing recurring patterns that stretched over several years, the Consumer Financial Protection Bureau in 2022 ordered Wells Fargo to pay $3.7 billion for errors in fees and interest charges imposed on mortgage and auto loans, as well as other illegal conduct.
What is curious is that Wells Fargo has spent significant resources buttressing its systems and processes in the wake of the cross-selling scandal that has held the company in a regulatory penalty box for years. Some of that money was allocated to the adoption of Agile – an interdisciplinary process that breaks down large, complex systems into manageable components, allowing teams to iterate and develop products on a recurring basis over a shorter window of time.
But that raises the question: how could a company that has said it is “all-in for Agile” suffer such operational problems repeatedly? Agile processes certainly have their place in helping firms become nimbler and more customer focused. Banks, however, may be better off either replacing Agile or at least supplementing it with a systems engineering approach to product and service management.
Systems Engineering vs. Agile
Agile management has become popular at many banks in the last few years, as banks have come to grips with increasing competition from fintech firms and with their own limitations in managing complex systems and technology platforms.
The Agile process originated in software development and over time found its way into other areas. While this approach to product development has merit for accelerating development in a controlled manner and instilling a sense of broad-based ownership and teamwork, it’s not as robust as systems engineering.
Similar to Agile, systems engineering is interdisciplinary in scope and touts a lifecycle approach to product development focused on the big picture – but it also attempts to strike a balance between technical and nontechnical aspects of a product.
Importantly, systems engineering has a strong risk management orientation. It has been around since the 1940s and is used extensively in other industries with complex products and systems. Systems engineering is embedded, for example, at NASA across all of its flight programs.
How companies adopt a systems engineering approach to product development can vary by industry; however, there are a few common features and phases. These include product design, product realization (implementation) and technical management.
Systems engineering takes a holistic approach to managing bank products. It thinks of a product, such as a mortgage loan, as an integrated system of software (e.g., loan origination and servicing platforms) and business rules (e.g., credit underwriting and pricing engines), rather than as a standalone offering.
Improving Bank Product Execution and Risk Management
In some instances, Agile has been successful as a process for banks to develop and deliver products more quickly and with greater customer satisfaction. However, as we’ve seen with certain operational risk failures, bank products and services don’t end with delivery.
This is where systems engineering – with its ability to identify weaknesses in product features or infrastructure, its long-run focus and its feedback mechanisms for adjusting to and improving product architecture as market conditions change – can come in particularly handy.
Today, many firms get bogged down in the process of using Agile itself, rather than focusing on the system-wide implications of product design and deployment. Since Agile and systems engineering aren’t mutually exclusive, a hybrid approach can be deployed. When these processes are used in tandem, they can radically transform an organization’s ability to build and deploy a marketable, sustainable product that balances business needs against operational and regulatory risks.
Importantly, neither Agile nor systems engineering should be driven by an organization’s IT department. I’ve seen too many instances where IT-driven Agile projects led to products foundering, either because the product was not a true software solution or because the process itself was not easily internalized by business units with a more hierarchical command-and-control structure.
While Wells Fargo is just the latest firm to get caught up in the headlines, other banks have suffered operational risk failures that have cost hundreds of millions of dollars. Citigroup, for example, was required to pay a $400 million civil money penalty for risk management deficiencies and inadequate data governance controls, following its $900 million payment error to several Revlon loan creditors.
Despite their scale and resources, large banks now face an increasingly challenging business landscape – ranging from customer demand for technology-driven products and rising competition from fintechs to a level of regulatory scrutiny not seen since 2008.
To fend off encroachment from technology-oriented competitors and to meet customer demand for fast-paced delivery of products, many big banks have adopted Agile management processes. But these organizations have historically demonstrated underwhelming technical proficiency, and they are now facing more heat from regulators.
Banks that want to compete with fintechs and comply with regulators while avoiding the reputational and financial damage of an operational risk scandal should consider a systems engineering solution to their recurring operational dilemmas.
Clifford Rossi (PhD) is a Professor-of-the-Practice and Executive-in-Residence at the Robert H. Smith School of Business, University of Maryland. Before joining academia, he spent 25-plus years in the financial sector, as both a C-level risk executive at several top financial institutions and a federal banking regulator. He is the former managing director and CRO of Citigroup’s Consumer Lending Group.