
Too-big-to-fail banks: Is more capital the best way forward?

The term “too-big-to-fail” (TBTF) found its way into the lexicon of finance after the global financial crisis of 2007-2009. At its core, the issue of TBTF is whether the financial system can survive the failure of a large, globally interconnected financial institution.

The issue of TBTF continues to be redefined and debated: Which financial institutions should be so identified, and have current regulations resolved the problems of the financial crisis?

The TBTF business model proved faulty, but not because it was wrong to be big, global and diversified. These financial institutions were simply following the lead of their global clients and serving their needs. It proved faulty because the capability to see into these financial behemoths was missing.

Regulators needed a blueprint and a plan to address long-overdue data standards, legacy systems, risk-data aggregation issues and infrastructure problems. Without these improvements, TBTF CEOs and their regulators could not — and still cannot — see risk exposures building up within a single TBTF institution, nor across many interconnected ones.


With these improvements, TBTF institutions and their regulators will be better able to determine and monitor their risk. They can then be rewarded with less capital, not more. 

Recently, Neel Kashkari, president of the Minneapolis Fed and former head of the government’s Troubled Asset Relief Program (TARP), said in a speech on too-big-to-fail at the Economic Club of New York that the problem is still with us. The solution? TBTF institutions need more capital.

Jamie Dimon, CEO of JPMorgan Chase, writing in his recent chairman and CEO letter to shareholders, said the opposite — we have more than enough capital. Mark Carney, governor of the Bank of England, said in a speech at the Institute of International Finance’s Washington Policy Summit that we still have more to do in solving the TBTF problem.

Richard Fischer, vice chairman of the Federal Reserve, opined in an interview on CNBC on the usefulness of the Dodd-Frank remedies required of TBTF institutions, such as living wills and bankruptcy resolution plans, which have yet to be tested.

The issue of TBTF has revolved around whether more capital (the equivalent of saving more for a rainy day) can absorb the systemic impact of another financial crisis. Sadly, as we learned, capital was then, and is still now, a measure with which to count down to failure. Even with mandates for more capital, higher-quality capital and more-liquid capital, capital will be depleted as before.

More capital will buy a bit more time, but not the time needed to successfully unwind a TBTF financial institution, nor even a modestly sized, globally interconnected one. As we learned from the last financial crisis, no amount of capital would have prevented the withdrawal of business from those thought to be “weak” institutions in what amounted to a classic bank-run panic.

Regulators knew little about the effect that fast-deteriorating conditions would have on these institutions. Neither did the accountants, lawyers, consultants, investment bankers, credit-rating agencies and others who were entrusted with due diligence. Nor did the external auditors who signed off on these firms’ books and records.

Why were regulators, who were mandated to oversee these financial institutions, not able to see the risks they were taking?

After the financial crisis, it became widely recognized that the computer-coded numbers that underpin risk and transaction-processing systems and that represent the identity of counterparties and financial products (the equivalent of tax and vehicle identification numbers) were not standardized. Many codes existed for the same entity or product, making aggregation and valuation of a single client’s risk positions in financial products technically cumbersome and untimely.

In addition, risk data could not be aggregated in any timely manner across the many businesses of global financial companies individually, nor across the many globally connected financial institutions. Regulators could not see the risk exposure that any one financial institution had within the interconnected global financial system.
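To make the aggregation problem concrete, here is a minimal sketch in Python, using entirely hypothetical internal codes and exposure figures. It shows how the same counterparty booked under different identifiers by different business silos looks like several small exposures rather than one large one, until the aliases are mapped to a single canonical entity identifier.

```python
from collections import defaultdict

# Hypothetical trade records: each business silo tags the same counterparty
# with its own internal code, so "LEH-NY-001", "LB_HOLDINGS" and "4522"
# all refer to the same legal entity.
trades = [
    {"counterparty": "LEH-NY-001", "exposure": 120.0},   # fixed-income desk
    {"counterparty": "LB_HOLDINGS", "exposure": 85.5},   # derivatives desk
    {"counterparty": "4522", "exposure": 40.0},          # prime brokerage
    {"counterparty": "ACME-CORP", "exposure": 10.0},     # unrelated client
]

# Naive aggregation by raw code: the single counterparty appears as three
# modest exposures instead of one concentrated one.
naive = defaultdict(float)
for t in trades:
    naive[t["counterparty"]] += t["exposure"]

# A mapping of internal aliases to one canonical entity identifier
# (hypothetical values standing in for a universal legal-entity code).
alias_to_canonical = {
    "LEH-NY-001": "ENTITY-001",
    "LB_HOLDINGS": "ENTITY-001",
    "4522": "ENTITY-001",
    "ACME-CORP": "ENTITY-002",
}

# Aggregation after mapping: the true concentration becomes visible.
canonical = defaultdict(float)
for t in trades:
    canonical[alias_to_canonical[t["counterparty"]]] += t["exposure"]

print(dict(naive))      # three fragments of the same exposure
print(dict(canonical))  # {'ENTITY-001': 245.5, 'ENTITY-002': 10.0}
```

The mapping table in this toy example is exactly the kind of costly, error-prone reconciliation step that standardized identifiers are meant to eliminate.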

This inconvenient truth was revealed by what was found in the digitized records of the Lehman Brothers bankruptcy. Those who looked into the books and records of Lehman — regulators, forensic accountants, bankruptcy lawyers, creditors and counterparties — observed a disparate deluge of data and a huge swamp of risk, with no way of aggregating or valuing what they found.

There was no consistency in the automated records identifying or distinguishing Lehman as a trading partner or servicer to other financial institutions. There was no mechanism to aggregate Lehman’s products and businesses into a total view of Lehman’s own risk or of the risk others had with Lehman.

It wasn’t just Lehman; it was a fundamental flaw in the infrastructure of the global financial system — no universal, digitized identification of financial market participants, their hierarchies of business ownership, the products they own, the monies they owe or the collateral they have pledged, and no way to aggregate all of that to calculate the risks they are exposed to.
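The ownership-hierarchy point can be illustrated with another small sketch, again with hypothetical entity names and amounts: exposures booked against individual subsidiaries only reveal the total exposure to a group when they can be rolled up to the ultimate parent.

```python
# Hypothetical ownership hierarchy: each legal entity points to its parent;
# None marks an ultimate parent.
parent_of = {
    "GlobalBank Holdings": None,
    "GlobalBank Securities": "GlobalBank Holdings",
    "GlobalBank Asia": "GlobalBank Holdings",
    "GlobalBank Asia Derivatives": "GlobalBank Asia",
}

# Exposures booked against individual legal entities.
exposures = {
    "GlobalBank Securities": 150.0,
    "GlobalBank Asia": 60.0,
    "GlobalBank Asia Derivatives": 90.0,
}

def ultimate_parent(entity: str) -> str:
    """Walk up the ownership hierarchy to the ultimate parent."""
    while parent_of.get(entity) is not None:
        entity = parent_of[entity]
    return entity

# Roll each exposure up to its ultimate parent to see total group exposure.
group_exposure: dict[str, float] = {}
for entity, amount in exposures.items():
    top = ultimate_parent(entity)
    group_exposure[top] = group_exposure.get(top, 0.0) + amount

print(group_exposure)  # {'GlobalBank Holdings': 300.0}
```

Without a digitized record of who owns whom, the 300 units of exposure to the parent in this toy example would remain scattered across three apparently unrelated names.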

Why, after a half-century of automation, did these practices persist? Primarily because of a culture of performance tied directly to incentive compensation, which drove the self-interests of silo-organized business managers, an organizational structure prevalent in the management of behemoth financial enterprises.

This further incentivized front-office, revenue-generating automation while leaving back- and middle-office processes to languish in underfunded, risk-prone and costly legacy applications.

The result was hastily conceived, unintegrated, point-in-time technology implementations, each supporting a single business unit’s functions on its own non-standard data sets. Producing enterprise-wide aggregated risk measures therefore required inelegant, costly and risk-prone mapping processes.

Diverse data sets also existed across the entire global financial supply chain, and trying to aggregate risk across these supply-chain participants created costly and risk-prone interconnection problems.

When we couple the organizational complexity of thousands of legal entities comprising a TBTF financial institution with the underlying complexity of generations of silo-built legacy systems, we can begin to understand the enormity of the task of dismantling global financial conglomerates.

Too little is still known about how these giants were assembled and how they interoperate, leading many to ponder how they could be broken up through a living-will process. A living will requires the drafter to have a full inventory of assets, liabilities and organizational components.

In addition, it must contain an inventory of internal systems and interconnections, as well as external entanglements with all outside facilities operators and infrastructure organizations. Without such a technology blueprint for breaking up these financial behemoths, regulators may inadvertently pull the wrong brick or tug the wrong pipe and topple the whole edifice. 

We need something more than additional capital and the living wills that regulators have demanded as a roadmap for dismantling a failing TBTF institution. Contemplating the death of a financial institution through a living will should be replaced with a reengineering plan that creates risk transparency and allows TBTF financial institutions to survive in perpetuity.


Allan Grody is the president of Financial InterGroup Advisors and an Editorial Board member of the Journal of Risk Management in Financial Institutions. He writes on subjects at the intersection of risk, regulation, data and technology.


The views expressed by contributors are their own and not the views of The Hill.