I was very struck by a piece by Steve Randy Waldmann at Interfluidity yesterday, entitled Capital Can’t be Measured. He is basically arguing that modern financial institutions are sufficiently complex that the concept of their “capital” is subject to measurement errors of the same order of magnitude as the capital itself. This rang true to me, and put into words something that had nagged at me in reading about financial reforms, but had not come clearly to the surface of my mind.
Sure, “hard” capital and solvency constraints for big banks are better than mealy-mouthed technocratic flexibility. But absent much deeper reforms, totemic leverage restrictions will not meaningfully constrain bank behavior. Bank capital cannot be measured. Think about that until you really get it. “Large complex financial institutions” report leverage ratios and “tier one” capital and all kinds of aromatic stuff. But those numbers are meaningless. For any large complex financial institution levered at the House-proposed limit of 15×, a reasonable confidence interval surrounding its estimate of bank capital would be greater than 100% of the reported value. In English, we cannot distinguish “well capitalized” from insolvent banks, even in good times, and regardless of their formal statements.
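The arithmetic behind that claim is easy to sketch. At 15× leverage, equity is a thin sliver of the balance sheet, so small uncertainties in asset marks become enormous uncertainties in measured capital. A minimal illustration (the balance-sheet numbers are invented, not any real bank’s):

```python
# At 15x leverage, equity is assets/15, so a valuation error of e
# (as a fraction of assets) is an error of 15*e as a fraction of equity.
# Numbers below are purely illustrative.

leverage = 15.0              # assets / equity, the House-proposed limit
assets = 600.0               # $B, a hypothetical large bank
equity = assets / leverage   # reported capital: $40B

for e in (0.01, 0.05, 0.07):
    capital_error = leverage * e
    print(f"asset-mark uncertainty {e:.1%} -> "
          f"capital uncertainty {capital_error:.0%} of reported capital")
```

So an uncertainty of just ±7% in the marks on the asset side already implies a confidence interval wider than 100% of reported capital, which is all the Waldmann claim requires.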
Lehman is a case in point. On September 10, 2008, Lehman reported 11% “tier one” capital and very conservative “net leverage“. On September 15, 2008, Lehman declared bankruptcy. Despite reported shareholders’ equity of $28.4B just prior to the bankruptcy, the net worth of the holding company in liquidation is estimated to be anywhere from negative $20B to negative $130B, implying a swing in value of between $50B and $160B. That is shocking. For an industrial firm, one expects liquidation value to be much less than “going concern” value, because fixed capital intended for a particular production process cannot easily be repurposed and has to be taken apart and sold for scrap. But the assets of a financial holding company are business units and financial positions, which can be sold if they have value. Yes, liquidation hits intangible “franchise” value and reputation, but those assets are mostly excluded from bank balance sheets, and they are certainly excluded from “tier one” capital calculations. The orderly liquidation of a well-capitalized financial holding company ought to yield something close to tangible net worth, which for Lehman would have been about $24B.
So Lehman misreported its net worth, right? Not according to the law. From the Valukas Report, Section III.A.2: Valuation — Executive Summary:
The Examiner did not find sufficient evidence to support a colorable claim for breach of fiduciary duty in connection with any of Lehman’s valuations. In particular, in the third quarter of 2008 there is evidence that certain executives felt pressure to not take all of the write‐downs on real estate positions that they determined were appropriate; there is some evidence that the pressure actually resulted in unreasonable marks. But, as the evidence is in conflict, the Examiner determines that there is insufficient evidence to support a colorable claim that Lehman’s senior management imposed arbitrary limits on write‐downs of real estate positions during that quarter.
In other words, the definitive legal account of the Lehman bankruptcy has concluded that while executives may have shaded things a bit, from the perspective of what is actionable within the law, Lehman’s valuations were legally indistinguishable from accurate. Yet, the estimate of net worth computed from these valuations turned out to be off by 200% or more.
Certainly read the whole thing.
This seems to me to bear on one of the hardest issues at the heart of any “Risks to Civilization” discussion: how resilient is modern civilization really? In the past, in discussions with collaborators trying to figure out how vulnerable civilization is to large scale cyber-attacks, I’ve run into the fact that different folks have radically different intuitions about this, and it’s very difficult to come up with a methodology that makes the situation more objective. The situation is summarized in this Congressional Research Service report from a few years ago:
The importance of electronic information systems is obvious to all participants in the modern economy. When information fails to circulate, whole sectors of the economy are vulnerable. Finance, wholesale and retail trade, transportation, much of manufacturing, and many service industries would slow to a crawl without computers. Vital public services – utilities, national defense, and medicine – are equally dependent.
Information security – the safeguarding of computer systems and the integrity, confidentiality, and availability of the data they contain – has long been recognized as a critical national policy issue. Two current trends indicate that its importance is growing. First, the integration of computers into more and more aspects of modern life continues. Second, cyber-attacks, or breaches of information security, appear to be increasing in frequency, and few observers are willing to ignore the possibility that future attacks could have much more severe consequences than what has been observed to date.
The core issue, in both public and private sectors, is whether we are devoting enough resources to information security. Part of the answer must come from economic analysis. What are the costs, both historical and potential, of security breaches? How frequently can attacks be expected? Can these factors be quantified precisely, so that business firms and other organizations can determine the optimal amount to spend on information security and measure the effectiveness of that spending?
This report surveys the state of knowledge on the cost of cyber-attacks and the economics of information security. First, we summarize several studies that use stock market capitalization as a measure of the cost of cyber-attacks to victim firms. The studies find substantial short-term drops in the prices of shares of firms following the announcement of an information security breach: between 1% and 5% of market capitalization, with greater losses (up to 15%) recorded by some financial institutions where attackers had gained access to confidential customer records.
Second, we present summaries of the existing empirical data on costs attributable to cyber-crime and computer worms and viruses. What is available is a limited amount of survey data, which is frankly described by its compilers as anecdotal, but is nonetheless widely reported in the press.
Third, we analyze the reasons for the lack of statistical data: firms and organizations have strong incentives to conceal information about cyber-attacks, and there are significant uncertainties and measurement difficulties that limit our ability to specify the dollar amount at risk from information security breaches. Theoretical models that describe the returns to spending on information security shed some light on the size of potential losses, but – in the absence of better statistical data – assigning an overall figure to the cost of cyber-attacks remains highly speculative.
My personal intuition here is that the worst case is potentially very bad indeed, but I agree with the report’s judgement that this is “highly speculative” and I continue to lack any tools that would allow me to convince a reasonable and open-minded person whose natural intuitions were much happier than mine. In general, we lack any real methodology for assessing how resilient civilization is in the face of serious cyber-attacks.
Similarly, much of the heat in the discussion over the impacts of peak oil comes from a similar lack of agreement on the resilience of society in the face of that particular challenge (whenever it finally occurs). For example, there is a recent report Tipping Point: Near-Term Systemic Implications of a Peak in Global Oil Production, by David Korowicz at FEASTA, which is the most articulate and intelligent statement of the “Fast Crash Doomer” view of Peak Oil that I have yet seen. In general, at the heart of Korowicz’s argument is an assumption that civilization is quite fragile, so that peak oil can easily destroy it altogether:
Money only has value because it can be exchanged for a real asset such as food, clothing, or a train journey. As long as we share the confidence in monetary stability we can save, trade and invest. Like bonds and shares, it is a virtual asset, as it represents only a claim on something physically useful. However, the current valuation of virtual assets towers over real productive assets on which their value is supposed to be based. A bond is valuable because we expect to be paid back with interest some years hence; paying twenty times earnings for shares in a company is a measure of confidence in the future growth of that company. The output of real productive assets must collapse because of energy and resource constraints and the failing operational fabric. The implication is that virtual wealth including pension funds, insurance collateral, and debt will become worthless.
The acknowledgment by market participants that peak oil is upon us, coupled with an understanding of the consequences, is likely to permanently crash the global financial system. That is, the behavior of the market is based on fundamental physical constraints, such as rising loan defaults induced by the current economic crisis further constrained by energy and food price inflation – and its interactions with the hopes and fears of market participants, particularly their faith in the overall stability and continued growth of the system. The transition between few market participants accepting the idea and large-scale acceptance can be very rapid, though the onset of the fast transition can be difficult to predict. In other words: growing government, corporate, and public acceptance of peak oil will initiate a fear-driven conversion of a mountain of paper virtual assets into a mole-hill of resilient real assets, which will help precipitate an irretrievable collapse of the financial and economic system. Such a transition can be expected to be fear-driven and mutually re-enforcing. This is part of the reflexivity of markets, in George Soros’s phrase; or an example of a positive feedback, in the language of dynamical systems. In this context we can understand reported pressure placed upon the International Energy Agency by the United States to overstate future production in its World Energy Outlook 2009.
The end-point will be a collapse in bond and equity values. This is a result of various reinforcing processes, including loss of confidence in debt repayment, monetary confidence, supply-chain disruption, evolving dis-economies of scale, and massive potential losses in discretionary consumption.
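Korowicz’s claim that the transition from a few believers to large-scale acceptance “can be very rapid” is, in effect, a threshold-cascade claim. A toy Granovetter-style threshold model (my construction, not anything in the FEASTA report; all parameters invented) shows why such transitions can look abrupt once they start:

```python
import random

random.seed(1)

# Each agent "accepts peak oil" once the fraction of believers in the
# population exceeds that agent's personal threshold. With thresholds
# drawn uniformly on [0, 0.5], the believing fraction roughly doubles
# each round, so a tiny seed snowballs to near-universal acceptance.
n = 10_000
thresholds = [random.random() * 0.5 for _ in range(n)]

believers = 0.001  # a tiny initial minority
history = [believers]
for _ in range(50):
    believers = sum(1 for t in thresholds if t <= believers) / n
    history.append(believers)

rounds = next(i for i, b in enumerate(history) if b > 0.9)
print(f"rounds until >90% acceptance: {rounds}")
```

Note, though, that the abruptness is baked into the assumed threshold distribution: shift most thresholds upward, to reflect optimism and inertia, and the cascade never ignites at all. Whether real populations look like the first case or the second is exactly what is in dispute.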
I have tended to find this kind of argument quite unpersuasive. First of all, note that it’s almost entirely an argument about social psychology, and in particular, it posits that people will a) recognize at around the same time that peak oil is occurring, b) agree on the implications for the future of society, and c) immediately act on those implications to bring about a societal collapse.
Now, while this might be true for a society consisting entirely of people who thought like fast-crash doomers, I think it ignores the fact that most people don’t think like fast-crash doomers. And in particular, it ignores how most people react to uncertainty – for there are numerous uncertainties here.
Even amongst people studying the issue in good faith, there is no consensus on when peak oil will occur (or if it has already), and resolving that uncertainty is very difficult – it involves things like knowing how much oil there is under the territory of Middle Eastern governments that don’t have an incentive to be transparent about their reserves, so that we are reduced to complex data forensics exercises that are not likely to be broadly persuasive to the population at large until long after the fact.
Furthermore, there is uncertainty about what will happen after peak oil. My own view is the most likely outcome is that, as long as the decline rate is not too severe, society can adapt by gradually getting more efficient, and that price signals will tend to bring this about. However, since the situation will be somewhat unprecedented, it remains necessarily rather speculative – it’s not like we have twenty other planets with petroleum-powered global industrial civilizations that we got to watch go through their own oil peaks, thus allowing us to begin to generalize empirically about what happens in that case.
In the face of all this uncertainty, which is unlikely to resolve quickly, what will people do? There are two general observations that I think matter most. First, when a situation is very hard to figure out, people’s natural tendency is to do nothing about it (presumably in the hope that, if the situation is really important, it will get clearer over time). Second, most people are biased toward optimism (and thus generally want little to do with the peak oil scene). Thus I tend to think that most people will assume that peak oil is not happening until it’s an incontrovertible fact, and furthermore, even once they believe in peak oil, they will hope they can muddle through it unless and until it’s proven that they cannot. So I think Korowicz’s argument fails: he’s extrapolating from a tiny and very atypical minority of people (which includes him), and assuming that society at large will think much as that minority does, when it largely won’t.
At least, so it seems to me, but here again I recognize that all speculations about mass social psychology in circumstances that lack any close historical precedent are just that: speculations.
Into this mix of thoughts, I have been musing on the implications of Waldmann’s idea that the capital of large banks is fundamentally unmeasurable. Further on in his essay, he writes:
So, for large complex financials, capital cannot be measured precisely enough to distinguish conservatively solvent from insolvent banks, and capital positions are always optimistically padded. Given these facts, and I think they are facts, even “hard” capital and leverage restraints are unlikely to prevent misbehavior. Can anything be done about this? Are we doomed to some post-modern quantum mechanical nightmare wherein “Schrödinger’s Banks” are simultaneously alive and dead until some politically-shaped measurement by a regulator forces a collapse of the superposition of states into hunky-doriness?
Yes, we are doomed, unless and until we simplify the structure of the banks. When I say stuff like “confidence intervals surrounding measures of bank capital are greater than 100%”, what does that even mean? Capital does not exist in the world. It is not accessible to the senses. When we claim a bank or any other firm has so much “capital” we are modeling its assets and liabilities and contingent positions and coming up with a number. Unfortunately, there is not one uniquely “true” model of bank capital. Even hewing to GAAP and all regulatory requirements, thousands of estimates and arbitrary choices must be made to compute the capital position of a modern bank. There is a broad, multidimensional “space” of defensible models by which capital might be computed. When we “measure” capital, we select a model and then compute. If we were to randomly select among potential models (even weighted by regulatory acceptability, so that a compliant model is much more likely than an iffy one), we would generate a probability distribution of capital values. That distribution would be very broad, so that for large, complex banks negative values would be moderately probable, as would the highly positive values that actually get reported. If we want to make capital measurable in any practical sense, we have to dramatically narrow the range of models, so that all compliant models produce values tightly clumped around the number we’ll call capital. But every customized derivative, nontraded asset, or unusual liability in a bank’s capital structure requires modeling. The interaction between a bank holding company and its subsidiaries requires multiple modeling choices, especially when those subsidiaries have crossholdings. A wide variety of contingent liabilities — of holding companies directly, of subsidiaries, of affiliated or spun-off entities like SIVs and securitizations — all require modeling choices. 
Given the heterogeneity of real-world arrangements, no “one-size-fits-all” model can be legislated or regulated to ensure a consistent capital measure. We cannot have both free-form, “innovative” banks and meaningful measures of regulatory capital. If we want to base a regulatory scheme on formal capital measures, we’ll need to circumscribe the structure and composition of banks so that they can only carry positions and relationships for which we have standard regulatory models. “Banks’ internal risk models” or “internal valuations of Level 3 assets” don’t cut it. They are gateways to regulatory postmodernism.
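Waldmann’s thought experiment – randomly sampling among defensible valuation models and looking at the resulting distribution of “capital” – can be given a minimal numerical sketch. Everything here (the positions, the model-to-model dispersion in marks) is invented for illustration, not drawn from any real bank:

```python
import random

random.seed(0)

# Hypothetical balance sheet of a 15x-levered bank, in $B.
# Each position gets a point estimate plus a model-to-model dispersion:
# the amount by which defensible valuation models disagree about it.
positions = [
    # (point estimate, stdev across defensible models)
    (300.0, 3.0),    # traded securities: models nearly agree
    (200.0, 10.0),   # level-2 assets: modest disagreement
    (100.0, 20.0),   # level-3 / bespoke positions: wide disagreement
]
liabilities = 560.0  # assumed firm, i.e. not model-dependent
reported_capital = sum(v for v, _ in positions) - liabilities  # $40B

# Each draw picks one "compliant model" at random and computes capital.
samples = sorted(
    sum(random.gauss(v, sd) for v, sd in positions) - liabilities
    for _ in range(100_000)
)
lo, hi = samples[2500], samples[97500]  # central 95% interval

print(f"reported capital: ${reported_capital:.0f}B")
print(f"95% interval across models: ${lo:.0f}B to ${hi:.0f}B")
print(f"interval width / reported capital: {(hi - lo) / reported_capital:.0%}")
```

With these (invented but not outlandish) dispersions, the 95% interval is more than twice the reported capital, and it straddles zero: exactly Waldmann’s picture of a bank whose reported solvency and plausible insolvency are both inside the error bars.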
I am prepared to assume that the Obama administration’s instincts are completely opposed to the kind of radical simplification Waldmann calls for, and so I expect that we will continue to have Schrödinger banks for the foreseeable future. It seems to me this cuts both ways. On the one hand, if it ever really comes to the test, it is impossible for the banks to prove to a sceptical auditor that they are solvent. On the other hand, in the meantime, it is impossible for an outsider to provide convincing proof that the banks are insolvent (either individually or collectively).
“Regulatory postmodernism” is indeed a very appropriate term here: as long as we continue to believe in our large financial institutions, they will continue to exist. But if we ever lose faith, they will no longer exist.
Given, again, the idea that most people are biased toward optimism, and that most people will not take strong action in the face of uncertainty, it seems to me this lends the following character to the situation: there is no near-term inevitable reason why the financial sector cannot continue with business as usual. I still see no reason to assume, with Stoneleigh and crew, that the situation is certain to implode this year or next. At the same time, the unverifiability of bank solvency is a potentially serious amplifier of any other kind of shock. We saw this with the bursting of the housing bubble, when the markets went into a complete panic about bank solvency that was only resolved by massive government intervention, and it seems that any future event that casts credible doubt on bank solvency will have similar implications. Basically, it’s a black swan amplifier.
And, while I still can’t quantify it, this does seem important for the general issue of assessing the resilience of civilization. It seems hard to believe that modern civilization, with its massively automated, just-in-time, globally integrated character, could operate terribly well if its banks went up in a puff of smoke. And so, if Waldmann is right that they cannot satisfactorily prove their solvency in a pinch, that places a limit on how resilient we can possibly be.
I’m reminded of historian Niall Ferguson’s recent piece Complexity and Collapse.
Perhaps the most famous story of imperial decline is that of ancient Rome. In The History of the Decline and Fall of the Roman Empire, published in six volumes between 1776 and 1788, Edward Gibbon covered more than 1,400 years of history, from 180 to 1590. This was history over the very long run, in which the causes of decline ranged from the personality disorders of individual emperors to the power of the Praetorian Guard and the rise of monotheism. After the death of Marcus Aurelius in 180, civil war became a recurring problem, as aspiring emperors competed for the spoils of supreme power. By the fourth century, barbarian invasions or migrations were well under way and only intensified as the Huns moved west. Meanwhile, the challenge posed by Sassanid Persia to the Eastern Roman Empire was steadily growing.
But what if fourth-century Rome was simply functioning normally as a complex adaptive system, with political strife, barbarian migration, and imperial rivalry all just integral features of late antiquity? Through this lens, Rome’s fall was sudden and dramatic — just as one would expect when such a system goes critical. As the Oxford historians Peter Heather and Bryan Ward-Perkins have argued, the final breakdown in the Western Roman Empire began in 406, when Germanic invaders poured across the Rhine into Gaul and then Italy. Rome itself was sacked by the Goths in 410. Co-opted by an enfeebled emperor, the Goths then fought the Vandals for control of Spain, but this merely shifted the problem south. Between 429 and 439, Genseric led the Vandals to victory after victory in North Africa, culminating in the fall of Carthage. Rome lost its southern Mediterranean breadbasket and, along with it, a huge source of tax revenue. Roman soldiers were just barely able to defeat Attila’s Huns as they swept west from the Balkans. By 452, the Western Roman Empire had lost all of Britain, most of Spain, the richest provinces of North Africa, and southwestern and southeastern Gaul. Not much was left besides Italy. Basiliscus, brother-in-law of Emperor Leo I, tried and failed to recapture Carthage in 468. Byzantium lived on, but the Western Roman Empire was dead. By 476, Rome was the fiefdom of Odoacer, king of the Goths.
What is most striking about this history is the speed of the Roman Empire’s collapse. In just five decades, the population of Rome itself fell by three-quarters. Archaeological evidence from the late fifth century — inferior housing, more primitive pottery, fewer coins, smaller cattle — shows that the benign influence of Rome diminished rapidly in the rest of western Europe. What Ward-Perkins calls “the end of civilization” came within the span of a single generation.
Other great empires have suffered comparably swift collapses. The Ming dynasty in China began in 1368, when the warlord Zhu Yuanzhang renamed himself Emperor Hongwu, the word hongwu meaning “vast military power.” For most of the next three centuries, Ming China was the world’s most sophisticated civilization by almost any measure. Then, in the mid-seventeenth century, political factionalism, fiscal crisis, famine, and epidemic disease opened the door to rebellion within and incursions from without. In 1636, the Manchu leader Huang Taiji proclaimed the advent of the Qing dynasty. Just eight years later, Beijing, the magnificent Ming capital, fell to the rebel leader Li Zicheng, and the last Ming emperor hanged himself out of shame. The transition from Confucian equipoise to anarchy took little more than a decade.
In much the same way, the Bourbon monarchy in France passed from triumph to terror with astonishing rapidity. French intervention on the side of the colonial rebels against British rule in North America in the 1770s seemed like a good idea at the time — a chance for revenge after Great Britain’s victory in the Seven Years’ War a decade earlier — but it served to tip French finances into a critical state. In May 1789, the summoning of the Estates-General, France’s long-dormant representative assembly, unleashed a political chain reaction that led to a swift collapse of royal legitimacy in France. Only four years later, in January 1793, Louis XVI was decapitated by guillotine.
Although several narrative fallacies suggest that the Hapsburg, Ottoman, and Romanov empires were doomed for decades before World War I, the disintegration of the dynastic land empires of eastern Europe came with equal swiftness. What was impressive, in fact, was how well these ancient empires were able to withstand the test of total war. Their collapse only began with the Bolshevik Revolution of October 1917. A mere five years later, Mehmed VI, the last sultan of the Ottoman Empire, departed Constantinople aboard a British warship. With that, all three dynasties were defunct.
The sun set on the British Empire almost as suddenly. In February 1945, Prime Minister Winston Churchill was at Yalta, dividing up the world with U.S. President Franklin Roosevelt and Soviet Premier Joseph Stalin. As World War II was ending, he was swept from office in the July 1945 general election. Within a decade, the United Kingdom had conceded independence to Bangladesh, Bhutan, Burma, Egypt, Eritrea, India, Iran, Israel, Jordan, Libya, Madagascar, Pakistan, and Sri Lanka. The Suez crisis in 1956 proved that the United Kingdom could not act in defiance of the United States in the Middle East, setting the seal on the end of empire. Although it took until the 1960s for independence to reach sub-Saharan Africa and the remnants of colonial rule east of the Suez, the United Kingdom’s age of hegemony was effectively over less than a dozen years after its victories over Germany and Japan.
The most recent and familiar example of precipitous decline is, of course, the collapse of the Soviet Union. With the benefit of hindsight, historians have traced all kinds of rot within the Soviet system back to the Brezhnev era and beyond. Perhaps, as the historian and political scientist Stephen Kotkin has argued, it was only the high oil prices of the 1970s that “averted Armageddon.” But this did not seem to be the case at the time. In March 1985, when Mikhail Gorbachev became general secretary of the Soviet Communist Party, the CIA estimated the Soviet economy to be approximately 60 percent the size of the U.S. economy. This estimate is now known to have been wrong, but the Soviet nuclear arsenal was genuinely larger than the U.S. stockpile. And governments in what was then called the Third World, from Vietnam to Nicaragua, had been tilting in the Soviets’ favor for most of the previous 20 years. Yet less than five years after Gorbachev took power, the Soviet imperium in central and Eastern Europe had fallen apart, followed by the Soviet Union itself in 1991. If ever an empire fell off a cliff — rather than gently declining — it was the one founded by Lenin.
It seems to me that one might consider a four-factor model here, in thinking about collapse of empires and civilizations.
- Long-standing societal vulnerabilities. For example, the Roman empire was generally too big to be governed by a single emperor, and it failed to evolve an orderly way of managing imperial succession; this situation persisted for centuries without causing a catastrophic problem, yet it seems clearly unsatisfactory to us, and indeed was a fault line in the civilization that became very important in the eventual failure of the empire. Similarly, today, one might point to the presence of a globally integrated economy and financial system with no global regulator holding executive authority over the entire system as a potential vulnerability. “Regulatory postmodernism” is also clearly of this character.
- Increasing external stressors. The increasing strain on Rome due to Persian and Hunnic pressure was clearly a factor in its eventual collapse. It appears to me that peak oil is best thought of in this category. It’s not sufficient by itself to cause a collapse of society, but it certainly does place an additional strain on society and reduce the room for manoeuvre in confronting other problems. To use the complex system/critical point metaphor, it is such stressors that move the system from a stable configuration into a potentially unstable one in which system failures of large magnitude can occur. Similarly, the hollowing out of the traditional US working class by some combination of increasing automation, Asian competition, and illegal immigration strikes me as a key nexus of gradually increasing social stress within the US in particular.
- A serious shock of some kind. The Crossing of the Rhine in 406 was the key external shock in the demise of the western Roman empire. The financial crisis of 2008 was clearly a potentially catastrophic event for modern civilization if the entire global financial system had been allowed to collapse.
- Failure to effectively confront and manage the shock. Instead of uniting to expel the barbarians in 406, the Romans were embroiled in a succession struggle, with the various emperors and would-be emperors fighting each other – a fatal mistake. In our modern instance it seems, at least at present, that the 2008 shock has been successfully contained. However, the underlying system vulnerability has not been removed, and it seems likely there will be more shocks in the future.
I’m not sure that this model is particularly predictively helpful – it’s too vague and qualitative for that. But perhaps it does contain some useful insight notwithstanding.