The Index Investor Blog

Why Did So Many Ignore Warnings Before the Crash of 2021?

Tesla. Bitcoin. GameStop. The equity market as a whole. Even the bond markets. The list goes on. Why did so many investors ignore the warning signs that were flashing red before the Crash of 2021? And more importantly, what was different about those who did not ignore those warnings?

As always, there were many root causes that amplified each other’s effects.

Let’s start at the individual level.

Tali Sharot’s research has shown how humans have a natural bias towards optimism. We are much more prone to updating our beliefs when a new piece of information is positive (i.e., better than expected in light of our goals) rather than negative (“How Unrealistic Optimism is Maintained in the Face of Reality”).

Individuals seek more information about possible future gains than about possible losses (e.g., “Valuation Of Knowledge And Ignorance In Mesolimbic Reward Circuitry”, by Charpentier et al).

We tend to seek, pay more attention to, and place more weight on information that supports our current beliefs than information that is inconsistent with or contradicts them (known as the confirmation or my-side bias). Moreover, as Daniel Kahneman showed in his book “Thinking, Fast and Slow”, this process often happens automatically (“System 1”). When we notice information that is not consistent with our mental model/current set of beliefs about the world, our subconscious first tries to adjust those beliefs to accommodate the new information.

Only when the required adjustment exceeds a certain threshold is the feeling of surprise triggered, calling on us to consciously reason about its meaning using “System 2”.

Yet even then, this reasoning is often overpowered by group-level factors.

Having spent so much of our evolutionary existence in a world without writing or math, humans naturally create and share stories rather than formal models to make sense of our uncertain world. Stories are powerful because they have both rational and emotional content; while that makes them easy to remember, it also makes them very resistant to change.

Another group-level phenomenon that is deeply rooted in our evolutionary past is competition for status within our group. Researchers have found that when the result of a decision will be private (not observed by others), we tend to be risk averse. But when the result will be observed, we tend to be risk seeking (e.g., “Interdependent Utilities: How Social Ranking Affects Choice Behavior”, by Bault et al).

Other research has found that when we are engaged in social status competition, we actually have less working memory available for reasoning about the task at hand (e.g., “Increases in Brain Activity During Social Competition Predict Decreases in Working Memory Performance and Later Recall” by DiMenichi and Tricomi).

Another evolutionary instinct comes into play when uncertainty is high. Under these conditions, we are much more likely to rely on social learning and copying the behavior of other group members, and to put less emphasis on private information that is inconsistent with or contradicts the group’s dominant view. The evolutionary basis for this heightened conformity is clear – you don’t want to be cast out of your group when uncertainty is high.

It is also the case that groups will often share more than one story or belief at the same time. Research has found that “as a result of interdependent diffusion, worldviews will emerge that are unconstrained by external truth, and polarization will develop in homogenous populations” (e.g., “Interdependent Diffusion: The Social Contagion Of Interacting Beliefs” by James P. Houghton).

All of these group causes have been supercharged in our age of hyperconnectivity and multiple media platforms.

Finally, individual and group causes are often reinforced by organizational level phenomena.

As successful organizations grow larger, there is a tendency to recruit and promote people who have similar views. Growth also tends to increase the emphasis organizations place on predictable results, which leads them to penalize errors of commission (e.g., false alarms) more heavily than errors of omission (e.g., missed alarms).

Thus employees in larger organizations are likely to wait longer and require strong evidence before warning that danger lies ahead.

In his January 2021 letter to investors (“Waiting for the Last Dance”), GMO’s Jeremy Grantham explained why larger organizations are less likely to warn clients when markets are severely overvalued:

“The combination of timing uncertainty and rapidly accelerating regret on the part of clients [for missing out on gains as the bubble inflates] means that the career and business risk of fighting bubbles is too great for large commercial enterprises…

“Their best policy is clear and simple: always be extremely bullish. It is good for business and intellectually undemanding. It is appealing to most investors who much prefer optimism to realistic appraisal, as witnessed so vividly with COVID. And when it all ends, you will as a persistent bull have overwhelming company. This is why you have always had bullish advice in bubbles and always will."

In sum, that so many suffered large losses when the post-COVID bubble burst should come as no surprise. It was merely the latest version of a plot line that has been repeated for centuries in speculative markets.

The real lessons to be learned come from those investors who reduced their exposure and changed their asset allocations before markets suddenly and violently reversed. (Analysts are still searching, likely in vain, for the cause of the crash; such is the nature of complex adaptive systems.)

What did these investors do differently?

We know one thing they didn’t do – believe that they could personally overcome the very human and deeply rooted evolutionary biases noted above. Research says that the odds against success in that endeavor are long indeed.

Rather than trying to conquer their personal biases, these investors established – and followed – investment processes that were designed to offset those biases and their emotionally charged effects.

For example, they didn’t fall prey to the “this time is different” myth, and instead used traditional valuation metrics to inform their asset allocation decisions. Their default conclusion was that the valuation metrics were right, and they demanded very solid, logical, and evidence-based arguments before rejecting the signals those metrics sent.
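As a purely hypothetical sketch of what such a process-driven rule might look like (the choice of the CAPE ratio as the metric, the assumed historical median, and the allocation weights are all illustrative assumptions, not a description of any actual investor's method):

```python
# Hypothetical sketch of a valuation-driven allocation rule. The metric (CAPE),
# the historical median, and the weights are illustrative assumptions only.

def target_equity_weight(cape_ratio: float,
                         baseline_weight: float = 0.60,
                         historical_median_cape: float = 16.0) -> float:
    """Scale equity exposure down as valuations rise above their historical median."""
    overvaluation = cape_ratio / historical_median_cape
    if overvaluation <= 1.0:
        return baseline_weight                              # at or below median: hold the baseline
    return max(0.20, baseline_weight / overvaluation)       # reduce proportionally, floor at 20%

if __name__ == "__main__":
    for cape in (15, 25, 35):
        print(f"CAPE {cape}: target equity weight {target_equity_weight(cape):.2f}")
```

The point of such a rule is not its specific numbers but that it is written down in advance, so that overriding it requires the kind of explicit, evidence-based argument described above.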

In their own forecasting, they followed best practices. They spent a lot of time making sure they were asking the right questions; they paid attention to base rates; they were disciplined about seeking out high value information to update their views; and they were always alert to surprises that warned their beliefs and models were incomplete.

They also sought out forecasts from a wide range of other sources that were based on different information and/or methodologies, and then combined them to increase their predictive accuracy.

And they focused their forecasting efforts on time horizons beyond the range of the algorithms, where human effort can still produce a profitable edge.

Most important, perhaps, is this timeless truth: The investors who avoided the Crash of 2021 weren’t any smarter than those who were wiped out. They were just more conscious of their own weaknesses, and as a result their investment processes followed a more disciplined approach.


The Deadly Race Between Vaccinations and COVID Infections from New Variants

With new highly transmissible SARS-CoV-2 variants now appearing in the US (including those first identified in the UK, South Africa, Brazil, and now one in Ohio) we are in a deadly race.

If we can exponentially increase the number of people vaccinated, we may be able to limit the exponential increase in the number of people who will otherwise suffer COVID infection and eventually overwhelm hospital capacity, without the severe lockdowns we now see in Europe.

Unfortunately, the evidence to date has shown that US vaccinations have been increasing at a slow rate. For example, between 20 December and 15 January, the percent of the US population that had been vaccinated increased from 0.17% to 3.71%. In Israel, it went from 0.07% to 25.34%.
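To put those figures on a comparable footing, the implied average daily growth rates over that roughly 26-day period can be backed out as follows (a simple illustrative calculation that assumes smooth compounding):

```python
# Back-of-the-envelope comparison of implied average daily growth in the
# vaccinated share of the population (assumes smooth compounding over 26 days).

def implied_daily_growth(start_pct: float, end_pct: float, days: int) -> float:
    return (end_pct / start_pct) ** (1.0 / days) - 1.0

DAYS = 26  # 20 December to 15 January
print(f"US:     {implied_daily_growth(0.17, 3.71, DAYS):.1%} per day")
print(f"Israel: {implied_daily_growth(0.07, 25.34, DAYS):.1%} per day")
```

Roughly speaking, Israel's vaccinated share was compounding about twice as fast per day as the US share over this window.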

While the Biden administration promises to speed up vaccinations in the US, based on the rates at which infections by the new variants have grown in the UK, EU, South Africa, and Brazil, it seems very likely (80% probability) that more severe lockdowns and economic losses lie ahead.

If this forecast turns out to be wrong, it will be because of a dramatic improvement in the US vaccination rate over the next month.

Where the US Election Could Go Off the Tracks

I listened to an excellent webinar yesterday put on by the Harvard Kennedy School's Applied History Program, discussing a new book by Alexander Keyssar on the history of the Electoral College.

Of particular interest, given the uncertainties in this year's election, were Keyssar's descriptions of the 1824 and 1876 elections, which involved Electoral College controversies, and the precedents they set.

Looking at this year's election, Keyssar identified many (too many, really) ways controversy could arise. These include, in the chronological sequence in which events will likely take place, the following:

  • State legislatures legally select the electors who will vote in the Electoral College, based on the state's vote for presidential candidates. These votes could be challenged in court, as they were in Florida in 2000. In those states where the governor and the legislative majority are from different parties, further conflicts could arise over which electors should be sent to the Electoral College. In the election of 1876, these controversies led to some states sending two sets of electors to the Electoral College.

  • At the level of the Electoral College it only gets worse. If multiple slates of electors are sent, more litigation will result. The Constitution also provides that electoral votes will be counted "in the presence" of the Vice President, but it is silent on who actually counts the votes and the power they have to choose one set of electors over another.

  • If neither candidate receives the requisite 270 votes in the Electoral College, the selection of the President passes to the House of Representatives, in which each state's delegation casts one vote (determined by a majority of that state's Representatives). A majority of all state delegations (26) is required to choose the President. The Senate separately selects the Vice President, with each Senator casting one vote and a majority of the whole Senate required.

  • In the current House of Representatives, assuming all Democrats and Republicans vote, respectively, for Biden and Trump, the latter would win.

  • However, the new members of the House and Senate are, according to the Constitution, sworn in on January 3rd, 2021. So, in theory, the composition of the House could change before it elects the President.

  • Now let's assume that, perhaps because of litigation, the House struggles and has not chosen a President by 12 noon on January 20th. That is the time when, per the 20th Amendment to the US Constitution, the President's term ends.

  • While Donald Trump may disagree, the Constitution is clear. If a new President has not been chosen by either the Electoral College or the House of Representatives by 12pm on January 20, 2021, Donald Trump will no longer be President, nor will Mike Pence be Vice President.

  • In the absence of a legally selected President and Vice President, under the line of presidential succession, the Speaker of the House of Representatives becomes Acting President, until the House of Representatives selects a new President and the Senate selects a new Vice President. Given that, it is theoretically possible (assuming the Republicans control a majority of state delegations in the House, and the Democrats control the Senate) that the United States could end up with a President and Vice President from different parties.

Needless to say, what is about to occur in the United States between November 3rd and January 20th is very likely to be a story for the ages.

I also suspect that the full impact of the coming uncertainty shock is not yet fully reflected in asset prices.

Scarred Beliefs and a Less Dynamic Economy: Insights from Day 1 of the KC Fed's Jackson Hole Symposium


At Britten Coyne Partners, the Strategic Risk Institute, The Index Investor, and The Retired Investor, our goal is to help clients avoid strategic failure and the painful losses it brings.

Our core process for accomplishing this goal is shown in the chart below. We stress the importance of anticipating and monitoring emerging threats, and of being alert to surprises that often indicate a new threat you have missed. We also stress the importance of appropriate assessment, early warning, and adapting in time, using multiple approaches to minimize the impact of dangerous threats.


With this model in mind, I always pay attention to the academic research presentations that are on the agenda for the Federal Reserve Bank of Kansas City’s annual Jackson Hole Symposium (colloquially known as summer camp for the world’s most important central bankers).

This year's conference opened today, and the two papers featured this morning were on issues we have frequently addressed at BCP, SRI, Index, and Retired.

The first paper was “What Happened to U.S. Business Dynamism?” by Ufuk Akcigit and Sina Ates. The authors note, “Market economies are characterized by the so-called “creative destruction” where unproductive incumbents are pushed out of the market by new entrants or other more productive incumbents or both...

“A byproduct of this up-or-out process is the creation of higher-paying jobs and reallocation of workers from less to more productive firms. [However], the U.S. economy has been losing this business dynamism since the 1980s and, even more strikingly, since the 2000s. This shift manifests itself in a number of empirical regularities", which Akcigit reviewed at this morning's session:

1. Market concentration has risen.
2. Average markups have increased.
3. Average profits have increased.
4. The labor share of GDP has gone down.
5. Market concentration and labor share are negatively associated.
6. The labor productivity gap between frontier and laggard firms has widened.
7. Firm entry rate and the share of young firms in economic activity has declined.
8. Job reallocation has slowed.
9. The dispersion of firm growth has decreased.
10. Aggregate productivity growth has fallen, except for a brief pickup in the late 1990s.
11. A secular decline in real interest rates has occurred.

Akcigit and Ates’ observations are also consistent with research from McKinsey, which found that, “the top 10 percent of companies now capture 80 percent of positive economic profit…[Moreover], after adjusting for inflation, today’s superstar companies have 1.6 times more economic profit, on average, than the superstar companies of 20 years ago” (“What Every CEO Needs to Know About Superstar Companies”).

Of the hypotheses that Akcigit and Ates tested to explain these trends, they found the evidence and their modeling best supported the hypothesis that, “reduction in knowledge diffusion [across firms] between 1980 and 2010 is the most powerful force in driving all of the observed trends simultaneously.”

Discussion at this morning’s symposium focused on the plausible obstacles to faster diffusion of advanced knowledge across firms. These included more patenting by larger firms, larger firms’ acquisition of patents from smaller firms, aggressive patent litigation by large firms, large firms luring away smaller firms’ employees with the most patents, and larger firms’ heavy investment in lobbying for and supporting regulatory changes that strengthen their advantage.

I was surprised, however, that another very likely obstacle to faster diffusion wasn’t mentioned this morning. In “Digital Abundance and Scarce Genius”, Benzell and Brynjolfsson found that the shortage of talented employees is the most important constraint on the faster deployment and diffusion of advanced technologies across the economy. And Korn Ferry found, in “The Global Talent Crunch”, that “the United States faces one of the most alarming talent crunches of the twenty countries in our study”.

So what is to be done, given the authors’ observation that the COVID-19 pandemic will likely make these conditions worse?

Looking at possible policy changes that could help to avert this outcome, this morning’s discussion focused on the need for stronger anti-trust enforcement and other actions that would intensify the level of competition in the US economy. To these I would add that recovering students’ COVID-19 learning losses and substantially strengthening the US education system are also critical (and will require painful structural changes, not just further infusions of cash).

The second paper presented this morning was “Scarring Body and Mind: The Long-Term Belief Scarring Effects of COVID-19”, by Kozlowski, Veldkamp, and Venkateswaran.

They find that, “the largest economic cost of the COVID-19 pandemic could arise from changes in behavior long after the immediate health crisis is resolved. A potential source of such a long-lived change is scarring of beliefs, a persistent change in the perceived probability of an extreme, negative shock in the future…

“The long-run costs for the U.S. economy from this [belief] channel are many times higher than the estimates of the short-run losses in output. This suggests that, even if a vaccine cures everyone in a year, the Covid-19 crisis will leave its mark on the US economy for many years to come.”

This is consistent with Robert Barro’s earlier research on the impact of “disaster risk” on investors’ decisions and required returns (see his 2006 paper on “Rare Disasters and Asset Markets in the 20th Century”).

It is also consistent with the findings in another recent paper, “The Long Run Consequences of Pandemics”, by Jorda et al from the Federal Reserve Bank of San Francisco.

They analyzed the medium to long-term effects of pandemics, and how they differ from other economic disasters, by studying major pandemics using the rates of return on assets stretching back to the 14th century.

They concluded that, “significant macroeconomic after-effects of pandemics persist for decades, with real rates of return substantially depressed, in stark contrast to what happens after wars”, and observe that “this is consistent with the neoclassical growth model: capital is destroyed in wars, but not in pandemics; pandemics instead may induce relative labor scarcity and/or a shift to greater precautionary savings” by altering consumers’ beliefs.

This morning’s discussion of the paper by Kozlowski et al focused on the critical question of why belief scarring seemed to have had a much stronger and longer-lasting impact after the Great Depression than after the 9/11 terrorist attacks.

The consensus seemed to be that the range of very visible policy responses taken to reduce the risk of further terrorist attacks after 9/11 reduced belief scarring by much more than the policy responses to the Great Depression did…

In sum, along with actions to restore business dynamism and strengthen competition, public perceptions of the efficacy of various policy responses to the COVID-19 pandemic will very likely be critical to minimizing its long-term negative impact on economic activity. Both of these are key indicators to monitor in the months ahead.

Britten Coyne Partners advises clients on strategic risk governance and management issues in the face of high uncertainty. The Strategic Risk Institute provides online and in-person courses leading to a Certificate in Strategic Risk Governance and Management. Since 1997, The Index Investor has published global macro research and asset allocation insights, with a particular focus on avoiding large portfolio losses. The Retired Investor has the same focus, customized for the unique needs of investors in the decumulation phase of their financial life.

Sources of Forecast Error and Uncertainty

When seeking to improve forecast accuracy, it is critical to understand the major sources of forecast error. Unfortunately, this is not something that is typically taught in school. And learning it the hard way can be very expensive. Hence this note.

Broadly speaking, there are four sources of forecast uncertainty and error:

1. An incorrect underlying theory or theories;
2. Poor modeling of a theory to apply it to a problem;
3. Wrong parameter values for variables in a model;
4. Calculation mistakes.

Let’s take a closer look at each of these.

Theories

When we make a forecast we are usually basing it on a theory. The problem here is twofold.

First, we often fail to consciously acknowledge the theory that underlies our forecast.

Second, even when we do this, we usually fail to reflect on the limitations of that theory when it comes to accurately forecasting real world results. Here’s a case in point: How many economic forecasts have been based on rational expectations and/or efficient market theories, despite their demonstrated weaknesses as descriptions of reality? Or, to cite an even more painful example, in the years before the 2008 Global Financial Crisis, central bank policy was guided by equilibrium theories that failed to provide early warning of the impending disaster.

The forecasts we make are actually conditional on the accuracy of the theories that underlie them. In the case of high impact outcomes that we believe to have a low likelihood of occurring, failing to take into account the probability of the underlying theory’s accuracy can lead to substantial underestimates of the chance a disaster may occur (see, “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes”, by Ord et al).
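To make this concrete with purely illustrative numbers: suppose a model built on theory T puts the probability of a disaster D at one in a million, we are 99% confident that T is the right theory, and the disaster is far more likely (say one in a thousand) if T is wrong. Then:

```latex
% Illustrative numbers only
\begin{align*}
P(D) &= P(D \mid T)\,P(T) + P(D \mid \neg T)\,P(\neg T) \\
     &= (10^{-6})(0.99) + (10^{-3})(0.01) \approx 1.1 \times 10^{-5}.
\end{align*}
```

Even a 1% chance that the underlying theory is wrong raises the estimated risk by roughly an order of magnitude, which is the essence of the Ord et al. argument.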

There are three other situations where the role of theory is usually obscured.

The first is forecasts based on intuition. Research has found that accurate intuition is developed through the combination of (a) repeated experience over time, (b) gained in a system whose structure and dynamics don’t change, (c) repeated feedback on the accuracy of one’s forecasts, and (d) explicit reflection on that feedback, which gradually sharpens intuition.

When we make a forecast based on intuition, we are (usually implicitly) assuming that these conditions hold for the situation at hand. Yet in too many cases, they do not (e.g., because the underlying system is continually evolving). In these cases, our “intuition” very likely rests on a small number of cases that are easily recalled, either because they are recent or still vivid in our memory.

The second is a forecast based on analogies. The implicit theory here is that those analogies have enough in common with the situation at hand to make them a valid basis for a forecast. In too many cases, this is only loosely true, and the resulting forecast has a higher degree of uncertainty than we acknowledge.

The third is a forecast based on the application of machine learning algorithms to a large set of data. It is often said that these forecasts are “theory free” because their predictions are based on the application of complex relationships that were found in the analysis of the training data set.

Yet theory is still very much present, including, for example, the theories that underlie the various approaches to machine learning, and those that guide the explanation of the extremely complex process that produced the forecast.

Another theoretical concern with machine learning-based forecasts is the often implicit assumption that either the system that generated the data used to train the ML algorithm will remain stable in the future (which is not the case for complex adaptive social or socio-technical systems like the economy, society, politics, and financial markets), or that it will be possible to continually update the training data and machine learning algorithm to match the speed at which the system is changing.
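As a minimal sketch of why this assumption matters (the synthetic data, the deliberately drifting relationship, and the simple linear model are all illustrative assumptions), compare a model fit once to old data with one refit each period:

```python
# Illustrative sketch: a model fit once to "old" data versus one refit on a
# rolling window, when the data-generating process drifts over time.
import numpy as np

rng = np.random.default_rng(0)
n_periods, n_obs = 8, 200

def simulate_period(t):
    """Generate one period of data; the true slope drifts upward each period."""
    slope = 1.0 + 0.3 * t
    x = rng.normal(size=n_obs)
    y = slope * x + rng.normal(scale=0.5, size=n_obs)
    return x, y

x0, y0 = simulate_period(0)
static_fit = np.polyfit(x0, y0, 1)           # fit once, never updated
rolling_fit = static_fit.copy()

for t in range(1, n_periods):
    x, y = simulate_period(t)
    mse_static = np.mean((np.polyval(static_fit, x) - y) ** 2)
    mse_rolling = np.mean((np.polyval(rolling_fit, x) - y) ** 2)
    print(f"period {t}: static MSE {mse_static:6.2f}   rolling MSE {mse_rolling:6.2f}")
    rolling_fit = np.polyfit(x, y, 1)         # refit on the most recent data
```

As the simulated system drifts, the error of the static model keeps growing, while the periodically refit model tracks it much more closely.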

Models

While theories are generalized approaches to explaining and predicting observed effects, models (i.e., a specification of input and output variables and the relationships between them) apply these theories to specific real world forecasting problems.

This creates multiple sources of uncertainty. The first is the decision about which theory to include in a model, as more than one may apply. RAND’s Robert Lempert, a leading expert in this area, advocates the construction of “ensemble” models that combine the results of applying multiple theories. Most national weather services do the same thing to guide their forecasts. However, ensemble modeling is still far from mainstream.

A second source of uncertainty is the extent to which the implications of a theory are fully captured in a model. A recent example of this was the BBC’s 24 February 2020 story, “Australia Fires Were Worse Than Any Prediction”, which noted that the fires surpassed anything that existing fire models had simulated.

A third source of modeling uncertainty has been extensively researched by Dr. Francois Hemez, a scientist at the Los Alamos and Lawrence Livermore National Laboratories in the United States whose focus is the simulation of nuclear weapons detonations.

He has concluded that all models of complex phenomena face an inescapable tradeoff between their fidelity to historical data, robustness to lack of knowledge, and consistency of predictions.

In evolving systems, models which closely reproduce historical effects often do a poor job of predicting the future. In other words, the better a model reproduces the past, the less accurately it will predict the future, even if its forecasts are relatively consistent.
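A familiar, simplified way to see one side of this tradeoff (this is a generic overfitting illustration, not Hemez's own formulation; the data and polynomial degrees are assumptions) is to compare a simple model with a high-order polynomial that reproduces history almost perfectly:

```python
# Illustrative sketch of the fidelity-versus-prediction tradeoff: a high-degree
# polynomial reproduces the historical sample almost perfectly but predicts
# out-of-sample data far worse than a simple linear fit.
import numpy as np

rng = np.random.default_rng(1)
x_hist = np.linspace(0, 5, 20)
y_hist = 2.0 * x_hist + rng.normal(scale=1.0, size=x_hist.size)   # noisy linear "history"
x_new = np.linspace(5, 7, 10)
y_new = 2.0 * x_new + rng.normal(scale=1.0, size=x_new.size)      # the "future"

for degree in (1, 12):
    coefs = np.polyfit(x_hist, y_hist, degree)
    fit_err = np.mean((np.polyval(coefs, x_hist) - y_hist) ** 2)
    pred_err = np.mean((np.polyval(coefs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: in-sample MSE {fit_err:8.2f}   out-of-sample MSE {pred_err:12.2f}")
```

The high-degree fit has near-zero error on the historical sample but a far larger error on the out-of-sample points.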

Hemez also notes that, “while unavoidable, modeling assumptions provide us with a false sense of confidence because they tend to hide our lack-of-knowledge, and the effect that this ignorance may have on predictions. The important question then becomes: ‘how vulnerable to this ignorance are our predictions?’”

“This is the reason why ‘predictability’ should not just be about accuracy, or the ability of predictions to reproduce [historical outcomes]. It is equally important that predictions be robust to the lack-of-knowledge embodied in our assumptions” (see Hemez in “Challenges in Computational Social Modeling and Simulation for National Security Decision Making” by McNamara et al).

However, making a model more robust to our lack of knowledge (e.g., by using the ensemble approach) will often reduce the consistency of its predictions about the future.

The good news is that forecast accuracy often can be increased by combining predictions made using different models and assumptions, either by simply averaging them or via a more sophisticated method (e.g., shrinkage, extremizing, etc.).
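A minimal sketch of the simplest versions of this (equal-weight averaging of probability forecasts, plus a common extremizing transform; the input forecasts and the exponent are illustrative assumptions):

```python
# Minimal sketch: combining several probability forecasts of the same event
# by simple averaging, and then "extremizing" the average (pushing it away
# from 0.5). The input forecasts and the exponent a are illustrative assumptions.
import numpy as np

forecasts = np.array([0.55, 0.70, 0.60, 0.65])   # hypothetical forecasts from different sources

simple_average = forecasts.mean()

def extremize(p: float, a: float = 2.5) -> float:
    """Transform an averaged probability toward 0 or 1 via a log-odds exponent."""
    odds = (p / (1 - p)) ** a
    return odds / (1 + odds)

print("Simple average:      ", round(simple_average, 3))
print("Extremized aggregate:", round(extremize(simple_average), 3))
```

Extremizing pushes the aggregate away from 0.5 to compensate for the fact that averaging forecasts based on partly non-overlapping information tends to produce overly middling probabilities.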

Parameter Values

The values we place on model variables are the source of uncertainty with which people are most familiar.

As such, many approaches are used to address it, including scenarios and sensitivity analysis (e.g., best, worst, and most likely cases), Monte Carlo methods (i.e., specifying input variables and results as distributions of possible outcomes, rather than point estimates), and systematic Bayesian updating of estimated values as new information becomes available.
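To illustrate the last of these, here is a minimal sketch of Bayesian updating of a single parameter estimate as new observations arrive (a normal prior and normal observation noise are simplifying assumptions chosen for brevity):

```python
# Minimal sketch: Bayesian updating of a single uncertain parameter
# (normal prior, normal observation noise; both simplifying assumptions).

def bayes_update(prior_mean, prior_var, observation, obs_var):
    """Return the posterior mean and variance after one noisy observation."""
    weight = prior_var / (prior_var + obs_var)       # how much to trust the new data
    post_mean = prior_mean + weight * (observation - prior_mean)
    post_var = (1 - weight) * prior_var
    return post_mean, post_var

mean, var = 0.05, 0.02 ** 2        # e.g., an initial estimate of 5%, +/- 2%
for obs in (0.07, 0.06, 0.08):     # hypothetical new data points, each +/- 3%
    mean, var = bayes_update(mean, var, obs, 0.03 ** 2)
    print(f"updated estimate: {mean:.3f} (sd {var ** 0.5:.3f})")
```

Each update pulls the estimate toward the new observation in proportion to how uncertain the prior was relative to the observation noise.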

However, even when these methods are used, important sources of uncertainty can still remain. For example, in Monte Carlo modeling there is often uncertainty about the correct form of the distributions to use for different input variables. Typical defaults include the uniform distribution (where all values are equally possible), the normal (bell curve) distribution, and a triangular distribution based on the most likely value as well as those believed to be at the 10th and 90th percentiles. Unfortunately, when variable values are produced by a complex adaptive system, they often follow a power law (Pareto) distribution, and the use of traditional distributions increases forecast uncertainty.
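A brief sketch of why the distributional choice matters (all parameters are illustrative assumptions): compare the extreme percentiles produced by a thin-tailed normal input with those from a heavy-tailed Pareto input of broadly similar typical size.

```python
# Illustrative sketch: the choice of input distribution drives tail outcomes
# in a Monte Carlo simulation. Parameters are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

normal_draws = rng.normal(loc=1.0, scale=0.5, size=n)        # thin-tailed input
pareto_draws = 1.0 + rng.pareto(a=2.0, size=n) * 0.5         # heavy-tailed input, similar scale

for name, draws in (("normal", normal_draws), ("pareto", pareto_draws)):
    p99, p999 = np.percentile(draws, [99, 99.9])
    print(f"{name:7s} 99th pct: {p99:6.2f}   99.9th pct: {p999:6.2f}   max: {draws.max():8.2f}")
```

The central values are broadly comparable, but the Pareto input generates far more extreme tail outcomes, which is exactly where risk estimates are most sensitive.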

Another common source of uncertainty is the relationship between different variables. In many models, the default decision is to assume variables are independent, which is often not true.
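A similarly minimal sketch (the correlation level and the definition of a "bad" outcome are illustrative assumptions) shows how an independence assumption understates the chance that two loss drivers go wrong at the same time:

```python
# Illustrative sketch: assuming independence between two input variables
# understates the probability of jointly bad outcomes when they are in fact
# correlated. Correlation and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
rho = 0.7                                     # assumed correlation between the two drivers
cov = [[1.0, rho], [rho, 1.0]]

independent = rng.normal(size=(n, 2))
correlated = rng.multivariate_normal([0.0, 0.0], cov, size=n)

threshold = -1.5                              # a "bad" outcome for each driver
for name, draws in (("independent", independent), ("correlated", correlated)):
    both_bad = np.mean((draws[:, 0] < threshold) & (draws[:, 1] < threshold))
    print(f"{name:11s}: P(both drivers below {threshold}) = {both_bad:.4f}")
```

With a correlation of 0.7, the probability of both drivers being in their bad tails at once is several times what the independence assumption implies.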

A final source of uncertainty is that, under different conditions, the values of some model input variables may change only with varying time lags, which are rarely taken into account.

Calculations

Researchers have found that calculation errors are distressingly common, and especially in spreadsheet models (e.g., “Revisiting the Panko-Halverson Taxonomy of Spreadsheet Errors” by Raymond Panko, “Comprehensive Review for Common Types of Errors Using Spreadsheets” by Ali Aburas, and “What We Don’t Know About Spreadsheet Errors Today: The Facts, Why We don’t Believe Them, and What We Need to Do”, by Raymond Panko).

While large enterprises that create and employ complex models increasingly have independent model validation and verification (V&V) groups, and while new automated error checking technologies are appearing (e.g., see the ExcelInt add-in), their use continues to be the exception, not the rule.

As a result, a large number of model calculation errors probably go undetected, at least until they produce a catastrophic result (usually a large financial loss).

Conclusion

People frequently make forecasts that assign probabilities to one or more possible future outcomes. In some cases, these probabilities are based on historical frequencies – like the likelihood of being in a car accident.

But in far more cases, forecasts reflect our subjective belief about the likelihood of the outcome in question – e.g., “I believe the probability of X occurring before the end of 2030 is 25%.”

What few people realize is that these forecasts are actually conditional probabilities that contain multiple sources of cumulative uncertainty.

For example, the statement “the probability of X occurring before the end of 2030 is 25%” is actually conditional upon (1) the probability that the theory underlying my estimate is valid; (2) the probability that my model has appropriately applied this theory to the forecasting question at hand; (3) the probability that my estimated values for the variables in my model are accurate; and (4) the probability that I have not made any calculation errors.
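With purely illustrative values for those four conditioning probabilities, the arithmetic looks like this:

```latex
% Purely illustrative values for the four conditioning probabilities
\begin{align*}
&P(\text{theory valid}) = 0.90, \quad P(\text{model appropriate}) = 0.90,\\
&P(\text{parameters accurate}) = 0.85, \quad P(\text{no calculation errors}) = 0.95,\\
&P(X \text{ occurs and all four conditions hold}) = 0.25 \times 0.90 \times 0.90 \times 0.85 \times 0.95 \approx 0.16.
\end{align*}
```

The headline 25% describes X only in the world where all four conditions hold; the joint probability that X occurs and every assumption behind the forecast is sound is closer to 16%, and the unconditional probability of X could be higher or lower still, depending on how the forecast errs when one of those assumptions fails.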

Given what we know about these four conditioning factors, it is clear that many of the subjective forecasts we encounter are a good deal more uncertain than we usually realize.

In the absence of the opportunity to delve more deeply into the potential sources of error in a given probability forecast, the best way to improve predictive accuracy is to select and combine multiple forecasts that are made using different methodologies, and/or alternative sources of information.