Tuesday, December 18, 2012

The Fed's Last Mover Power and Changing Policies Thwart Statistical Economic Modeling and Cause False Positive Recession Calls

The problem with purely backward-looking statistical economic models is that they can't fully account for changes in Federal Reserve policy. There's no way for a predictive economic model to account for the fact that, given the same economic data, the 2012 Fed will react in a different way than the 1975 Fed. In statistical parlance, the Fed imparts exogenous shocks on the economy. It is the last mover, and the way it chooses to move has changed over time.

Let me illustrate what I mean by "last mover" with an example.

It's June 28th, 2010. The S&P 500 has declined from a high of 1,220 in April to 1,011 in late June, a plunge of about 17% in just two months. The economic expansion and bull market are barely more than a year old and people are skittish. Here is what John Hussman had to say at the time (emphasis mine):
Based on evidence that has always and only been observed during or immediately prior to U.S. recessions, the U.S. economy appears headed into a second leg of an unusually challenging downturn.
A few weeks ago, I noted that our recession warning composite was on the brink of a signal that has always and only occurred during or immediately prior to U.S. recessions, the last signal being the warning I reported in the November 12, 2007 weekly comment Expecting A Recession (the instance before that was the recession warning I reported in October 2000). While the set of criteria I outlined in 2007 would still require a decline in the ISM Purchasing Managers Index to 54 or less to complete a recession warning, what prompts my immediate concern is that the growth rate of the ECRI Weekly Leading Index has now declined to -6.9%. The WLI growth rate has historically demonstrated a strong correlation with the ISM Purchasing Managers Index, with the correlation being highest at a lead time of 13 weeks.
 
 Taking the growth rate of the WLI as a single indicator, the only instance when a level of -6.9% was not associated with an actual recession was a single observation in 1988. But as I've long noted, recession evidence is best taken as a syndrome of multiple conditions, including the behavior of the yield curve, credit spreads, stock prices, and employment growth. Given that the WLI growth rate leads the PMI by about 13 weeks, I substituted the WLI growth rate for the PMI criterion in condition 4 of our recession warning composite. As you can see, the results are nearly identical, and not surprisingly, are slightly more timely than using the PMI. The blue line indicates recession warning signals from the composite of indicators, while the red blocks indicate official U.S. recessions as identified by the National Bureau of Economic Research.

The blue spike at the right of the graph indicates that the U.S. economy is most probably either in, or immediately entering a second phase of contraction. Of course, the evidence could be incorrect in this instance, but the broader economic context provides no strong basis for ignoring the present warning in the hope of a contrary outcome. Indeed, if anything, credit conditions suggest that we should allow for outcomes that are more challenging than we have typically observed in the post-war period.
So in June 2010, the Weekly Leading Index from the highly respected ECRI had declined to a level that had always and only been observed during or immediately prior to recession.

Two months later Ben Bernanke, speaking at Jackson Hole, hinted that further Fed stimulus could be coming. And a bit more than two months after that, QE2 was officially announced.

Since monetary policy works primarily through shaping expectations, it makes sense to consider the QE2 hints at Jackson Hole as the functional start date for QE2. Here's a graph from Marcus Nunes showing how stocks and inflation expectations changed after QE2 was hinted at:


The S&P, which was at 1100 when QE2 was hinted at, rose to 1330 in the next six months, a gain of 21%. And over that same time frame 10-year inflation expectations rose from an anemic 1.7% to a healthier 2.6%.

By late 2010 or early 2011, economic data was firming and statistical recession forecasting models were no longer predicting recession. Human nature being what it is, bears who had been predicting both a recession and the futility of QE2 refused to admit that the policy may have successfully averted recession.

Here's Hussman in late December 2010:
As for the notion that the Fed's targeted Treasury purchases have directly aided the economy, the argument requires bizarre logical gymnastics. It demands one to believe that although the purchases were intended to stimulate the economy by lowering rates, they have been successful without lowering them, and in fact by raising them, because the expectation of lower rates was so stimulative that it caused rates to rise, so that the higher rates can be taken as evidence that lowering rates without lowering them was a success. Oh, brother.
And here he is in March 2011 (emphasis mine):
Now, it's true that QE2 has probably been good for a fraction of 1% in additional GDP, which should be sustained over a period of a year or two, and though we haven't observed real activity or actual industrial production that matches the optimism of survey-based measures such as the ISM indices, it's clear that some pent-up demand was released.
Alarm bells should be going off in your head. Economists describe recessions as a series of negative feedback loops: oil prices spike → consumers get nervous and cut back on spending → business results weaken → banks tighten credit standards, and so on. A small shock triggers a series of reactions that intensify the original shock, and a recession results. Predictive economic models attempt to identify those shocks before their full effects are felt. That "fraction of 1% in additional GDP" that Hussman attributes to QE2 may be precisely what short-circuited the negative feedback process which ordinarily results in recessions.

In Hussman's own words, the leading indexes were flashing a signal that "always and only" immediately preceded recessions, but did not this time. QE2 was "probably good for a fraction of 1% in additional GDP". How can he not see the connection here? I don't mean to pick on Hussman in particular; he's just a stand-in for purely quantitative economic modelers. The highly respected ECRI, which works with similar methods, has agreed with Hussman every step of the way.

Robert Hetzel's book The Great Recession is a history of US monetary policy. He traces the evolution of ideas about how the Fed affects the economy and how it ought to operate. His core idea is that the Fed ought to "lean against the wind with credibility", and that periods of volatile macroeconomic performance are caused by the Fed's failure to do so. During the 1970-1982 period, which he terms stop-go, the US economy experienced four separate recessions and out-of-control inflation. Bad ideas were to blame. Keynesians thought that 4% unemployment was a sustainable goal, so they continued to pursue expansionary policy even as inflation rose above 2-3%. Accelerating inflation eventually forced a reversal and monetary contraction. Then, as inflation began to decline, unemployment rose and a negative output gap grew, but monetary policymakers waited too long to begin to ease. That's stop-go: the Fed pushed when it should have pulled, pulled when it should have pushed, and always acted too slowly. He attributes the Great Moderation of 1982-2007, which saw only two mild recessions, to a well-performed policy of leaning against the wind: tightening as NGDP began to grow above trend and loosening as it began to slow. And most controversially, he believes that the Fed's inaction in the spring and summer of 2008, even as NGDP growth slowed and a negative output gap grew, morphed a mild recession into the entirely avoidable Great Recession.

Think about the 1970-1982 period for a moment. Think about the data generated by that period, data which is now used by economic forecasters to make predictions in the current environment. Here's a chart of how leading indexes behaved from 1959 to 2011.

Chart from John Hussman January 2012 (recessions are the tan bars)


Basically, what folks like the ECRI and Hussman do is this. They look at that chart. They notice that anytime the index drops below zero, a recession follows maybe 50% of the time. And they notice that once the index falls below -0.5, a recession is basically assured. What they don't account for is that in 1975, when the index dropped below -0.5, the Fed was still tightening! The same probably goes for the 1970, 1980 and 1982 recessions.

But in August 2010, having learned the lessons of not leaning against the wind when economic conditions are deteriorating, the Fed announced plans to "print" $600 billion.

The Fed is the last mover. You cannot simply look at historical data and make predictions without accounting for changes in how the Fed will react to particular economic conditions.

In June 2010 Hussman said:
Based on evidence that has always and only been observed during or immediately prior to U.S. recessions, the U.S. economy appears headed into a second leg of an unusually challenging downturn.
He should have followed that up by saying (my words):
However, the Federal Reserve exerts a powerful influence over short-run economic outcomes. Of the eight recessions since 1958, at least four (1970, 1974, 1980, 1982) appear to be a direct consequence of inept Federal Reserve policy. Since 1982, modest economic weakness has resulted in an actual recession far less often than it did pre-1982, an improvement many attribute to successful counter-cyclical monetary policy. Still, I remain cautious because monetary policy may be less effective than usual at the moment, since interest rates are at zero and the monetary base has already exploded. So I remain concerned, but my confidence in the bearish implications of the data I'm looking at needs to be tempered, because the data was generated by a very small sample of recessions and because the way the Fed operates has changed dramatically within the data sample I'm looking at.
And six months later, Hussman should have said:
The fact that a statistical recession indicator which has a near perfect historical record has failed in 2H10 is modest evidence in favor of the idea that the Fed remains potent even in the current unusual monetary conditions.

(Feel free to ignore this part on probability. It's a bit messy.)

In probability theory there are two approaches to forming beliefs given evidence.

Frequentists form a hypothesis, look at data, and output the probability of seeing that data if the null hypothesis were true. The probability of flipping heads four times in a row with a fair coin is about 6%. So if the null hypothesis is that the coin being flipped is fair, a frequentist will tell you that the probability of seeing the data (4 straight heads) under the null hypothesis is just 6%.
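For the record, that 6% figure is just one half raised to the fourth power; a one-line check:

```python
# Probability of observing 4 straight heads if the null hypothesis
# (the coin is fair) is true: 0.5 to the fourth power.
p_data_given_fair = 0.5 ** 4
print(p_data_given_fair)  # 0.0625, i.e. the "6%" in the text, rounded
```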

But does that mean there's a 94% chance the coin is not fair? Is there any information we can bring to bear on the problem other than the four straight heads? Does the coin look normal? Who is doing the flipping? What are his incentives? All of these things, in addition to the data (4 straight heads), will affect your judgment about the odds the coin is fair. All the relevant information other than the data is what forms your "prior", which is your perception of whether the coin is fair BEFORE it is flipped four times.

Suppose your prior is that there's a 99% chance the coin is fair, and suppose a rigged coin always comes up heads. How do you update that belief as data is received? By Bayes' rule:

Prob(unfair given 4 heads) = Prob(4 heads given unfair) × Prob(unfair), divided by [Prob(4 heads given unfair) × Prob(unfair) + Prob(4 heads given fair) × Prob(fair)] = (100% × 0.01) / ((100% × 0.01) + (6.25% × 0.99)) ≈ 14%.

You thought there was only a 1% chance the coin was rigged, but then four heads came up in a row, and you've updated your belief to roughly a 14% chance the coin is rigged. Note that, if the guy doing the flipping was a crook and you were betting on the outcome, your prior would've been much higher, and the four heads in a row may well have convinced you that the coin was rigged.
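For readers who want to check the update themselves, here's a minimal sketch of the Bayesian calculation in Python (assuming, as above, a rigged coin always lands heads; the function name and parameters are my own). Note that with a 1% prior these assumptions put the posterior at roughly 14%:

```python
# Bayesian update for the coin example. Assumptions (mine): a rigged coin
# always lands heads, a fair coin lands heads half the time.

def posterior_unfair(prior_unfair, heads, p_heads_if_unfair=1.0, p_heads_if_fair=0.5):
    """Posterior probability the coin is unfair after seeing `heads` heads in a row."""
    likelihood_unfair = p_heads_if_unfair ** heads   # P(data | unfair)
    likelihood_fair = p_heads_if_fair ** heads       # P(data | fair) = 0.5^4, about 6%
    numerator = prior_unfair * likelihood_unfair
    denominator = numerator + (1 - prior_unfair) * likelihood_fair
    return numerator / denominator

print(round(posterior_unfair(0.01, 4), 3))  # 0.139: a 1% prior becomes ~14%
print(round(posterior_unfair(0.50, 4), 3))  # 0.941: a crooked flipper's 50% prior becomes ~94%
```

The second line is the crook scenario: start suspicious enough, and four straight heads all but settles it.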

What does this have to do with recession forecasting?

It may be time for forecasters to reduce their prior estimates of recession probabilities. Weakening economic data was stronger evidence of a looming recession in 1975 than it is in 2012. Making predictions based on strict reliance on coin-flipping data, or economic data, ignores relevant background information. The process of incorporating that background information is messy and unscientific. But failing to grapple with the cause of the simple fact that recessions are becoming less common, and that statistical models keep giving false-positive recession warnings, is just as unscientific.

This is all extremely important in my opinion. I think stock valuations are affected more by economic volatility than economic growth. If I'm right that the Fed has and will continue to manage economic volatility more successfully than it has in the past, I think P/E's can continue to rise even if GDP growth is modest in the next 3, 5 and 10 years. But that's a topic for another post.

My core claim is that backward-looking statistical models are useful, but fail to account for the last-mover power and changing competence of the Federal Reserve. And until forecasters find a way to combine historical data with the Fed's current ability and willingness to short-circuit modest economic weakness before negative feedback loops set in, they will continue to overestimate recession probabilities.
