Models of the macroeconomy have gotten quite sophisticated, thanks to decades of development and advances in computing power. Such models have also become indispensable tools for monetary policymakers, useful both for forecasting and for comparing different policy options. Their failure to predict the recent financial crisis does not negate their usefulness; it only points to some areas that can be improved.
“All models are false but some are useful”
10.05.2011 | ISSN 2163-3738 | EC 2011-19 | DOI 10.26509/frbc-ec-201119
The views authors express in Economic Commentary are theirs and not necessarily those of the Federal Reserve Bank of Cleveland or the Board of Governors of the Federal Reserve System. The series editor is Tasia Hane. This paper and its data are subject to revision; please visit clevelandfed.org for updates.
Periods of economic and social crisis can easily turn into periods of change for economics as a profession. The dramatic financial crisis we experienced recently has caused economists to question the prevailing assumptions and standard approaches of the field. It is not the first time—the problems of the 1970s and 1930s had a similar effect on economic theory—and it surely will not be the last.
As we come to terms with why the crisis happened and why economists could not prevent or predict it, it is important to understand what was wrong with mainstream doctrine and practice. It is just as important to identify what was working fine. As the old saying goes, let’s not throw the baby out with the bath water.
In this Commentary, we focus on one subset of economic theory and practice, the role of econometric models in the conduct of monetary policy. We review the development of different types of models commonly in use and highlight their successes and failures since the 1950s. In doing so, we also describe some of the common approaches that central banks use for forecasting and evaluating different policy scenarios.
Forecasting plays a vital role in the conduct of monetary policy. Policymakers need to predict the future direction of the economy before they can decide which policy to adopt. While, strictly speaking, they do not necessarily need an economic model to discuss where the economy is heading, the use of a model’s forecast has the benefit of elevating that discussion to a scientific and systematic level. Models can be used to test different theories, for example, and they require forecasters to clearly spell out their underlying hypotheses.
But policymakers need forecasting tools that do more than project the likely path of important economic indicators like inflation, output, or unemployment. They need tools that can provide them with policy guidance—tools that help them determine the economic implications of monetary-policy changes. For example, what will the economy look like under the original monetary policy, and what will it look like after the change? For this reason, there has been an effort over the past 40 to 50 years to develop empirical forecasting models that are able to provide policymakers with this kind of guidance. Three broad categories of macroeconomic models have arisen during this time, each with its own strengths and weaknesses: structural, nonstructural, and large-scale models.
Structural models are built using the fundamental principles of economic theory, often at the expense of the model’s ability to predict key macroeconomic variables like GDP, prices, or employment. In other words, economists who build structural models believe that they learn more about economic processes from exploring the intricacies of economic theory than from closely matching incoming data.
Nonstructural models are primarily statistical time-series models—that is, they represent correlations of historical data. They incorporate very little economic structure, and that gives them enough flexibility to let the patterns in historical data drive the forecasts they generate. They intentionally “fudge” theory in an effort to more closely match economic data. The lack of economic structure makes them less useful for interpreting a forecast, but at the same time it makes them valuable for producing unconditional forecasts: expected future paths of economic variables generated without imposing a path on any particular variable. These unconditional forecasts are typically accurate as long as the overall monetary policy regime does not change. Since policy regimes change infrequently, most forecasts from nonstructural models are useful.
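To make the idea concrete, here is a minimal sketch of how an unconditional forecast could be produced from a small vector autoregression, the kind of nonstructural model discussed later in this Commentary. The series names, lag length, and use of the statsmodels library are our own illustrative assumptions, not a description of any central bank’s actual setup.

```python
# Minimal sketch: an unconditional forecast from a nonstructural
# time-series model (a VAR). Assumes a pandas DataFrame `data` with
# hypothetical quarterly columns: gdp_growth, inflation, fed_funds.
import pandas as pd
from statsmodels.tsa.api import VAR

def unconditional_forecast(data: pd.DataFrame, lags: int = 4, horizon: int = 8) -> pd.DataFrame:
    """Fit a VAR on historical data and project every variable forward.

    No path is imposed on any variable: the forecast simply extrapolates
    the correlations in the historical sample, which is why it is reliable
    only as long as the policy regime that generated the data persists.
    """
    results = VAR(data).fit(lags)
    # Forecast `horizon` quarters ahead from the last observed lags.
    point_forecast = results.forecast(data.values[-lags:], steps=horizon)
    return pd.DataFrame(point_forecast, columns=data.columns)
```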
The third category, large-scale models, is a kind of middle ground between structural and nonstructural models. Such models are a hybrid: they are like nonstructural models in that they are built from many equations that describe relationships derived from empirical data, and they are like structural models in that they also use economic theory, namely to limit the complexity of the equations. They are large, and their size brings pros and cons. One advantage is that relationships can be selected from a huge variety of data series, making it possible to provide a thorough description of the economic conditions of interest. For instance, structural models rarely feature variables such as “car sales,” while large-scale models often do. The main disadvantage is their complexity, which limits how easily they can be understood and used.
The interest in developing large-scale forecasting models for policy purposes began in the 1960s at a time when Keynesian economic theory was very popular and advances in computer technology made their use feasible. Toward the end of the decade, the Federal Reserve Board developed its first version of a macro model for the U.S. economy called MPS (MIT, University of Pennsylvania, and Social Science Research Council). The Board began to use the model for forecasting and policy analysis in 1970. In the initial version, MPS contained about 60 behavioral equations (equations that describe the behavior of economic variables). At the time, economists thought they had built a structural model. Soon they would find otherwise.
The initial optimism and momentum for building practical economic models was abruptly interrupted in the 1970s, a decade of great inflation and macroeconomic turbulence. The failure of economists to forecast high inflation and unemployment and to successfully address the economic troubles of the period produced a loss of faith in mainstream Keynesian theory and in the models that were the operative arm of that theory.
Disappointment came from realizing that the models that had been developed were not as structural as previously thought. Several flaws were identified, including assumptions about the behavior of prices and the overall modeling approach.
The models’ greatest weakness was that they ignored the role that expectations play in influencing future economic events. The Fed’s and other large-scale models were often used for conditional forecasting exercises, in which the variables of interest are forecast under a chosen monetary policy stance. Comparing scenarios shows the economic implications of different monetary policy stances. But since the models did not incorporate expectations, in particular about monetary and fiscal policies, they did not produce reliable conditional forecasts.
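In rough notation of our own (not the models’), the distinction works like this: if $y_{t+h}$ is a variable of interest $h$ periods ahead, $\Omega_t$ is the information available today, and $i_{t+1},\dots,i_{t+h}$ is an assumed path for the policy rate, then

```latex
% Unconditional forecast: expected path given only today's information
\hat{y}^{\,u}_{t+h} = E\!\left[\, y_{t+h} \mid \Omega_t \,\right]

% Conditional forecast: expected path given today's information
% plus an assumed policy-rate path
\hat{y}^{\,c}_{t+h} = E\!\left[\, y_{t+h} \mid \Omega_t,\ i_{t+1},\dots,i_{t+h} \,\right]
```

A policy comparison amounts to computing the conditional forecast under two different policy paths, and the comparison is only as good as the model’s ability to capture how expectations respond to each path.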
These weaknesses were clearly a drawback when turbulence hit the economy. In fact, when people are making decisions in periods of high uncertainty, they put a lot of emphasis on anticipating what policymakers will do. They can behave differently than they did in the past, which policymakers won’t be able to predict if they’re relying on models that merely capture historical behavior patterns and don’t incorporate expectations.
The Nobel Prize winner Robert Lucas was one of the first economists to point out the pitfalls of underplaying the role of expectations, especially in relation to policy recommendations. He pointed out that the underlying parameters of the prevailing models—the numerical constants embedded in the models that drove the forecasts—were not constant at all. They would change as policy changed or as expectations about policy changed, leaving policy conclusions based on these models completely unreliable. (The argument came to be called the Lucas critique.) The policy failures of the 1970s seemed to bear him out. Lucas called for models with deeper theoretical structures, and the economics profession heard him.
Development next proceeded in two directions, one toward improving the existing large-scale models and the other toward further developing nonstructural forecasting models. The latter effort has led to the widespread use and success of vector autoregression (VAR) models.
The Fed continued to work on its large-scale models. It developed a multicountry model (MCM) to complement the MPS, and in the 1990s it developed a new set of models—FRB/US, FRB/MCM, and FRB/World. These new models still kept most of the underlying structural framework and the equilibrium relationships of the MPS and the MCM, but they also contained explicit specifications of forward-looking expectations and a more sophisticated representation of agents’ decision making. Though they are not truly structural, they are nevertheless the primary large-scale macro models (with over 250 behavioral equations) currently in use at the Fed. FRB/US is the most comprehensive model of the U.S. economy available anywhere.
The rational expectations revolution of the 1970s created a temporary disconnect between academia and central banks. Economists at universities started working on developing a modeling framework that did not violate the Lucas critique. Monetary policymakers meanwhile continued to work with existing large-scale models since they were the only available framework for policy analysis. At the same time, they worked on improving those models by incorporating features advocated by Lucas and others, such as forward-looking expectations.
In a curious twist of fate, the disconnect was resolved by the rise of a new set of models, commonly known as DSGE (dynamic stochastic general equilibrium) models. The roots of DSGE models can be traced back to real business cycle theory—a theory that left very little room for monetary policy actions.
Harvard’s Gregory Mankiw explains what DSGE models are in his popular textbook. Paraphrasing, dynamic means the models “trace the path of variables over time” (since the decisions of households and businesses affect not only the current period but future periods as well); stochastic means the models incorporate techniques that account for the possibility of random economic events; and general equilibrium means that each model is built as a whole system and everything within the system depends on everything else (prices determine what people do, but what people do also determines prices).
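As an illustration only, consider the stylized three-equation New Keynesian system found in many textbooks (it is not any of the specific models discussed here): an equation for demand, one for price setting, and a monetary policy rule, each involving expectations of the future and a random shock.

```latex
% Stylized New Keynesian example (illustrative only):
% x_t = output gap, \pi_t = inflation, i_t = policy rate,
% E_t = expectations formed at time t, \varepsilon_t = random shocks.
\begin{align}
  x_t   &= E_t x_{t+1} - \tfrac{1}{\sigma}\,(i_t - E_t \pi_{t+1}) + \varepsilon^{d}_t
          && \text{(demand, or IS, curve)} \\
  \pi_t &= \beta\, E_t \pi_{t+1} + \kappa\, x_t + \varepsilon^{s}_t
          && \text{(Phillips curve)} \\
  i_t   &= \phi_\pi\, \pi_t + \phi_x\, x_t + \varepsilon^{m}_t
          && \text{(monetary policy rule)}
\end{align}
```

The expectations terms are what make the system dynamic, the shocks are what make it stochastic, and the requirement that all three equations hold simultaneously is what “general equilibrium” means in practice.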
Research on DSGE models has been going on at a significant pace since the 1980s, but only in the past few years have the models been used seriously for forecasting. While similar to large-scale models, DSGE models differ in that they have better microeconomic foundations: household and firm behavior is modeled from first principles, and the equations that relate macroeconomic variables (such as output, consumption, and investment) to each other are obtained by aggregating those microeconomic equations.
The aggregation follows a strict bottom-up approach that goes from the micro to the macro level. This approach makes DSGE models better suited to constructing conditional forecasts and comparing different policy scenarios.
DSGE models have a number of other advantages over large-scale models. They avoid the expectations problem that Lucas alerted everyone to. They incorporate a role for monetary policy, making them appealing to central banks. And finally, a technical advantage is that they can make use of the powerful solution methods of nonstructural models, given that their decision rules are usually well approximated by linear rules. The economist Francis Diebold described this aspect of DSGE models as “a marvelous union of modern macroeconomic theory and nonstructural time-series econometrics.”
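That union can be sketched in generic notation (ours, not any particular model’s): once the decision rules are linearized around a steady state, the DSGE solution takes the same state-space form used in nonstructural time-series work.

```latex
% Linearized DSGE solution in state-space form (generic notation):
% s_t = model state variables, y_t = observed macro data,
% \varepsilon_t = structural shocks, u_t = measurement error.
% The matrices depend on the model's deep parameters \theta,
% so economic theory restricts the time-series dynamics.
\begin{align}
  s_t &= A(\theta)\, s_{t-1} + B(\theta)\, \varepsilon_t \\
  y_t &= C(\theta)\, s_t + u_t
\end{align}
```

Because this is exactly the form a statistician would estimate with standard tools such as the Kalman filter, the same time-series machinery used for nonstructural models can be applied to a system whose coefficients are pinned down by theory.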
Since DSGE models are technically very difficult to solve and analyze, they are much smaller in scale—usually featuring fewer than a hundred variables. They cannot easily incorporate the large array of high-frequency data usually available to policymakers.
Unfortunately, leaving some variables out may often lead to serious misspecification. For this reason, Princeton economist Christopher Sims characterizes DSGE models as useful story-telling devices that cannot yet replace large-scale models for forecasting purposes. On the other hand, Ben Bernanke, chairman of the Board of Governors of the Federal Reserve System, noted that DSGE models are “increasingly useful for policy analysis” and “likely to play a more significant role in the forecasting process over time.”
Economic forecasting models have come a long way since the 1970s, both the structural and nonstructural varieties. Most models, however, failed to predict the recent financial crisis. This failure may partly reflect the models’ inability to fully capture the growing role of the financial sector or the worldwide financial and trade linkages that globalization has generated.
However, while the economics profession is currently trying to address those deficiencies, there is something intrinsic to economics that makes forecasting difficult. Unlike the natural sciences, the social sciences have no true invariants that can serve as scientific foundations. There is nothing in economics like the gravitational constant, a quantity we can claim is genuinely constant. This is because the object being studied and the observer are in continuous interaction, and that interaction has no easily predictable consequences.
It is unlikely that models will ever provide perfectly accurate forecasts. That is because forecasts are ultimately just another variable in the system, and it is impossible to prevent them from influencing other variables in the system. Once a forecast is revealed, the forecast itself can change people’s behavior. In fact, the people who pay closest attention to forecasts are the people whose behavior is most likely to affect the future course of the variables being forecast. In the end, while policymakers would prefer better forecasts, their ultimate objective is better policy. And imperfect forecasting ability does not prevent models from being useful devices that help policymakers make decisions.
In this respect, the contribution of DSGE models has been mainly methodological, making them a useful complement to, but not a substitute for, large-scale macroeconomic models and nonstructural VARs. At the same time, they have given academic economists and central bank staff the basis for a common language. By that measure, we believe DSGE models have achieved a success that should not be judged by their inability to forecast the recent crisis.