Published online by Cambridge University Press: 02 December 2009
The last few decades have witnessed a revolution in the use of econometrics in empirical macroeconomics, due largely to ready access to fast computers. Yet even as sophisticated new techniques have proliferated, the profession does not seem to have reached a consensus on the principles of good practice in the econometric analysis of economic models. Summers' (1992) critique of the scientific value of empirical models in economics seems equally relevant today.
The basic dilemma is that the reality behind the available macroeconomic data is far richer and more complex than the (often narrowly specified) problem being modeled by the theory. How to treat these “additional” features of the data (which often violate the ceteris paribus assumptions of the economic model) has divided the profession into proponents of the so-called “specific-to-general” approach and proponents of the “general-to-specific” approach to empirical economics.
The former, more conventional, approach is to estimate the parameters of a “stylized” economic model while ignoring the wider circumstances under which the data were generated. These omitted factors are then relegated to the residual term, inflating its variance. This practice has important implications for the power of empirical testing, often resulting in a low ability to reject a theory model when it is false. As a consequence, different (competing) theory models often survive testing against the same data.
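The mechanism behind this loss of power can be illustrated with a small simulation. The sketch below is not from the text; it is a hypothetical example in which a relevant variable (standing in for the ignored "wider circumstances") is omitted from a regression, so its effect is absorbed by the residual and the residual variance grows accordingly.

```python
# Hypothetical illustration (not from the source text): omitting a relevant
# regressor pushes its contribution into the residual, inflating the residual
# variance -- and hence the standard errors on which empirical tests rely.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)  # a "ceteris paribus" factor the stylized model ignores
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)

def ols_resid_var(X, y):
    """Residual variance from an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid.var(ddof=X.shape[1])

full = ols_resid_var(np.column_stack([x1, x2]), y)  # correctly specified model
stylized = ols_resid_var(x1.reshape(-1, 1), y)      # "stylized" model omitting x2

# The misspecified model's residual variance is larger by roughly
# (2.0 ** 2) * var(x2), i.e. the variance of the omitted term.
print(full, stylized)
```

With noisier residuals, confidence intervals widen and tests lose the ability to discriminate between competing theory models, which is the point the paragraph above makes.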