The conversion of economic concepts into formulas dominated the development of economic theory in the post–World War II era. This was a great opportunity for mathematically inclined economists, even though much of the work proved inconclusive. As soon as one group derived an economic principle, another group derived its opposite. What economists needed were actual numbers to determine whose theory was better. They hoped to find truth in statistics.
In the 1920s, statisticians borrowed techniques from physics and other disciplines to calibrate equations with actual economic data. The field of study became known as econometrics, and it involved general statistical techniques refined for use with economic data and models. As the name suggests, econometrics combines economics with metrics, the statistical measurement of data. Nobel Prize winner Ragnar Frisch is credited with coining the term and defined it as “the statistical verification of the laws of pure economics.” Frisch and his student, Trygve Haavelmo, developed the basic framework for econometrics, while another Nobel Prize winner, Jan Tinbergen, began applying the techniques to systems of equations. The goal was to provide compelling explanations of economic events and more accurate predictions of the future.
A later generation of Nobel Prize–winning statisticians concentrated on problems of particular interest to economists. Sir Clive W. J. Granger and Robert F. Engle III developed new methods to analyze data collected over time; Daniel L. McFadden created a better method to study choices made between discrete alternatives, such as buying a Honda Accord or a Toyota Camry; and James J. Heckman explored some of the problems of using groups of individuals to test economic hypotheses.