Book contents
- Frontmatter
- Contents
- Contributors
- Editors’ Note
- 1 Opportunities and Challenges: Lessons from Analyzing Terabytes of Scanner Data
- 2 Is Big Data a Big Deal for Applied Microeconomics?
- 3 Low-Frequency Econometrics
- 4 Shocks, Sign Restrictions, and Identification
- 5 Macroeconometrics – A Discussion
- 6 On the Distribution of the Welfare Losses of Large Recessions
- 7 Computing Equilibria in Dynamic Stochastic Macro-Models with Heterogeneous Agents
- 8 Recent Advances in Empirical Analysis of Financial Markets: Industrial Organization Meets Finance
- 9 Practical and Theoretical Advances in Inference for Partially Identified Models
- 10 Partial Identification in Applied Research: Benefits and Challenges
- Index
2 - Is Big Data a Big Deal for Applied Microeconomics?
Published online by Cambridge University Press: 27 October 2017
Summary
While applications of “big data” methods to social data have exploded, applications to social science have not. I discuss why, and I review some recent applications of big data methods to applied microeconomics. I speculate on opportunities to bring more big data methods into applied economics, and on opportunities to bring more economics to big data.
INTRODUCTION
There are n + 1 units. The first n units are the training sample. The remaining unit is the target. For each unit i in the training sample, a social scientist measures an r-dimensional response y_i, a b-dimensional vector of background covariates x_i, and a p-dimensional vector of policies z_i. After observing the training sample and the target covariates x_{n+1}, the social scientist chooses the target policy z_{n+1}.
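In symbols, the setup is as follows (the decision rule δ is my shorthand for the policy choice; it is not notation from the chapter):

```latex
\[
\{(y_i, x_i, z_i)\}_{i=1}^{n}, \qquad
y_i \in \mathbb{R}^{r}, \quad
x_i \in \mathbb{R}^{b}, \quad
z_i \in \mathbb{R}^{p},
\]
\[
z_{n+1} = \delta\bigl( x_{n+1} ;\; \{(y_i, x_i, z_i)\}_{i=1}^{n} \bigr).
\]
```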
Data are called “big” when scale makes it impossible to use methods we might use easily on a smaller-scale problem. To fix ideas, think of ordinary least squares (OLS) as the canonical “small data” tool. The data may be big in the sense that we cannot load the design matrix into working memory to form and invert its cross-product (large n). Or the data may be big in the sense that there are more covariates than observations, so the OLS solution is ill-defined (b > n). Or the data may be big in the sense that the number of covariates grows with the sample (b growing as n grows).
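To make the first two senses concrete, here is a minimal numerical sketch (mine, not from the chapter): a one-pass streaming computation of X'X and X'y for the case where the design matrix is too large to hold in memory, and a ridge-penalized fallback for the case b > n. All dimensions, chunk sizes, and the penalty are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Large n: the n x b design matrix X does not fit in memory, but the
# b x b cross-product X'X does. Accumulate X'X and X'y one chunk at a
# time (a single streaming pass), then solve the normal equations.
b, n_chunks, chunk_size = 10, 100, 1_000
beta_true = rng.normal(size=b)

XtX = np.zeros((b, b))
Xty = np.zeros(b)
for _ in range(n_chunks):
    X = rng.normal(size=(chunk_size, b))   # stand-in for a chunk read from disk
    y = X @ beta_true + rng.normal(size=chunk_size)
    XtX += X.T @ X
    Xty += X.T @ y
beta_ols = np.linalg.solve(XtX, Xty)       # OLS from the accumulated moments

# --- b > n: X'X is rank-deficient and OLS is ill-defined; a ridge
# penalty (one common fix, not the only one) restores a unique solution.
n_small, b_large, lam = 50, 200, 1.0
X = rng.normal(size=(n_small, b_large))
y = X @ rng.normal(size=b_large) + rng.normal(size=n_small)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(b_large), X.T @ y)
```

The streaming pass works because X'X (b x b) and X'y (b x 1) are all that OLS needs: they fit in memory even when the full design matrix does not.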
The term “big data” is commonly applied to all of these cases, but especially to large n. The term “high-dimensional” is often applied to large r, b, or p.
Ongoing improvements in information and communication technology are making large-scale data increasingly available to governments and private actors, and hence to researchers. Much of these data – on Internet browsing habits, search queries, payment method use, interactions via social media, etc. – are social in nature and of obvious interest to social scientists. Yet most of the methodological advances in big data are occurring outside the social sciences.
In this article, I discuss the strengths and weaknesses of frontier big-data methods as tools for social science.
Advances in Economics and Econometrics: Eleventh World Congress, pp. 35–52. Cambridge University Press, 2017.