Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- 1 Introduction
- 2 Stochastic Convergence
- 3 Delta Method
- 4 Moment Estimators
- 5 M- and Z-Estimators
- 6 Contiguity
- 7 Local Asymptotic Normality
- 8 Efficiency of Estimators
- 9 Limits of Experiments
- 10 Bayes Procedures
- 11 Projections
- 12 U-Statistics
- 13 Rank, Sign, and Permutation Statistics
- 14 Relative Efficiency of Tests
- 15 Efficiency of Tests
- 16 Likelihood Ratio Tests
- 17 Chi-Square Tests
- 18 Stochastic Convergence in Metric Spaces
- 19 Empirical Processes
- 20 Functional Delta Method
- 21 Quantiles and Order Statistics
- 22 L-Statistics
- 23 Bootstrap
- 24 Nonparametric Density Estimation
- 25 Semiparametric Models
- References
- Index
8 - Efficiency of Estimators
Published online by Cambridge University Press: 05 June 2012
Summary
One purpose of asymptotic statistics is to compare the performance of estimators for large sample sizes. This chapter discusses asymptotic lower bounds for estimation in locally asymptotically normal models. These show, among other things, in what sense maximum likelihood estimators are asymptotically efficient.
Asymptotic Concentration
Suppose the problem is to estimate ψ(θ) based on observations from a model governed by the parameter θ. What is the best asymptotic performance of an estimator sequence Tn for ψ(θ)?
To simplify the situation, we shall in most of this chapter assume that the sequence √n(Tn − ψ(θ)) converges in distribution under every possible value of θ. Next we rephrase the question as: What are the best possible limit distributions? In analogy with the Cramér-Rao theorem, a "best" limit distribution is referred to as an asymptotic lower bound. Under certain restrictions the normal distribution with mean zero and covariance the inverse Fisher information is an asymptotic lower bound for estimating θ in a smooth parametric model. This is the main result of this chapter, but it needs to be qualified.
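As a numerical illustration of this bound (not part of the original text), the following sketch simulates the maximum likelihood estimator in a Bernoulli(p) model, where the Fisher information is I(p) = 1/(p(1 − p)), so the lower bound for the variance of √n(p̂ − p) is p(1 − p). The sample mean, being the MLE here, attains it:

```python
import random

random.seed(0)

def simulate_mle_bernoulli(p=0.3, n=500, reps=4000):
    """Simulate sqrt(n)*(p_hat - p) for the Bernoulli MLE p_hat (the sample mean).

    Fisher information for Bernoulli(p) is I(p) = 1/(p*(1-p)), so the
    asymptotic lower bound on the variance of sqrt(n)*(p_hat - p) is
    1/I(p) = p*(1-p); the MLE attains this bound.
    """
    draws = []
    for _ in range(reps):
        successes = sum(random.random() < p for _ in range(n))
        p_hat = successes / n
        draws.append(n ** 0.5 * (p_hat - p))
    mean = sum(draws) / reps
    var = sum((d - mean) ** 2 for d in draws) / reps
    return mean, var

mean, var = simulate_mle_bernoulli()
# mean should be near 0 and var near p*(1-p) = 0.21
```

The model, sample size, and replication count are illustrative choices; any smooth parametric model would show the same phenomenon.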
The notion of a "best" limit distribution is understood in terms of concentration. If the limit distribution is a priori assumed to be normal, then this is usually translated into asymptotic unbiasedness and minimum variance. The statement that √n(Tn − ψ(θ)) converges in distribution to an N(μ, σ²)-distribution can be roughly understood in the sense that eventually Tn is approximately normally distributed with mean ψ(θ) + μ/√n and variance σ²/n.
Because Tn is meant to estimate ψ(θ), optimal choices for the asymptotic mean and variance are μ = 0 and a variance σ² that is as small as possible. These choices ensure not only that the asymptotic mean square error μ² + σ² is small but also that the limit distribution is maximally concentrated near zero. For instance, the probability of the interval (−a, a) under N(μ, σ²) is maximized by choosing μ = 0 and σ minimal.
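This concentration claim can be checked directly. The sketch below (an illustration added here, not from the original text) computes P(−a ≤ X ≤ a) for X ~ N(μ, σ²) via the error function and confirms that shifting the mean away from zero or inflating the variance both reduce the probability of the interval:

```python
import math

def normal_interval_prob(mu, sigma, a):
    """P(-a <= X <= a) for X ~ N(mu, sigma^2), via the Gaussian CDF
    expressed through the error function."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(a) - cdf(-a)

# Concentration near zero is maximal at mu = 0 with sigma as small as possible:
p_centered = normal_interval_prob(0.0, 1.0, 1.0)  # standard normal, ~0.683
p_shifted  = normal_interval_prob(0.5, 1.0, 1.0)  # nonzero mean: smaller
p_wider    = normal_interval_prob(0.0, 2.0, 1.0)  # larger variance: smaller
```

The specific values of μ, σ, and a are arbitrary; the ordering p_centered > p_shifted and p_centered > p_wider holds for every a > 0.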
We do not wish to assume a priori that the estimators are asymptotically normal. That normal limits are best will actually be an interesting conclusion.
- Type: Chapter
- Information: Asymptotic Statistics, pp. 108-124. Publisher: Cambridge University Press. Print publication year: 1998.