Book contents
- Frontmatter
- Contents
- Preface
- 1 Sampling methods
- 2 Weighting
- 3 Statistical effects of sampling and weighting
- 4 Significance testing
- 5 Measuring relationships between variables
- Appendix A Review of general terminology
- Appendix B Further reading
- Appendix C Summary tables for several common distributions
- Appendix D Chapter 2 mathematical proofs
- Appendix E Chapter 3 mathematical proofs
- Appendix F Chapter 4 mathematical proofs
- Appendix G Chapter 5 mathematical proofs
- Appendix H Statistical tables
- References
- Index
Appendix E - Chapter 3 mathematical proofs
Published online by Cambridge University Press: 18 August 2009
Summary
E1 Design effect and effective sample size
To obtain the design effect formula in Theorem 3.1, notice that the sampling variance of the estimated proportion is
while the variance under simple random sampling is p(1 - p)/n. Thus,
The formula for the total effective sample size then follows by dividing the sample size by the design effect:
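The two definitions used here, the design effect as a ratio of variances and the effective sample size as the sample size divided by the design effect, can be sketched in code. This is our illustration only; the function names and the numbers are hypothetical, not taken from the book.

```python
def design_effect(actual_variance: float, srs_variance: float) -> float:
    """Design effect: actual sampling variance divided by the
    variance under simple random sampling."""
    return actual_variance / srs_variance

def effective_sample_size(n: int, deff: float) -> float:
    """Effective sample size: the simple-random-sample size that
    would yield the same variance as the actual design."""
    return n / deff

# Hypothetical example: a proportion p estimated from n = 1000 interviews,
# under a design assumed to inflate the SRS variance by 50%.
p, n = 0.4, 1000
srs_var = p * (1 - p) / n        # variance under simple random sampling
actual_var = 1.5 * srs_var       # assumed variance under the actual design
deff = design_effect(actual_var, srs_var)
n_eff = effective_sample_size(n, deff)
print(deff)    # approximately 1.5
print(n_eff)   # approximately 666.7
```

The design with deff ≈ 1.5 is thus only as informative as a simple random sample of about two-thirds the size.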
E2 Weighting effect
To prove Theorem 3.2, we consider x1, …, xn as independent identically distributed random variables. Because the weights are constants (non-random) we obtain
var(w1x1 + … + wnxn) = (w1² + … + wn²) var(x).
Therefore,
var((w1x1 + … + wnxn)/(w1 + … + wn)) = var(x)(w1² + … + wn²)/(w1 + … + wn)².
The design effect is, by definition, the actual variance divided by the variance under simple random sampling, which is var(x)/n. Consequently,
deff = n(w1² + … + wn²)/(w1 + … + wn)².
The first property of the calibrated sample size has been proved in [10], but we give another, much shorter, proof. By the Cauchy-Schwarz inequality, for any real numbers (ai), (bi),
(a1b1 + … + anbn)² ≤ (a1² + … + an²)(b1² + … + bn²),
with equality if and only if there is a λ such that ai = λbi for all i. (This inequality, also known as the Cauchy-Bunyakovsky inequality, is discussed in almost any textbook on functional analysis or Hilbert spaces.) Applying it with ai = wi and bi = 1 gives
(w1 + … + wn)² ≤ n(w1² + … + wn²),
so that
(w1 + … + wn)²/(w1² + … + wn²) ≤ n,
with equality if and only if all weights are constant.
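As a numerical check (ours, not the book's), the weighting design effect deff = n(w1² + … + wn²)/(w1 + … + wn)² and the calibrated sample size n* = (w1 + … + wn)²/(w1² + … + wn²) can be computed directly, confirming that n* ≤ n with equality exactly when all weights are equal:

```python
def weighting_design_effect(weights):
    """Weighting design effect: n * sum(w_i^2) / (sum(w_i))^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

def calibrated_sample_size(weights):
    """Calibrated (effective) sample size: (sum(w_i))^2 / sum(w_i^2)."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

equal = [2.0] * 5                     # constant weights: equality case
unequal = [1.0, 1.0, 1.0, 1.0, 6.0]   # one heavily up-weighted case

print(weighting_design_effect(equal))    # 1.0: constant weights cost nothing
print(calibrated_sample_size(equal))     # 5.0: equals the actual sample size
print(calibrated_sample_size(unequal))   # 2.5: unequal weights shrink n*
```

Note that n* depends only on the relative weights: rescaling every wi by the same factor leaves both formulas unchanged, consistent with the Cauchy-Schwarz equality condition ai = λbi.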
To prove the second property, assume that n = n1 + … + nm. We have to show that
It suffices to prove this inequality for two summands, since repeated application then gives the general case.
- Type: Chapter
- Book: Statistics for Real-Life Sample Surveys: Non-Simple-Random Samples and Weighted Data, pp. 244-248
- Publisher: Cambridge University Press
- Print publication year: 2006