
13 - Markov Chain Monte Carlo Methods

Published online by Cambridge University Press:  01 June 2011

John F. Monahan
Affiliation: North Carolina State University

Summary

Introduction

One of the main advantages of Monte Carlo integration is a rate of convergence that is unaffected by increasing dimension, but a more important advantage for statisticians is the familiarity of the technique and its tools. Although Markov chain Monte Carlo (MCMC) methods are designed to integrate high-dimensional functions, the ability to exploit distributional tools makes these methods much more appealing to statisticians. In contrast to importance sampling with weighted observations, MCMC methods produce observations that are no longer independent; rather, the observations come from a stationary distribution and so time-series methods are needed for their analysis. The emphasis here will be on using MCMC methods for Bayesian problems with the goal of generating a series of observations whose stationary distribution π(t) is proportional to the unnormalized posterior p(t). Standard statistical methods can then be used to gain information about the posterior.
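To make the contrast with independent sampling concrete, the sketch below runs a random-walk Metropolis sampler (a simple member of the MCMC family discussed here) against an unnormalized density, then estimates the Monte Carlo standard error by batch means, a standard time-series device for dependent draws. The target density, step size, burn-in length, and batch count are illustrative choices, not prescriptions from the chapter.

```python
import math
import random

def unnorm_posterior(t):
    # Unnormalized density p(t) proportional to exp(-t^2/2); the
    # normalizing constant is never needed by the sampler.
    return math.exp(-0.5 * t * t)

def metropolis(p, n_iter, start=0.0, step=1.0, seed=0):
    # Random-walk Metropolis: propose t' = t + U(-step, step) and
    # accept with probability min(1, p(t')/p(t)).
    rng = random.Random(seed)
    t, chain = start, []
    for _ in range(n_iter):
        proposal = t + rng.uniform(-step, step)
        if rng.random() < p(proposal) / p(t):
            t = proposal
        chain.append(t)
    return chain

def batch_means_se(chain, n_batches=50):
    # Draws are dependent, so estimate the standard error of the
    # chain mean from batch means rather than the naive iid formula.
    m = len(chain) // n_batches
    means = [sum(chain[i * m:(i + 1) * m]) / m for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((b - grand) ** 2 for b in means) / (n_batches - 1)
    return (var / n_batches) ** 0.5

chain = metropolis(unnorm_posterior, 20000)
burned = chain[2000:]          # discard an initial transient
mean = sum(burned) / len(burned)
se = batch_means_se(burned)
```

Because successive draws are correlated, `batch_means_se` is typically larger than the iid standard error would suggest; ignoring that dependence understates the uncertainty in posterior summaries.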

The two general approaches covered in this chapter are known as Gibbs sampling and the Metropolis–Hastings algorithm, although the former can be written as a special case of the latter. Gibbs sampling shows the potential of MCMC methods for Bayesian problems with hierarchical structure, also known as random effects or variance components.
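A minimal illustration of Gibbs sampling, in the spirit of Casella and George (1992), alternates draws from the two full conditionals of a bivariate normal with correlation rho; here X | Y = y is N(rho*y, 1 - rho^2) and symmetrically for Y. The choice of target, correlation, and chain length is purely illustrative.

```python
import random

def gibbs_bivariate_normal(rho, n_iter, seed=0):
    # Target: (X, Y) bivariate normal with zero means, unit variances,
    # and correlation rho. Each sweep draws from both full conditionals.
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = (1.0 - rho * rho) ** 0.5   # conditional standard deviation
    draws = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)  # draw from p(x | y)
        y = rng.gauss(rho * x, sd)  # draw from p(y | x)
        draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(0.8, 20000)
kept = draws[2000:]                 # discard burn-in
xs = [x for x, _ in kept]
ys = [y for _, y in kept]
```

After burn-in, the retained pairs behave like dependent draws from the joint distribution, so empirical moments and correlations computed from `xs` and `ys` approximate those of the target.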

Publisher: Cambridge University Press
Print publication year: 2011


