Book contents
- Frontmatter
- Dedication
- Contents
- List of Illustrations
- List of Tables
- List of Contributors
- Preface
- Part I Introduction to Modeling
- Part II Parameter Estimation
- 3 Basic Parameter Estimation Techniques
- 4 Maximum Likelihood Parameter Estimation
- 5 Combining Information from Multiple Participants
- 6 Bayesian Parameter Estimation
- 7 Bayesian Parameter Estimation
- 8 Bayesian Parameter Estimation
- 9 Multilevel or Hierarchical Modeling
- Part III Model Comparison
- Part IV Models in Psychology
- Appendix A Greek Symbols
- Appendix B Mathematical Terminology
- References
- Index
8 - Bayesian Parameter Estimation
from Part II - Parameter Estimation
Published online by Cambridge University Press: 05 February 2018
Summary
The goal of this chapter is to instantiate the theory from the previous chapter in some real-world models using the JAGS programming language (Plummer, 2003). The acronym JAGS stands for “Just Another Gibbs Sampler”; it is one of several modern computer packages that perform MCMC for Bayesian modeling by relying on a particular form of MCMC known as Gibbs sampling. Other packages include “WinBUGS” (Spiegelhalter et al., 2003) and “Stan” (Carpenter et al., 2016), each of which has its own strengths and weaknesses. We chose JAGS because of the flexibility it offers for extension (Wabersich and Vandekerckhove, 2014) and because it is easily used from within R. Readers who want to build on the limited number of examples provided in this chapter may wish to consult the excellent book by Lee and Wagenmakers (2013), which contains numerous additional examples.
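As a first orientation, the sketch below shows the typical workflow for calling JAGS from within R via the rjags package. It is a minimal illustration only, assuming JAGS and rjags are installed; the model string, the synthetic data, and all variable names (y, mu, tau, and so on) are illustrative choices, not taken from the chapter.

```r
# Minimal sketch: estimating the mean (mu) and precision (tau) of
# normally distributed data with JAGS, called from R via rjags.
# All names and values here are illustrative assumptions.
library(rjags)

modelString <- "
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu, tau)      # likelihood: normal with mean mu, precision tau
  }
  mu  ~ dnorm(0, 0.001)        # vague prior on the mean
  tau ~ dgamma(0.001, 0.001)   # vague prior on the precision
}
"

y <- rnorm(50, mean = 5, sd = 2)   # synthetic data for illustration

jagsModel <- jags.model(textConnection(modelString),
                        data = list(y = y, N = length(y)),
                        n.chains = 3)
update(jagsModel, n.iter = 1000)   # burn-in samples, discarded

samples <- coda.samples(jagsModel,
                        variable.names = c("mu", "tau"),
                        n.iter = 5000)
summary(samples)                   # posterior summaries for mu and tau
```

Note the division of labor: the model string declares the likelihood and priors, while R supplies the data and collects the resulting MCMC chains for inspection.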
Gibbs Sampling
JAGS relies on a particular type of MCMC known as Gibbs sampling. Although it is named after the late 19th-century American physicist Josiah Willard Gibbs, the sampler was invented only eight decades after his death and named in his honor because of the similarities between the sampler and Gibbs’s contributions to statistical theory (Geman and Geman, 1984). Gibbs sampling has much in common with the Metropolis-Hastings approach introduced in the previous chapter: both algorithms involve a Markov chain of samples, and both converge on the target distribution when given a sufficiently large number of samples. There are, however, also some important differences. The key property of the Gibbs sampler is that it samples from conditional distributions, which are often known even in situations in which the joint density is not available for integration – as is required for computation of the marginal likelihood (or “evidence”) in the denominator of Equation 6.8. Specifically, whereas sampling from the joint posterior for all the parameters may be unachievable in many situations, we can often easily sample from the posterior for one parameter given the values of the other parameters. By iterating through the parameters, sampling each conditional on the others being held constant, the Gibbs sampler provides us with posterior distributions for each parameter.
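To make the iteration scheme concrete, here is a minimal sketch in R of a Gibbs sampler for a textbook case: a bivariate standard normal distribution with correlation rho, for which both full conditionals are themselves normal and can be sampled directly. The example and all names in it are illustrative assumptions, not the chapter’s own code.

```r
# Minimal sketch of a Gibbs sampler for a bivariate standard normal
# with correlation rho. Each full conditional is univariate normal:
#   x1 | x2 ~ N(rho * x2, 1 - rho^2)
#   x2 | x1 ~ N(rho * x1, 1 - rho^2)
set.seed(1)
rho   <- 0.8
nIter <- 10000
x1 <- numeric(nIter)   # chains start at (0, 0)
x2 <- numeric(nIter)

for (t in 2:nIter) {
  # sample x1 conditional on the current value of x2
  x1[t] <- rnorm(1, mean = rho * x2[t - 1], sd = sqrt(1 - rho^2))
  # sample x2 conditional on the freshly updated x1
  x2[t] <- rnorm(1, mean = rho * x1[t],     sd = sqrt(1 - rho^2))
}

cor(x1, x2)   # should be close to rho once the chain has converged
```

After discarding an initial burn-in period, the pairs (x1, x2) approximate draws from the joint distribution, even though the sampler only ever drew from one-dimensional conditionals – which is precisely the property that makes Gibbs sampling attractive when the joint posterior is intractable.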
Computational Modeling of Cognition and Behavior, pp. 172–202. Publisher: Cambridge University Press. Print publication year: 2018.