It is well known that traditional Markov chain Monte Carlo (MCMC) methods can fail to explore the state space effectively for multimodal problems. Parallel tempering is a well-established population approach for such target distributions, involving a collection of particles indexed by temperature. However, this method can suffer dramatically from the curse of dimensionality. In this paper we introduce an improvement on parallel tempering called QuanTA. A comprehensive theoretical analysis quantifying the improved efficiency and scalability of the approach is given. Under weak regularity conditions, QuanTA gives accelerated mixing through the temperature space. The effectiveness of the new algorithm is illustrated empirically on canonical examples.
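As context for the temperature-swap mechanism that QuanTA accelerates, here is a minimal Python sketch of plain parallel tempering on a one-dimensional bimodal target. The target, temperature ladder, and proposal scale are illustrative choices, not those of the paper.

```python
import math
import random

random.seed(1)

def log_target(x):
    # Bimodal target: equal mixture of N(-3, 1) and N(3, 1).
    return math.log(math.exp(-0.5 * (x - 3.0) ** 2) + math.exp(-0.5 * (x + 3.0) ** 2))

temps = [1.0, 2.0, 4.0, 8.0]   # illustrative temperature ladder
chains = [0.0] * len(temps)
visited_neg = visited_pos = False

for it in range(10000):
    # Within-temperature random walk Metropolis update for each particle.
    for i, T in enumerate(temps):
        prop = chains[i] + random.gauss(0.0, 1.0)
        if math.log(random.random()) < (log_target(prop) - log_target(chains[i])) / T:
            chains[i] = prop
    # Propose swapping the states of a random adjacent temperature pair.
    i = random.randrange(len(temps) - 1)
    log_a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
        log_target(chains[i + 1]) - log_target(chains[i]))
    if math.log(random.random()) < log_a:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    # Track whether the cold chain has reached each mode.
    visited_neg = visited_neg or chains[0] < -1.0
    visited_pos = visited_pos or chains[0] > 1.0

print(visited_neg and visited_pos)
```

The hot chains move freely between modes and the swap moves propagate that mobility down to the cold chain, which is exactly the mixing through temperature space that the paper's analysis quantifies.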
In this article, we study parameter uncertainty and its actuarial implications in the context of economic scenario generators. To account for this additional source of uncertainty in a consistent manner, we cast Wilkie’s four-factor framework into a Bayesian model. The posterior distribution of the model parameters is estimated using Markov chain Monte Carlo methods and is used to perform Bayesian predictions on the future values of the inflation rate, the dividend yield, the dividend index return and the long-term interest rate. Based on US data, parameter uncertainty has a significant impact on the dispersion of the four economic variables of Wilkie’s framework. The impact of such parameter uncertainty is then assessed for a portfolio of annuities: the right tail of the loss distribution is significantly heavier when parameters are assumed random and when this uncertainty is estimated in a consistent manner. The risk measures on the loss variable computed with parameter uncertainty are at least 12% larger than their deterministic counterparts.
In this paper we provide an introduction to statistical inference for the classical linear birth‒death process, focusing on computational aspects of the problem in the setting of discretely observed processes. The basic probabilistic properties are given in Section 2, focusing on computation of the transition functions. This is followed by a brief discussion of simulation methods in Section 3, and of frequentist methods in Section 4. Section 5 is devoted to Bayesian methods, from rejection sampling to Markov chain Monte Carlo and approximate Bayesian computation. In Section 6 we consider the time-inhomogeneous case. The paper ends with a brief discussion in Section 7.
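As a toy illustration of the simulation methods discussed in Section 3, here is a hedged sketch of exact (Gillespie-style) simulation of the linear birth‒death process; the rates, initial population, and time horizon are illustrative.

```python
import random

def simulate_bd(x0, lam, mu, t_end, rng):
    """Exact simulation of a linear birth-death process with per-capita
    birth rate lam and death rate mu, observed at time t_end."""
    t, x = 0.0, x0
    while x > 0:
        total = (lam + mu) * x            # total event rate at state x
        t += rng.expovariate(total)       # exponential waiting time
        if t > t_end:
            break                         # next event falls past the horizon
        # Birth with probability lam/(lam+mu), death otherwise.
        x += 1 if rng.random() < lam / (lam + mu) else -1
    return x

rng = random.Random(0)
# Mean of X(t) is x0 * exp((lam - mu) * t); check by Monte Carlo.
samples = [simulate_bd(10, 1.0, 0.5, 1.0, rng) for _ in range(2000)]
est = sum(samples) / len(samples)
print(round(est, 2))
```

With x0 = 10, lam = 1.0, mu = 0.5 and t = 1, the Monte Carlo average should land near 10·e^0.5 ≈ 16.5, matching the known moment formula for the linear birth‒death process.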
Cohort effects are important factors in determining the evolution of human mortality for certain countries. Extensions of dynamic mortality models with cohort features have been proposed in the literature to account for these factors under the generalised linear modelling framework. In this paper we approach the problem of mortality modelling with cohort factors incorporated through a novel formulation under a state-space methodology. In the process we demonstrate that cohort factors can be formulated naturally under the state-space framework, despite the fact that cohort factors are indexed according to year-of-birth rather than year. Bayesian inference for cohort models in a state-space formulation is then developed based on an efficient Markov chain Monte Carlo sampler, allowing for the quantification of parameter uncertainty in cohort models and resulting mortality forecasts that are used for life expectancy and life table constructions. The effectiveness of our approach is examined through comprehensive empirical studies involving male and female populations from various countries. Our results show that cohort patterns are present for certain countries that we studied and that the inclusion of cohort factors is crucial in capturing these phenomena, thus highlighting the benefits of introducing cohort models in the state-space framework. Forecasting of cohort models is also discussed in light of the projection of cohort factors.
In this paper we consider the optimal scaling of high-dimensional random walk Metropolis algorithms for densities differentiable in the Lp mean but which may be irregular at some points (such as the Laplace density, for example) and/or supported on an interval. Our main result is the weak convergence of the Markov chain (appropriately rescaled in time and space) to a Langevin diffusion process as the dimension d goes to ∞. As the log-density might be nondifferentiable, the limiting diffusion could be singular. The scaling limit is established under assumptions which are much weaker than those used in the original derivation of Roberts et al. (1997). This result has important practical implications for the use of random walk Metropolis algorithms in Bayesian frameworks based on sparsity inducing priors.
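For context, a generic random walk Metropolis sampler with the classical ℓ/√d proposal scaling, the regime whose diffusion limit this line of work extends to irregular targets. The standard Gaussian target and the tuning constant 2.38 (which yields the well-known ≈0.234 asymptotic acceptance rate for smooth densities) are illustrative.

```python
import math
import random

def rwm(log_target, x0, n_iter, scale, rng):
    """Random walk Metropolis with isotropic Gaussian proposals."""
    x = list(x0)
    lp = log_target(x)
    accepts = 0
    for _ in range(n_iter):
        y = [xi + rng.gauss(0.0, scale) for xi in x]
        lp_y = log_target(y)
        if math.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
            accepts += 1
    return x, accepts / n_iter

# Standard Gaussian target in d dimensions; proposal sd 2.38/sqrt(d).
d = 20
rng = random.Random(0)
log_std_normal = lambda x: -0.5 * sum(xi * xi for xi in x)
_, acc = rwm(log_std_normal, [0.0] * d, 20000, 2.38 / math.sqrt(d), rng)
print(round(acc, 3))
```

In moderate dimension the empirical acceptance rate sits near the asymptotic 0.234; for the irregular densities studied in the paper the optimal scaling exponent and limiting acceptance rate can differ, which is precisely what the weak-convergence result characterises.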
This paper proposes a novel model estimation method that uses nested Gibbs sampling to develop a mixture-of-mixture model, in which the distribution of each model component is itself represented by a mixture model. This model is suitable for analyzing multilevel data, such as videos and acoustic signals, which are composed of frame-wise observations. Deterministic procedures, such as the expectation–maximization algorithm, have been employed to estimate these kinds of models, but this approach often suffers from a large bias when the amount of data is limited. To avoid this problem, we introduce a Markov chain Monte Carlo-based model estimation method. In particular, we aim to identify a suitable sampling method for mixture-of-mixture models. Gibbs sampling is a possible approach, but it can easily lead to the local optimum problem when each component is represented by a multi-modal distribution. Thus, we propose a novel Gibbs sampling method, called “nested Gibbs sampling,” which represents the lower-level (fine) data structure based on elemental mixture distributions and the higher-level (coarse) data structure based on mixture-of-mixture distributions. We applied this method to a speaker clustering problem and conducted experiments under various conditions. The results demonstrated that the proposed method outperformed conventional sampling-based, variational Bayesian, and hierarchical agglomerative methods.
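The elemental ingredient of such samplers is ordinary Gibbs sampling for a mixture model. A minimal sketch for a two-component 1-D Gaussian mixture with known component variance, alternating between sampling assignments and sampling means; all data and hyperparameters are synthetic illustrations, not the paper's speaker-clustering setup.

```python
import math
import random

rng = random.Random(0)
# Synthetic 1-D observations from two components (a toy stand-in for
# frame-wise observations).
data = ([rng.gauss(-2.0, 0.7) for _ in range(100)] +
        [rng.gauss(2.0, 0.7) for _ in range(100)])

K, sigma2, tau2 = 2, 0.5, 100.0   # known component variance, vague prior on means
mu = [-1.0, 1.0]                  # initial component means

for sweep in range(200):
    # Step 1: sample component assignments given the current means.
    counts, sums = [0] * K, [0.0] * K
    for x in data:
        w = [math.exp(-0.5 * (x - m) ** 2 / sigma2) for m in mu]
        z = 0 if rng.random() * sum(w) < w[0] else 1
        counts[z] += 1
        sums[z] += x
    # Step 2: sample means given assignments (conjugate normal update).
    for k in range(K):
        var = 1.0 / (counts[k] / sigma2 + 1.0 / tau2)
        mean = var * sums[k] / sigma2
        mu[k] = rng.gauss(mean, math.sqrt(var))

print(sorted(round(m, 2) for m in mu))
```

When each component is itself multi-modal, exactly this scheme gets trapped near a local optimum, which is the problem the nested sampler's coarse/fine decomposition is designed to mitigate.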
This paper proposes a Bayesian approach to estimating a factor-augmented GDP per capita equation. We exploit the panel dimension of our data and distinguish between individual-specific and time-specific factors. On the basis of 21 technology, infrastructure, and institutional indicators from 82 countries over a 19-year period (1990 to 2008), we construct summary indicators of each of these three components in the cross-sectional dimension and an overall indicator of all 21 indicators in the time-series dimension and estimate their effects on growth and international differences in GDP per capita. For most countries, more than 50% of GDP per capita is explained by the four common factors we have introduced. Infrastructure is the greatest contributor to total factor productivity, followed by technology and institutions.
This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
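For contrast, a sketch of a well-behaved adaptive scheme: a 1-D random walk Metropolis whose proposal scale is tuned on the fly with diminishing adaptation (step size shrinking like 1/√t), one standard way of keeping adaptation from destroying ergodicity. The target and tuning constants are illustrative.

```python
import math
import random

rng = random.Random(3)
log_target = lambda x: -0.5 * x * x / 4.0   # N(0, 4) target (illustrative)

x, scale = 0.0, 1.0
lp = log_target(x)
acc_count = 0
n_iter = 20000
for t in range(1, n_iter + 1):
    y = x + rng.gauss(0.0, scale)
    lp_y = log_target(y)
    accepted = math.log(rng.random()) < lp_y - lp
    if accepted:
        x, lp = y, lp_y
        acc_count += 1
    # Diminishing adaptation: nudge the proposal scale toward a 0.44
    # acceptance rate (a common 1-D rule of thumb) with step size ~ 1/sqrt(t),
    # so the amount of adaptation vanishes as t grows.
    scale *= math.exp(((1.0 if accepted else 0.0) - 0.44) / math.sqrt(t))

acc_rate = acc_count / n_iter
print(round(acc_rate, 3), round(scale, 2))
```

Adaptive schemes without such safeguards can fail containment, and the note's result says they then fall into the AdapFail class.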
In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on the theory, we give an explicit ratio of convergence rates for the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that the population SAMCMC algorithms are asymptotically more efficient than the single-chain SAMCMC algorithms when the gain factor sequence decreases slower than O(1 / t), where t indexes the number of iterations. This is of interest for practical applications.
A time-varying-parameter VAR for real output growth and inflation is estimated with annual U.S. series dating back to 1870. Volatility for both variables rises quickly with World War I and its aftermath, stays high until the end of World War II, and drops rapidly until the 1960s. This Postwar Moderation yields the largest decline in volatilities, surpassing the Great Moderation. Conditional on temporary shocks, inflation and output growth are positively correlated. Our model implies that aggregate demand played a key role in inflation volatility fluctuations. Conversely, the two variables are negatively correlated conditional on permanent shocks. Our model suggests that aggregate supply played an important role in output volatility fluctuations. Most impulse responses support an aggregate supply interpretation for permanent shocks. However, before World War I, a permanent increase in output raised the price level at longer horizons, and these responses are frequently statistically significant. This evidence supports the hypothesis that aggregate demand had a long-run positive effect on output during the pre–World War I period.
We study the asymptotic behavior of Markov chain Monte Carlo (MCMC) procedures. MCMC procedures sometimes perform poorly, and the study of such behavior is of great importance. In this paper we use the term degeneracy for a particular type of poor performance. We establish several equivalent conditions for degeneracy. As an application, we consider the cumulative probit model. It is well known that the natural data augmentation (DA) procedure does not work well for this model, and the so-called parameter-expanded data augmentation (PX-DA) procedure is considered a remedy for it. In the sense of degeneracy, the PX-DA procedure is better than the DA procedure. However, when the number of categories is large, both procedures are degenerate, and so even the PX-DA procedure may not provide a good estimate of the posterior distribution.
In this paper a method based on a Markov chain Monte Carlo (MCMC) algorithm is proposed to compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalizing constant. Using the MCMC methodology, a Markov chain is simulated, with the aforementioned conditional distribution as its invariant distribution, and information about the normalizing constant is extracted from its trajectory. The algorithm is described in full generality and applied to the problem of computing the probability that a heavy-tailed random walk exceeds a high threshold. An unbiased estimator of the reciprocal probability is constructed whose normalized variance vanishes asymptotically. The algorithm is extended to random sums and its performance is illustrated numerically and compared to existing importance sampling algorithms.
The Guide to the expression of measurement uncertainty (GUM, JCGM 100) and its Supplement 1: propagation of distributions by a Monte Carlo method (GUMS1, JCGM 101) are two of the most widely used documents concerning measurement uncertainty evaluation in metrology. Both documents describe three phases: (a) the construction of a measurement model, (b) the assignment of probability distributions to quantities, and (c) a computational phase that specifies the distribution for the quantity of interest, the measurand. The approaches described in these two documents agree in the first two phases but employ different computational approaches, with the GUM using linearisations to simplify the calculations. Recent years have seen an increasing interest in using Bayesian approaches to evaluating measurement uncertainty. The Bayesian approach in general differs in the assignment of the probability distributions, and its computational phase usually requires Markov chain Monte Carlo (MCMC) approaches. In this paper, we summarise the three approaches to evaluating measurement uncertainty and show how we can regard the GUM and GUMS1 as providing approximate solutions to the Bayesian approach. These approximations can be used to design effective MCMC algorithms.
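A minimal sketch of the GUMS1 computational phase (c), propagation of distributions by Monte Carlo, for a hypothetical two-input measurement model Y = X1·X2; the input distributions and number of trials are illustrative.

```python
import random
import statistics

random.seed(0)
M = 100_000  # number of Monte Carlo trials

# Phase (b): assign probability distributions to the input quantities.
x1 = [random.gauss(10.0, 0.1) for _ in range(M)]    # Gaussian input
x2 = [random.uniform(1.9, 2.1) for _ in range(M)]   # rectangular input

# Phase (c): propagate the distributions through the model Y = X1 * X2.
y = [a * b for a, b in zip(x1, x2)]

est = statistics.fmean(y)          # estimate of the measurand
u = statistics.stdev(y)            # standard uncertainty of Y
y_sorted = sorted(y)
lo = y_sorted[int(0.025 * M)]      # 95% coverage interval endpoints
hi = y_sorted[int(0.975 * M)]
print(round(est, 3), round(u, 3), round(lo, 2), round(hi, 2))
```

Unlike the GUM's linearisation, this propagates the full distributions, so the coverage interval need not be symmetric about the estimate; the Bayesian approach replaces phase (b)'s direct assignments with posterior distributions, typically sampled by MCMC.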
This paper introduces a new framework for modelling the joint development over time of mortality rates in a pair of related populations with the primary aim of producing consistent mortality forecasts for the two populations. The primary aim is achieved by combining a number of recent and novel developments in stochastic mortality modelling, but these, additionally, provide us with a number of side benefits and insights for stochastic mortality modelling. First, by way of example, we propose an Age-Period-Cohort model which incorporates a mean-reverting stochastic spread that allows for different trends in mortality improvement rates in the short-run, but parallel improvements in the long run. Second, we fit the model using a Bayesian framework that allows us to combine estimation of the unobservable state variables and the parameters of the stochastic processes driving them into a single procedure. Key benefits of this include dampening down of the impact of Poisson variation in death counts, full allowance for parameter uncertainty, and the flexibility to deal with missing data. The framework is designed for large populations coupled with a small sub-population and is applied to the England & Wales national and Continuous Mortality Investigation assured lives males populations. We compare and contrast results based on the two-population approach with single-population results.
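The mean-reverting stochastic spread can be pictured as a simple AR(1) process (all parameters here are illustrative, not fitted values): a nonzero initial spread between the two populations' improvement rates decays toward zero, so short-run trends differ while long-run improvements become parallel.

```python
import random

rng = random.Random(2)
phi, sigma = 0.9, 0.02   # illustrative mean-reversion and volatility parameters
s = 0.30                 # initial spread in mortality improvement rates
path = [s]
for t in range(100):
    s = phi * s + rng.gauss(0.0, sigma)   # AR(1) reversion toward zero spread
    path.append(s)

print(round(path[0], 2), round(path[-1], 2))
```

After 100 periods the spread fluctuates around zero with stationary standard deviation sigma/√(1 − phi²) ≈ 0.046, so the two populations' improvement trends are parallel in the long run.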
In this paper, we present a Markov chain Monte Carlo (MCMC) simulation algorithm for estimating parameters in the kernel density estimation of bivariate insurance claim data via transformations. Our data set consists of two types of auto insurance claim costs and exhibits a high level of skewness in the marginal empirical distributions. Therefore, the kernel density estimator based on original data does not perform well. However, the density of the original data can be estimated through estimating the density of the transformed data using kernels. It is well known that the performance of a kernel density estimator is mainly determined by the bandwidth, and only in a minor way by the kernel. In the current literature, there have been some developments in the area of estimating densities based on transformed data, where bandwidth selection usually depends on pre-determined transformation parameters. Moreover, in the bivariate situation, the transformation parameters were estimated for each dimension individually. We use a Bayesian sampling algorithm and present a Metropolis-Hastings sampling procedure to sample the bandwidth and transformation parameters from their posterior density. Our contribution is to estimate the bandwidths and transformation parameters simultaneously within a Metropolis-Hastings sampling procedure. Moreover, we demonstrate that the correlation between the two dimensions is better captured through the bivariate density estimator based on transformed data.
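A stripped-down 1-D sketch of the core idea, sampling a kernel bandwidth by Metropolis-Hastings: the "posterior" kernel combines a leave-one-out Gaussian-kernel likelihood with a vague prior on log h. The data, prior, and tuning constants are hypothetical stand-ins, not the paper's bivariate claim-cost setup.

```python
import math
import random

rng = random.Random(42)
# Hypothetical right-skewed data, log-transformed before density estimation.
data = [math.log(rng.expovariate(1.0) + 0.05) for _ in range(40)]
n = len(data)

def log_posterior(log_h):
    """Leave-one-out Gaussian-kernel log likelihood of bandwidth h = exp(log_h),
    plus a vague prior log h ~ N(0, 1)."""
    h = math.exp(log_h)
    ll = 0.0
    for i, xi in enumerate(data):
        dens = sum(math.exp(-0.5 * ((xi - xj) / h) ** 2)
                   for j, xj in enumerate(data) if j != i)
        dens /= (n - 1) * h * math.sqrt(2.0 * math.pi)
        if dens <= 0.0:
            return -math.inf
        ll += math.log(dens)
    return ll - 0.5 * log_h ** 2

# Random walk Metropolis-Hastings on log h.
log_h = 0.0
lp = log_posterior(log_h)
draws = []
for _ in range(1500):
    prop = log_h + rng.gauss(0.0, 0.3)
    lp_prop = log_posterior(prop)
    if math.log(rng.random()) < lp_prop - lp:
        log_h, lp = prop, lp_prop
    draws.append(math.exp(log_h))

post_mean_h = sum(draws[500:]) / len(draws[500:])   # discard burn-in
print(round(post_mean_h, 3))
```

The paper's algorithm extends this idea to sample the two bandwidths and the two transformation parameters jointly from their posterior, rather than fixing the transformation in advance.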
The standard Markov chain Monte Carlo method of estimating an expected value is to generate a Markov chain which converges to the target distribution and then compute correlated sample averages. In many applications the quantity of interest θ is represented as a product of expected values, θ = µ1 ⋯ µk, and a natural estimator is a product of averages. To increase the confidence level, we can compute a median of independent runs. The goal of this paper is to analyze such an estimator θ̂, i.e. an estimator which is a ‘median of products of averages’ (MPA). Sufficient conditions are given for θ̂ to have fixed relative precision ε at a given level of confidence 1 − α, that is, to satisfy P(|θ̂ − θ| ≤ εθ) ≥ 1 − α. Our main tool is a new bound on the mean-square error, valid also for nonreversible Markov chains on a finite state space.
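A minimal numerical sketch of the ‘median of products of averages’ construction, with i.i.d. draws standing in for Markov chain output purely for illustration: θ = µ1µ2 is estimated by taking, over independent runs, the median of the product of the two sample averages.

```python
import random
import statistics

rng = random.Random(7)

def product_of_averages(n):
    """One run: sample averages estimating mu1 = 2 and mu2 = 3, multiplied
    together (i.i.d. draws stand in for chain output in this sketch)."""
    a = sum(rng.gauss(2.0, 1.0) for _ in range(n)) / n
    b = sum(rng.gauss(3.0, 1.0) for _ in range(n)) / n
    return a * b

# Median over m independent runs boosts the confidence level: a single run
# misses the relative-precision target with some probability p < 1/2, but
# the median of m runs misses it only if at least half of them do.
m, n = 11, 500
mpa = statistics.median(product_of_averages(n) for _ in range(m))
print(round(mpa, 3))
```

Here the true value is θ = 6, and the MPA estimate concentrates tightly around it; the paper's contribution is the non-asymptotic analysis guaranteeing fixed relative precision for genuinely Markovian, possibly nonreversible, samples.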
In this paper we examine the claims reserving problem using Tweedie's compound Poisson model. We develop the maximum likelihood and Bayesian Markov chain Monte Carlo simulation approaches to fit the model and then compare the estimated models under different scenarios. The key point we demonstrate relates to the comparison of reserving quantities with and without model uncertainty incorporated into the prediction. We consider both the model selection problem and the model averaging solutions for the predicted reserves. As a part of this process we also consider the subproblem of variable selection to obtain a parsimonious representation of the model being fitted.
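For readers unfamiliar with the distribution, a quick sketch of simulating a compound Poisson-gamma (Tweedie) claim total: a Poisson number of claims, each with a gamma severity, giving a point mass at zero plus a continuous positive part. The rate and severity parameters are illustrative.

```python
import math
import random

rng = random.Random(1)

def tweedie_sample(lam, alpha, beta):
    """One draw from a compound Poisson-gamma variable: N ~ Poisson(lam)
    claims, each with severity Gamma(shape=alpha, scale=beta)."""
    # Poisson draw by inversion (adequate for moderate lam).
    n, u = 0, rng.random()
    p = math.exp(-lam)
    c = p
    while u > c:
        n += 1
        p *= lam / n
        c += p
    return sum(rng.gammavariate(alpha, beta) for _ in range(n))

samples = [tweedie_sample(2.0, 3.0, 0.5) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))
```

The Monte Carlo mean should sit near the theoretical value lam·alpha·beta = 3.0, and the fraction of exact zeros near e^(−lam) ≈ 0.135, the two features (mass at zero, skewed positive part) that make the Tweedie model attractive for claims data.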
The chapter focuses on problems in higher-level cognition: inferring causal structure from patterns of statistical correlation, learning about categories and hidden properties of objects, and learning the meanings of words. This chapter discusses the basic principles that underlie Bayesian models of cognition and several advanced techniques for probabilistic modeling and inference coming out of recent work in computer science and statistics. The first step is to summarize the logic of Bayesian inference based on probabilistic models. A discussion is then provided of three recent innovations that make it easier to define and use probabilistic models of complex domains: graphical models, hierarchical Bayesian models, and Markov chain Monte Carlo. The central ideas behind each of these techniques are illustrated by considering a detailed cognitive modeling application, drawn from causal learning, property induction, and language modeling, respectively.