We study the community detection problem on a Gaussian mixture model in which the vertices are divided into $k\geq 2$ distinct communities. The distinguishing features of our model are that the Gaussian perturbations may have different intensities for different entries of the observation matrix and that the communities need not contain the same number of vertices. We explicitly derive necessary and sufficient conditions for exact recovery by maximum likelihood estimation, which yield a sharp phase transition for exact recovery even though the Gaussian perturbations are not identically distributed; see Section 7. Applications include community detection on hypergraphs.
In this chapter we introduce the concept of likelihoods, how to incorporate measurement uncertainty into likelihoods, and the concept of latent variables that arise in describing measurements. We then show how the principle of maximum likelihood is applied to estimate values for unknown parameters and discuss a number of numerical optimization techniques used in obtaining maximum likelihood estimators, including expectation maximization.
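As a minimal illustration of numerical maximum likelihood estimation (not taken from the chapter), the sketch below fits the mean and standard deviation of a Gaussian by minimizing the negative log-likelihood with SciPy; the data are synthetic, and the closed-form Gaussian MLE is printed as a check.

```python
# Minimal sketch: maximum likelihood estimation of a Gaussian's mean and
# standard deviation by numerically maximizing the log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)   # synthetic measurements

def neg_log_likelihood(params, x):
    mu, log_sigma = params            # optimize log(sigma) so that sigma > 0
    sigma = np.exp(log_sigma)
    return -np.sum(norm.logpdf(x, loc=mu, scale=sigma))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(data,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"numerical MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")

# For the Gaussian the MLE is available in closed form, which provides a check.
print(f"closed form:   mu = {data.mean():.3f}, sigma = {data.std():.3f}")
```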
One of the core challenges of decision research is to identify individuals’ decision strategies without the method itself influencing decision behavior. Bröder and Schiffer (2003) suggested a method for classifying decision strategies based on maximum likelihood estimation, comparing the probabilities of individuals’ choices given the application of a certain strategy and a constant error rate. Although this method has been shown to be unbiased and practically useful, it does not allow one to differentiate between models that make the same predictions for choices but different predictions for the underlying process, which is often the case when comparing complex with simple models or intuitive with deliberate strategies. An extended method is suggested that additionally includes decision times and confidence judgments in a simultaneous Multiple-Measure Maximum Likelihood estimation. Simulations show that the method is unbiased and sensitive enough to differentiate between strategies when the effects on times and confidence are sufficiently large.
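To make the choice-based idea concrete, here is a hedged sketch, not the authors' implementation, of comparing the likelihood of observed choices under competing strategies with a constant error rate estimated as the proportion of deviations; the strategy names and choice data are hypothetical.

```python
# Sketch of choice-based strategy classification with a constant error rate:
# each strategy predicts a choice on every trial; the likelihood of the
# observed choices is eps^k * (1 - eps)^(n - k), where k counts deviations
# from the strategy's predictions and eps is estimated as k / n.
import numpy as np

def strategy_log_likelihood(observed, predicted):
    """Log-likelihood of the observed choices under a strategy, with the
    error rate set to its maximum likelihood value k / n."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    n = len(observed)
    k = np.sum(observed != predicted)          # deviations from the prediction
    eps = np.clip(k / n, 1e-9, 1 - 1e-9)       # avoid log(0)
    return k * np.log(eps) + (n - k) * np.log(1 - eps)

# Hypothetical data: 20 binary choices and the predictions of two strategies.
observed = [0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
strategies = {
    "take-the-best":     [0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0],
    "weighted-additive": [1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0],
}
for name, pred in strategies.items():
    print(name, strategy_log_likelihood(observed, pred))
```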
While quantum accelerometers sense with extremely low drift and low bias, their practical sensing capabilities face at least two limitations compared with classical accelerometers: a lower sample rate due to the cold-atom interrogation time, and a reduced dynamic range due to signal phase wrapping. In this paper, we propose a maximum likelihood probabilistic data fusion method, under which the actual phase of the quantum accelerometer can be unwrapped by fusing it with the output of a classical accelerometer on the same platform. Consequently, the recovered measurement from the quantum accelerometer is used to estimate the bias and drift of the classical accelerometer, which are then removed from the system output. We demonstrate the enhanced error performance achieved by the proposed fusion method using a simulated 1D accelerometer precision test scenario. We conclude with a discussion of fusion error and potential solutions.
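As a rough, hypothetical sketch of the unwrapping idea (the paper's probabilistic fusion is not reproduced here), note that under a simple Gaussian-noise assumption the maximum likelihood wrap count is the integer that brings the quantum reading closest to the classical reading.

```python
# Illustrative sketch only: unwrap a quantum accelerometer reading using a
# classical accelerometer as a coarse reference. Under Gaussian noise on the
# classical reading, the wrap count nearest to the classical value is the
# maximum likelihood choice.
import numpy as np

def unwrap_with_reference(a_quantum_wrapped, a_classical, wrap_range):
    """a_quantum_wrapped lies in [-wrap_range/2, wrap_range/2); return the
    unwrapped value whose wrap count best matches the classical reading."""
    k = np.round((a_classical - a_quantum_wrapped) / wrap_range)
    return a_quantum_wrapped + k * wrap_range

# Example: true acceleration 3.7 m/s^2, wrap range 1.0 m/s^2.
true_a = 3.7
wrapped = (true_a + 0.5) % 1.0 - 0.5      # quantum output wrapped into [-0.5, 0.5)
classical = true_a + 0.03                 # noisy classical reading
print(unwrap_with_reference(wrapped, classical, 1.0))   # ~3.7
```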
In this appendix, we review the major concepts, notation, and results from probability and statistics that are used in this book. We start with univariate random variables, their distributions, moments, and quantiles. We consider dependent random variables through conditional probabilities and joint density and distribution functions. We review some of the distributions that are most important in the text, including the normal, lognormal, Pareto, uniform, binomial, and Poisson distributions. We outline the maximum likelihood (ML) estimation process, and summarize key properties of ML estimators. We review Bayesian statistics, including the prior, posterior, and predictive distributions. We discuss Monte Carlo simulation, with a particular focus on estimation and uncertainty.
The Halphen type A (Hal-A) frequency distribution has been employed for frequency analyses of hydrometeorological and hydrological extremes. This chapter derives this distribution using entropy theory and discusses the estimation of its parameters using the methods of entropy, moments, probability moments, L-moments, cumulative moments, and maximum likelihood estimation.
The four-parameter beta Lomax (FPBL) distribution is a generalization of the beta distribution through random variable transformation. The FPBL distribution may be reduced to a three-parameter distribution and may also be extended to a five-parameter beta Lomax distribution by adding a location parameter. In this chapter, the FPBL distribution is derived using entropy theory, and its parameters are then estimated with the principle of maximum entropy and the method of maximum likelihood estimation.
This chapter gives an introduction to extreme value theory. Unlike most statistical analyses, which are concerned with the typical properties of a random variable, extreme value theory is concerned with rare events that occur in the tail of the distribution. The cornerstone of extreme value theory is the Extremal Types Theorem. This theorem states that the maximum of N independent and identically distributed random variables can converge, after suitable normalization, only to a single distribution in the limit of large N. This limiting distribution is called the Generalized Extreme Value (GEV) distribution. This theorem is analogous to the central limit theorem, except that the focus is on the maximum rather than the sum of random variables. The GEV provides the basis for estimating the probability of extremes that are more extreme than those that occurred in a sample. The GEV is characterized by three parameters, called the location, scale, and shape. A procedure called the maximum likelihood method can be used to estimate these parameters, quantify their uncertainty, and account for dependencies on time or external environmental conditions.
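As a minimal sketch of the maximum likelihood step, assuming SciPy is available, the GEV can be fitted to block maxima with scipy.stats.genextreme; the data below are synthetic, and SciPy's shape parameter c corresponds to the negative of the shape parameter ξ used in many texts.

```python
# Minimal sketch: fit a GEV distribution to annual maxima by maximum
# likelihood using SciPy.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Synthetic "annual maxima": maxima of blocks of 365 standard normal draws.
annual_maxima = rng.standard_normal((50, 365)).max(axis=1)

c, loc, scale = genextreme.fit(annual_maxima)       # ML estimates
print(f"shape c = {c:.3f}, location = {loc:.3f}, scale = {scale:.3f}")

# Estimated probability of exceeding a level larger than any observed maximum.
level = annual_maxima.max() + 0.5
print(f"P(max > {level:.2f}) ~ {genextreme.sf(level, c, loc, scale):.4f}")
```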
The stability and control derivatives are essential parameters in the flight operation of aircraft, and their determination is a routine task using classical parameter estimation methods based on maximum likelihood and least-squares principles. At high angle-of-attack, the unsteady aerodynamics may pose difficulty in aerodynamic structure determination, hence data-driven methods based on artificial neural networks could be an alternative choice for building models to characterise the behaviour of the system based on the measured motion and control variables. This research paper investigates the feasibility of using a recurrent neural model based on an extreme learning machine network in the modelling of the aircraft dynamics in a restricted sense for identification of the aerodynamic parameters. The recurrent extreme learning machine network is combined with the Gauss–Newton method to optimise the unknowns of the postulated aerodynamic model. The efficacy of the proposed estimation algorithm is studied using real flight data from a quasi-steady stall manoeuvre. Furthermore, the estimates are validated against the parameters estimated using the maximum likelihood method. The standard deviations of the estimates demonstrate the effectiveness of the proposed algorithm. Finally, the quantities regenerated using the estimates present good agreement with their corresponding measured values, confirming that a qualitative estimation can be obtained using the proposed estimation algorithm.
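For readers unfamiliar with the optimization step, the following is a generic, hypothetical illustration of a Gauss–Newton iteration on a toy exponential model; it is not the paper's recurrent extreme learning machine pipeline or its aerodynamic model.

```python
# Generic Gauss-Newton iteration (illustration only): update
# theta <- theta + (J^T J)^{-1} J^T r, where r are the residuals between the
# measurements and the model output and J is the Jacobian of the model output
# with respect to the parameters.
import numpy as np

def gauss_newton(model, jacobian, theta0, t, y, n_iter=20):
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = y - model(theta, t)                  # residuals
        J = jacobian(theta, t)                   # d(model)/d(theta)
        delta = np.linalg.solve(J.T @ J, J.T @ r)
        theta = theta + delta
    return theta

# Toy example: fit y = a * exp(b * t) to noisy data (a and b are the unknowns).
def model(theta, t):
    a, b = theta
    return a * np.exp(b * t)

def jacobian(theta, t):
    a, b = theta
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
y = model([2.0, -1.5], t) + 0.01 * rng.standard_normal(t.size)
print(gauss_newton(model, jacobian, [1.0, -1.0], t, y))   # ~ [2.0, -1.5]
```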
The methodological literature recommends multiple imputation and maximum likelihood estimation as best practices in handling missing data in published research. Relative to older methods such as listwise and pairwise deletion, these approaches are preferable because they rely on a less stringent assumption about how missingness relates to analysis variables. Furthermore, in contrast to deletion methods, multiple imputation and maximum likelihood estimation enable researchers to include all available data in the analysis, resulting in increased statistical power. This chapter provides an overview of multiple imputation and maximum likelihood estimation for handling missing data. Using an example from a study of predictors of depressive symptoms in children with juvenile rheumatic diseases, the chapter illustrates the use of multiple imputation and maximum likelihood estimation using a variety of statistical software packages.
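A minimal sketch of the multiple-imputation workflow (impute several times, analyze each completed data set, pool the estimates) is given below, using scikit-learn's IterativeImputer as one possible imputation engine; the variables are hypothetical, and only the pooled point estimate is shown, whereas full Rubin's rules also combine within- and between-imputation variances.

```python
# Minimal multiple-imputation sketch: impute the data m times, fit the analysis
# model to each completed data set, and average the estimates (pooled point
# estimate). Variables here are hypothetical.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 200
stress = rng.normal(size=n)
depression = 0.6 * stress + rng.normal(scale=0.8, size=n)
X = np.column_stack([stress, depression])
X[rng.random(n) < 0.25, 0] = np.nan          # 25% of 'stress' made missing

m = 20
slopes = []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    completed = imputer.fit_transform(X)
    fit = LinearRegression().fit(completed[:, [0]], completed[:, 1])
    slopes.append(fit.coef_[0])

print(f"pooled slope estimate: {np.mean(slopes):.3f}")
```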
We develop a forward-reverse expectation-maximization (FREM) algorithm for estimating parameters of a discrete-time Markov chain evolving through a certain measurable state-space. For the construction of the FREM method, we develop forward-reverse representations for Markov chains conditioned on a certain terminal state. We prove almost sure convergence of our algorithm for a Markov chain model with curved exponential family structure. On the numerical side, we carry out a complexity analysis of the forward-reverse algorithm by deriving its expected cost. Two application examples are discussed.
Resolution of inflammation is an active process involving specialised pro-resolving mediators (SPM) generated from the n-3 fatty acids EPA and DHA. n-3 Fatty acid supplementation during pregnancy may provide an intervention strategy to modify these novel SPM. This study aimed to assess the effect of n-3 fatty acid supplementation in pregnancy on offspring SPM at birth and 12 years of age (12 years). In all, ninety-eight atopic pregnant women were randomised to 3·7 g daily n-3 fatty acids or a control (olive oil), from 20 weeks gestation until delivery. Blood was collected from the offspring at birth and at 12 years. Plasma SPM, consisting of 18-hydroxyeicosapentaenoic acid (18-HEPE), E-series resolvins, 17-hydroxydocosahexaenoic acid (17-HDHA), D-series resolvins, 14-hydroxydocosahexaenoic acid (14-HDHA), 10S,17S-dihydroxydocosahexaenoic acid, maresins and protectin 1, were measured by liquid chromatography-tandem MS. We identified the resolvins RvE1, RvE2, RvE3, RvD1, 17R-RvD1 and RvD2 for the first time in human cord blood. n-3 Fatty acids increased cord blood 18-HEPE (P<0·001) derived from EPA relative to the control group. DHA-derived 17-HDHA at birth was significantly increased in the n-3 fatty acid group relative to the controls (P=0·001), but other SPM were not different between the groups. n-3 Fatty acid supplementation during pregnancy was associated with an increase in SPM precursors in the offspring at birth but the effects were not sustained at 12 years. The presence of these SPM, particularly at birth, may have functions relevant in the newborn that remain to be established, which may be useful for future investigations.
Multiplicative noise removal is a challenging problem in image restoration. In this paper, by applying a Box-Cox transformation, we convert the multiplicative noise removal problem into an additive noise removal problem, and the block matching three-dimensional (BM3D) method is then applied to obtain the final recovered image. BM3D is an effective method for removing additive white Gaussian noise from images. A maximum likelihood method is designed to determine the parameter of the Box-Cox transformation. We also present the unbiased inverse transform for the Box-Cox transformation, which is important for the final reconstruction. Both theoretical analysis and experimental results show clearly that the proposed method removes multiplicative noise very well, especially when the multiplicative noise is heavy, and that it is superior to existing methods for multiplicative noise removal in the literature.
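As a generic illustration of choosing the Box-Cox parameter by maximum likelihood (not the paper's BM3D pipeline), scipy.stats.boxcox estimates the transform parameter by maximizing the log-likelihood when no value is supplied, and scipy.special.inv_boxcox provides the naive inverse:

```python
# Illustration: estimate the Box-Cox parameter by maximum likelihood with
# SciPy. When lmbda is omitted, scipy.stats.boxcox chooses it by maximizing
# the log-likelihood.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(4)
# Positive data with multiplicative noise: clean signal times gamma noise.
clean = np.linspace(1.0, 10.0, 1000)
noisy = clean * rng.gamma(shape=10.0, scale=0.1, size=clean.size)

transformed, lam = boxcox(noisy)        # lambda estimated by maximum likelihood
print(f"estimated Box-Cox lambda: {lam:.3f}")

# ... additive-noise denoising would be applied to `transformed` here ...

# Naive inverse transform (the paper presents an unbiased inverse instead):
recovered = inv_boxcox(transformed, lam)
print(np.allclose(recovered, noisy))    # True: the transform is invertible
```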
Two relatively new methods for analyzing herbicide efficacy data are described. Weighted multiple regression using the logit transformation for plant mortality data is illustrated and compared with the more accurate maximum likelihood logistic regression procedure. A partial data set evaluating the effects of increasing application rates of picloram (0, 1.1, 2.2, and 4.5 kg ae ha⁻¹) for control of tall larkspur is used to illustrate the methods. Suggestions are made for using logistic regression to monitor herbicide efficacy over several years.
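A hedged sketch of the maximum likelihood logistic regression is shown below using statsmodels; only the picloram rates come from the text above, while the mortality counts are hypothetical.

```python
# Illustrative maximum likelihood logistic regression of plant mortality on
# application rate using statsmodels (the mortality counts are hypothetical).
import numpy as np
import statsmodels.api as sm

rate = np.array([0.0, 1.1, 2.2, 4.5])          # picloram, kg ae/ha
killed = np.array([2, 14, 27, 38])             # hypothetical plants killed
total = np.array([40, 40, 40, 40])             # hypothetical plants treated

X = sm.add_constant(rate)                          # intercept + rate
endog = np.column_stack([killed, total - killed])  # successes, failures
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()  # ML fit
print(fit.summary())

# Predicted mortality at an intermediate rate of 3.0 kg ae/ha
# (design row: intercept, rate).
print(fit.predict(np.array([[1.0, 3.0]])))
```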
We derive some limit theorems associated with the Ewens sampling formula when its parameter increases together with the sample size. The limit results are then applied to investigate asymptotic properties of the maximum likelihood estimator.
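As a small illustration, assuming the classical result (Ewens, 1972) that the maximum likelihood estimator of the Ewens parameter $\theta$ depends on the sample only through the number of distinct types $K_n$ and solves $\sum_{i=0}^{n-1}\theta/(\theta+i)=K_n$, the estimate can be computed numerically:

```python
# Minimal sketch: maximum likelihood estimation of the Ewens sampling formula
# parameter theta from the number of distinct types K_n in a sample of size n.
# The MLE solves sum_{i=0}^{n-1} theta / (theta + i) = K_n.
import numpy as np
from scipy.optimize import brentq

def theta_mle(k_n, n):
    def score(theta):
        i = np.arange(n)
        return np.sum(theta / (theta + i)) - k_n
    # The left-hand side increases in theta from 1 to n, so bracket the root.
    return brentq(score, 1e-8, 1e8)

print(theta_mle(k_n=25, n=1000))    # e.g. 25 distinct types in a sample of 1000
```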
A reflected Ornstein–Uhlenbeck process is a process that returns continuously and immediately to the interior of the state space when it attains a certain boundary. It is an extension of the traditional Ornstein–Uhlenbeck process, which is extensively used in finance as a one-factor short-term interest rate model. Under some mild conditions, this paper studies an analogue of the Cramér–Rao lower bound for a general class of estimators of the unknown parameter in reflected Ornstein–Uhlenbeck processes.
One of the most critical problems in property/casualty insurance is to determine an appropriate reserve for incurred but unpaid losses. These provisions generally comprise most of the liabilities of a non-life insurance company. The global provisions are often determined under an assumption of independence between the lines of business. Recently, Shi and Frees (2011) proposed to model dependence between lines of business with a copula that captures the dependence between two cells of two different run-off triangles. In this paper, we generalize this model in two steps. First, following an idea of Barnett and Zehnwirth (1998), we assume dependence between all observations that belong to the same calendar year (CY) within each line of business. Second, we assume another dependence structure that links the CYs of different lines of business. This is accomplished using hierarchical Archimedean copulas. We show that the model provides more flexibility than existing models and offers a better, more realistic, and more intuitive interpretation of the dependence between the lines of business. For illustration, the model is applied to a dataset from a major US property-casualty insurer, and a bootstrap method is proposed to estimate the distribution of the reserve.
The French mathematician Bertillon reasoned that the number of dizygotic (DZ) pairs would equal twice the number of twin pairs of unlike sexes. The remaining twin pairs in a sample would presumably be monozygotic (MZ). Weinberg restated this idea, and the calculation has come to be known as Weinberg's differential rule (WDR). The keystone of WDR is that DZ twin pairs should be equally likely to be of the same or the opposite sex. Although the probability of a male birth is greater than .5, the reliability of WDR's assumptions has never been conclusively verified or rejected. Let the probability of an opposite-sex (OS) twin maternity be $p_O$, of a same-sex (SS) twin maternity $p_S$, and, consequently, the probability of other maternities $1 - p_S - p_O$. The parameter estimates $\hat p_O$ and $\hat p_S$ are relative frequencies. Applying WDR, the MZ rate is $m = p_S - p_O$ and the DZ rate is $d = 2p_O$, but the estimates $\hat m$ and $\hat d$ are not relative frequencies. The maximum likelihood estimators $\hat p_S$ and $\hat p_O$ are unbiased, efficient, and asymptotically normal. The linear transformations $\hat m = \hat p_S - \hat p_O$ and $\hat d = 2\hat p_O$ are efficient and asymptotically normal; if WDR holds, they are also unbiased. For tests of a set of $m$ and $d$ rates, contingency tables cannot be used. Alternative tests are presented and the models are applied to published data.
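A small numerical illustration of these estimators, with hypothetical maternity counts, is given below.

```python
# Numerical illustration of the Weinberg differential rule estimators defined
# above; the maternity counts are hypothetical.
n_opposite_sex = 330      # OS twin maternities
n_same_sex = 670          # SS twin maternities
n_other = 99000           # all other maternities
n_total = n_opposite_sex + n_same_sex + n_other

p_O = n_opposite_sex / n_total        # relative frequency = MLE of p_O
p_S = n_same_sex / n_total            # relative frequency = MLE of p_S

m = p_S - p_O                         # estimated MZ twinning rate under WDR
d = 2 * p_O                           # estimated DZ twinning rate under WDR
print(f"MZ rate = {m:.5f}, DZ rate = {d:.5f}")
```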
A new methodology based on maximum likelihood estimation is proposed for structure refinement using powder diffraction data. The method optimizes not only the parameters adjusted in Rietveld refinement but also parameters that specify errors in a model of the statistical properties of the observed intensities. Results of structure refinements for fluorapatite Ca5(PO4)3F, anglesite PbSO4, and barite BaSO4 are demonstrated. The structure parameters of fluorapatite and barite optimized by the new method are closer to single-crystal data than those optimized by the Rietveld method, while the structure parameters of anglesite, whose values optimized by the Rietveld method already agree well with the single-crystal data, are almost unchanged by the application of the new method.