The stability and control derivatives are essential parameters in aircraft flight operation, and their determination is a routine task using classical parameter estimation methods based on maximum likelihood and least-squares principles. At high angles of attack, unsteady aerodynamics can make it difficult to determine the aerodynamic model structure, so data-driven methods based on artificial neural networks are an alternative for building models that characterise the behaviour of the system from the measured motion and control variables. This paper investigates the feasibility of using a recurrent neural model based on an extreme learning machine network to model the aircraft dynamics, in a restricted sense, for identification of the aerodynamic parameters. The recurrent extreme learning machine network is combined with the Gauss–Newton method to optimise the unknowns of the postulated aerodynamic model. The efficacy of the proposed estimation algorithm is studied using real flight data from a quasi-steady stall manoeuvre, and the estimates are validated against those obtained with the maximum likelihood method. The standard deviations of the estimates demonstrate the effectiveness of the proposed algorithm. Finally, the quantities regenerated from the estimates agree well with their measured counterparts, confirming that a qualitative estimation can be obtained using the proposed algorithm.
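To illustrate the optimisation step, here is a minimal sketch of a generic Gauss–Newton iteration for nonlinear least squares in Python; the residual and Jacobian functions below are invented toy examples, not the paper's aerodynamic model.

```python
import numpy as np

def gauss_newton(residuals, jacobian, theta0, tol=1e-8, max_iter=50):
    """Minimise 0.5*||r(theta)||^2 by Gauss-Newton iterations."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = residuals(theta)          # residual vector, shape (m,)
        J = jacobian(theta)           # Jacobian dr/dtheta, shape (m, p)
        # Gauss-Newton step: solve min ||J*delta + r|| via least squares
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + delta
        if np.linalg.norm(delta) < tol:
            break
    return theta

# Invented usage: fit y ~ a*exp(b*x) to data, with unknowns (a, b)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
res = lambda th: th[0] * np.exp(th[1] * x) - y
jac = lambda th: np.column_stack([np.exp(th[1] * x),
                                  th[0] * x * np.exp(th[1] * x)])
print(gauss_newton(res, jac, [1.0, -1.0]))  # converges to about [2.0, -1.5]
```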
The methodological literature recommends multiple imputation and maximum likelihood estimation as best practices in handling missing data in published research. Relative to older methods such as listwise and pairwise deletion, these approaches are preferable because they rely on a less stringent assumption about how missingness relates to analysis variables. Furthermore, in contrast to deletion methods, multiple imputation and maximum likelihood estimation enable researchers to include all available data in the analysis, resulting in increased statistical power. This chapter provides an overview of multiple imputation and maximum likelihood estimation for handling missing data. Using an example from a study of predictors of depressive symptoms in children with juvenile rheumatic diseases, the chapter illustrates the use of multiple imputation and maximum likelihood estimation using a variety of statistical software packages.
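As one concrete (hypothetical) illustration in Python, scikit-learn's IterativeImputer can produce chained-equations multiple imputations; the toy data and the simple pooling shown are invented for illustration and are not taken from the chapter.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # toy data: 3 variables
X[rng.random(X.shape) < 0.2] = np.nan          # ~20% missing at random

# Draw several completed datasets; sample_posterior=True makes each
# imputation a random draw, as multiple imputation requires.
imputations = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
    for m in range(5)
]

# Analyse each completed dataset, then pool the estimates (Rubin's rules
# would also combine the variances; only the point estimate is shown here).
means = np.array([imp.mean(axis=0) for imp in imputations])
print(means.mean(axis=0))   # pooled point estimate of the column means
```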
We develop a forward-reverse expectation-maximization (FREM) algorithm for estimating parameters of a discrete-time Markov chain evolving through a certain measurable state-space. For the construction of the FREM method, we develop forward-reverse representations for Markov chains conditioned on a certain terminal state. We prove almost sure convergence of our algorithm for a Markov chain model with curved exponential family structure. On the numerical side, we carry out a complexity analysis of the forward-reverse algorithm by deriving its expected cost. Two application examples are discussed.
Resolution of inflammation is an active process involving specialised pro-resolving mediators (SPM) generated from the n-3 fatty acids EPA and DHA. n-3 Fatty acid supplementation during pregnancy may provide an intervention strategy to modify these novel SPM. This study aimed to assess the effect of n-3 fatty acid supplementation in pregnancy on offspring SPM at birth and at 12 years of age (12 years). In all, ninety-eight atopic pregnant women were randomised to 3·7 g daily n-3 fatty acids or a control (olive oil), from 20 weeks gestation until delivery. Blood was collected from the offspring at birth and at 12 years. Plasma SPM, consisting of 18-hydroxyeicosapentaenoic acid (18-HEPE), E-series resolvins, 17-hydroxydocosahexaenoic acid (17-HDHA), D-series resolvins, 14-hydroxydocosahexaenoic acid (14-HDHA), 10S,17S-dihydroxydocosahexaenoic acid, maresins and protectin 1, were measured by liquid chromatography-tandem MS. We identified the resolvins RvE1, RvE2, RvE3, RvD1, 17R-RvD1 and RvD2 for the first time in human cord blood. Relative to the control group, n-3 fatty acids increased EPA-derived 18-HEPE in cord blood (P<0·001). DHA-derived 17-HDHA at birth was also significantly increased in the n-3 fatty acid group relative to the controls (P=0·001), but other SPM did not differ between the groups. n-3 Fatty acid supplementation during pregnancy was associated with an increase in SPM precursors in the offspring at birth, but the effects were not sustained at 12 years. The presence of these SPM, particularly at birth, may have functions relevant in the newborn that remain to be established, which may be useful for future investigations.
Multiplicative noise removal is a challenging problem in image restoration. In this paper, by applying a Box–Cox transformation, we convert the multiplicative noise removal problem into an additive noise removal problem, and the block-matching three-dimensional (BM3D) method, an effective method for removing additive white Gaussian noise from images, is applied to obtain the final recovered image. A maximum likelihood method is designed to determine the parameter of the Box–Cox transformation. We also present the unbiased inverse of the Box–Cox transformation, which is important for avoiding bias in the recovered image. Both theoretical analysis and experimental results clearly illustrate that the proposed method removes multiplicative noise very well, especially when the multiplicative noise is heavy, and that it is superior to the existing methods for multiplicative noise removal in the literature.
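A minimal sketch of the generic pipeline in Python, assuming SciPy: the Box–Cox parameter is chosen by maximum likelihood, a trivial placeholder stands in for the BM3D denoiser, and the inverse used here is SciPy's plain inverse, not the unbiased inverse the paper derives.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
clean = np.full(10_000, 4.0)                                        # constant "image"
noisy = clean * rng.gamma(shape=10.0, scale=0.1, size=clean.shape)  # multiplicative noise

# Box-Cox transform; lmbda=None makes SciPy pick lambda by maximum likelihood
transformed, lam = stats.boxcox(noisy)

# Denoise the (now roughly additive-noise) data; the paper uses BM3D,
# here a simple mean stands in as the denoiser for this constant signal.
denoised = np.full_like(transformed, transformed.mean())

# Plain inverse transform (the paper derives an unbiased correction to this)
recovered = inv_boxcox(denoised, lam)
print(f"lambda = {lam:.3f}, recovered value = {recovered[0]:.3f}")
```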
We derive some limit theorems associated with the Ewens sampling formula when its parameter grows with the sample size. The limit results are then applied to investigate asymptotic properties of the maximum likelihood estimator.
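For context, a minimal sketch of the maximum likelihood estimator in question: under the Ewens sampling formula the number of distinct types $k$ is sufficient for the parameter $\theta$, and the MLE solves $\sum_{i=0}^{n-1} \theta/(\theta+i) = k$ (a standard fact); the sample values below are invented.

```python
from scipy.optimize import brentq

def theta_mle(k, n):
    """MLE of the Ewens parameter given k distinct types in a sample of size n."""
    if k <= 1:
        return 0.0
    # The expected number of types is strictly increasing in theta,
    # so the score equation has a unique root.
    score = lambda th: sum(th / (th + i) for i in range(n)) - k
    return brentq(score, 1e-9, 1e9)

print(theta_mle(k=12, n=100))   # roughly 3.5 for 12 types among 100 draws
```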
A reflected Ornstein–Uhlenbeck process is a process that returns continuously and immediately to the interior of the state space when it reaches a certain boundary. It extends the traditional Ornstein–Uhlenbeck process, which is extensively used in finance as a one-factor short-term interest rate model. Under some mild conditions, this paper studies the analogue of the Cramér–Rao lower bound for a general class of estimators of the unknown parameter in reflected Ornstein–Uhlenbeck processes.
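To make the process concrete, here is a minimal sketch of simulating a reflected Ornstein–Uhlenbeck path with an Euler scheme and reflection at a lower boundary; all parameter values are invented for illustration.

```python
import numpy as np

def reflected_ou(theta, mu, sigma, x0, boundary=0.0, dt=1e-3, n_steps=10_000, seed=0):
    """Euler scheme for dX = theta*(mu - X) dt + sigma dW, reflected at `boundary`."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        step = theta * (mu - x[k]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        y = x[k] + step
        # Reflection: push the process back into the interior of the state space
        x[k + 1] = boundary + abs(y - boundary) if y < boundary else y
    return x

path = reflected_ou(theta=2.0, mu=0.05, sigma=0.3, x0=0.1)
print(path.min())   # never below the reflecting boundary at 0
```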
One of the most critical problems in property/casualty insurance is to determine an appropriate reserve for incurred but unpaid losses. These provisions generally comprise most of the liabilities of a non-life insurance company. The global provisions are often determined under an assumption of independence between the lines of business. Recently, Shi and Frees (2011) proposed to model dependence between lines of business with a copula that captures dependence between two cells of two different runoff triangles. In this paper, we generalize this model in two steps. First, using an idea proposed by Barnett and Zehnwirth (1998), we suppose a dependence between all the observations that belong to the same calendar year (CY) within each line of business. Second, we suppose another dependence structure that links the CYs of different lines of business. This is accomplished using hierarchical Archimedean copulas. We show that the model provides more flexibility than existing models and offers a better, more realistic and more intuitive interpretation of the dependence between the lines of business. For illustration, the model is applied to a dataset from a major US property-casualty insurer, and a bootstrap method is proposed to estimate the distribution of the reserve.
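Hierarchical Archimedean copulas are assembled from ordinary Archimedean copulas. As a building block, here is a minimal sketch in Python of sampling a bivariate Clayton copula via its gamma frailty (Marshall-Olkin) representation; the dependence parameter and sample size are invented.

```python
import numpy as np

def clayton_sample(theta, n, seed=0):
    """Sample (U1, U2) from a Clayton copula via the gamma frailty method."""
    rng = np.random.default_rng(seed)
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)   # frailty V ~ Gamma(1/theta)
    e = rng.exponential(size=(n, 2))
    # Clayton generator inverse: psi(t) = (1 + t)^(-1/theta), so U_i = psi(E_i / V)
    return np.power(1.0 + e / v[:, None], -1.0 / theta)

u = clayton_sample(theta=2.0, n=1000)
print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])   # clearly positive dependence
```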
The French mathematician Bertillon reasoned that the number of dizygotic (DZ) pairs would equal twice the number of twin pairs of unlike sexes. The remaining twin pairs in a sample would presumably be monozygotic (MZ). Weinberg restated this idea, and the calculation has come to be known as Weinberg's differential rule (WDR). The keystone of WDR is that DZ twin pairs should be equally likely to be of the same or the opposite sex. Although the probability of a male birth is greater than .5, the reliability of WDR's assumptions has never been conclusively verified or rejected. Let the probability for an opposite-sex (OS) twin maternity be $p_O$, for a same-sex (SS) twin maternity $p_S$, and, consequently, the probability for other maternities $1 - p_S - p_O$. The parameter estimates $\hat{p}_O$ and $\hat{p}_S$ are relative frequencies. Applying WDR, the MZ rate is $m = p_S - p_O$ and the DZ rate is $d = 2p_O$, but the estimates $\hat{m}$ and $\hat{d}$ are not relative frequencies. The maximum likelihood estimators $\hat{p}_S$ and $\hat{p}_O$ are unbiased, efficient, and asymptotically normal. The linear transformations $\hat{m} = \hat{p}_S - \hat{p}_O$ and $\hat{d} = 2\hat{p}_O$ are efficient and asymptotically normal; if WDR holds, they are also unbiased. For tests of a set of $m$ and $d$ rates, contingency tables cannot be used. Alternative tests are presented, and the models are applied to published data.
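A minimal sketch of the WDR point estimates computed from observed counts (the counts below are invented):

```python
# Weinberg's differential rule applied to a sample of maternities
n_os, n_ss, n_other = 340, 660, 99_000          # invented counts
n = n_os + n_ss + n_other

p_o = n_os / n          # relative frequency of opposite-sex twin maternities
p_s = n_ss / n          # relative frequency of same-sex twin maternities

d = 2 * p_o             # WDR dizygotic rate: DZ pairs = twice the OS pairs
m = p_s - p_o           # WDR monozygotic rate: the remaining SS pairs
print(f"DZ rate = {d:.5f}, MZ rate = {m:.5f}")
```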
A new methodology based on maximum likelihood estimation for structure refinement using powder diffraction data is proposed. The method can optimize not only the parameters adjusted in Rietveld refinement but also parameters specifying the errors in a model for the statistical properties of the observed intensities. The results of structure refinements for fluorapatite Ca5(PO4)3F, anglesite PbSO4, and barite BaSO4 are demonstrated. The structure parameters of fluorapatite and barite optimized by the new method are closer to single-crystal data than those optimized by the Rietveld method, while the structure parameters of anglesite, whose values optimized by the Rietveld method are already in good agreement with the single-crystal data, are almost unchanged by the application of the new method.
Trajectories of individual molecules moving within complex environments such as cell cytoplasm, membranes, or semiflexible polymer networks provide invaluable information on the organization and dynamics of these systems. However, when such trajectories are obtained from a sequence of microscopy images, they can be distorted because the tracked molecule exhibits appreciable directed motion during the acquisition of a single frame. We propose a new model of image formation for mobile molecules that takes this linear in-frame motion into account, and we develop an algorithm based on the maximum likelihood approach for retrieving the position and velocity of the molecules from single-frame data. The position and velocity information obtained from individual frames is then fed into a Kalman filter for interframe tracking, which predicts the trajectory of the molecule and further improves the precision of the position and velocity estimates. We evaluate the performance of our algorithm with calculations of the Cramér–Rao lower bound, simulations, and model experiments with a piezo-stage. We demonstrate tracking of molecules moving as fast as 7 pixels/frame (12.6 μm/s) with a mean error of 0.42 pixel (37.43 nm).
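A minimal sketch of the interframe step in Python: a constant-velocity Kalman filter that fuses noisy per-frame position measurements into position and velocity estimates. The noise levels and simulated track are invented, and unlike the paper's filter this sketch ingests positions only, not per-frame velocity estimates.

```python
import numpy as np

dt = 1.0                                   # frame interval (arbitrary units)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = 1e-4 * np.eye(2)                       # process noise covariance (invented)
R = np.array([[0.16]])                     # measurement noise covariance (invented)

x = np.array([0.0, 0.0])                   # state: [position, velocity]
P = np.eye(2)                              # state covariance

def kalman_step(x, P, z):
    # Predict one frame ahead
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
for t in range(50):                        # simulated molecule at 7 pixels/frame
    z = 7.0 * t + rng.normal(scale=0.4)
    x, P = kalman_step(x, P, z)
print(x)                                   # position near 343, velocity near 7
```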
We propose a nonhomogeneous Poisson hidden Markov model for a time series of claim counts that accounts for both seasonal variations and random fluctuations in the claims intensity. It assumes that the parameters of the intensity function for the nonhomogeneous Poisson distribution vary according to an (unobserved) underlying Markov chain. This can apply to natural phenomena that evolve in a seasonal environment; for example, hurricanes, which are subject to random fluctuations (El Niño-La Niña cycles), affect insurance claims. The Expectation-Maximization (EM) algorithm is used to calculate the maximum likelihood estimators of the parameters of this dynamic Poisson hidden Markov model. Statistical applications of this model to Atlantic hurricane and tropical storm data are discussed.
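The EM fit rests on forward-backward recursions over the hidden chain. A minimal sketch of the scaled forward pass, which also yields the log-likelihood, for a homogeneous Poisson HMM follows; the transition matrix, intensities, and counts are invented, and the paper's model additionally lets the intensity vary with time.

```python
import numpy as np
from scipy.stats import poisson

def poisson_hmm_loglik(counts, trans, rates, init):
    """Log-likelihood of count data under a Poisson HMM (scaled forward algorithm)."""
    alpha = init * poisson.pmf(counts[0], rates)   # joint of state and first count
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                           # rescale to avoid underflow
    for y in counts[1:]:
        alpha = (alpha @ trans) * poisson.pmf(y, rates)
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Invented two-state example: a calm state and an active state
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
rates = np.array([1.0, 6.0])
init  = np.array([0.5, 0.5])
counts = np.array([0, 1, 0, 5, 7, 6, 1, 0, 2, 8])
print(poisson_hmm_loglik(counts, trans, rates, init))
```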
We consider maximum-likelihood statistical inference for a parametric cooperative sequential adsorption model for spatial time series data. We establish asymptotic normality of the maximum likelihood estimator in the thermodynamic limit. We also perform and discuss some numerical simulations of the model, which illustrate the procedure for creating confidence intervals for large samples.
We consider a model for a time series of spatial locations, in which points are placed sequentially at random into an initially empty region of $\mathbb{R}^d$, and given the current configuration of points, the likelihood at location $x$ for the next particle is proportional to a specified function $\beta_k$ of the current number ($k$) of points within a specified distance of $x$. We show that the maximum likelihood estimator of the parameters $\beta_k$ (assumed to be zero for $k$ exceeding some fixed threshold) is consistent in the thermodynamic limit, where the number of points grows in proportion to the size of the region.
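A minimal sketch of simulating this placement model on the unit square by rejection sampling; the weights, interaction radius, and number of points are invented.

```python
import numpy as np

def simulate_csa(beta, r, n_points, seed=0):
    """Sequentially place points in [0,1]^2; a site with k existing neighbours
    within distance r is accepted with probability proportional to beta[k]."""
    rng = np.random.default_rng(seed)
    beta = np.asarray(beta, dtype=float)
    pts = []
    while len(pts) < n_points:
        x = rng.random(2)                                  # uniform proposal
        k = sum(np.hypot(*(x - p)) <= r for p in pts)      # current neighbour count
        w = beta[k] if k < len(beta) else 0.0              # beta_k = 0 past the threshold
        if rng.random() < w / beta.max():                  # rejection step
            pts.append(x)
    return np.array(pts)

pts = simulate_csa(beta=[1.0, 2.0, 0.5], r=0.05, n_points=200)
print(len(pts))
```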
Traditionally, actuaries have modeled mortality improvement using deterministic reduction factors, with little consideration of the associated uncertainty. As mortality improvement has become an increasingly significant source of financial risk, it has become important to measure the uncertainty in the forecasts. Probabilistic confidence intervals provided by the widely accepted Lee-Carter model are known to be excessively narrow, due primarily to the rigid structure of the model. In this paper, we relax the model structure by considering individual differences (heterogeneity) in each age-period cell. The proposed extension not only provides a better goodness-of-fit based on standard model selection criteria, but also ensures more conservative interval forecasts of central death rates and hence can better reflect the uncertainty entailed. We illustrate the results using US and Canadian mortality data.
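For reference, a minimal sketch of the classical Lee–Carter fit, log m(x,t) = a_x + b_x k_t, via a rank-one SVD of the centred log rates; the mortality matrix here is synthetic, and the paper's extension (heterogeneity within each age-period cell) is not implemented.

```python
import numpy as np

rng = np.random.default_rng(3)
ages, years = 20, 40
log_m = (np.linspace(-6, -2, ages)[:, None]                 # synthetic log death rates
         + np.linspace(0.5, -0.5, years)[None, :] * np.linspace(0.2, 1.0, ages)[:, None]
         + 0.01 * rng.standard_normal((ages, years)))

a = log_m.mean(axis=1)                                      # a_x: mean log rate by age
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                                 # b_x normalised to sum to 1
k = s[0] * Vt[0] * U[:, 0].sum()                            # k_t carries the scale
fit = a[:, None] + np.outer(b, k)                           # reconstructed log rates
print(np.abs(fit - log_m).max())                            # small residual
```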
This study presents a general model of two binary variables and applies it to twin sex pairing data from 21 twin data sources to estimate the frequency of dizygotic twins. The purpose of this study is to clarify the relationship between the maximum likelihood and Weinberg's differential rule zygosity estimation methods. We explore the accuracy of these zygosity estimation measures in relation to twin ascertainment methods and the probability of a male. Twin sex pairing data from 21 twin data sources representing 15 countries were collected for use in this study. Maximum likelihood estimation of the probability of dizygotic twins is applied to describe the variation in the frequency of dizygotic twin births. The differences between the maximum likelihood and Weinberg's differential rule zygosity estimation methods are presented as a function of twin data ascertainment method and the probability of a male. Maximum likelihood estimates of the probability of dizygotic twins range from 0.083 (95% approximate CI: 0.082, 0.085) to 0.750 (95% approximate CI: 0.749, 0.752) for voluntary ascertainment data sources and from 0.374 (95% approximate CI: 0.373, 0.375) to 0.987 (95% approximate CI: 0.959, 1.016) for active ascertainment data sources. In 17 of the 21 twin data sources, differences of 0.01 or less occur between the maximum likelihood and Weinberg zygosity estimation methods. The Weinberg and maximum likelihood estimates are negligibly different in most applications. Using the above general maximum likelihood estimate, the probability of a dizygotic twin is subject to substantial variation that is largely a function of twin data ascertainment method.
Non-linear mixed models defined by stochastic differential equations (SDEs) are considered: the parameters of the diffusion process are random variables and vary among the individuals. A maximum likelihood estimation method based on the Stochastic Approximation EM algorithm is proposed.
This estimation method uses the Euler-Maruyama approximation of the diffusion, achieved using latent auxiliary data introduced to complete the diffusion process between each pair of measurement instants.
A tuned hybrid Gibbs algorithm based on conditional Brownian bridge simulations of the unobserved process paths is included in this algorithm.
Convergence is proved, and the error induced on the likelihood by the Euler-Maruyama approximation is bounded as a function of the step size of the approximation.
Results of a pharmacokinetic simulation study illustrate the accuracy of this estimation method. The analysis of the real theophylline dataset illustrates the relevance of the SDE approach relative to the deterministic approach.
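A minimal sketch of the Euler-Maruyama scheme that the estimation method builds on, for a generic one-dimensional diffusion dX = f(X) dt + g(X) dW; the drift and diffusion functions below are invented examples, not the pharmacokinetic model.

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, n_steps, seed=0):
    """Approximate one path of dX = f(X) dt + g(X) dW on [0, t_end]."""
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()     # Brownian increment
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dw
    return x

# Invented example: mean-reverting drift with constant diffusion coefficient
path = euler_maruyama(f=lambda x: 1.5 * (2.0 - x), g=lambda x: 0.4,
                      x0=0.0, t_end=5.0, n_steps=5000)
print(path[-1])
```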
The existence and uniqueness of maximum likelihood estimators for the time and range parameters in random sequential adsorption models are investigated.
We are interested in estimating the intensity parameter of a Boolean model of discs (the bombing model) from a single realization. To do so, we derive the conditional distribution of the points (germs) of the underlying Poisson process. We demonstrate how to apply coupling from the past to generate samples from this distribution, and use the samples thus obtained to approximate the maximum likelihood estimator of the intensity. We discuss and compare two methods: one based on a Monte Carlo approximation of the likelihood function, the other a stochastic version of the EM algorithm.
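To make the setting concrete, here is a minimal sketch of simulating the bombing model on the unit square (a Poisson number of uniformly placed germs, each dilated by a disc of fixed radius); the intensity and radius are invented, and recovering the intensity from the resulting coverage pattern alone is the problem the paper addresses.

```python
import numpy as np

def bombing_model(intensity, radius, grid=256, seed=0):
    """Binary coverage image of a Boolean model of discs on [0,1]^2."""
    rng = np.random.default_rng(seed)
    n_germs = rng.poisson(intensity)                  # Poisson number of germs
    germs = rng.random((n_germs, 2))                  # uniform germ locations
    xs = (np.arange(grid) + 0.5) / grid
    X, Y = np.meshgrid(xs, xs)
    covered = np.zeros((grid, grid), dtype=bool)
    for gx, gy in germs:
        covered |= (X - gx) ** 2 + (Y - gy) ** 2 <= radius ** 2
    return covered

img = bombing_model(intensity=50, radius=0.08)
# Observed coverage fraction; ignoring edge effects, theory gives
# 1 - exp(-lambda * pi * r^2), about 0.63 for these values.
print(img.mean())
```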