In this paper, we consider the risk–return trade-off for variable annuities in a Black–Scholes setting. Our analysis is based on a novel explicit allocation of initial wealth over the payments at various horizons. We investigate the relationship between the optimal consumption problem and the design of variable annuities by deriving the optimal so-called assumed interest rate for an investor with constant relative risk aversion preferences, and we quantify the utility loss due to deviations from this optimal rate. Finally, we show analytically how habit-formation-type smoothing of financial market shocks over the remaining lifetime leads to smaller year-to-year volatility in pension payouts, but to increased longer-term volatility.
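As a rough illustration of the mechanics involved (a minimal sketch, not the paper's analysis), the following simulates the standard variable-annuity payout rule, in which each payment equals the previous one multiplied by the realized fund return and divided by one plus the assumed interest rate (AIR); all parameter values here are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): year-to-year variable-annuity
# payouts under the standard AIR update rule
#   P_{t+1} = P_t * R_{t+1} / (1 + AIR),
# with gross fund returns R_{t+1} simulated from Black-Scholes dynamics.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, air = 0.05, 0.15, 0.03   # drift, volatility, assumed interest rate
T, n_paths = 20, 10_000             # horizon (years), simulated paths

# Gross annual fund returns under geometric Brownian motion
R = np.exp((mu - 0.5 * sigma**2) + sigma * rng.standard_normal((n_paths, T)))

payout = np.ones(n_paths)           # normalize the first payment to 1
paths = np.empty((n_paths, T))
for t in range(T):
    payout = payout * R[:, t] / (1 + air)
    paths[:, t] = payout

# A higher AIR front-loads payments; payouts then drift down on average.
print("mean payout, year 1 :", paths[:, 0].mean())
print("mean payout, year 20:", paths[:, -1].mean())
print("std of payout, year 20:", paths[:, -1].std())
```

A higher AIR front-loads the payout stream at the cost of lower expected later payments, which is the trade-off behind the optimal-AIR question above.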
Using novel microdata, we explore lifecycle consumption in Sub-Saharan Africa. We find that households' ability to smooth consumption over the lifecycle is large, particularly in rural areas. Consumption in old age is sustained by shifting to self-farmed staple food, as opposed to traditional savings mechanisms or food gifts. This smoothing strategy comes at two important costs. The first is a loss of human capital, as children seem to be diverted away from school and into producing self-farmed food. The second is that a diet largely concentrated in staple food (e.g., maize in Malawi) in old age results in a loss of nutritional quality for households headed by the elderly.
FM-to-AM (frequency modulation-to-amplitude modulation) conversion caused by the nonuniform spectral transmission of a broadband beam is harmful to high-power laser facilities. A smoothing by spectral dispersion (SSD) beam is a special broadband beam owing to its monochromatic feature at a given time and position in the near field. The traditional method, which uses the optical spectral transfer function as a filter, cannot accurately describe its AM characteristics. This paper presents a theoretical analysis of the etalon effect for an SSD beam. With a low-order approximation, an analytic model of the temporal shape of the SSD beam is obtained for the first time, which gives detailed AM characteristics at the local and integral levels, such as the variation of ripple width and amplitude in the general situation. We also analyze the FM-to-AM conversion on the focal plane; in the focusing process, the lens simply acts as an integrator that smooths the AM of the SSD beam. Because AM control is necessary in the near field to avoid optics damage and in the far field to ensure optimal laser–target interaction, our investigation provides phenomena and rules that are important for pulse shape control.
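The FM-to-AM mechanism itself is easy to reproduce numerically. The sketch below (an illustration under simplified assumptions, not the paper's analytic model) passes a purely phase-modulated field through an etalon-like spectral transmission and measures the resulting intensity modulation; the modulation frequency, depth, reflectivity and free spectral range are all made-up values.

```python
# Hedged numerical illustration (not the paper's analytic model): a purely
# phase-modulated (FM) field acquires amplitude modulation after passing
# through a non-flat spectral transmission, here a simplified etalon-like
# transfer function applied in the frequency domain.
import numpy as np

n, dt = 2**14, 1e-13                      # samples, time step (s)
t = np.arange(n) * dt
fm_freq, beta = 10e9, 5.0                 # 10 GHz modulation, depth 5 rad
field = np.exp(1j * beta * np.sin(2 * np.pi * fm_freq * t))   # pure FM: |E| = 1

freq = np.fft.fftfreq(n, dt)
r, fsr = 0.3, 150e9                       # mirror reflectivity, free spectral range
transmission = (1 - r) / (1 - r * np.exp(2j * np.pi * freq / fsr))  # etalon-like

filtered = np.fft.ifft(np.fft.fft(field) * transmission)
intensity = np.abs(filtered) ** 2
depth = (intensity.max() - intensity.min()) / intensity.mean()
print(f"FM-to-AM modulation depth after the etalon: {depth:.1%}")
```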
This paper models the decumulation period of a Personal Pension with Risk sharing (PPR). We derive several relationships between the contract parameters. Individuals can adopt two approaches to the decumulation period of a PPR: the investment approach and the consumption approach. In the investment approach, individuals specify how to invest wealth and how much wealth to withdraw. Retirement consumption follows endogenously. In the consumption approach, in contrast, individuals specify retirement consumption exogenously. Investment and withdrawal policies follow endogenously. We explore these two approaches in detail. Consistent with habit formation, we allow for excess smoothness and excess sensitivity in retirement consumption.
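As a toy illustration of excess smoothness (an assumption-laden sketch, not the PPR contract itself), the following spreads each financial shock evenly over the remaining payment horizon, so that only a fraction 1/h of the outstanding shock is absorbed into the current payout:

```python
# Illustrative sketch (assumptions, not the paper's contract): spreading each
# financial shock evenly over the remaining horizon, so that only a fraction
# 1/h of the outstanding shock hits the current payout ("excess smoothness").
import numpy as np

rng = np.random.default_rng(1)
horizon = 20
shocks = 0.10 * rng.standard_normal(horizon)   # log-return shocks per year

direct = np.cumsum(shocks)                      # unsmoothed log payout level
smoothed = np.zeros(horizon)
level, pending = 0.0, 0.0                       # pending = shock mass not yet absorbed
for t in range(horizon):
    h = horizon - t                             # remaining payments
    pending += shocks[t]
    absorb = pending / h                        # absorb 1/h of outstanding shocks
    pending -= absorb
    level += absorb
    smoothed[t] = level

print("year-to-year volatility, unsmoothed:", np.std(np.diff(direct)))
print("year-to-year volatility, smoothed  :", np.std(np.diff(smoothed)))
```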
Several perturbation tools are established in the volume-preserving setting allowing for the pasting, extension, localized smoothing and local linearization of vector fields. The pasting and the local linearization hold in all classes of regularity ranging from (Hölder included). For diffeomorphisms, a conservative linearized version of Franks’ lemma is proved in the settings, the resulting diffeomorphism having the same regularity as the original one.
Suppose that a mobile sensor describes a Markovian trajectory in the ambient space and that at each time the sensor measures an attribute of interest, e.g. the temperature. Using only the location history of the sensor and the associated measurements, we estimate the average value of the attribute over the space. In contrast to classical probabilistic integration methods, e.g. Monte Carlo, the proposed approach does not require any knowledge of the distribution of the sensor trajectory. We establish probabilistic bounds on the convergence rates of the estimator. These rates are better than the traditional 'root-n' rate, where n is the sample size, associated with other probabilistic integration methods. For finite sample sizes, we demonstrate the favorable behavior of the procedure through simulations and consider an application to the evaluation of the average temperature of oceans.
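One simple stand-in estimator (not necessarily the authors' construction) weights each measurement by the inverse of the estimated visit density of the trajectory, so that the weighted average approximates a uniform spatial average; the trajectory model, attribute field and kernel density estimator below are all illustrative choices.

```python
# Hedged stand-in (not necessarily the authors' estimator): approximate the
# spatial average of a field f over [0, 1]^2 from a sensor trajectory by
# weighting each measurement with the inverse of the estimated visit density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Markovian trajectory: an AR(1) pull toward the centre, so visits are
# concentrated there and a naive average of the measurements is biased.
n = 5_000
traj = np.empty((n, 2))
x = np.array([0.5, 0.5])
for i in range(n):
    x = np.clip(x + 0.05 * (0.5 - x) + 0.05 * rng.standard_normal(2), 0.0, 1.0)
    traj[i] = x

f = lambda p: np.sin(2 * np.pi * p[:, 0]) ** 2 + p[:, 1]   # attribute field
y = f(traj)                                                 # measurements

dens = gaussian_kde(traj.T)(traj.T)     # estimated visit density at the samples
w = 1.0 / dens
estimate = np.sum(w * y) / np.sum(w)    # self-normalized inverse-density weights

print("weighted estimate:", round(estimate, 3))   # true spatial average is 1.0
print("naive average    :", round(y.mean(), 3))   # biased toward the centre
```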
In this paper we present results on the concentration properties of the smoothing and filtering distributions of some partially observed chaotic dynamical systems. We show that, rather surprisingly, for the geometric model of the Lorenz equations, as well as some other chaotic dynamical systems, the smoothing and filtering distributions do not concentrate around the true position of the signal as the number of observations tends to ∞. Instead, under various assumptions on the observation noise, we show that the expected value of the diameter of the support of the smoothing and filtering distributions remains lower bounded by a constant multiplied by the standard deviation of the noise, independently of the number of observations. Conversely, under rather general conditions, the diameter of the support of the smoothing and filtering distributions is upper bounded by a constant multiplied by the standard deviation of the noise. As applications, we consider the three-dimensional Lorenz 63 model and the Lorenz 96 model of arbitrarily large dimension.
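A bootstrap particle filter makes the phenomenon easy to observe numerically. The sketch below (illustrative assumptions throughout: Euler discretization, full-state Gaussian observations, resampling jitter) tracks the Lorenz 63 system and records the diameter of the particle cloud, which stays of the order of the noise standard deviation rather than shrinking:

```python
# Minimal sketch (illustration only): bootstrap particle filter for the
# Lorenz 63 system with noisy observations of the full state; the spread
# (diameter) of the particle cloud does not shrink with more observations.
import numpy as np

rng = np.random.default_rng(3)

def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz 63 dynamics for an array of states (n, 3).
    dx = np.empty_like(x)
    dx[:, 0] = sigma * (x[:, 1] - x[:, 0])
    dx[:, 1] = x[:, 0] * (rho - x[:, 2]) - x[:, 1]
    dx[:, 2] = x[:, 0] * x[:, 1] - beta * x[:, 2]
    return x + dt * dx

obs_std, n_particles, n_obs, steps_between = 1.0, 2_000, 200, 10

truth = np.array([[1.0, 1.0, 1.0]])
particles = truth + rng.standard_normal((n_particles, 3))

diameters = []
for _ in range(n_obs):
    for _ in range(steps_between):           # propagate truth and particles
        truth = lorenz_step(truth)
        particles = lorenz_step(particles)
    obs = truth[0] + obs_std * rng.standard_normal(3)
    logw = -0.5 * np.sum((particles - obs) ** 2, axis=1) / obs_std**2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
    particles = particles[idx] + 1e-3 * rng.standard_normal((n_particles, 3))
    diameters.append(np.linalg.norm(particles.max(0) - particles.min(0)))

# The filter support diameter remains of the order of obs_std.
print("mean particle-cloud diameter:", np.mean(diameters[n_obs // 2:]))
```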
Recently, there has been increasing interest from life insurers in assessing their portfolios' mortality risks. The new European prudential regulation, Solvency II, emphasizes the need to use mortality and life tables that best capture and reflect the experienced mortality, and thus policyholders' actual risk profiles, in order to adequately quantify the underlying risk. Therefore, building a mortality table based on the experience of the portfolio is highly recommended and, for this purpose, various approaches have been introduced into the actuarial literature. Although such approaches succeed in capturing the main features, it remains difficult to assess the mortality when the underlying portfolio lacks sufficient exposure. In this paper, we propose graduating the mortality curve using an adaptive procedure based on the local likelihood, which has the ability to model mortality patterns even in the presence of complex structures and avoids relying on expert opinion. However, such a technique fails to offer a consistent yet regular structure for portfolios with a limited number of deaths: although it borrows information from adjacent ages, this is sometimes not sufficient to produce a robust life table. In the presence of such a bias, we propose adjusting the corresponding curve, at the age level, based on a credibility approach, which consists in revising the mortality-curve assumption as new observations arrive. We derive the updating procedure and, using real datasets, investigate the benefits of using it instead of a sole graduation. Moreover, we examine the divergences in the mortality forecasts generated by classic credibility approaches, including the Hardy–Panjer, the Poisson–Gamma and the Makeham frameworks, on portfolios originating from various French insurance companies.
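For the graduation step alone, a kernel-weighted local-constant Poisson likelihood has a closed-form solution, which the sketch below uses on simulated data (the paper's adaptive bandwidth choice and credibility update are not shown; all data are synthetic):

```python
# Minimal sketch of kernel-weighted (local constant) Poisson likelihood
# graduation: with deaths D_x and exposures E_x, the local-likelihood
# estimate at age x0 is  m(x0) = sum_x w(x - x0) D_x / sum_x w(x - x0) E_x.
import numpy as np

ages = np.arange(60, 96)
exposures = np.maximum(1.0, 4_000.0 * np.exp(-0.08 * (ages - 60)))   # toy data
true_m = 0.005 * np.exp(0.10 * (ages - 60))                          # Gompertz-like
rng = np.random.default_rng(4)
deaths = rng.poisson(exposures * true_m)

def graduate(x0, bandwidth=4.0):
    w = np.exp(-0.5 * ((ages - x0) / bandwidth) ** 2)   # Gaussian kernel weights
    return np.sum(w * deaths) / np.sum(w * exposures)   # local Poisson MLE

graduated = np.array([graduate(x) for x in ages])
crude = deaths / exposures
print("max |graduated - true|:", np.abs(graduated - true_m).max())
print("max |crude - true|    :", np.abs(crude - true_m).max())
```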
The daily time series of Flare Index (FI) data for the Northern Hemisphere, the Southern Hemisphere and the Total Disk, covering Solar Cycles 21–23 and Cycle 24 up to December 2014, have been pre-processed using a second-order exponential smoothing algorithm to remove orthogonal noise. The smoothed data in each case are then subjected to scaling analysis, using both Rescaled-Range Analysis and the Finite Variance Scaling Method, in order to estimate the Hurst exponent H. As the values of H obtained from our analysis lie between 0 and 1, the signal may behave like fractional Brownian motion. Moreover, H is less than 0.5, which indicates that the data are anti-persistent in nature, with a strong negative correlation within the signal. The value of H also points to oscillating features of the signal, which might reflect fundamental periodicities in the Sun's atmosphere.
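For reference, a bare-bones rescaled-range (R/S) estimate of the Hurst exponent can be computed as below (the second-order exponential pre-smoothing and the Finite Variance Scaling Method are omitted; the white-noise test series is synthetic):

```python
# Minimal sketch of rescaled-range (R/S) estimation of the Hurst exponent H:
# for each window size n, compute the mean R/S over windows and fit
# log(R/S) ~ H log(n).
import numpy as np

def hurst_rs(x, window_sizes):
    logs_n, logs_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())        # cumulative deviations
            r = dev.max() - dev.min()                # range
            s = seg.std()                            # standard deviation
            if s > 0:
                rs.append(r / s)
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs)))
    return np.polyfit(logs_n, logs_rs, 1)[0]         # slope = H

rng = np.random.default_rng(5)
white = rng.standard_normal(10_000)
print("H for white noise (expected ~0.5):",
      round(hurst_rs(white, [16, 32, 64, 128, 256]), 2))
```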
This paper studies optimal labor-income taxation in a simple model with credit constraints on firms. The labor-income tax rate and the shadow value on the credit constraint induce a wedge between the marginal product of labor and the marginal rate of substitution between labor and consumption. It is found that optimal policy prescribes a volatile path for the labor-income tax rate even in the presence of state-contingent debt and capital. In this respect, credit frictions are akin to a form of market incompleteness. Credit frictions break the equivalence between tax smoothing and wedge smoothing; therefore, as the tightness of the credit constraint varies over the business cycle, tax volatility is needed in order to counter this variation and, as a result, allow for wedge smoothing.
We consider a modification of the Poisson common factor model and utilise a generalised linear model (GLM) framework that incorporates a smoothing process and a set of linear constraints. We extend the standard GLM structure to adopt Lagrange methods and P-splines so that smoothing and constraints are applied simultaneously as the parameters are estimated. Our results on Australian, Canadian and Norwegian data show that this modification improves mortality projection by producing more accurate forecasts in out-of-sample testing. At the same time, the projected male-to-female ratio of death rates at each age converges to a constant, and the residuals of the models are sufficiently random, indicating that the use of smoothing does not adversely affect the fit of the model. Further, the irregular patterns in the estimates of the age-specific parameters are moderated by the smoothing, so the model can be used to produce more regular projected life tables for pricing purposes.
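The P-spline ingredient can be sketched in isolation: a penalized Poisson regression of deaths on age, with a B-spline basis and a second-order difference penalty, fitted by penalized IRLS (the linear constraints and the common-factor structure of the paper are not shown; the data and penalty weight are illustrative):

```python
# Minimal sketch (assumptions, not the paper's full model): penalized Poisson
# regression of deaths on age with a P-spline (B-spline basis plus a
# second-order difference penalty), fitted by penalized IRLS.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(6)
ages = np.arange(50, 91, dtype=float)
exposure = np.full(ages.size, 5_000.0)
true_rate = 0.004 * np.exp(0.09 * (ages - 50))
deaths = rng.poisson(exposure * true_rate)

k, n_basis = 3, 12
inner = np.linspace(ages.min(), ages.max(), n_basis - k + 1)
t = np.r_[[ages.min()] * k, inner, [ages.max()] * k]       # clamped knot vector
B = BSpline(t, np.eye(n_basis), k)(ages)                   # design matrix

D = np.diff(np.eye(n_basis), n=2, axis=0)                  # 2nd-order differences
P = 10.0 * D.T @ D                                         # smoothing penalty

beta = np.zeros(n_basis)
for _ in range(50):                                        # penalized IRLS
    eta = B @ beta + np.log(exposure)
    mu = np.exp(eta)
    z = eta - np.log(exposure) + (deaths - mu) / mu        # working response
    W = np.diag(mu)
    beta = np.linalg.solve(B.T @ W @ B + P, B.T @ W @ z)

fitted = np.exp(B @ beta)
print("max relative error:", np.max(np.abs(fitted / true_rate - 1)))
```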
A prevalent problem in general state-space models is the approximation of the smoothing distribution of a state conditional on the observations from the past, the present, and the future. The aim of this paper is to provide a rigorous analysis of such approximations of smoothed distributions provided by the two-filter algorithms. We extend the results available for the approximation of smoothing distributions to these two-filter approaches which combine a forward filter approximating the filtering distributions with a backward information filter approximating a quantity proportional to the posterior distribution of the state, given future observations.
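For a finite-state hidden Markov model the two-filter idea can be written down exactly, as in the sketch below: a normalized forward filter, a backward information filter proportional to p(y_{t+1:T} | x_t), and their pointwise product as the smoothing marginal (particle approximations replace these exact sums in the general state-space setting; the two-state model here is made up):

```python
# Minimal sketch of a two-filter smoother for a finite-state HMM: combine a
# forward filter for p(x_t | y_{1:t}) with a backward information filter for
# p(y_{t+1:T} | x_t) to obtain the marginal smoothing distribution.
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix
E = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission probabilities E[state, obs]
pi0 = np.array([0.5, 0.5])
y = [0, 0, 1, 1, 0]                       # observations
T = len(y)

# Forward filter (normalized at each step)
f = np.zeros((T, 2))
p = pi0 * E[:, y[0]]
f[0] = p / p.sum()
for t in range(1, T):
    p = (f[t - 1] @ A) * E[:, y[t]]
    f[t] = p / p.sum()

# Backward information filter: b[t, i] proportional to p(y_{t+1:T} | x_t = i)
b = np.ones((T, 2))
for t in range(T - 2, -1, -1):
    b[t] = A @ (E[:, y[t + 1]] * b[t + 1])
    b[t] /= b[t].sum()                    # normalize for numerical stability

# Combine the two filters into smoothing marginals
smooth = f * b
smooth /= smooth.sum(axis=1, keepdims=True)
print(smooth)
```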
We present an extension of vendor-managed inventory (VMI) problems that considers advertising and pricing policies. Unlike models available in the literature, the demand is assumed to depend on the retail price and on the advertising investment policies of the manufacturer and retailers, and is a random variable. The resulting optimization model for VMI supply chain management is thus a stochastic bi-level programming problem, where the manufacturer is the upper-level decision-maker and the retailers are the lower-level ones. Using the expectation method, we first convert the stochastic model into a deterministic mathematical program with complementarity constraints (MPCC). Then, using a partial smoothing technique, the MPCC is transformed into a series of standard smooth optimization subproblems. An algorithm based on gradient information is developed to solve the original model. A sensitivity analysis reveals the managerial implications of the model and algorithm: (1) the market parameters of the model have significant effects on the decision-making of the manufacturer and the retailers; (2) in the VMI mode, much attention should be paid to the holding and shortage costs in the decision-making.
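The smoothing step can be illustrated on a toy complementarity constraint (this uses a generic smoothing function, not necessarily the paper's): replace min(a, b) = 0 by the smooth Chen–Harker–Kanzow–Smale function and solve a sequence of smooth problems with a decreasing smoothing parameter:

```python
# Illustrative sketch of smoothing a complementarity constraint: replace
# min(a, b) = 0 by the smooth Chen-Harker-Kanzow-Smale function
#   phi_mu(a, b) = (a + b - sqrt((a - b)**2 + 4 mu**2)) / 2,
# which tends to min(a, b) as mu -> 0, and solve a sequence of smooth
# subproblems (the bi-level VMI model itself is not shown).
import numpy as np
from scipy.optimize import minimize

def phi(a, b, mu):
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4 * mu ** 2))

# Toy MPCC: minimize (x - 2)^2 + (y - 1)^2  s.t.  0 <= x  complementary to  y >= 0
x0 = np.array([0.5, 0.5])
for mu in [1.0, 0.1, 0.01, 1e-4]:
    res = minimize(
        lambda v: (v[0] - 2) ** 2 + (v[1] - 1) ** 2,
        x0,
        constraints=[{"type": "eq", "fun": lambda v, m=mu: phi(v[0], v[1], m)}],
    )
    x0 = res.x                             # warm-start the next subproblem

print("solution:", np.round(x0, 4))        # approaches (2, 0), where x * y = 0
```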
Insurance companies and pension funds must value liabilities using mortality rates that are appropriate for their portfolio. These can only be estimated in a reliable way from a sufficiently large historical dataset for such portfolios, which is often not available. We overcome this problem by introducing a model to estimate portfolio-specific mortality simultaneously with population mortality. By using a Bayesian framework, we automatically generate the appropriate weighting for the limited statistical information in a given portfolio and the more extensive information that is available for the whole population. This allows us to separate parameter uncertainty from uncertainty due to the randomness in individual deaths for a given realization of mortality rates. When we apply our method to a dataset of assured lives in England and Wales, we find that different prior specifications for the portfolio-specific factors lead to significantly different posterior distributions for hazard rates. However, in short-term predictive distributions for future numbers of deaths, individual mortality risk turns out to be more important than parameter uncertainty in the portfolio-specific factors, both for large and for small portfolios.
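A much-simplified conjugate version of this weighting (a standard Poisson–Gamma setup, not the paper's full Bayesian model) already shows the mechanism: a portfolio-specific multiplier of known population hazards gets a Gamma prior, and the posterior weights prior and data by the portfolio's expected death count. All numbers below are made up.

```python
# Minimal sketch (standard Poisson-Gamma, simpler than the paper's model):
# a portfolio-specific multiplier theta scales known population hazards;
# with a Gamma(a, b) prior, D observed deaths and E* deaths expected under
# population mortality, the posterior is Gamma(a + D, b + E*).
import numpy as np

a, b = 5.0, 5.0                  # prior: mean 1 (portfolio = population)
mu_pop = 0.012                   # population hazard for the relevant cells
exposure = 2_500.0               # portfolio exposure (person-years)
deaths = 22                      # observed portfolio deaths

e_star = exposure * mu_pop       # deaths expected under population mortality
a_post, b_post = a + deaths, b + e_star

post_mean = a_post / b_post
post_sd = np.sqrt(a_post) / b_post
print(f"posterior mean of theta: {post_mean:.3f} (sd {post_sd:.3f})")
# Small portfolios (small E*) keep the posterior close to the prior; large
# portfolios let the data dominate - the automatic weighting described above.
```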
Under adaptive learning, recursive algorithms are proposed to represent how agents update their beliefs over time. For applied purposes, these algorithms require initial estimates of agents' perceived law of motion. Obtaining appropriate initial estimates can become prohibitive within the usual data availability restrictions of macroeconomics. To circumvent this issue, we propose a new smoothing-based initialization routine that optimizes the use of a training sample of data to obtain initial estimates consistent with the statistical properties of the learning algorithm. Our method is generically formulated to cover different specifications of the learning mechanism, such as the least-squares and stochastic gradient algorithms. Using simulations, we show that our method is able to speed up the convergence of initial estimates in exchange for a higher computational cost.
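For concreteness, here is a minimal constant-gain recursive least-squares learning loop (an illustrative specification; the smoothing-based routine of the paper, which would supply the initial beta and R, is not shown):

```python
# Minimal sketch of constant-gain recursive least-squares learning for
# beliefs y_t = x_t' beta + noise; the initialization (beta_0, R_0) is
# exactly what a smoothing-based routine must supply.
import numpy as np

rng = np.random.default_rng(7)
T, gain = 500, 0.02
beta_true = np.array([0.5, 0.9])

beta = np.zeros(2)               # initial perceived law of motion
R = np.eye(2)                    # initial second-moment matrix
y_lag = 0.0
for t in range(T):
    x = np.array([1.0, y_lag])                   # regressors: constant, lag
    y = x @ beta_true + 0.1 * rng.standard_normal()
    R = R + gain * (np.outer(x, x) - R)          # update moment matrix
    beta = beta + gain * np.linalg.solve(R, x) * (y - x @ beta)
    y_lag = y

print("estimated beliefs:", np.round(beta, 2), "| true:", beta_true)
```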
We carry out an in-depth study of some domination and smoothing properties of linear operators and of their role within the theory of eventually positive operator semigroups. On the one hand, we prove that, on many important function spaces, they imply compactness properties. On the other hand, we show that these conditions can be omitted in a number of Perron–Frobenius type spectral theorems. We furthermore prove a Kreĭn–Rutman type theorem on the existence of positive eigenvectors and eigenfunctionals under certain eventual positivity conditions.
In this paper, by means of a new recursive algorithm for non-tensor-product-type divided differences, bivariate polynomial interpolation schemes are first constructed over nonrectangular meshes, which converts the problem into the study of scattered-data interpolation; the schemes differ according to whether the number of scattered data points is odd or even. Secondly, the corresponding error estimates are worked out, and an equivalence is obtained between high-order non-tensor-product-type divided differences and high-order partial derivatives, for odd and even numbers of interpolation nodes respectively. Thirdly, several numerical examples illustrate that the recursive algorithms are valid for the non-tensor-product-type interpolating polynomials, and reveal that these polynomials change with the ordering of the interpolation nodes even though the node set itself is invariant. Finally, in terms of computational complexity, the operation count for the bivariate polynomials presented here is smaller than that for radial basis functions.
Zero velocity update (ZUPT) is widely discussed for error restriction in land-vehicle Inertial Navigation Systems (INSs) and wearable pedestrian INSs, to overcome the unavailability of the Global Positioning System (GPS) in urban canyons or indoor scenarios. In this paper, an online smoothing method for ZUPT-aided INSs is presented. By introducing the Rauch–Tung–Striebel (RTS) smoothing method into the ZUPT-aided INS, position errors can be effectively restrained not only at stop points but throughout the whole trajectory. By integrating reverse navigation with a ZUPT smoother, the method achieves forward, real-time processing. Compared with existing approaches, it improves position accuracy in real time without any additional sensors, which makes it well suited to high-accuracy navigation in GPS-challenged environments. Accuracy tests with different Inertial Measurement Units (IMUs) show that the developed method can decrease position errors from hundreds or thousands of metres to below ten metres. Over the whole trajectory, the online smoothing method ensures that the maximum position errors at non-stop points reach the same level of accuracy as at stop points. A delay test proves that the delay of the proposed reverse online smoothing method is much shorter than that of existing online smoothing methods.
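The filter-plus-RTS structure can be sketched on a toy one-dimensional error-state model (not a full INS; the stop detector, noise levels and dynamics are all stand-ins): a Kalman filter applies zero-velocity pseudo-measurements at stops, and the backward RTS pass spreads those corrections over the whole trajectory.

```python
# Minimal sketch (toy 1D error-state model, not a full INS): Kalman filter
# with ZUPT pseudo-measurements (indicated velocity = velocity error at
# stops), followed by an RTS backward pass.
import numpy as np

rng = np.random.default_rng(8)
dt, T = 0.1, 300
F = np.array([[1.0, dt], [0.0, 1.0]])          # error state: [dpos, dvel]
Q = np.diag([0.0, 1e-4])                       # IMU noise drives velocity error
H = np.array([[0.0, 1.0]])                     # a ZUPT observes velocity error
Rz = np.array([[1e-4]])
stops = set(range(0, T, 50))                   # stand-in stop-detector output

# Simulate the true INS error process
truth = np.zeros((T, 2))
e = np.zeros(2)
for t in range(T):
    e = F @ e + np.array([0.0, 1e-2 * rng.standard_normal()])
    truth[t] = e

# Forward Kalman filter with ZUPT updates at stops
x, P = np.zeros(2), np.eye(2)
xf, Pf, xp, Pp = [], [], [], []
for t in range(T):
    x, P = F @ x, F @ P @ F.T + Q              # predict
    xp.append(x.copy()); Pp.append(P.copy())
    if t in stops:                             # ZUPT measurement update
        z = truth[t, 1] + 1e-2 * rng.standard_normal()
        K = P @ H.T / (H @ P @ H.T + Rz)
        x = x + (K * (z - x[1])).ravel()
        P = (np.eye(2) - K @ H) @ P
    xf.append(x.copy()); Pf.append(P.copy())

# RTS backward pass spreads the ZUPT corrections over the whole trajectory
xs = [None] * T
xs[-1] = xf[-1]
for t in range(T - 2, -1, -1):
    C = Pf[t] @ F.T @ np.linalg.inv(Pp[t + 1])
    xs[t] = xf[t] + C @ (xs[t + 1] - xp[t + 1])

err_f = np.mean([abs(xf[t][0] - truth[t, 0]) for t in range(T)])
err_s = np.mean([abs(xs[t][0] - truth[t, 0]) for t in range(T)])
print("mean |position error|, filtered:", err_f)
print("mean |position error|, smoothed:", err_s)
```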
In order to overcome the possible singularity associated with the Point Interpolation Method (PIM), the Radial Point Interpolation Method (RPIM) was proposed by G. R. Liu. Radial basis functions (RBFs) are used in the RPIM as basis functions for interpolation. All these radial basis functions include shape parameters, and the choice of these parameters has been, and remains, a problematic theme in RBF approximation and interpolation theory. The object of this study is to contribute to the analysis of how these shape parameters affect the accuracy of the RPIM. The RPIM is studied based on the global Galerkin weak form, performed using two integration techniques: classical Gaussian integration and the strain smoothing integration scheme. The numerical performance of the method is tested on curve fitting and on three elastic mechanics problems with regular or irregular node distributions. A range of recommended shape parameters is obtained from the analysis of different error indexes and of the condition number of the system matrix. All the resulting RPIM variants perform very well in terms of numerical computation, and the Smoothed Radial Point Interpolation Method (SRPIM) shows a higher accuracy, especially for distorted node schemes.
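The role of the shape parameter is easy to demonstrate in a plain global RBF interpolation setting (a simpler situation than the Galerkin RPIM of the study, but with the same qualitative trade-off between accuracy and conditioning; the test function and node counts are arbitrary):

```python
# Minimal sketch of how the multiquadric shape parameter c trades accuracy
# against conditioning in global RBF interpolation: moderate c improves
# accuracy, while large c makes the system matrix ill-conditioned.
import numpy as np

def mq(r, c):
    return np.sqrt(r**2 + c**2)        # multiquadric radial basis function

x = np.linspace(0.0, 1.0, 25)          # interpolation nodes
xt = np.linspace(0.0, 1.0, 201)        # evaluation points
f = lambda s: np.sin(2 * np.pi * s)

for c in [0.05, 0.5, 2.0, 8.0]:
    A = mq(np.abs(x[:, None] - x[None, :]), c)
    coeff = np.linalg.solve(A, f(x))
    interp = mq(np.abs(xt[:, None] - x[None, :]), c) @ coeff
    err = np.max(np.abs(interp - f(xt)))
    print(f"c = {c:5.2f}: max error = {err:.2e}, cond(A) = {np.linalg.cond(A):.2e}")
```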
Actuaries often encounter censored and masked survival data when constructing multiple-decrement tables. In this paper, we propose estimators for the cause-specific failure time density using LOESS smoothing techniques that are employed in the presence of left-censored data, while still allowing for right-censored and exact observations, as well as masked causes of failure. The smoothing mechanism is incorporated as part of an expectation-maximisation algorithm. The proposed models are applied to a bivariate African sleeping sickness data set.
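The LOESS building block on its own can be sketched as follows (the EM algorithm for masked causes and the censoring mechanics of the paper are not shown; the 'density' being smoothed is synthetic):

```python
# Minimal sketch of the LOESS ingredient in isolation: smoothing a noisy
# estimate of a failure-time density over time with a lowess smoother.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(9)
t = np.linspace(0.1, 10.0, 120)
true_density = t * np.exp(-t)                # Gamma(2, 1)-shaped density
noisy = np.maximum(0.0, true_density + 0.03 * rng.standard_normal(t.size))

smoothed = lowess(noisy, t, frac=0.3, return_sorted=False)
print("max |smoothed - true|:", np.max(np.abs(smoothed - true_density)))
print("max |noisy    - true|:", np.max(np.abs(noisy - true_density)))
```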