AHMADI, SEYED SAEED; GAILLARDETZ, PATRICE. Modeling mortality and pricing life annuities with Lévy processes. 337–350. We consider the pricing of annuity-due under stochastic force of mortality. Similarly to Renshaw et al. (1996) and Sithole et al. (2000) [Sithole, T Z; Haberman, S; Verrall, R J (2000), An investigation into parametric models for mortality projections, with applications to immediate annuitants’ and life office pensioners’ data, Insurance: Mathematics and Economics 27: 285–312], the force of mortality will be defined using an exponential function of Legendre polynomials. We extend the approach of Ballotta and Haberman (2006) [Ballotta, L; Haberman, S (2006), The fair valuation problem of guaranteed annuity options: The stochastic mortality environment case, Insurance: Mathematics and Economics 38: 195–214] by conditionally adding aa-stable Lévy subordinators in the force of mortality. In particular, we focus on the Gamma and Variance-Gamma processes in order to show how Lévy subordinators can capture mortality shocks. Generalized Linear Models is used to estimate coefficients of the explanatory variables and the Lévy process. For this purpose, the coefficients of the process are obtained by maximizing the log-likelihood function. We use the mortality data of males in Japan from 1998–2011 and the U.S. from 1965–2010 in order to compare our results with the model proposed by Renshaw et al. (1996) [Renshaw, A E; Haberman, S; Hatzoupoulos, P (1996), The modelling of recent mortality trends in United Kingdom male assured lives, British Actuarial Journal, 2(2): 449–477]. Some preferences are indicated based on Akaike’s information criterion, Bayesian information criterion, likelihood ratio test and Akaike weights to support the proposed model. We then use a cubic smoothing spline method to fit the interest rate curve and illustrate some over (under) estimations in the prices of annuities under the structure suggested by Renshaw et al. (1996) [op. cit.].
ALAI, DANIEL H; ZINOVIY, LANDSMAN; SHERRIS, MICHAEL. A multivariate Tweedie lifetime model: censoring and truncation. 203–213. We generalize model calibration for a multivariate Tweedie distribution to allow for censored observations; estimation is based on the method of moments. The multivariate Tweedie distribution we consider incorporates dependence in a pool of lives via a common stochastic component. Pools may be interpreted in various ways, from nation-wide cohorts to employer-based pension annuity portfolios. In general, the common stochastic component is representative of systematic longevity risk, which is not accounted for in standard life tables and actuarial models used for annuity pricing and reserving.
BARMALZAN, GHOBAD; NAJAFABADI, AMIR T PAYANDEH. On the convex transform and right-spread orders of smallest claim amounts. 380–384. Suppose X 1,…,X n is a set of Weibull random variables with shape parameter a>0, scale parameter i>0 for i=1,…,n and Ip1,…,Ipn are independent Bernoulli random variables, independent of the X i’s, with E(Ipi)=pi, i=1,…,ni=1,…,n. Let Yi=X iIpi, for i=1,…, n. In particular, in actuarial science, it corresponds to the claim amount in a portfolio of risks. In this paper, under certain conditions, we discuss stochastic comparison between the smallest claim amounts in the sense of the right-spread order. Moreover, while comparing these two smallest claim amounts, we show that the right-spread order and the increasing convex orders are equivalent. Finally, we obtain the results concerning the convex transform order between the smallest claim amounts and find a lower and upper bound for the coefficient of variation. The results established here extend some well-known results in the literature.
BOUCHER, JEAN-PHILIPPE; COUTURE-PICHÉ, GUILLAUME. Modeling the number of insureds’ cars using queuing theory. 67–76. In this paper, we propose to model the number of insured cars per household. We use queuing theory to construct a new model that needs 4 different parameters: one that describes the rate of addition of new cars on the insurance contract, a second one that models the rate of removal of insured vehicles, a third parameter that models the cancellation rate of the insurance policy, and finally a parameter that describes the rate of renewal. Statistical inference techniques allow us to estimate each parameter of the model, even in the case where there is censorship of data. We also propose to generalize this new queuing process by adding some explanatory variables into each parameter of the model. This allows us to determine which policyholder’s profiles are more likely to add or remove vehicles from their insurance policy, to cancel their contract or to renew annually. The estimated parameters help us to analyze the insurance portfolio in detail because the queuing theory model allows us to compute various kinds of useful statistics for insurers, such as the expected number of cars insured or the customer lifetime value that calculates the discounted future profits of an insured. Using car insurance data, a numerical illustration based on a portfolio from a Canadian insurance company is included to support this discussion.
BUTT, ADAM; KHEMKA, GAURAV. The effect of objective formulation on retirement decision making. 385–395. For a retiree who must maintain both investment and longevity risks, we consider the impact on decision making of focusing on an objective relating to the terminal wealth at retirement, instead of a more correct objective relating to a retirement income. Both a shortfall and a utility objective are considered; we argue that shortfall objectives may be inappropriate due to distortion in results with non-monotonically correlated economic factors. The modelling undertaken uses a dynamic programming approach in conjunction with Monte-Carlo simulations of future experience of an individual to make optimal choices. We find that the type of objective targetted can have a significant impact on the optimal choices made, with optimal equity allocations being up to 30% higher and contribution amounts also being significantly higher under a retirement income objective as compared to a terminal wealth objective. The result of these differences can have a significant impact on retirement outcomes.
CHEUNG, KA CHUN; CHONG, WING FUNG; YAM, SHEUNG CHI PHILLIP. Convex ordering for insurance preferences. 409–416. In this article, we study two broad classes of convex order related optimal insurance decision problems, in which the objective function or the premium valuation is a general functional of the expectation, Value-at-Risk and Average Value-at-Risk of the loss variables. These two classes of problems include many existing and contemporary optimal insurance problems as interesting examples being prevalent in the literature. To solve these problems, we apply the Karlin-Novikoff-Stoyan-Taylor multiple-crossing conditions, which is a useful sufficient criterion in the theory of convex ordering, to replace an arbitrary insurance indemnity by a more favorable one in convex order sense. The convex ordering established provides a unifying approach to solve the special cases of the problem classes. We show that the optimal indemnities for these problems in general take the double layer form.
CHEUNG, KA CHUN; CHONG, WING FUNG; YAM, SHEUNG CHI PHILLIP. The optimal insurance under disappointment theories. 77–90. In his celebrated work, Arrow (1974) [Arrow, K J (1974), Optimal insurance and generalized deductibles, Scandinavian Actuarial Journal 1: 1–42] was first to discover the optimality of deductible insurance under Expected Utility Theory; recently, Kaluszka and Okolewski (2008) [Kaluszka, M; Okolewski, A (2008), An extension of Arrow’s result on optimal reinsurance contract, Journal of Risk and Insurance 75: 275–288] extended Arrow’s result by generalizing the premium constraint as a convex combination of the expected value and the supremum of an insurance indemnity, with single layer insurance as the optimal solution. Nevertheless, the Expected Utility Theory has constantly been criticized for its failure in capturing the actual human decision making, and its shortcoming motivates the recent development of behavioral economics and finance, such as the Disappointment Theory; this theory was first developed by (1) Bell (1985) [Bell, D E (1985), Disappointment in decision making under uncertainty, Operational Research 33: 1–27], and Loomes and Sugden (1986) [Loomes, G; Sugden, R (1986), Disappointment and dynamic consistency in choice under uncertainty, Review of Economic Studies 53: 271–282], that can successfully explain the Allais Paradox. Their theory was later enhanced to the (2) Disappointment Aversion Theory by Gul (1991) [Gul, F (1991), A theory of disappointment aversion, Econometrica 59: 667–686], and then (3) Disappointment Theory without prior expectation by Cillo and Delquié (2006) [Cillo, A; Delquié, P (2006), Disappointment without prior expectation: a unifying perspective on decision under risk, Journal of Risk and Uncertainty 33: 197–215]. In our present paper, we extend the problem studied by Kaluszka and Okolewski (2008) [op. cit.] over the three mentioned disappointment models, while the solutions are still absent in the literature. 
We also conclude with the uniform optimality of the class of single layer indemnities in all these models.
COSSETTE, HÉLÈNE; MARCEAU, ÉTIENNE; PERREAULT, SAMUEL. On two families of bivariate distributions with exponential marginals: aggregation and capital allocation. 214–224. In this paper, we consider two main families of bivariate distributions with exponential marginals for a couple of random variables (X1,X2). More specifically, we derive closed-form expressions for the distribution of the sum S=X1+X2S, the TVaR of SS and the contributions of each risk under the TVaR-based allocation rule. The first family considered is a subset of the class of bivariate combinations of exponentials, more precisely, bivariate combinations of exponentials with exponential marginals. We show that several well-known bivariate exponential distributions are special cases of this family. The second family we investigate is a subset of the class of bivariate mixed Erlang distributions, namely bivariate mixed Erlang distributions with exponential marginals. For this second class of distributions, we propose a method based on the compound geometric representation of the exponential distribution to construct bivariate mixed Erlang distributions with exponential marginals. Notably, we show that this method not only leads to Moran-Downton’s bivariate exponential distribution, but also to a generalization of this bivariate distribution. Moreover, we also propose a method to construct bivariate mixed Erlang distributions with exponential marginals from any absolutely continuous bivariate distributions with exponential marginals. Inspired from Lee and Lin (2012) [Lee, S C K; Lin, X S (2012), Modeling dependent risks with multivariate Erlang mixtures, ASTIN Bulletin 42(1): 153–180], we show that the resulting bivariate distribution approximates the initial bivariate distribution and we highlight the advantages of such an approximation.
DAI, TIAN-SHYR; YANG, SHARON S; LIU, LIANG-CHIH. Pricing guaranteed minimum/lifetime withdrawal benefits with various provisions under investment, interest rate and mortality risks. 364–379. Many variable annuity products associated with guaranteed minimum withdrawal benefit (GMWB) or its lifelong version, a guaranteed lifelong withdrawal benefit (GLWB), have enjoyed great market success in the United States and Asia. The interaction impacts among complex policy provisions and the randomness of the account value of the policy, the prevailing interest rate, as well as the mortality rate may significantly influence the evaluations of GMWBs/GMLBs, especially when the guaranteed payments are made over a long, or even a lifelong, horizon. To deal with aforementioned risk factors and policy provisions, this paper proposes a novel three-dimensional (3D) tree that can analyze how different policy provisions influence the evaluation of GMWB/GLWBs under investment interest rate, and mortality risks simultaneously. The orthogonalization method is used to convert correlated dynamics of the account value of the policy and the short-term interest rate into two independent processes that can be easily simulated by our 3D tree. Besides, the structure of our 3D tree is sophisticatedly designed to avoid the unstable (oscillating) pricing results phenomenon that has characterized many numerical pricing methods. Rigorous numerical experiments are given to analyze the interaction effects among policy provisions and the aforementioned risk factors on the evaluation of GMWBs/GLWBs.
DENUIT, MICHEL; TRUFIN, JULIEN. Model points and Tail-VaR in life insurance. 268–272. Life insurance models are becoming more and more sophisticated under Solvency 2 regulation. European insurance companies are required to base their cash-flow projection on a policy-by-policy approach on the one hand, and to demonstrate the compliance of their internal model by carrying out additional testing on the other hand (see EIOPA, 2010) [EIOPA. QIS5 Technical Specification. European commission]. In particular, one of the validation tools recommended by the regulator is sensitivity testing, which consists in estimating the impact on the model outcomes of various changes in the underlying risk factors. Next to the baseline runs, insurers are then invited to conduct sensitivity analyses. Usually, all those studies need to be performed within tight deadlines. However, the use of Monte-Carlo simulations based on a policy-by-policy approach often leads to large running times (up to several days for the entire portfolio with the currently available computing power). Saving time when running the models thus appears to be an issue of major importance in life insurance.
DONNELLY, CATHERINE; GERRARD, RUSSELL; GUILLÉN, MONTSERRAT; NIELSEN, JENS PERCH. Less is more: increasing retirement gains by using an upside terminal wealth constraint. 259–267. We solve a portfolio selection problem of an investor with a deterministic savings plan who aims to have a target wealth value at retirement. The investor is an expected power utility-maximizer. The target wealth value is the maximum wealth that the investor can have at retirement. By constraining the investor to have no more than the target wealth at retirement, we find that the lower quantiles of the terminal wealth distribution increase, so the risk of poor financial outcomes is reduced. The drawback of the optimal strategy is that the possibility of gains above the target wealth is eliminated.
GERBER, HANS U; SHIU, ELIAS S W; YANG, HAILIANG. Geometric stopping of a random walk and its applications to valuing equity-linked death benefits. 313–325. We study discrete-time models in which death benefits can depend on a stock price index, the logarithm of which is modeled as a random walk. Examples of such benefit payments include put and call options, barrier options, and lookback options. Because the distribution of the curtate-future-lifetime can be approximated by a linear combination of geometric distributions, it suffices to consider curtate-future-lifetimes with a geometric distribution. In binomial and trinomial tree models, closed-form expressions for the expectations of the discounted benefit payment are obtained for a series of options. They are based on results concerning geometric stopping of a random walk, in particular also on a version of the Wiener-Hopf factorization.
GOMES-GONÇALVES, ERIKA; GZYL, HENRYK; MAYORAL, SILVIA. Maxentropic approach to decompound aggregate risk losses. 326–336. A risk manager may be faced with the following problem: she/he has obtained loss data collected during a year, but the data only contains the total number of events and the total loss for that year. She/he suspects that there are different sources of risk, each occurring with a different frequency, and wants to identify the frequency with which each type of event occurs and if possible, the individual losses at each risk event. The purpose of this methodological note is to examine a combination of disentangling and decompounding procedures, to get as close as possible to that goal. The disentangling procedure is actually a two step process: First, a preliminary analysis is carried out to determine the number of risks groups present. Once that is decided, the underlying model for the frequency of each type of risk is worked out. After that we use the maxentropic techniques in the decompounding stage to determine the distribution of individual losses that aggregated yield the observed total loss.
HÄRDLE, WOLFGANG KARL; LÓPEZ CABRERA, BRENDA; TENG, HUEI-WEN. State price densities implied from weather derivatives. 106–125. A State Price Density (SPD) is the density function of a risk neutral equivalent martingale measure for option pricing, and is indispensable for exotic option pricing and portfolio risk management. Many approaches have been proposed in the last two decades to calibrate a SPD using financial options from the bond and equity markets. Among these, non and semiparametric methods were preferred because they can avoid model mis-specification of the underlying. However, these methods usually require a large data set to achieve desired convergence properties. One faces the problem in estimation by e.g., kernel techniques that there are not enough observations locally available. For this situation, we employ a Bayesian quadrature method because it allows us to incorporate prior assumptions on the model parameters and hence avoids problems with data sparsity. It is able to compute the SPD of both call and put options simultaneously, and is particularly robust when the market faces the data sparsity issue. As illustration, we calibrate the SPD for weather derivatives, a classical example of incomplete markets with financial contracts payoffs linked to non-tradable assets, namely, weather indices. Finally, we study related weather derivatives data and the dynamics of the implied SPDs.
HATZOPOULOS, PETER; HABERMAN, STEVEN. Modeling trends in cohort survival probabilities. 162–179. A new dynamic parametric model is proposed for analyzing the cohort survival function. A one-factor parameterized polynomial in age effects, complementary log–log link and multinomial cohort responses are utilized, within the generalized linear models (GLM) framework. Sparse Principal component analysis (SPCA) is then applied to cohort dependent parameter estimates and provides (marginal) estimates for a two-factor structure. Modeling the two-factor residuals in a similar way, in age–time effects, provides estimates for the three-factor age-cohort-period model. An application is presented for Sweden, Norway, England & Wales and Denmark mortality experience.
HUANG, JINLONG; QIU, CHUNJUAN; WU, XIANYI; ZHOU, XIAN. An individual loss reserving model with independent reporting and settlement. 232–245. The main purpose of this paper is to assess and demonstrate the advantage of claims reserving models based on individual data in forecasting future liabilities over traditional models on aggregate data both theoretically and numerically. The available information consists of the reporting delays, settlement delays and claim payments. The model settings include Poisson distributed frequency of claims produced by each policy, claims payable at the settlement time, and the amount of payment depending only on its settlement delay. While such settings are applicable to certain but not all practical cases, the principal purpose of the paper is to examine the efficiency of individual data against aggregate data. We refer to loss reserving as to estimate the projections of the outstanding liabilities on observed information. The efficiency of the individual loss reserving against classical aggregate loss reservings, namely Chain-Ladder (C-L) and Bornhuetter-Ferguson (B-F), is assessed by comparing the asymptotic variances of the errors in estimating the conditional expectation (projection) of the outstanding liability between individual, C-L and B-F reservings. The research shows a significant increase in the accuracy of loss reserving by using individual data compared with aggregate data.
HUNT, ANDREW; VILLEGAS, ANDRÉS M. Robustness and convergence in the Lee-Carter model with cohort effects. 186–202. Interest in cohort effects in mortality data has increased dramatically in recent years, with much of the research focused on extensions of the Lee-Carter model incorporating cohort parameters. However, some studies find that these models are not robust to changes in the data or fitting algorithm, which limits their suitability for many purposes. It has been suggested that these robustness problems may be the result of an unresolved identifiability issue. In this paper, after investigating systemically the robustness of cohort extensions of the Lee-Carter model and the convergence of the algorithms used to fit it to data, we demonstrate the existence of such an identifiability issue and propose an additional approximate identifiability constraint which solves many of the problems found.
IKEFUJI, MASAKO; LAEVEN, ROGER J A; MAGNUS, JAN R; MURIS, CHRIS. Expected utility and catastrophic consumption risk. 306–312. An expected utility based cost-benefit analysis is, in general, fragile to distributional assumptions. We derive necessary and sufficient conditions on the utility function of consumption in the expected utility model to avoid this. The conditions ensure that expected (marginal) utility of consumption and the expected intertemporal marginal rate of substitution that trades off consumption and self-insurance remain finite, also under heavy-tailed distributional assumptions. Our results are relevant to various fields encountering catastrophic consumption risk in cost-benefit analysis.
IVANOVS, JEVGENIJS; BOXMA, ONNO J. A bivariate risk model with mutual deficit coverage. 126–134. We consider a bivariate Cramér-Lundberg-type risk reserve process with the special feature that each insurance company agrees to cover the deficit of the other. It is assumed that the capital transfers between the companies are instantaneous and incur a certain proportional cost, and that ruin occurs when neither company can cover the deficit of the other. We study the survival probability as a function of initial capitals and express its bivariate transform through two univariate boundary transforms, where one of the initial capitals is fixed at 0. We identify these boundary transforms in the case when claims arriving at each company form two independent processes. The expressions are in terms of Wiener-Hopf factors associated to two auxiliary compound Poisson processes. The case of non-mutual agreement is also considered. The proposed model shares some features of a contingent surplus note instrument and may be of interest in the context of crisis management.
JANG, JI-WOOK; RAMLI, SITI NORAFIDAH MOHD. Jump diffusion transition intensities in life insurance and disability annuity. 440–451. We study the effects of jump diffusion transition intensities on a life insurance and disability annuity. To do so, we use a multi-states Markov chain with multiple decrement. Assuming independent statewise intensities, we evaluate the prospective reserve for this scheme where the insured life is in Active or Disabled state at inception, respectively. We also examine the components of the prospective reserves by changing the relevant parameters of the transition intensities, which are the jump size, the average frequency of jumps as well as the diffusion parameters, assuming deterministic rate of interest. The computation of the reserve sensitivity with their figures are provided.
JIANG, TAO; WANG, YUEBAO; CHEN, YANG; XU, HUI. Uniform asymptotic estimate for finite-time ruin probabilities of a time-dependent bidimensional renewal model. 45–53. This paper studies a bidimensional renewal risk model with constant force of interest and subexponentially distributed claim size vector. Some uniform asymptotic estimates for finite-time ruin probabilities are established when the claim size vector and its inter-arrival time are subject to certain general dependence structure.
JIN, ZHUO; YANG, HAILIANG; YIN, G. Optimal debt ratio and dividend payment strategies with reinsurance. 351–363. This paper derives the optimal debt ratio and dividend payment strategies for an insurance company. Taking into account the impact of reinsurance policies and claims from the credit derivatives, the surplus process is stochastic that is jointly determined by the reinsurance strategies, debt levels, and unanticipated shocks. The objective is to maximize the total expected discounted utility of dividend payment until financial ruin. Using dynamic programming principle, the value function is the solution of a second-order nonlinear Hamilton-Jacobi-Bellman equation. The subsolution-supersolution method is used to verify the existence of classical solutions of the Hamilton-Jacobi-Bellman equation. The explicit solution of the value function is derived and the corresponding optimal debt ratio and dividend payment strategies are obtained in some special cases. An example is provided to illustrate the methodologies and some interesting economic insights.
LI, DANPING; RONG, XIMIN; ZHAO, HUI. Time-consistent reinsurance-investment strategy for a mean-variance insurer under stochastic interest rate model and inflation risk. 28–44. In this paper, we consider the time-consistent reinsurance–investment strategy under the mean–variance criterion for an insurer whose surplus process is described by a Brownian motion with drift. The insurer can transfer part of the risk to a reinsurer via proportional reinsurance or acquire new business. Moreover, stochastic interest rate and inflation risks are taken into account. To reduce the two kinds of risks, not only a risk-free asset and a risky asset, but also a zero-coupon bond and Treasury Inflation Protected Securities (TIPS) are available to invest in for the insurer. Applying stochastic control theory, we provide and prove a verification theorem and establish the corresponding extended Hamilton–Jacobi–Bellman (HJB) equation. By solving the extended HJB equation, we derive the time-consistent reinsurance–investment strategy as well as the corresponding value function for the mean–variance problem, explicitly. Furthermore, we formulate a precommitment mean–variance problem and obtain the corresponding time-inconsistent strategy to compare with the time-consistent strategy. Finally, numerical simulations are presented to illustrate the effects of model parameters on the time-consistent strategy.
LIANG, ZONGXIA; MA, MING. Optimal dynamic asset allocation of pension fund in mortality and salary risks framework. 151–161. In this paper, we consider the optimal dynamic asset allocation of pension fund with mortality risk and salary risk. The managers of the pension fund try to find the optimal investment policy (optimal asset allocation) to maximize the expected utility of terminal wealth. The market is a combination of financial market and insurance market. The financial market consists of three assets: cashes with stochastic interest rate, stocks and rolling bonds, while the insurance market consists of mortality risk and salary risk. These two non-hedging risks cause incompleteness of the market. By martingale method and dynamic programming principle we first derive the approximate optimal investment policy to overcome the difficulty, then investigate the efficiency of the approximation. Finally, we solve an optimal assets liabilities management(ALM) problem with mortality risk and salary risk under CRRA utility, and reveal the influence of these two risks on the optimal investment policy by numerical illustration.
LIU, AIAI; HOU, YANXI; PENG, LIANG. Interval estimation for a measure of tail dependence. 294–305. Systemic risk concerns extreme co-movement of several financial variables, which involves characterizing tail dependence. The coefficient of tail dependence was proposed by Ledford and Tawn (1996) [Ledford, A W; Tawn, J (1996), Statistics for near independence in multivariate extreme values, Biometrika, 83: 169–187] and Ledford and Tawn (1997) [A.W. Ledford, J. Tawn (1997), Modelling dependence within joint tail regions, Journal of the Royal Statistical Society: Series B: Statistical Methodology 59: 475–499] to distinguish asymptotic independence and asymptotic dependence. Recently a new measure based on the conditional Kendall’s tau was proposed by Asimit et al. (2015) [Asimit, A; Gerrard, R; Hou, Y; Peng, L (2015), Tail dependence measure for modeling financial extreme co-movements, Journal of Econometrics (2015) to appear] to measure the tail dependence and to distinguish asymptotic independence and asymptotic dependence. For effectively constructing a confidence interval for this new measure, this paper proposes a smooth jackknife empirical likelihood method, which does not need to estimate any additional quantities such as asymptotic variance. A simulation study shows that the proposed method has a good finite sample performance.
LIU, YANXIN; LI, JOHNNY SIU-HANG. The age pattern of transitory mortality jumps and its impact on the pricing of catastrophic mortality bonds. 135–150. To value catastrophic mortality bonds, a number of stochastic mortality models with transitory jump effects have been proposed. Rather than modeling the age pattern of jump effects explicitly, most of the existing models assume that the distributions of jump effects and general mortality improvements across ages are identical. Nevertheless, this assumption does not seem to be in line with what we observe from historical data. In this paper, we address this problem by introducing a Lee-Carter variant that captures the age pattern of mortality jumps by a distinct collection of parameters. The model variant is then further generalized to permit the age pattern of jump effects to vary randomly. We illustrate the two proposed models with mortality data from the United States and English and Welsh populations, and use them to value hypothetical mortality bonds with similar specifications to the Atlas IX Capital Class B note that was launched in 2013. It is found that the features we consider have a significant impact on the estimated prices.
LIU, YING; LI, XIAOZHONG; LU, YINLI. The bounds of premium and optimality of stop loss insurance under uncertain random environments. 273–278. The potential loss of insured can be affected by many nondeterministic factors, in which uncertainty always coexists with randomness. Therefore, uncertain random variables are used to describe this phenomenon of simultaneous appearance of both uncertainty and randomness in potential loss. Based on that, the upper and lower bounds of premium with uncertain random loss are given, respectively. Moreover, a mathematical model of uncertain random optimal insurance problem is established and the stop loss insurance is proved to be the optimal insurance policy and the equation for calculating the optimal deductible is arrived. Some numerical examples are also given for illustration.
MAO, TIANTIAN; YANG, FAN. Risk concentration based on Expectiles for extreme risks under FGM copula. 429–439. Risk concentration is used as a measurement of diversification benefits in the context of risk aggregation. Expectiles, which are known to possess many good properties, have attracted increasing interest in recent years. In this paper, we aim to study the asymptotic properties of risk concentration based on Expectiles. Firstly, we extend the results on the second-order asymptotics of Expectiles in Mao et al. (2015) [T. Mao, T; Ng, K; Hu, T (2015), Asymptotics of generalized quantiles and Expectiles for extreme risks, Probability in the Engineering and Informational Sciences 29(3): 309–327]. Secondly, we investigate the second-order asymptotics of tail probabilities and then apply them to risk concentrations based on Expectiles as well as on VaR.
MILEVSKY, MOSHE ARYE; SALISBURY, THOMAS S. Optimal retirement income tontines. 91–105. Tontines were once a popular type of mortality-linked investment pool. They promised enormous rewards to the last survivors at the expense of those who died early. While this design appealed to the gambling instinct, it is a suboptimal way to generate retirement income. Indeed, actuarially fair life annuities making constant payments – where the insurance company is exposed to longevity risk – induce greater lifetime utility. However, tontines do not have to be structured the historical way, i.e. with a constant cash flow shared amongst a shrinking group of survivors. Moreover, insurance companies do not sell actuarially-fair life annuities, in part due to aggregate longevity risk. We derive the tontine structure that maximizes lifetime utility. Technically speaking we solve the Euler–Lagrange equation and examine its sensitivity to (i) the size of the tontine pool n, and (ii) individual longevity risk aversion γ. We examine how the optimal tontine varies with γ and n, and prove some qualitative theorems about the optimal payout. Interestingly, Lorenzo de Tonti’s original structure is optimal in the limit as longevity risk aversion γ → ∞. We define the natural tontine as the function for which the payout declines in exact proportion to the survival probabilities, which we show is near-optimal for all γ and n. We conclude by comparing the utility of optimal tontines to the utility of loaded life annuities under reasonable demographic and economic conditions and find that the life annuity’s advantage over the optimal tontine is minimal. In sum, this paper’s contribution is to (i) rekindle a discussion about a retirement income product that has been long neglected, and (ii) leverage economic theory as well as tools from mathematical finance to design the next generation of tontine annuities.
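The "natural tontine" idea is easy to see in simulation: if the pool's total payout declines in proportion to the survival probability, each survivor's realized share stays roughly level. A sketch under a hypothetical Gompertz-type survival curve (not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, horizon = 400, 40
t = np.arange(horizon + 1)
# Hypothetical Gompertz-style survival probabilities from retirement
p = np.exp(-0.002 * (np.exp(0.09 * t) - 1) / 0.09)

# Simulate each member's survival year by year via conditional probabilities
alive = (rng.random((n, horizon)) < (p[1:] / p[:-1])).cumprod(axis=1).sum(axis=0)

# Natural tontine: total payout n * p(t), shared among the actual survivors,
# so the per-survivor payout hovers near the constant 1
per_survivor = n * p[1:] / np.maximum(alive, 1)
```

By contrast, the historical winner-take-all flavour corresponds to a flat total payout shared by ever fewer survivors, so the per-survivor payout explodes at advanced ages.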
PITSELIS, GEORGIOS; GRIGORIADOU, VASILIKI; BADOUNAS, IOANNIS. Robust loss reserving in a log-linear model. 14–27. It is well known that the presence of outlier events can lead to over- or underestimation of the overall reserve when using the chain-ladder method. This lack of robustness of loss reserving estimators motivates the present paper. Outlier events (including large claims and catastrophic events) can distort the result of the ordinary chain-ladder technique and perturb the reserve estimation. Our proposal is to apply robust statistical procedures, which are insensitive to the occurrence of outlier events in the data, to loss reserving estimation. This paper applies robust log-linear and ANOVA models to the analysis of loss reserving, using different types of robust estimators, such as LAD-estimators, M-estimators, LMS-estimators, LTS-estimators, MM-estimators (with initial S-estimate) and Adaptive-estimators. Comparisons of these estimators are also presented, with an application to a well-known data set.
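The flavour of a robust fit can be sketched with a Huber M-estimator computed by iteratively reweighted least squares, where outlying cells are down-weighted instead of distorting the fit. The design matrix, planted outliers and tuning constant below are illustrative, not the paper's reserving triangles:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = np.column_stack([np.ones(n), rng.random(n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, n)
y[:5] += 10.0                                 # planted outliers

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from the OLS fit
c = 1.345 * 0.1                               # Huber constant times (known) scale
for _ in range(50):
    r = y - X @ beta
    # Huber weights: 1 for small residuals, c/|r| for large ones
    w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```

Each outlier's influence on the estimating equations is bounded by c, so a handful of catastrophic cells cannot drag the fitted reserve arbitrarily far, unlike under ordinary least squares.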
SCOTT, ALEXANDRE; METZLER, ADAM. A general importance sampling algorithm for estimating portfolio loss probabilities in linear factor models. 279–293. This paper develops a novel importance sampling algorithm for estimating the probability of large portfolio losses in the conditional independence framework. We apply exponential tilts to (i) the distribution of the natural sufficient statistics of the systematic risk factors and (ii) conditional default probabilities, given the simulated values of the systematic risk factors, and select parameter values by minimizing the Kullback–Leibler divergence of the resulting parametric family from the ideal (zero-variance) importance density. Optimal parameter values are shown to satisfy intuitive moment-matching conditions, and the asymptotic behaviour of large portfolios is used to approximate the requisite moments. In a sense we generalize the algorithm of Glasserman and Li (2005) [Glasserman, P; Li, J (2005), Importance sampling for portfolio credit risk, Management Science 51(11): 1643–1656] so that it can be applied in a wider variety of models. We show how to implement our algorithm in the t copula model and compare its performance there to the algorithm developed by Chan and Kroese (2010) [Chan, J C C; Kroese, D P (2010), Efficient estimation of large portfolio loss probabilities in t-copula models, European Journal of Operational Research 205(2): 361–367]. We find that our algorithm requires substantially less computational time (especially for large portfolios) but is slightly less accurate. Our algorithm can also be used to estimate more general risk measures, such as conditional tail expectations, whereas Chan and Kroese (2010) [op. cit.] is specifically designed to estimate loss probabilities.
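The conditional-tilting half of such a scheme can be sketched in a one-factor Gaussian copula model. Here only the conditional default probabilities are tilted (the paper also tilts the systematic factor), with unit exposures, a moment-matching tilt, and parameter values chosen purely for illustration:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
paths, n = 2_000, 100
rho, target = 0.3, 20               # factor loading^2, loss level of interest
thresh = -2.3263                    # approx. standard normal 1% quantile (PD = 1%)

def phi(u):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

qt = target / n                     # tilted per-name default probability
est = np.zeros(paths)
for k in range(paths):
    z = rng.standard_normal()       # systematic factor (left untilted here)
    p = max(phi((thresh - math.sqrt(rho) * z) / math.sqrt(1.0 - rho)), 1e-12)
    if p >= qt:
        theta = 0.0                 # already in the loss region: no tilt needed
    else:
        # moment matching: choose theta so the tilted mean loss equals target,
        # i.e. q = p e^theta / (1 + p (e^theta - 1)) = qt, solved in closed form
        theta = math.log(qt * (1.0 - p) / (p * (1.0 - qt)))
    q = p * math.exp(theta) / (1.0 + p * (math.exp(theta) - 1.0))
    L = int((rng.random(n) < q).sum())
    # likelihood ratio of the tilted Bernoulli sample
    lr = math.exp(-theta * L) * (1.0 + p * (math.exp(theta) - 1.0)) ** n
    est[k] = lr * (L >= target)
```

Under the tilt roughly half the paths reach the loss level, so the estimator averages many small likelihood-ratio-weighted contributions instead of waiting for rare untilted hits.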
SHI, PENG; FENG, TIANJUN; IVANTSOVA, ANASTASIA. Dependent frequency-severity modeling of insurance claims. 417–428. Standard ratemaking techniques in non-life insurance assume independence between the number and size of claims. Relaxing the independence assumption, this article explores methods that allow for correlation between the frequency and severity components of micro-level insurance data. To introduce granular dependence, we rely on a hurdle modeling framework where the hurdle component concerns the occurrence of claims and the conditional component looks into the number and size of claims given occurrence. We propose two strategies to correlate the number of claims and the average claim size in the conditional component. The first is based on conditional probability decomposition and treats the number of claims as a covariate in the regression model for the average claim size; the second employs a mixed copula approach to formulate the joint distribution of the number and size of claims. We perform a simulation study to evaluate the performance of the two approaches and then demonstrate their application using a U.S. auto insurance dataset. The hold-out sample validation shows that the proposed model is superior to industry benchmarks, including the Tweedie and the two-part generalized linear models.
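The first (conditional probability decomposition) strategy is easy to illustrate: given occurrence, draw the claim count, then let the average claim size regress on the count. The direction and size of the count effect below are invented for illustration, not estimated from the paper's data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
# Claim counts given occurrence (shifted so N >= 1)
N = 1 + rng.poisson(0.7, n)
# Average claim size with the count as a covariate in the (log-)mean;
# a negative coefficient induces negative frequency-severity dependence
avg_sev = rng.lognormal(mean=5.0 - 0.2 * N, sigma=0.5)
```

The mixed copula strategy instead couples the marginal count and severity models through a joint distribution; either way, out-of-sample validation is what discriminates between the dependence structures.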
STEINORTH, PETRA; MITCHELL, OLIVIA S. Valuing variable annuities with guaranteed minimum lifetime withdrawal benefits. 246–258. Variable annuities with guaranteed minimum lifetime withdrawal benefits (VA/GLWB) offer retirees longevity protection, exposure to equity markets, and access to flexible withdrawals in emergencies. We model how risk-averse retirees optimally withdraw from the products, balancing returns and the embedded longevity protection. Assuming reasonable individual preferences, the resulting cash flow generates a Money’s Worth Ratio (MWR) of around 0.9 for our stylized post-crisis VA/GLWB product, considerably lower than what was offered prior to 2008. Sensitivity analyses with respect to portfolio choice, mortality, fees, and guaranteed withdrawal rates show that MWRs range from 0.80 to 1.0, with the portfolio choice making the biggest difference. For most parameter choices, the utility value of the VA/GLWB exceeds that of a similar mutual fund, but it is less than for a fixed annuity. Interestingly, VA/GLWB withdrawals in early retirement can optimally exceed contract maximum withdrawals, despite the fact that this reduces future withdrawal guarantees.
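The Money's Worth Ratio itself is just the expected present value of the payouts per premium dollar. A stripped-down sketch for a level life annuity with a flat rate and a hypothetical Gompertz survival curve (the paper's computation is for the far richer VA/GLWB cash flow):

```python
import numpy as np

premium, payout, r = 100.0, 6.0, 0.03       # illustrative contract terms
t = np.arange(1, 61)                        # years from retirement
# Hypothetical Gompertz survival probabilities
surv = np.exp(-0.0005 * (np.exp(0.1 * t) - 1) / 0.1)
epv = np.sum(payout * surv / (1 + r) ** t)  # expected present value of payouts
mwr = epv / premium                         # Money's Worth Ratio
```

An MWR below 1 means the buyer expects to receive less in present-value terms than the premium paid, the gap covering loadings, guarantees and profit.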
WANG, HONGXIA; WANG, JIANLI; LI, JINGYUAN; XIA, XINPING. Precautionary paying for stochastic improvements under background risks. 180–185. In a two-dimensional framework, we propose a general two-period decision model which extends the temporal precautionary saving and effort model. We relate the role of cross-prudence to the impact of background risks on paying for stochastic improvements of the future risk. We find that the effect of background risks introduced in the first period is consistent with the signs of the cross derivatives of bivariate utility functions, and is independent of the type of stochastic improvement brought by the additional payment; however, when the background risk occurs in the second period, this is not the case.
WU, HUILING; ZENG, YAN. Equilibrium investment strategy for defined-contribution pension schemes with generalized mean-variance criterion and mortality risk. 396–408. This paper studies a generalized multi-period mean-variance portfolio selection problem within the game theoretic framework for a defined-contribution pension scheme member. The member is assumed to have a stochastic salary flow and a stochastic mortality rate, and is allowed to invest in a financial market with one risk-free asset and one risky asset. The explicit expressions for the equilibrium investment strategy and equilibrium value function are obtained by backward induction. In addition, some sensitivity analysis and numerical illustrations are provided to show the effects of mortality risk on our results.
WU, YANG-CHE. Reexamining the feasibility of diversification and transfer instruments on smoothing catastrophe risk. 54–66. The present study discusses the effects of diversification and transfer of risk by global insurers on smoothing the peak of catastrophic claims. Empirical experiments indicate that the occurrence frequency of natural catastrophes (NatCat) has a serially dependent trend and that the Cox-Ingersoll-Ross square-root model fits global insured losses better than any static distribution. The results are used to develop a NatCat risk insurance model that sets up a NatCat premium formula, uses the serially dependent dynamics of insured losses and establishes the cash flow of all involved parties while considering corporate income tax and no additional risk premium. The simulation results based on this model show that fluctuation reserves, catastrophe bonds and catastrophe funds with payback schemes are feasible options for smoothing risk because they can benefit all long-term involved parties, including insurance company shareholders, the insured, bondholders, the fund and the government (i.e. taxpayers).
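Cox-Ingersoll-Ross square-root dynamics can be simulated with a full-truncation Euler scheme; mean reversion is what lets such a model reproduce serially dependent loss dynamics that a static distribution cannot. The parameter values here are placeholders, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(4)
kappa, theta, sigma = 0.5, 50.0, 2.0   # mean-reversion speed, long-run level, vol
dt, steps, paths = 1.0, 45, 10_000

x = np.full(paths, theta)              # start each path at the long-run level
for _ in range(steps):
    xp = np.clip(x, 0.0, None)         # full truncation keeps the sqrt real
    dw = rng.standard_normal(paths) * np.sqrt(dt)
    x = x + kappa * (theta - xp) * dt + sigma * np.sqrt(xp) * dw
```

The drift pulls each path back toward theta at rate kappa, so a peak loss year is followed, on average, by a decay back to the long-run level rather than by independent redraws.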
YOU, YINPING; LI, XIAOHU. Functional characterizations of bivariate weak SAI [stochastic arrangement increasing] with an application. 225–231. This paper presents functional characterizations of the bivariate right tail weakly stochastic arrangement increasing (RWSAI) and left tail weakly stochastic arrangement increasing (LWSAI) notions (Cai and Wei, 2014 [Cai, J; Wei, W (2014), Some new notions of dependence with applications in optimal allocation problems, Insurance: Mathematics and Economics 55: 200–209] and Cai and Wei, 2015 [Cai, J; Wei, W (2015), Notions of multivariate dependence and their applications in optimal portfolio selections with dependent risks, Journal of Multivariate Analysis 138: 156–169]). The present theories are also applied to ordering generalized weighted sums of dependent random variables. Some recent related results in Mao et al. (2013) [Mao, T; Pan, X; Hu, T (2013), On ordering between weighted sums of random variables, Probability in the Engineering and Informational Sciences 27: 85–97] are either improved or extended.
YUEN, KAM CHUEN; LIANG, ZHIBIN; ZHOU, MING. Optimal proportional reinsurance with common shock dependence. 1–13. In this paper, we consider the optimal proportional reinsurance strategy in a risk model with multiple dependent classes of insurance business, which extends the work of Liang and Yuen (2014) [Liang, Z; Yuen, K C (2014), Optimal dynamic reinsurance with dependent risks: variance premium principle, Scandinavian Actuarial Journal] to the case with the reinsurance premium calculated under the expected value principle and to the model with two or more classes of dependent risks. Under the criterion of maximizing the expected exponential utility, closed-form expressions for the optimal strategies and value function are derived not only for the compound Poisson risk model but also for the diffusion approximation risk model. In particular, we find that the optimal reinsurance strategies under the expected value premium principle are very different from those under the variance premium principle in the diffusion risk model. The former depend not only on the safety loading, time and interest rate, but also on the claim size distributions and the counting processes, while the latter depend only on the safety loading, time and interest rate. Finally, numerical examples are presented to show the impact of model parameters on the optimal strategies.