## 1. Introduction

Insurers issuing various forms of life insurance contracts are exposed to multiple types of risks due to the uncertainty surrounding the cash flows and associated discount factors. Such risks must be assessed, managed, valued and reported by insurance companies. The various risks can be grouped into several categories, for instance equity risk, interest rate risk, mortality risk and policyholder behaviour risk. Evaluating the risk exposure for each distinct category can shed some light on the magnitude of the various risk types faced by insurers. Moreover, risk mitigation strategies associated with the various categories differ, highlighting the need to disentangle the overall risk and understand the contribution from each category.

One of the main issues related to the quantification of the insurer’s exposure to the various categories of risk is that such risks are entangled and most likely interrelated. For instance, cash flows and liability values depend simultaneously and non-linearly on multiple risk factors whose movements are not necessarily independent. As highlighted for instance in Ho *et al.* (2005), it is therefore desirable to consider a methodology able to decompose the complex overall risk exposure into clear and separated contributions from each individual risk category.

A very interesting approach tackling this particular problem is presented in Schilling *et al.* (2020). It relies on the martingale representation theorem (MRT) associated with martingales living in filtered probability spaces generated by Brownian motions. Although very useful, such a method is not universal, as it does not apply to frameworks that do not admit the MRT-based characterization of martingales, such as many incomplete market models. Numerous works have considered different risk allocation models, mainly for the purpose of determining capital requirements. The manuscripts of Dhaene *et al.* (2012), Albrecht (2014) and Guo *et al.* (2018) survey multiple other existing allocation methods. For instance, Parker (1997) and Christiansen and Helwich (2008) sequentially condition on some of the risk factors to obtain a risk decomposition. Another recent alternative approach to risk allocation consists in applying multivariate risk measures, see for instance Mailhot and Mesfioui (2016) or Cossette *et al.* (2016).

The first main contribution of the present manuscript is to propose a very general method for allocating the overall risk associated with life insurance contracts, or other financial contingent claims, to the various sources of risk driving the aggregate exposure. The method applies to very general discrete-time dynamics of the key risk drivers involved in the valuation of the contract. Furthermore, the inclusion of a hedging portfolio is very natural in the proposed allocation scheme. The allocation of risk is obtained in two steps. First, a Shapley decomposition (see Shapley (2016)) is applied, which leads to an additive representation of the gain and loss in each given period expressed in terms of contributions from each risk category. The Shapley decomposition satisfies multiple favorable properties detailed in Shapley (2016), such as not depending on a subjective order in which risk factors are taken into consideration. Furthermore, it takes into consideration the various potential interactions between all risk drivers. In a second step, the additive representation of the gain and loss is combined with conventional risk allocation schemes applying to sums of loss variables, such as the Euler allocation principle (see for instance Tasche (2007)), in order to allocate the total risk exposure to the various risk drivers. The proposed allocation method should prove useful to risk managers of an insurance company or a financial institution who wish to understand how the various categories of risk impact the overall exposure and to determine the extent of resources and effort needed to efficiently mitigate the risk stemming from such risk classes.

The second important contribution of this work is to test the allocation method through Monte Carlo simulations, with variable annuity contracts being chosen for the assessment due to their rich dependence on multiple risk factors. More precisely, a *guaranteed minimal maturity benefit* (GMMB) contract is considered, with equity, interest rate and mortality risks being the risk drivers in the proposed examples. A stochastic-on-stochastic simulation algorithm required to compute risk allocations is provided. The algorithm is then used in numerical experiments to illustrate the application of the risk allocation approach to variable annuities.

The paper is divided as follows. Section 2 develops the proposed methodology for risk decomposition and allocation. Section 3 details the mathematical setup considered for the financial market dynamics and variable annuity cash flows. Section 4 provides results from numerical experiments testing the risk allocation method on variable annuity policies. Section 5 concludes. An Online Appendix providing additional details about this work is also available. Among other things, it includes a list of the symbols used in this study to facilitate its reading.

## 2. Risk allocation method

This section describes the mathematical setup underlying the proposed risk allocation approach.

### 2.1. Contract valuation

Consider a discrete-time market with time steps
$t=0,\ldots,T$
representing monthly periods of length
$\Delta=1/12$
years. The triplet
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$
is a probability space on which a filtration
$\{\mathcal{F}_t\}^T_{t=0}$
representing the information flow is defined. The measure
$\mathbb{P}$
is referred to as the physical measure. The contract’s sequence of cash inflows to the insurer is denoted by
$\{ CF_t \}^T_{t=1}$
. Negative inflows are cash outflows. Cash flows include for instance fees charged to policyholders and benefits paid to them. From no-arbitrage pricing theory, for some martingale measure $\mathbb{Q}$, the time *t* value to the insurer of the guarantee contract right after the time *t* cash flow is paid is given by

$$\Pi_t = \mathbb{E}^{\mathbb{Q}}\left[\left. \sum_{s=t+1}^{T} D_{t,s}\, CF_s \,\right| \mathcal{F}_t\right], \quad t=0,\ldots,T, \tag{2.1}$$

where
$D_{t_1,t_2} = \exp \left( \sum_{j=t_1}^{t_2-1} -r_j \Delta \right)$
is the discount factor with
$r_j$
being the annualized risk-free short rate prevailing during the time interval
$[j,j+1)$
. Using no-arbitrage pricing principles for insurance contracts is a common (and sometimes mandatory) practice as they provide market-consistent prices as required for instance by IFRS 17 accounting standards.Footnote 1 The time *t* cash flow
$CF_t$
and the short rate
$r_t$
are assumed to be observable at time *t*; they are
$\mathcal{F}_t$
-measurable.
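As a small illustration, the discount factor $D_{t_1,t_2}$ can be computed directly from a simulated short-rate path. The flat 3% rate path below is purely hypothetical.

```python
import numpy as np

DELTA = 1.0 / 12.0  # monthly period length, in years

def discount_factor(r, t1, t2):
    """D_{t1,t2} = exp(-sum_{j=t1}^{t2-1} r_j * Delta) for an annualized
    short-rate path r, where r[j] prevails on the interval [j, j+1)."""
    return np.exp(-np.sum(r[t1:t2]) * DELTA)

# Hypothetical flat 3% annualized short-rate path over one year
r = np.full(12, 0.03)
D = discount_factor(r, 0, 12)  # one-year discount factor
```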

We assume that the short rate and cash flow dynamics are driven by an exogenous column vector process of state variables
$\textbf{X}=\{ \textbf{X}_t \}^T_{t=0}$
referred to as risk factors:
$r_t = \tilde{r}(\textbf{X}_t)$
and
$CF_{t} = \widetilde{CF}_{t}(\textbf{X}_t,\textbf{X}_{t-1})$
for some mappings
$\tilde{r}$
and
$\widetilde{CF}_{t}$
. Note that for the cash flow, the assumption is that
$CF_{t}$
is also impacted by
$\textbf{X}_{t-1}$
as some components of the cash flow can be determined one period in advance. A self-financing hedging portfolio can also be implemented by the insurer to mitigate the risk associated with the policy. Denoting the time *t* hedging portfolio value by
$V^\phi_t$
, one-period hedging gains provided by positions on hedging assets between
$t-1$
and *t* are defined as

$$H^\phi_t \equiv V^\phi_t - V^\phi_{t-1}, \quad t=1,\ldots,T,$$

where $\phi$ denotes the hedging strategy. See Section 3.4 for details about the hedging portfolio. Hedging gains are assumed to have the representation
$H^\phi_t=\tilde{H}^\phi_t \left(\textbf{X}_t,\textbf{X}_{t-1} \right)$
. The evolution of risk factors
$\textbf{X}$
is assumed to be driven by a *d*-dimensional
$\mathcal{F}$
-adapted process
$\textbf{Y}=\{ \textbf{Y}_t \}^T_{t=1}$
acting as shocks on risk factors, which is subsequently referred to as sources of risk. Elements of
$\textbf{Y}$
can also be sub-vectors, that is, they can each contain subgroups of sources of risk. Risk factors are assumed to evolve according to

$$\textbf{X}_t = \mathcal{U}_t\left(\textbf{X}_{t-1}, \textbf{Y}_t\right) \tag{2.2}$$

for some update functions $\mathcal{U}_t, \, t=1,\ldots,T$
. Finally, risk factors
$\textbf{X}$
are assumed to have the Markov property with respect to filtration
$\mathcal{F}$
; consequently, there exists a mapping *f* such that
$\Pi_t = f(t,\textbf{X}_t)$
. Unlike in the case of vanilla options where a closed-form formula for *f* might exist (e.g., the Black-Scholes formula), such a function does not have an explicit expression in the present framework, which is why nested simulation is needed later.
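To make the setup concrete, the sketch below simulates a toy one-dimensional risk factor with a hypothetical update function $\mathcal{U}$ and estimates $\Pi_t = f(t,\textbf{X}_t)$ by inner simulation, in the spirit of the nested simulation mentioned above. The dynamics, mappings and terminal cash flow are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
DELTA, T = 1.0 / 12.0, 12

def update(x, y):
    # Toy update function U_t: mean-reverting risk factor hit by shock y
    return x + 0.5 * (0.02 - x) * DELTA + 0.01 * y

def short_rate(x):
    return x  # toy mapping r_t = r~(X_t)

def value(t, x, n_inner=5000):
    """Estimate Pi_t = f(t, X_t) by inner simulation of a single
    hypothetical terminal cash flow max(K - X_T, 0) paid at T."""
    K = 0.02
    disc = np.ones(n_inner)
    xs = np.full(n_inner, float(x))
    for s in range(t, T):
        disc *= np.exp(-short_rate(xs) * DELTA)  # accumulate D_{t,T}
        xs = update(xs, rng.standard_normal(n_inner))
    return float(np.mean(disc * np.maximum(K - xs, 0.0)))

pi0 = value(0, 0.02)  # time-0 value of the toy claim
```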

### 2.2. Single-period gain and loss decomposition

To understand the relative impact of each source of risk, a decomposition of the gain and loss (GL) associated with the contract is considered. The first objective is to interpret causes of the variability of the gain and loss in a single period, that is, in

$$GL_{t+1} \equiv D_{t,t+1}\left(CF_{t+1} + H^\phi_{t+1} + \Pi_{t+1}\right) - \Pi_t, \quad t=0,\ldots,T-1. \tag{2.3}$$
Based on assumptions from Section 2.1, $GL_{t+1}$ can be expressed as a function of $\textbf{X}_{t+1}$ and $\textbf{X}_{t}$ . Therefore, due to (2.2) we can write $GL_{t+1}=\widetilde{GL}_{t+1}\left(\textbf{Y}_{t+1},\textbf{X}_{t}\right)$ .

Note that the definition (2.3) of the gain and loss makes it possible to partition the total discounted gains earned throughout the life of the contract:

$$GL_{tot} \equiv \sum_{t=1}^{T} D_{0,t-1}\, GL_t = \sum_{t=1}^{T} D_{0,t}\left(CF_t + H^\phi_t\right) - \Pi_0, \tag{2.4}$$

since $\Pi_T=0$ and $D_{0,t-1}D_{t-1,t} = D_{0,t}$.
The goal is therefore to propose a suitable decomposition of $GL_{t}$ , $t=1,\ldots,T$ , which is insightful and allows understanding the impact of each underlying risk in the model.

#### 2.2.1. Shapley decompositions

Shapley decompositions, which were initially studied in Shapley (2016) in a game theory context, allow decomposing the variation of a function of multiple arguments into a linear combination of contributions from each of the arguments. Such decompositions are the cornerstone of the methodology proposed in this work.

Consider any function *g* with *d*-dimensional inputs. In what follows, the dummy
$\unicode{x1D7D9}_j$
denotes a column vector whose elements are all zero with the exception of its
$j^{th}$
element which is one. The Hadamard (element-wise) product is denoted by
$\circ$
. Let
$\mathbb{D}$
be the set of subsets of
$\{1,\ldots,d\}$
. Then, for
$\mathcal{S} \in \mathbb{D},$
define
$\unicode{x1D7D9}_\mathcal{S} \equiv \sum_{j \in \mathcal{S}} \unicode{x1D7D9}_j$
. Furthermore, let
$\mathbb{D}^k_i$
be the set of subsets of
$\{1,\ldots,d\}$
which have cardinality *k* and which do not contain *i*. Moreover, for any
$\mathcal{S} \in \mathbb{D},$
define
$\textbf{y}^\mathcal{S} \equiv \unicode{x1D7D9}_{\mathcal{S}} \circ \textbf{y}$
. The Shapley decomposition of the variation of function *g* between points $\textbf{y}=[y_1,\ldots,y_d]$ and the null point $\textbf{0}$ is

$$g(\textbf{y}) - g(\textbf{0}) = \sum_{j=1}^{d} \sum_{k=0}^{d-1} \sum_{\mathcal{S}\in\mathbb{D}^k_j} \frac{k!\,(d-1-k)!}{d!}\left(g\left(\textbf{y}^{\mathcal{S}\cup\{j\}}\right) - g\left(\textbf{y}^{\mathcal{S}}\right)\right). \tag{2.5}$$

For completeness, the proof of (2.5) is provided in Appendix D of the Online Appendix. This leads to the representation $ g(\textbf{y}) - g(\textbf{0}) = \sum_{j=1}^{d} \mathcal{C}^{(j)}$, where

$$\mathcal{C}^{(j)} \equiv \sum_{k=0}^{d-1} \sum_{\mathcal{S}\in\mathbb{D}^k_j} \frac{k!\,(d-1-k)!}{d!}\left(g\left(\textbf{y}^{\mathcal{S}\cup\{j\}}\right) - g\left(\textbf{y}^{\mathcal{S}}\right)\right). \tag{2.6}$$
Quantity
$\mathcal{C}^{(j)}$
can be interpreted as the contribution of element *j* of vector
$\textbf{y}$
to the variation
$g(\textbf{y}) - g(\textbf{0})$
. The representation (2.6) considers not only standalone movements from element *j* of
$\textbf{y}$
but also all possible co-movements of other elements since interaction terms are incorporated within the decomposition (2.5).
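The contributions $\mathcal{C}^{(j)}$ can be computed by direct enumeration of subsets. The sketch below implements them in Python for an arbitrary function *g*; the quadratic example function is hypothetical and merely illustrates that the contributions sum to $g(\textbf{y})-g(\textbf{0})$.

```python
from itertools import combinations
from math import factorial

def shapley_contributions(g, y):
    """Shapley contributions C^(j) of each coordinate of y to g(y) - g(0),
    where g is evaluated at masked vectors y^S (coordinates outside S set to 0)."""
    d = len(y)

    def g_masked(subset):
        return g([y[i] if i in subset else 0.0 for i in range(d)])

    contribs = []
    for j in range(d):
        others = [i for i in range(d) if i != j]
        c = 0.0
        for k in range(d):
            weight = factorial(k) * factorial(d - 1 - k) / factorial(d)
            for s in combinations(others, k):
                c += weight * (g_masked(set(s) | {j}) - g_masked(set(s)))
        contribs.append(c)
    return contribs

# Hypothetical function with an interaction term between its two arguments
g = lambda v: v[0] + 2.0 * v[1] + v[0] * v[1]
y = [1.0, 3.0]
c = shapley_contributions(g, y)  # contributions sum to g(y) - g(0)
```

Note that the number of evaluations of *g* grows exponentially with *d*, which is why the decomposition is typically applied to a small number of (possibly grouped) sources of risk.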

#### 2.2.2. Applying the Shapley decomposition to the gain and loss

The next step consists in simply applying the Shapley decomposition (2.5) to the gain and loss. Define the passage of time contribution as the gain and loss which would occur in the absence of shocks on risk factors, that is, if $\textbf{Y}_{t+1}=\textbf{0}$:

$$\Theta_{t+1} \equiv \widetilde{GL}_{t+1}\left(\textbf{0}, \textbf{X}_t\right). \tag{2.7}$$

Using (2.5)–(2.6), this leads to

$$GL_{t+1} = \Theta_{t+1} + \sum_{j=1}^{d} \mathcal{C}^{(j)}_{t+1}, \tag{2.8}$$

where the contribution from the source of risk $\textbf{Y}^{\{j\}}$ for $j=1,\ldots,d$ is

$$\mathcal{C}^{(j)}_{t+1} \equiv \sum_{k=0}^{d-1} \sum_{\mathcal{S}\in\mathbb{D}^k_j} \frac{k!\,(d-1-k)!}{d!}\left(\widetilde{GL}_{t+1}\left(\textbf{Y}^{\mathcal{S}\cup\{j\}}_{t+1},\textbf{X}_t\right) - \widetilde{GL}_{t+1}\left(\textbf{Y}^{\mathcal{S}}_{t+1},\textbf{X}_t\right)\right). \tag{2.9}$$

Equation (2.8) is the main building block of the risk allocation framework developed in this work. It provides an additive representation of the gain and loss $GL_{t+1}$ in terms of contributions from the various sources of risk.

### 2.3. Aggregation of gain and loss over time

Section 2.2.2 provides a decomposition of the gain and loss for a single time period. Evaluating risk contributions over a longer time horizon (e.g., the full contract duration) can be done by summing, over multiple time periods, the discounted contributions defined as

$$\widetilde{\Theta}_{t} \equiv D_{0,t-1}\,\Theta_{t}, \qquad \widetilde{\mathcal{C}}^{(j)}_{t} \equiv D_{0,t-1}\,\mathcal{C}^{(j)}_{t}, \quad t=1,\ldots,T. \tag{2.10}$$

Defining *time-aggregated contributions* as

$$\widetilde{\Theta}_{tot} \equiv \sum_{t=1}^{T} \widetilde{\Theta}_{t}, \qquad \widetilde{\mathcal{C}}^{(j)}_{tot} \equiv \sum_{t=1}^{T} \widetilde{\mathcal{C}}^{(j)}_{t}, \tag{2.11}$$

leads, by substituting (2.8) into (2.4), to

$$GL_{tot} = \widetilde{\Theta}_{tot} + \sum_{j=1}^{d} \widetilde{\mathcal{C}}^{(j)}_{tot}. \tag{2.12}$$
### 2.4. Risk measurement and allocation for the gain and loss decomposition

Equations (2.8) and (2.12) provide linear decompositions of a risk of the form
$Z=\sum_{j=1}^{J} Z^{(j)}$
. The objective is to perform risk attribution and allocate the risk associated with *Z* to its subcomponents
$Z^{(j)}$
. Let $\varrho$ denote the risk measure considered to assess risk. This work focuses on $\varrho$ being either the variance or the conditional value-at-risk (CVaR), meant to reflect overall risk and tail risk, respectively. The CVaR with confidence level $\alpha$ is defined as

$$\text{CVaR}_\alpha(Z) \equiv \frac{1}{1-\alpha}\int_\alpha^1 \text{VaR}_u(Z)\, du,$$

where $\text{VaR}_u(Z)$ denotes the value-at-risk (i.e., the quantile) of level *u*. It represents the average outcome of the risk over the proportion $1-\alpha$ of worst-case scenarios. Indeed, when *Z* is an absolutely continuous random variable (which is assumed onwards), the CVaR also has the representation $\text{CVaR}_\alpha(Z) = \mathbb{E}\left[Z | Z > \text{VaR}_\alpha(Z)\right]$, see Rockafellar and Uryasev (2002) and Tasche (2007).

Denote by $\mathcal{A}_j(Z^{(1)},\ldots,Z^{(J)})$ the allocation to component $Z^{(j)}$ of the risk $\varrho(Z)$. Depending on the choice of the risk measure, we use the following allocation rules:

$$\mathcal{A}_j\left(Z^{(1)},\ldots,Z^{(J)}\right) = \text{Cov}\left(Z^{(j)}, Z\right), \quad \text{if } \varrho \text{ is the variance}, \tag{2.13}$$

$$\mathcal{A}_j\left(Z^{(1)},\ldots,Z^{(J)}\right) = \mathbb{E}\left[Z^{(j)} \,\middle|\, Z > \text{VaR}_\alpha(Z)\right], \quad \text{if } \varrho = \text{CVaR}_\alpha. \tag{2.14}$$
Note that (2.14) makes it possible to perform an allocation of the expected value by considering the case $\alpha=0$. Furthermore, (2.14) is equivalent to using the *Euler allocation principle*, see Tasche (1999), which has been extensively studied in the literature. One interesting characteristic of this principle when the risk measure is positively homogeneous and subadditive (properties satisfied by $\text{CVaR}_\alpha$) is that the allocated amounts are smaller than or equal to the CVaR of the standalone risks, that is, $\mathcal{A}_j(Z^{(1)},\ldots,Z^{(J)})\leq \text{CVaR}_\alpha(Z^{(j)})$, $j=1,\ldots,J$, see Kalkbrener (2005). Quantity (2.14) is straightforward to estimate if an i.i.d. sample of risk components $\left\{ \left(Z^{(1)}_i,\ldots,Z^{(J)}_i\right) \right\}^n_{i=1}$ is available; $\mathbb{E} \left[Z^{(j)}|Z>\text{VaR}_{\alpha}(Z)\right]$ can be estimated as the sample average of $Z^{(j)}$ over all observations for which *Z* is greater than its sample quantile of level $\alpha$.
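The sample-based estimator described above can be sketched as follows; the bivariate Gaussian sample is a hypothetical stand-in for simulated risk components.

```python
import numpy as np

def cvar_allocation(components, alpha):
    """Estimate E[Z^(j) | Z > VaR_alpha(Z)] for each column of `components`,
    where Z is the row-wise sum (sample-based CVaR/Euler allocation)."""
    components = np.asarray(components)
    z = components.sum(axis=1)
    var_alpha = np.quantile(z, alpha)   # sample VaR at level alpha
    tail = z > var_alpha                # worst-case scenarios
    alloc = components[tail].mean(axis=0)
    cvar = z[tail].mean()
    return alloc, cvar

rng = np.random.default_rng(1)
n = 100_000
# Hypothetical i.i.d. sample of two correlated Gaussian loss components
z1 = rng.standard_normal(n)
z2 = 0.5 * z1 + rng.standard_normal(n)
alloc, cvar = cvar_allocation(np.column_stack([z1, z2]), alpha=0.95)
```

By construction, the estimated allocations sum to the CVaR estimate of the aggregate loss, illustrating the full-allocation property.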

In the simulations of Section 4, two main types of risk assessment are performed. The first consists in assessing the risk over the entire duration of the contract by setting
$Z=-GL_{tot}$
and considering contributions
$-\tilde{\Theta}_{tot}$
and
$-\widetilde{\mathcal{C}}^{(j)}_{tot}, \, j=1,\ldots,J$
as the sub-components
$Z^{(j)}$
, which is justified by (2.12). The negative sign reflects that
$GL_{tot}$
represents gains, whereas the risk should be quantified by measuring losses. Conversely, other tests focus on standalone period *t* risk by letting
$Z=-GL_{t}$
and considering contributions
$-\tilde{\Theta}_{t}$
and
$-\widetilde{\mathcal{C}}^{(j)}_{t}, \, j=1,\ldots,J$
as the sub-components. Tests on the full contract duration are concerned with long-term (global) risk, whereas the single-period tests mainly reflect short-term (local) risk, which is connected to solvency issues.

### 2.5. Discussion of properties of the Shapley-based allocation scheme

The allocation method proposed in this work, which consists in applying either (2.13) or (2.14) to the linear decomposition of the gain and loss (2.8) (or to (2.12) for risks over the full time horizon), possesses several desirable properties.

The first two are called *additive aggregation* and *order invariance*, based on the terminology of Schilling *et al.* (2020), and are described more thoroughly in Appendix E of the Online Appendix. Additive aggregation means that the total risk is entirely distributed across the sources of risk:
$\varrho(GL_{t+1}) = \mathcal{A}_{\Theta_{t+1}}(\Theta_{t+1},\mathcal{C}^{(1)}_{t+1},\ldots,\mathcal{C}^{(d)}_{t+1}) + \sum_{j=1}^{d} \mathcal{A}_{\mathcal{C}^{(j)}_{t+1}}(\Theta_{t+1},\mathcal{C}^{(1)}_{t+1},\ldots,\mathcal{C}^{(d)}_{t+1})$
, with subscripts of
$\mathcal{A}$
denoting components related either to the time decay or to the *d* elements of
$\textbf{Y}_{t+1}$
. This property is helpful for the interpretation of the risk allocation. The order invariance property means that allocation results do not depend on a particular order in which the various sources of risk are considered. Other approaches encountered in practice do not necessarily satisfy this property and sometimes require using a particular order to compute the marginal impact of the various risk drivers. For example, in Canada’s Province of Quebec, the regulator requires sequentially adding margins for adverse deviations to assumptions on lapse risk, mortality risk, mortality improvement risk and expenses to calculate capital requirements for a portfolio of segregated funds (the Canadian version of variable annuities) when an internal model is used, see for instance p. 250 of AMF (2022). Our allocation scheme circumvents the need to choose such a highly subjective order, which is a favorable property.

Another important feature of the proposed allocation scheme is its incorporation of interactions between the various sources of risk. Indeed, the contribution of a source of risk *j* computed through (2.6) involves all possible co-movements of the other sources of risk. This is an improvement over several risk management schemes which rely mainly on marginal movements of risk drivers to perform risk attribution. For instance, the Greek letters used in finance to measure risk (e.g., the Delta, the Gamma and the Vega) capture mainly marginal movements of single risk drivers. Incorporating interactions between risk drivers allows forming a more holistic view of the overall risk exposure. Furthermore, the Shapley decomposition-based scheme is an improvement over the *allocation by proportion* approach detailed in Feng (2018), which does not consider the dependence structure between the risk factors.

Moreover, the proposed allocation approach can handle very general dynamics of risk factors. Not having to rely on restrictive dynamics (e.g., complete markets or dynamics admitting the martingale representation theorem) enhances the approach’s applicability. Furthermore, several complex allocation schemes, such as that from Powers (2007) which relies on the computation of Aumann–Shapley values, might suffer from intractability issues in the context of general loss distributions.

The proposed Shapley-based allocation method has several advantages over gradient-based methods, such as that found in Tsanakas and Millossovich (2016). Whereas the latter considers infinitesimal shocks on risk drivers whose shape and distribution must be subjectively chosen, the present paper’s method uses actual simulated shocks on sources of risk whose size is consistent with their modeled dynamics. As an example, certain categories of risk are associated with multiple random variables which are not necessarily on the same scale, and it is unclear how these should be shocked all at once to compute gradients (e.g., what form of dependence should be embedded between shocks). Furthermore, our method does not require special adaptation for discretely distributed risk factors, unlike that of Tsanakas and Millossovich (2016). Lastly, gradient-based methods often require numerical approximation schemes for partial derivatives, an issue avoided by the present study’s scheme, which does not rely on infinitesimal quantities.

Finally, the beauty of the proposed scheme lies in its conceptual simplicity, which allows tackling the non-trivial allocation to interrelated sources of risk with well-studied and well-known allocation principles such as the Euler allocation approach. Other approaches could have instead been considered, such as those from Rabitti and Borgonovo (2020), which rely on Sobol indices or on conditional distributions given fixed values for selected shocks. However, calculations involved in these schemes would most likely be considerably more intensive.

Our method has one drawback, pointed out by Powers (2007), which is due to the use of Shapley decompositions: if a source of risk is partitioned, the sum of allocations to all sub-elements of the partition will not equal the allocation to the initial source of risk. This drawback is nevertheless deemed an acceptable compromise given the various advantages of the proposed method outlined above.

## 3. Variable annuities cash flows and market setup

This section describes the financial market setup and the cash flows associated with the variable annuity contracts considered, which are inspired by Trottier *et al.* (2018) and Augustyniak *et al.* (2021).

### 3.1. Cash flows to the insurer

Consider a single-premium GMMB variable annuity. Such a contract allows a policyholder to deposit a premium at time
$t=0$
in an account, which is invested into a mutual fund. It also guarantees a minimal benefit at the contract maturity if the policyholder survives and does not surrender his policy by then; the insurer will make up for any potential shortfall between the guaranteed amount and the account value at maturity. The time *t* policy account value
$A_t$
evolves according to

$$A_t = A_{t-1}\, \frac{F_t}{F_{t-1}}\, (1-\omega), \quad t=1,\ldots,T,$$

where
$\omega$
is the periodic fee rate charged by the insurer and
$F_{t}$
is the time *t* value of the underlying fund in which the account is invested.
$A_0$
is the amount of premium paid by the policyholder.
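A minimal sketch of the account projection, assuming the account evolves as $A_t = A_{t-1}(F_t/F_{t-1})(1-\omega)$; the flat fund path and fee level below are hypothetical.

```python
import numpy as np

def account_values(a0, fund, fee_rate):
    """Project the policy account A_t = A_{t-1} * (F_t / F_{t-1}) * (1 - fee_rate)
    along a fund path `fund` = [F_0, ..., F_T]."""
    a = np.empty(len(fund))
    a[0] = a0
    for t in range(1, len(fund)):
        a[t] = a[t - 1] * fund[t] / fund[t - 1] * (1.0 - fee_rate)
    return a

# Hypothetical inputs: flat fund path, 10 bps monthly fee
a = account_values(100.0, np.ones(13), fee_rate=0.001)
```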

Variable annuity policies are terminated when the first of the following three events occurs: the policyholder dies, the policyholder lapses his policy (that is, he forfeits the benefits and stops paying fees), or the maturity date of the contract is reached. For simplicity, the case of a large-sized insurer is considered, which entails that only the systematic part of mortality and lapse decrements is modeled. In other words, the asymptotic policyholder termination rates obtained when the total number of policyholders in the portfolio is very large are applied. Such an assumption could easily be relaxed in the proposed model by having each policyholder randomly die or lapse based on the asymptotic rates. In that case, mortality risk would be higher, with the extent of the increase depending on the insurer’s portfolio size. Indeed, smaller portfolios would often incur mortality rates departing more sharply from the asymptotic rates.

The mortality of the policyholder is assumed to be independent of the decision to lapse the policy or not.[^2] Denote by
${}_{t} a_x$
the proportion of policyholders still active at time *t* among a large homogeneous cohort, assuming that they are all aged *x* periods at time
$t=0$
and they all hold the same contract. The activity proportion is assumed to evolve according to

$$ {}_{t} a_x = {}_{t-1} a_x \, \frac{{}_{t} p_x}{{}_{t-1} p_x} \left(1 - \mathcal{L}(m_{t-1})\right), \quad t=1,\ldots,T,$$
where
${}_{t} p_x$
denotes the probability that the policyholder survives at least *t* periods after time 0 and
$\mathcal{L}(m_{t-1})$
denotes the probability for a policyholder who is active at
$t-1$
to lapse his policy during the period
$[t-1,t)$
. The latter proportion is modeled as a mapping $\mathcal{L}$ applied to $m_{t-1}$, a measure of the moneyness of the contract guarantee which is defined more precisely subsequently.

Fees are charged at the end of the period for all policyholders active at the beginning of the period:

Moreover, a time-dependent penalty proportional to the end-of-period account value is charged to a policyholder lapsing in early years. Such penalties are typically charged by insurers to ensure a sufficient level of retention in their portfolio. Lapse penalties are characterized by a function $\mathcal{P}$ which is typically decreasing with respect to time in practice. As in Augustyniak *et al.* (2021), lapse penalties are assumed to start at 7% of the account value in the first year and then to decrease by 1% every year until they reach 0% after the seventh year:

$$\mathcal{P}(t) = \max\left(0.07 - 0.01 \left\lfloor t/12 \right\rfloor,\, 0\right),$$
where $\lfloor \cdot \rfloor$ is the integer part function. Assuming a constant force of mortality and lapsation in any period (see Appendix A for details), the inflow to the insurer related to lapse penalties is

where $q^\mathcal{L}_{x,t-1}$ , the proportion of policyholders active at $t-1$ who lapse their policy in the next period, is

For a GMMB policy, the insurer promises to pay to all policyholders active at maturity *T* the difference between the guaranteed amount and the account value if it is positive, that is, a total benefit of

$$ {}_{T} a_x \, \max\left(G_T - A_T,\, 0\right), $$
where
$G_T$
denotes the guaranteed amount at maturity. The guaranteed amount can either be fixed, that is,
$G_t \equiv G_0$
for all *t* and some constant
$G_0$
, or it can evolve stochastically based on a ratchet provision. The ratchet provision considered in this work, when applied, is similar to that considered in Augustyniak *et al.* (2021). In each period at least 10 years prior to the maturity of the contract, it entails increasing the guaranteed amount, at most once a year, to the account value when the latter surpasses the guaranteed amount by 15%:

where $\zeta_{t+1}$ is the number of times in the current year that the guaranteed amount was increased prior to time $t+1$ .

The moneyness process *m* whose value drives lapse rates is defined in terms of the guaranteed value process *G* through
$m_t = \frac{A_t}{G_t}$
.
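A plausible sketch of the cohort activity recursion, assuming active policyholders are those who both survive and do not lapse; the flat survival and lapse inputs below are hypothetical.

```python
import numpy as np

def activity_proportions(surv, lapse_probs):
    """Cohort proportion still active: a_t = a_{t-1} * (surv[t]/surv[t-1])
    * (1 - lapse_probs[t-1]), where surv[t] is the t-period survival probability."""
    T = len(lapse_probs)
    a = np.empty(T + 1)
    a[0] = 1.0
    for t in range(1, T + 1):
        a[t] = a[t - 1] * surv[t] / surv[t - 1] * (1.0 - lapse_probs[t - 1])
    return a

# Hypothetical flat 99.9% monthly survival and 0.5% monthly lapse probability
surv = 0.999 ** np.arange(13)
a = activity_proportions(surv, np.full(12, 0.005))
```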

Combining all the previously outlined cash flows, the net inflow to the insurer at time *t* is given by

### 3.2. Market setup and state variables

This section outlines the stochastic models driving market dynamics used in the present work. For the evolution of the term structure, the discrete-time Gaussian three-factor model presented in Augustyniak *et al.* (2021) is used, where the $\mathbb{P}$-dynamics of the annualized risk-free rate are

$$r_t = \sum_{i=1}^{3} x^{(i)}_t, \qquad x^{(i)}_{t+1} = x^{(i)}_t + \kappa_i\left(\mu_i - x^{(i)}_t\right)\Delta + \sigma_i \sqrt{\Delta}\, z^{(r)}_{t+1,i}, \quad i=1,2,3, \tag{3.2}$$
where $\left(\kappa_i,\mu_i,\sigma_i\right)$ , $i=1,2,3$ , are constant model parameters, and $z^{(r)}=\left\{ \left(z^{(r)}_{t,1},z^{(r)}_{t,2}, z^{(r)}_{t,3} \right) \right\}^T_{t=1}$ is a multivariate standard normal process with correlation $\text{Corr}\left(z^{(r)}_{t,i},z^{(r)}_{s,\ell}\right) = \unicode{x1D7D9}_{ \{s=t \} }\Gamma_{i,\ell}$ for some correlation matrix $\Gamma=\left[ \Gamma_{i,\ell}\right]_{i,\ell=1,2,3}$ . The advantages of such a multi-factor model are twofold. First, reliance on the Gaussian distribution makes the model very tractable. Second, the model is sufficiently flexible to produce different shapes of the term structure.

Two equity indices with EGARCH dynamics are considered as building blocks to represent the equity portion of the underlying fund. The time *t* value of index *j*, denoted by
$S_{t}^{(j)}$
, follows

where $\left(\lambda^{(S)}_j,\omega^{(S)}_j,\alpha^{(S)}_j,\gamma^{(S)}_j,\beta^{(S)}_j\right)$ , $j=1,2$ , are constant parameters, and $z^{(S)}=\left\{\left(z^{(S)}_{t,1},z^{(S)}_{t,2}\right)\right\}^T_{t=1}$ is a bivariate standard normal process, independent from the term structure innovations $z^{(r)}$ , with correlation $\text{Corr}\left(z^{(S)}_{t,j},z^{(S)}_{s,k}\right) = \unicode{x1D7D9}_{ \{s=t \} }\rho_{j,k}$ for some correlation matrix $\rho=\left[\rho_{j,k}\right]_{j,k=1,2}$ .
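For illustration, a generic EGARCH(1,1) log-variance recursion can be sketched as below; this is the standard EGARCH form, not necessarily the paper's exact specification, and all parameter values shown are hypothetical.

```python
import numpy as np

def egarch_log_var(z, omega, alpha, gamma, beta, h0):
    """Generic EGARCH(1,1) log-variance recursion (illustrative form only):
    ln h_{t+1} = omega + alpha*(|z_t| - E|z_t|) + gamma*z_t + beta*ln h_t."""
    e_abs = np.sqrt(2.0 / np.pi)  # E|Z| for a standard normal Z
    log_h = np.empty(len(z) + 1)
    log_h[0] = np.log(h0)
    for t in range(len(z)):
        log_h[t + 1] = (omega + alpha * (abs(z[t]) - e_abs)
                        + gamma * z[t] + beta * log_h[t])
    return log_h

rng = np.random.default_rng(2)
# Hypothetical parameters: persistence 0.97, mild leverage effect
log_h = egarch_log_var(rng.standard_normal(24), omega=-0.1, alpha=0.1,
                       gamma=-0.05, beta=0.97, h0=0.03**2)
```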

The underlying fund of the variable annuity follows the mixed fund model dynamics from Augustyniak *et al.* (2021), which consist of an EGARCH process whose conditional mean is mapped to equity index returns and term structure factor variations to reflect the presence of both equity and fixed income assets within the fund:

where
$(\theta_0,\theta_1,\theta_2,\theta_{3})$
,
$(\theta_1^{(S)},\theta_{2}^{(S)})$
and
$(\omega^{(F)}, \alpha^{(F)}, \gamma^{(F)}, \beta^{(F)})$
are constant parameters, and
$z^{(F)}=\{z^{(F)}_{t}\}^T_{t=1}$
is a sequence of independent standard normal random variables independent from the term structure and equity innovations
$z^{(r)}$
and
$z^{(S)}$
. Parameters
$(\theta_0,\theta_1,\theta_2,\theta_{3})$
and
$(\theta_1^{(S)},\theta_{2}^{(S)})$
represent the extent to which underlying fund returns load on term structure factor shocks and equity index returns, respectively. As indicated in Appendix B, the quantity $\tilde{\kappa}_i = \kappa_i - \sigma_i\lambda_i$ appearing in (3.7), where $\lambda_i$ is the constant risk premium associated with risk factor $x^{(i)}$ , is the risk-neutral (and not the physical) speed of reversion for term structure factor *i*.

The risk-neutral measure
$\mathbb{Q}$
considered is also drawn from Augustyniak *et al.* (2021) and is described in Appendix B.

The mortality dynamics follow an adaptation of the Lee and Carter (1992) model and are assumed to be identical under the physical and the risk-neutral measures,[^3] with mortality rates being independent from term structure, equity and underlying fund innovations. Let
$\mathcal{Y}(x+t) \equiv \lfloor (x+t) /12 \rfloor$
be the integer age in years at time *t* of the policyholder who had age *x* months at time 0. The conditional 1-month survival probability at time *t* is given by

where $\{\varepsilon_t\}_{t\in\mathcal{T}}$ is a sequence of i.i.d. Gaussian random variables whose standard deviation is a function of the current age given by[^4]

$a_{y}$
is the yearly base mortality rate at age *y* and
$b_{y}$
is the yearly sensitivity of the mortality rate at age *y* to the systemic mortality factor
$\{u_t\}_{t\in\mathcal{T}}$
, which is an integrated Gaussian auto-regressive process of order 1 with a trend:

where
$\{\nu_t\}_{t\in\mathcal{T}}$
is a sequence of independent and identically distributed Gaussian random variables with zero mean and variance
$\sigma^2_{\nu}$
. Processes
$\{u_t\}^T_{t=1}$
and
$\{\varepsilon_t\}^T_{t=1}$
are independent from each other and both
$\mathcal{F}$
-adapted. They, respectively, reflect the stochastic trend of mortality intensity and temporary departures from such trend. An implication of this model specification is that Lee–Carter model factors *a* and *b* found in (3.9) are constant throughout any given year.

The lapse function, taken from Augustyniak *et al.* (2021), is a piecewise linear function constructed such that the lapse rate varies linearly between two boundaries as a function of the policy moneyness. Denoting the annualized lapse rate by
$\mathcal{L}^{ann}$
,

### 3.3. Model estimation

For the term structure, equity indices and underlying fund models, estimated parameters are also taken from Augustyniak *et al.* (2021) and are presented here for completeness.

Table 1 presents parameters for the term structure model (3.2).

The two equity indices considered are the S&P/TSX Composite and S&P 500 price indices. Estimated parameters for the equity indices model (3.4)–(3.5) are provided in Table 2.

Two different funds acting as variable annuity underlying assets are considered: the *RBC Bond GIF Series 1*, which is a bond fund, and the *Assumption/CI Harbour Growth & Income Fund Series A*, which is a mixed equity/bond fund. Table 3 gives parameter estimates for the fund model (3.7)–(3.8).

The Lee–Carter mortality model is calibrated on yearly Canadian mixed male and female mortality data from the *Human Mortality Database* for the 1951–2016 period. This dataset reports yearly realized mortality rates, and yearly rates must be transformed into monthly rates for the present study. Estimated parameters and details about the determination of monthly parameters for the model based on the yearly calibration of the Lee–Carter model are given in Appendix C from the Online Appendix.
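The yearly-to-monthly transformation can be sketched under a constant force of mortality within each year, a common actuarial convention (the paper's exact procedure is detailed in its Appendix C, so this is an illustrative assumption):

```python
def monthly_from_yearly_mortality(q_yearly):
    """Convert a yearly mortality rate into an equivalent monthly rate.

    Assumes a constant force of mortality within the year, so that the
    twelve monthly survival probabilities compound to the yearly one:
    (1 - q_monthly)**12 = 1 - q_yearly.
    """
    p_yearly = 1.0 - q_yearly
    p_monthly = p_yearly ** (1.0 / 12.0)
    return 1.0 - p_monthly
```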

### 3.4. Hedging instruments and hedging portfolio dynamics

To mitigate the risk associated with the insurance contract, the insurer can set up a hedging portfolio. The portfolio is assumed to be self-financing and to have a null initial value: $V^\phi_0 = 0$. At each time *t*, positions in three assets or contracts can be traded: a cash asset and futures contracts on the two market indices considered. Every time a futures position is taken in the portfolio, it is closed at the end of the period; trading frictions are disregarded. A trading strategy is defined as a process $\phi=\{ \phi_t\}^T_{t=0}$ where, for $t=1,\ldots,T$, the $\mathcal{F}_{t-1}$-measurable vector $\phi_t = \left[\phi^{(0)}_t , \phi^{(1)}_t, \phi^{(2)}_t\right]$ corresponds to the number of long positions held between time $t-1$ and time *t* in the three traded instruments. The convention $\phi_0 = \textbf{0}$ is used, which is consistent with a null initial portfolio value. Denote by $\delta^{(i)}_t$ the price variation of a unitary position in instrument *i* between $t-1$ and *t*. For a self-financing trading strategy $\phi$, the time-*t* value of the portfolio is given by the sum of gains earned on all instruments over all previous time steps:

which allows computing hedging gains $H^\phi_t$ through (2.1). The asset $i=0$ is a risk-free asset which is rolled over such that on any period $[t,t+1)$ its values at the beginning and at the end of the period are, respectively, 1 and $e^{r_t \Delta}$. Therefore, $\delta^{(0)}_t = e^{r_{t-1} \Delta}-1$. Furthermore, the two other hedging assets considered are futures contracts that are rolled over and always have a null value at their initiation. Therefore, the entire portfolio value is always fully invested in the risk-free asset, yielding $\phi^{(0)}_t= V^\phi_{t-1}$. This implies that only gains on the futures need to be taken into account to compute hedging gains:

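The self-financing recursion described above (full portfolio value rolled over in the risk-free asset, plus the price variations of the futures positions) can be sketched as follows; function and variable names are ours, not the paper's:

```python
import math

def portfolio_value_path(r, delta_fut, phi_fut, Delta=1.0 / 12.0):
    """Self-financing hedging portfolio with null initial value.

    Implements V_t = exp(r_{t-1} * Delta) * V_{t-1} + sum_j phi_t[j] * delta_t[j],
    since the whole portfolio value sits in the risk-free asset while
    futures have zero value at initiation.

    r[t]          short rate set at time t (annualized)
    delta_fut[t]  vector of futures price variations over (t-1, t]
    phi_fut[t]    futures positions held over (t-1, t]
    Delta         period length in years (monthly by default)
    Entries at index 0 are unused placeholders for time 0.
    """
    V = [0.0]  # V_0 = 0
    for t in range(1, len(delta_fut)):
        gain = sum(p * d for p, d in zip(phi_fut[t], delta_fut[t]))
        V.append(math.exp(r[t - 1] * Delta) * V[t - 1] + gain)
    return V
```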
As shown in Björk (2009), in the absence of dividends, the time-*t* price of a futures on index $S^{(j)}$ maturing at $t+n$ is $\textbf{Fut}_{t,t+n} = \mathbb{E}^{\mathbb{Q}}[S^{(j)}_{t+n} \vert \mathcal{F}_t]$. Appendix F from the Online Appendix shows that

The gain and loss obtained by holding the futures on index $S^{(j)}$ between times *t* and $t+1$ and the delta of the futures with respect to the index $S^{(j)}$ are, respectively,

The hedging approach proposed in the present work requires calculating the sensitivity (i.e., the Greek) of the variable annuity guarantee with respect to the account value. Thus, define

Due to the absence of a closed-form solution for the pricing function *f*, this Greek is estimated through a forward finite-difference approach based on Monte-Carlo simulation. The approximation considered is

where $\epsilon>0$ is a small constant, $\unicode{x1D7D9}_A$ is the dummy vector having the value 1 only at the element corresponding to the component $A_t$ of $\textbf{X}_t$, and $\hat{f}(t,\textbf{X}_t)$ is the time-*t* value of the guarantee estimated by simulation when the current state variables are $\textbf{X}_t$. Note that the same pseudo-random numbers shall be used to calculate both $\hat{f}(t,\textbf{X}_t + \epsilon\unicode{x1D7D9}_A)$ and $\hat{f}(t,\textbf{X}_t)$ to reduce the variance of the estimate. Furthermore, to use equity futures for hedging, the Greek with respect to the account value needs to be transformed into Greeks with respect to the equity indices underlying the futures contracts. This can be accomplished using the chain rule; see Appendix G from the Online Appendix for details:

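The forward finite-difference estimator with common pseudo-random numbers can be sketched as follows; the Monte-Carlo pricer `f_hat` and the state layout are hypothetical stand-ins for the paper's model:

```python
import numpy as np

def guarantee_delta_fd(f_hat, t, X, idx_A, eps_rel=0.01, seed=1234):
    """Forward finite-difference estimate of the guarantee's sensitivity
    to the account value, using common pseudo-random numbers.

    f_hat(t, X, rng) is a hypothetical Monte-Carlo pricer of the
    guarantee; idx_A locates the account value A_t inside the state
    vector X. Re-seeding the generator for both evaluations ensures the
    same pseudo-random numbers are used in the base and bumped
    valuations, which reduces the variance of their difference.
    """
    eps = eps_rel * X[idx_A]          # e.g. a bump of 1% of A_t
    X_bumped = X.copy()
    X_bumped[idx_A] += eps
    base = f_hat(t, X, np.random.default_rng(seed))
    bumped = f_hat(t, X_bumped, np.random.default_rng(seed))
    return (bumped - base) / eps
```

With common random numbers, any additive simulation noise cancels exactly in the difference, leaving only the deterministic finite-difference error.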
Whenever equity risk hedging is implemented, a delta hedging approach is used. If the risk associated with equity index *j* is hedged, a position in a single equity index *j* futures with time-to-maturity $n^{(S,j)}$ is included in the hedging portfolio. Denote by $\phi^{(S,j)}_{t+1}$ the position on such futures between time *t* and time $t+1$. The delta of the overall portfolio (i.e., including the guarantee and the hedging portfolio) is neutralized through $\phi^{(S,j)}_{t+1} \Delta^{(fut,S,j,n)}_{t}+ \Delta^{(guar,S,j)}_{t}=0$. Based on (3.13) and (3.16), this yields

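Rearranging the delta-neutrality condition above for the futures position gives a simple ratio; a minimal sketch with hypothetical names:

```python
def delta_neutral_futures_position(delta_guar, delta_fut):
    """Futures position neutralizing the overall portfolio delta,
    obtained by solving phi * delta_fut + delta_guar = 0 for phi."""
    if delta_fut == 0.0:
        raise ZeroDivisionError("futures delta is zero; hedge ratio undefined")
    return -delta_guar / delta_fut
```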
## 4. Simulation experiments

This section presents Monte-Carlo simulation experiments involving the Shapley-based risk decomposition method to assess the importance of the various risks faced by an insurer issuing variable annuities and their interplay with different features such as the presence of hedging, the inclusion of ratchet provisions or the type of mutual fund underlying the policy.

### 4.1. Stochastic on stochastic simulation scheme

The application of the risk allocation methodology in a Monte-Carlo simulation setting requires the use of nested simulations, an approach referred to as a *stochastic on stochastic* simulation. As a first step, multiple paths of risk factor values are simulated under the physical measure $\mathbb{P}$; such paths are referred to as *outer loops*. Moreover, as indicated by (2.9), on each time step of each outer loop, the guarantee must be revalued several times, which is done through the generation of multiple paths of risk factors under the risk-neutral measure $\mathbb{Q}$, referred to as *inner loops*. Nested simulation procedures are numerically very demanding. Nevertheless, calculations can easily be parallelized, which can speed up computations substantially. The proposed simulation algorithm is presented in the Algorithm 1 textbox, where $M^{(out)}$ denotes the number of outer loops in the simulation.
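The overall structure of such a scheme can be sketched as follows; the two callables stand in for the risk factor model of Section 3 and the inner Monte-Carlo pricer, and all names are ours rather than those of Algorithm 1:

```python
import numpy as np

def nested_simulation(X0, simulate_outer_step, revalue_guarantee,
                      M_out=1000, M_in=1000, T=120, seed=0):
    """Skeleton of a stochastic-on-stochastic simulation.

    Simulates M_out outer paths of the state variables under the
    physical measure P; on each time step of each outer path, the
    guarantee is revalued with M_in inner paths under the risk-neutral
    measure Q. simulate_outer_step(X, rng) advances the state by one
    period; revalue_guarantee(t, X, M_in, rng) returns the inner-loop
    value estimate. Returns an (M_out, T+1) array of guarantee values.
    """
    rng = np.random.default_rng(seed)
    values = np.empty((M_out, T + 1))
    for m in range(M_out):
        X = X0
        values[m, 0] = revalue_guarantee(0, X, M_in, rng)
        for t in range(1, T + 1):
            X = simulate_outer_step(X, rng)                    # P-measure step
            values[m, t] = revalue_guarantee(t, X, M_in, rng)  # Q-measure inner loops
    return values
```

Outer paths are mutually independent, so the loop over `m` is the natural unit of parallelization mentioned in the text.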

### 4.2. Parameters and simulation specifications

Simulations presented subsequently depend on multiple parameters which are now provided.

#### 4.2.1. Simulation parameters and starting values

The numbers of outer and inner loops applied are, respectively, $M^{(out)}=1000$ and $M^{(in)}=1000$. For risk factor dynamics and cash flow specifications, all models and parameter estimates provided in Section 3 and Appendix C are considered. As in Augustyniak *et al.* (2021), fees are set to $\omega=0.0286$ for the Assumption fund and to $\omega=0.0206$ for the RBC fund.

Starting values for risk factors are also selected in accordance with Augustyniak *et al.* (2021) (see Scenario I in their Table 4), corresponding to filtered and estimated values on October 31, 2018. Initial term structure factor values are $x^{(1)}_0=-0.0690$, $x^{(2)}_0=-0.0062$ and $x^{(3)}_0=0.0940$. Initial equity index volatilities are characterized by $\sqrt{12 h^{(S)}_{0,1}}= 0.1412$ for the S&P/TSX index and $\sqrt{12 h^{(S)}_{0,2}}= 0.1785$ for the S&P 500. For the underlying fund, the initial annualized volatility is $\sqrt{12 h^{(F)}_{0}}= 0.0090$ for the RBC fund, or $\sqrt{12 h^{(F)}_{0}}= 0.0416$ for the Assumption fund. For the purpose of simulation, all fund values and the account value are initialized to one, that is, $A_0=F_0 = S^{(1)}_0=S^{(2)}_0=1$. The guaranteed amount of the policy is assumed to be $G_0=A_0$, that is, the policy is at-the-money at the onset. The initial mortality trend is set to $u_{0}=-64.73868$, which corresponds to the value of the trend *u* estimated for year 2016 based on the fit to the mortality dataset mentioned in Section 3.3. Lapse parameters are set to $\gamma_1 = 0.02$, $\gamma_2 = 0.10$, $\delta_1 = 0.4434$ and $\delta_2 = 1.7420$, which aims at replicating assumptions from p. 149 of AMF (2022).

Whenever equity risk hedging is applied, the maturity of equity futures that are rolled-over is $n^{(S,j)}=3$ , that is, three monthly periods, for $j=1,2$ . To estimate Greeks of the guarantee with respect to the underlying account value $A_t$ , finite difference forward steps of $\epsilon=0.01 A_t$ are considered.

As in Augustyniak *et al.* (2021), some modifications to the model are applied based on judgmental considerations. A floor of $-0.75\%$ is applied on simulated values of the risk-free rate $r_t$; highly negative short rate values are deemed undesirable. This entails using $r_t = \max(-0.0075, \sum_{i=1}^{p} x^{(i)}_t)$ in simulations. Nevertheless, in order to ease the computational burden, the impact of the floor is not considered when computing the price of derivatives in hedging portfolios. Another modification consists of flooring the conditional variance of the mutual fund $h^{(F)}_t$; the minimal value is set to $2.0833\times 10^{-6}$, which corresponds to an annualized volatility of 0.5%.

The various sources of risk considered are regrouped into term structure innovations ( $z^{(r)}_{t,i}$ , $i=1,2,3$ ), equity innovations ( $z^{(S)}_{t,j}$ , $j=1,2$ and $z^{(F)}_{t}$ ), and mortality ( $\varepsilon_t, \nu_t$ ) innovations:

Risk factors characterizing the guarantee value are $\textbf{X}_t = \left[x^{(1)}_t, x^{(2)}_t,x^{(3)}_t, h^{(S)}_{t,1},h^{(S)}_{t,2},h^{(F)}_{t},u_t,A_t, G_t \right]$ , which include term structure factors, equity volatilities, the mortality factor, the underlying account value and the current guaranteed amount.

### 4.3. Simulation results

The results of the numerical experiments are presented in this section. The risk decomposition methodology is applied on simulated paths for several GMMB policies, and the allocations to the various risk classes based on either the variance or the CVaR risk measure are extracted. The simulations aim mainly at testing how the allocation is affected by the choice of risk measure, the nature of the underlying fund, the use of hedging, the inclusion of ratchet provisions in the policy and the policy maturity.

#### 4.3.1. Risk in absence of hedging

The first set of simulations considers GMMB policies issued on either the RBC bond fund or the Assumption mixed fund in absence of hedging, that is, with null hedging positions $\phi_t\equiv \textbf{0}$ , $t=1,\ldots,T$ . Two maturities are considered, respectively, 10 years ( $T=120$ ) or 20 years ( $T=240$ ). Two versions of the 20-year policies are tested: one without ratchets and one with ratchets applied to the first 10 years in accordance with (3.1). Two risk measures are used for the decomposition approach: the variance and the CVaR $_{95\%}$ which represent overall and tail risk, respectively. For the total-period risk allocation, $\mathcal{A}_{\tilde{\Theta}_{tot}}$ , $\mathcal{A}_{{IR}_{tot}}$ , $\mathcal{A}_{{EQ}_{tot}}$ and $\mathcal{A}_{{MO}_{tot}}$ denote allocations corresponding to respective additive inverses of contributions, namely $-\tilde{\Theta}_{tot}$ , $-\widetilde{\mathcal{C}}^{(IR)}_{tot}$ , $-\widetilde{\mathcal{C}}^{(EQ)}_{tot}$ and $-\widetilde{\mathcal{C}}^{(MO)}_{tot}$ , with interest rate (IR), equity (EQ) and mortality (MO) risk groups corresponding to elements 1, 2 and 3 of the innovations vector (4.1). Such allocations are reported in Table 4.

Notes: Risk allocations $\mathcal{A}$ to time decay ( $-\tilde{\Theta}_{tot}$ ), interest rate risk ( $-\widetilde{\mathcal{C}}^{(IR)}_{tot}$ ), equity risk ( $-\widetilde{\mathcal{C}}^{(EQ)}_{tot}$ ) and mortality risk ( $-\widetilde{\mathcal{C}}^{(MO)}_{tot}$ ) applied on GMMB policies on the RBC bond fund or the Assumption mixed fund mentioned in Section 3.3. Three risk measures are considered for the allocation: the expected value, the variance and the $\text{CVaR}_{95\%}$ ; allocation principles for such measures are presented in Section 2.4. Two policy maturities are considered: 10 years and 20 years. For 20-year policies, results with and without the inclusion of ratchets in accordance with the specification (3.1) are presented.
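The variance and CVaR allocation principles referenced in the notes (Section 2.4, not reproduced in this excerpt) can be sketched with standard Euler-style estimators applied to simulated contributions; this is a generic illustration of those principles, not the paper's exact estimator:

```python
import numpy as np

def euler_allocations(contribs, alpha=0.95):
    """Euler-style allocation of variance and CVaR to risk sources.

    contribs is an (n_paths, n_sources) array of simulated loss
    contributions whose row sums give the total loss S. The variance of
    S is allocated as Cov(C_j, S) per source; CVaR_alpha is allocated
    as E[C_j | S >= VaR_alpha(S)]. Both allocations sum to the total
    risk measure by construction.
    """
    S = contribs.sum(axis=1)
    var_alloc = np.array([np.cov(contribs[:, j], S)[0, 1]
                          for j in range(contribs.shape[1])])
    var_level = np.quantile(S, alpha)          # VaR_alpha estimate
    tail = S >= var_level
    cvar_alloc = contribs[tail].mean(axis=0)   # tail conditional means
    return var_alloc, cvar_alloc
```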

For the Assumption mixed fund, it is clear that equity risk plays the most predominant role in terms of contribution to both the variance and tail risk. Interest rates and mortality are, respectively, second and third in terms of contribution size. These observations hold true for both 10-year and 20-year maturities, regardless of whether ratchet provisions are included in the policy. For the RBC bond fund, equity risk and interest rate risk surprisingly seem to have contributions to the variance of similar magnitude. Furthermore, tail risk is mainly driven by equity risk, with interest rate risk providing an offsetting effect reducing the CVaR. This implies that credit spread variations, which are negatively correlated with market returns, might be a more important driver of risk than interest rates for some bond funds. The equity risk contribution is therefore partly credit risk in disguise. Note that the contribution of the passage of time $\Theta$ always has an offsetting effect on risk. The reason underlying this phenomenon is that the passage of time reduces the likelihood of observing extreme payoff values, unlike shocks on the equity, interest rate and mortality risk factors, which may accentuate such probabilities.

It is also relevant to look at risk allocations period by period instead of considering the entire time horizon as a whole. For the simulation of the Assumption mixed fund 20-year GMMB without ratchet, Figure 1 provides the sample average (left panel) and sample standard deviation (right panel) across all simulated paths for all discounted group contributions $\tilde{\Theta}_t$ and $\widetilde{\mathcal{C}}^{(j)}_t$ on all steps $t=1,\ldots,T$. Interestingly, the sources of risk have a changing impact throughout the life of the contract. In early years, when multiple periods for the collection of fees remain, equity, interest rate and mortality risks have an upward effect on the gain and loss (see their positive average contribution) since they provide potential for appreciation of collected fees. Conversely, when nearing the contract maturity, very few fees are left to be collected and sources of risk mainly provide potential for benefits appreciation, which contributes to losses as evidenced by their negative average contribution. The time decay contribution $\tilde{\Theta}_t$ has an offsetting effect; for instance, if sources of risk can generate increases in fees or benefits, the time decay conversely reduces the likelihood of such appreciations. Such changing signs for expected values of contributions over time highlight that care must be applied when interpreting sums of discounted contributions across all time steps; it might make sense to instead consider their sum over some phases of the product life, each of which corresponds to a consistent average direction for the contributions of the sources of risk.

#### 4.3.2. Impact of hedging on risk

The simulation-based risk allocation procedure from the previous section is repeated for the Assumption mixed fund, but this time including a hedging portfolio which delta-hedges exposure to the two equity indices based on the methodology outlined in Section 3.4. Recall that equity risk is the most important contributor to total risk in the experiments without hedging, which justifies spending effort on mitigating this risk. Results are provided in Table 5. The left panel of the latter table is a repetition from Table 4 displaying the results without hedging, which can be compared to those with hedging shown in the right panel.

Notes: Risk allocations $\mathcal{A}$ to time decay ( $-\tilde{\Theta}_{tot}$ ), interest rate risk ( $-\widetilde{\mathcal{C}}^{(IR)}_{tot}$ ), equity risk ( $-\widetilde{\mathcal{C}}^{(EQ)}_{tot}$ ) and mortality risk ( $-\widetilde{\mathcal{C}}^{(MO)}_{tot}$ ) applied on GMMB policies on the Assumption mixed fund mentioned in Section 3.3, either in absence of (left panel) or in the presence of (right panel) a hedging portfolio. Three risk measures are considered for the allocation: the expected value, the variance and the $\text{CVaR}_{95\%}$ ; allocation principles for such measures are presented in Section 2.4. Two policy maturities are considered: 10 years and 20 years. For 20-year policies, results with and without the inclusion of ratchets in accordance with the specification (3.1) are presented.

The application of dynamic hedging reduces the risk in terms of total variance for all three products, that is, the 10-year, 20-year and 20-year with ratchets GMMBs, by 33.7%, 6.5% and 12%, respectively (see the total risk line for the total variance decomposition). Despite providing a non-negligible reduction in risk, the hedge is unable to mitigate a large portion of the variance. This can be explained by the presence of basis risk in the mixed fund model, which cannot be hedged away with equity futures. Indeed, as shown in Trottier *et al.* (2018), even a small amount of basis risk can lead to a major deterioration in hedging performance. Unreported tests performed by the authors showed that reducing the magnitude of basis risk significantly improves hedging performance. Furthermore, the tail risk measured by the CVaR is amplified for the two 20-year products when applying hedging. This is explained, among other reasons, by the fact that hedging significantly reduces the profitability of the contract for the insurer; for example, the expected profit goes from 0.078$ to $-$0.040$ per unit of account value when the hedging portfolio is added. This results from systematic short positions on the underlying equity futures, which entail shorting the equity risk premium over long periods of time and thus incurring very large hedging costs. Since the CVaR is translation invariant, the reduction in profitability reduces the buffer available to offset fluctuations due to equity risk, which can create higher overall risk. When looking at the magnitude of the equity risk allocation $\mathcal{A}_{EQ_{tot}}$ in the presence of the hedging portfolio, it is still higher than those of interest rate and mortality risks, but to a much smaller extent in relative terms than in the no-hedging case.

To further improve hedging performance, using equity options on the two indices instead of only futures could help hedge the Gamma of the contract and hence protect against large equity movements. However, this would not help mitigate basis risk, which is a major driver of the equity risk. Interest rate derivatives such as swaps could also be used, but since interest rate risk is less material than equity risk (even for the RBC bond fund, as explained above), we expect that this would produce only minor improvements in hedging performance.

## 5. Conclusion

This paper proposes a method to perform the allocation of risk associated with an insurance policy or a financial contract providing exposure to various sources of risk. The approach first applies a Shapley decomposition to represent the gain and loss in any period as a sum of contributions from the various sources of risk. Then, a risk allocation rule (e.g., the Euler allocation principle) is applied to the contributions to allocate the whole risk, as quantified by a risk measure, to each of the underlying sources. The method is very flexible and can be applied to general discrete-time dynamics of risk factors. Hedging is naturally included in the framework. A stochastic on stochastic algorithm for the implementation of the allocation method is provided. Simulation experiments are conducted on GMMB variable annuity policies with either fixed income or mixed underlying funds to analyze the relative impact of equity, interest rate and mortality risk on such contracts. Equity risk is shown to be the most material risk driver in the simulations, even when the underlying fund considered is a bond fund. This can be explained by movements of credit spreads on bonds, which are correlated with equity market fluctuations. The ability of dynamic hedging to reduce risk over the entire horizon of the contract is limited by the presence of substantial basis risk. Since dynamic hedging for variable annuities leads to a significant reduction in profitability due to hedging positions being systematically short the equity risk premium, partial hedging might be a preferable alternative to full delta hedging in practice, especially if basis risk cannot be fully diversified away across an entire portfolio of variable annuity contracts, as illustrated in Li *et al.* (2020).

Results from the present paper could be extended in various ways. Other types of variable annuity guarantees or insurance contracts could be considered. Variance reduction methods could be implemented in either inner or outer loops of the stochastic on stochastic algorithm to increase the precision of Monte-Carlo simulations. Furthermore, to speed up computations, online re-valuation of the guarantee through inner loops could be avoided by pre-training a neural network on various contract specifications and using it for generalization. Finally, dynamics of risk factors could be refined, for instance by applying idiosyncratic components to mortality and lapse risk that cannot be diversified due to limited portfolio size.

## Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/asb.2023.7.

## A. Probability of lapsing before dying within a period

Define forces of lapsing and mortality through

Under the constant force of lapsing and mortality assumption, the probability, given that the policy is active at *t*, of lapsing in $[t,t+1)$ without dying before lapsing is

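Although the display above is omitted in this excerpt, the standard competing-risks calculation under constant forces can be reproduced with hypothetical notation $\mu^{\ell}$ and $\mu^{d}$ for the (per-period) forces of lapsing and mortality; the paper's own symbols may differ:

```latex
\Pr\!\left[\text{lapse in } [t,t+1) \text{ before death} \,\middle|\, \text{active at } t\right]
  = \int_0^1 \mu^{\ell}\, e^{-(\mu^{\ell}+\mu^{d})s}\, \mathrm{d}s
  = \frac{\mu^{\ell}}{\mu^{\ell}+\mu^{d}}\left(1 - e^{-(\mu^{\ell}+\mu^{d})}\right)
```

The integrand is the density of lapsing at time *s* while neither decrement has yet occurred, with the joint survival factor $e^{-(\mu^{\ell}+\mu^{d})s}$.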
## B. Risk-neutral dynamics of financial variables

This appendix provides the risk-neutral dynamics considered for the risk factors whose physical dynamics are described in Section 3.2. The tilde used on parameters below represents risk-neutral parameters and innovations. Under the risk-neutral measure $\mathbb{Q}$, the term structure factors retain their autoregressive structure: for $i=1,2,3$,

where the risk premia $\lambda_i$, $i=1,2,3$, are constants. The risk-neutral equity index dynamics are

for constant equity risk premia $\lambda^{(S)}_j$, $j=1,2$. The underlying fund risk-neutral dynamics are

where

The risk-neutral term structure, equity and underlying fund innovation processes, respectively $\big\{ \left(\tilde{z}^{(r)}_{t,1},\tilde{z}^{(r)}_{t,2}, \tilde{z}^{(r)}_{t,3} \right) \big\}^T_{t=1}$, $\big\{ \left( \tilde{z}^{(S)}_{t,1},\tilde{z}^{(S)}_{t,2} \right) \big\}^T_{t=1}$ and $\{\tilde{z}^{(F)}_{t}\}^T_{t=1}$, are again independent under $\mathbb{Q}$. Moreover, under $\mathbb{Q}$, term structure time-*t* innovations $\tilde{z}_{t}=\left[\tilde{z}^{(r)}_{t,1},\tilde{z}^{(r)}_{t,2},\tilde{z}^{(r)}_{t,3}\right]$ and equity time-*t* innovations $\tilde{z}^{(S)}_{t}=\left[\tilde{z}^{(S)}_{t,1}, \tilde{z}^{(S)}_{t,2}\right]$ are still multivariate standard normal with respective correlation matrices $\Gamma$ and $\rho$. Furthermore, time-independence is preserved: $\tilde{z}^{(r)}_{t_1,\cdot}$ is independent of $\tilde{z}^{(r)}_{t_2,\cdot}$ when $t_1 \neq t_2$, and $\tilde{z}^{(S)}_{t_1,\cdot}$ is independent of $\tilde{z}^{(S)}_{t_2,\cdot}$.

## C. Calibrated parameters for a monthly Lee–Carter model

In the yearly Lee–Carter model, realized yearly survival rates are given by

where *t* and *x* are denominated in years, and $u^{yr}_{t+1} = c^{yr} + u^{yr}_t + \nu^{yr}_{t+1}$ with $\varepsilon^{yr}$ and $\nu^{yr}$ being two independent i.i.d. Gaussian processes, the latter having constant variance $\sigma^2_{\nu^{yr}}$. The process $\varepsilon^{yr}$ has a standard deviation which is assumed to be characterized by the quadratic function of age $\sigma_{\varepsilon^{yr}}$ given by (3.10). The calibration on yearly data directly provides the sequences $\{a_y\}$ and $\{b_y\}$, which are also used in the monthly model. Furthermore, the other parameters of the monthly model can be obtained from the calibrated yearly model parameters through

The calibration methodology to obtain estimates of the functions *a* and *b* and of the realized process $u^{yr}$ is the standard one proposed in Lee and Carter (1992), implemented with the lca function of the R software. Parameter estimates $\hat{c}^{yr}$ and $\hat{\sigma}^2_{\nu^{yr}}$ are then set as the sample estimates of the mean and variance of the first difference of the realized process $u^{yr}$ estimate. The resulting parameter estimates are $\hat{c}^{yr} = -1.8325$ and $\hat{\sigma}^2_{\nu^{yr}} = 2.1957$. The parameters of the quadratic function $\sigma_{\varepsilon^{yr}}$ for the innovations standard deviation are estimated based on ordinary least squares (OLS). For each fixed age, the sample standard deviation (across all years in the data) of the residuals $\hat{\varepsilon}^{yr}$ from the Lee–Carter model is computed; the quadratic regression of these sample standard deviations on age (in years), presented in Figure 2, yields