
Optimal risk sharing for lambda value-at-risk

Published online by Cambridge University Press:  29 July 2024

Zichao Xia*
Affiliation:
University of Science and Technology of China
Taizhong Hu*
Affiliation:
University of Science and Technology of China
*Postal address: International Institute of Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China.

Abstract

A new risk measure, the Lambda Value-at-Risk (Lambda VaR), was proposed in the literature from a theoretical point of view as a generalization of the ordinary VaR. Motivated by recent developments in risk sharing problems for the VaR and other risk measures, we study the optimization of risk sharing for the Lambda VaR. Explicit formulas for the inf-convolution and sum-optimal allocations are obtained with respect to the left Lambda VaRs, the right Lambda VaRs, or a mixed collection of the left and right Lambda VaRs. The inf-convolution of Lambda VaRs constrained to comonotonic allocations is investigated. An explicit formula for worst-case Lambda VaRs under model uncertainty induced by likelihood ratios is also given.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $ (\mathrm{\Omega },\mathcal{F},\mathbb{P})$ be an atomless probability space, and let $ {L}^{0}$ be the set of all random variables defined on $ (\mathrm{\Omega },\mathcal{F},\mathbb{P})$ . Let $ \mathcal{X}$ be a convex cone of random variables in $ {L}^{0}$ , and let $ {L}^{k}$ be the set of all random variables with finite $ k$ th moments, where $ k \gt 0$ . For any $ X\in {L}^{0}$ , a positive (negative) value of $ X$ represents a financial loss (profit). A risk measure is a functional $ \rho \,:\,\mathcal{X}\to ({-}\infty ,+\infty ]$ ; see [Reference Artzner, Delbaen, Eber and Heath3, Reference Föllmer and Schied14]. In a risk sharing problem, there are $ m$ agents equipped with respective risk measures $ {\rho }_{1},\dots ,{\rho }_{m}$ . Let $ X\in \mathcal{X}$ denote the total risk shared by the $ m$ agents. $ X$ is split into an allocation $ ({X}_{1},\dots ,{X}_{m})\in {\mathbb{A}}_{m}(X)$ among the $ m$ agents, where $ {\mathbb{A}}_{m}(X)$ is the set of all possible allocations of $ X$ , defined as

\begin{align*} {\mathbb{A}}_{m}(X)=\left\{({X}_{1},\dots ,{X}_{m})\in {\mathcal{X}}^{m}\,:\,\sum _{j=1}^{m} {X}_{j}=X\right\}.\end{align*}

The inf-convolution of risk measures $ {\rho }_{1},\dots ,{\rho }_{m}$ is the mapping $ {\square}_{i=1}^{m}{\rho }_{i}\,:\,\mathcal{X}\to ({-}\infty ,+\infty ]$ , defined as

\begin{align*} \mathop{\square}\limits_{i=1}^{m}{\rho }_{i}(X)=\inf\left\{\sum _{i=1}^{m} {\rho }_{i}\!\left({X}_{i}\right)\,:\,({X}_{1},\dots ,{X}_{m})\in {\mathbb{A}}_{m}(X)\right\},\quad X\in \mathcal{X}.\end{align*}

An $ m$ -tuple $ ({X}_{1},\dots ,{X}_{m})\in {\mathbb{A}}_{m}(X)$ is an optimal allocation (also termed sum-optimal) of $ X$ for $ ({\rho }_{1},\dots ,{\rho }_{m})$ if $ {\square}_{i=1}^{m}{\rho }_{i}(X)={\sum }_{i=1}^{m} {\rho }_{i}\!\left({X}_{i}\right)$ . A sequence of allocations $ ({X}_{1n},\dots ,{X}_{mn})\in {\mathbb{A}}_{m}(X)$ , $ n\in \mathbb{N}$ , is asymptotically optimal if $ {\sum }_{i=1}^{m} {\rho }_{i}\!\left({X}_{in}\right)\to {\square}_{i=1}^{m}{\rho }_{i}(X)$ as $ n\to \infty $ . An allocation $ ({X}_{1},\dots ,{X}_{m})\in {\mathbb{A}}_{m}(X)$ is Pareto-optimal if for any $ ({Y}_{1},\dots ,{Y}_{m})\in {\mathbb{A}}_{m}(X)$ , $ {\rho }_{i}\!\left({Y}_{i}\right)\le {\rho }_{i}\!\left({X}_{i}\right)$ for all $ i\in [m]$ implies $ {\rho }_{i}\!\left({Y}_{i}\right)={\rho }_{i}\!\left({X}_{i}\right)$ for all $ i\in [m]$ , where $ [m]=\{1,\dots ,m\}$ . It is shown in Proposition 1 of [Reference Embrechts, Liu and Wang12] that sum-optimality is equivalent to Pareto-optimality for monetary risk measures. For non-monetary risk measures, it is easy to see that sum-optimality implies Pareto-optimality.

Liu et al. [Reference Liu, Wang and Wei23] investigated conditions under which the inf-convolution possesses the property of law invariance. For more on inf-convolution for the case of convex risk measures, see [Reference Acciaio1], [Reference Barrieu and El Karoui4], [Reference Filipović and Svindland13], [Reference Jouini, Schachermayer and Touzi19], and [Reference Tsanakas26], among others.

Embrechts et al. [Reference Embrechts, Liu and Wang12], Liu et al. [Reference Liu, Mao, Wang and Wei21], and Wang and Wei [Reference Wang and Wei27] studied the optimization of risk sharing for non-convex risk measures, for example, Value-at-Risk (VaR) and Range-Value-at-Risk (RVaR). Explicit formulas of the inf-convolution and Pareto-optimal allocations were obtained with respect to the left VaRs, the right VaRs, or a mixed collection of the left and right VaRs for $ m\ge 2$ . Formal definitions of the left and right VaRs are given in Subsection 2.1. More precisely, for $ m=2$ , Embrechts et al. [Reference Embrechts, Liu and Wang12] proved that

(1.1) \begin{align} \,\mathrm{VaR}_{{\lambda }_{1}}^{-}\square\,\mathrm{VaR}_{{\lambda }_{2}}^{-}(X)=\,\mathrm{VaR}_{\lambda }^{-}(X),\quad X\in {L}^{0},\end{align}

for $ {\lambda }_{1},{\lambda }_{2}\in [\mathrm{0,1}]$ such that $ \lambda ={\lambda }_{1}+{\lambda }_{2}-1 \gt 0$ . Liu et al. [Reference Liu, Mao, Wang and Wei21] considered the case of a mixed collection of the left and right VaRs, and proved that

(1.2) \begin{align} \,\mathrm{VaR}_{{\lambda }_{1}}^{+}\square\,\mathrm{VaR}_{{\lambda }_{2}}^{+}(X)=\,\mathrm{VaR}_{\lambda }^{+}(X),\quad X\in {L}^{0},\end{align}

for $ {\lambda }_{1},{\lambda }_{2}\in [\mathrm{0,1})$ such that $ \lambda ={\lambda }_{1}+{\lambda }_{2}-1\ge 0$ , and that

(1.3) \begin{align} \,\mathrm{VaR}_{{\lambda }_{1}}^{-}\square\,\mathrm{VaR}_{{\lambda }_{2}}^{+}(X)=\,\mathrm{VaR}_{\lambda }^{+}(X),\quad X\in {L}^{0},\end{align}

for $ {\lambda }_{1}\in [\mathrm{0,1}],{\lambda }_{2}\in [\mathrm{0,1})$ such that $ \lambda ={\lambda }_{1}+{\lambda }_{2}-1\ge 0$ . More recently, Lauzier et al. [Reference Lauzier, Lin and Wang20] investigated the problem of sharing risk among agents with preferences modeled by a general class of comonotonic additive and law-based distortion riskmetrics that need not be either monotone or convex, and explicitly solved for Pareto-optimal allocations among agents using the Gini deviation, the mean-median deviation, or the inter-quantile difference as the relevant variability measures.

The Lambda Value-at-Risk (Lambda VaR) was proposed by Frittelli et al. [Reference Frittelli, Maggis and Peri15] as a generalization of the usual VaR. The formal definitions of the left and right Lambda VaRs are given in Section 2 (Definition 1). The Lambda VaRs are not monetary risk measures, as can be seen from Proposition 2. One naturally wonders whether an explicit formula also holds for the inf-convolution of Lambda VaR agents. In this paper, we generalize the formulas (1.1)–(1.3) in several directions within the context of the Lambda VaRs.

The novelty of the Lambda VaR lies in the function $ \Lambda $ , called the ‘probability loss function’, which can adjust according to the profits and losses of a risk variable. The Lambda VaR can thus discriminate between risk variables that share the same VaR at level $ \lambda $ but have different tail behavior. The function $ \Lambda $ may be either increasing or decreasing in [Reference Frittelli, Maggis and Peri15]. Burzoni et al. [Reference Burzoni, Peri and Ruffo6] focused on the conditions under which the Lambda VaR is robust, elicitable, and consistent in the sense of [Reference Davis9]. Hitaj et al. [Reference Hitaj, Mateus and Peri17] applied the Lambda VaR in financial risk management as an alternative to VaR for assessing capital requirements, and their findings show that Lambda VaR estimates are able to capture the tail risk and react to market fluctuations significantly faster than the VaR and expected shortfall. Corbetta and Peri [Reference Corbetta and Peri7] proposed three backtesting methodologies and assessed the accuracy of the Lambda VaR from different points of view. Ince et al. [Reference Ince, Peri and Pesenti18] presented a novel treatment of the Lambda VaR on subsets of $ {\mathbb{R}}^{n}$ , and derived risk contributions of individual assets to the overall portfolio risk, measured via the Lambda VaR of the portfolio composition.

The rest of this paper is organized as follows. In Section 2, we provide the formal definitions of the Lambda VaR, collect some basic properties of the Lambda VaR, and derive explicit formulas for worst-case Lambda VaRs under model uncertainty induced by likelihood ratios. In Section 3, we introduce the inf-convolution of decreasing functions and study its detailed properties. These properties will be used in Sections 4 and 5. In Section 4, we obtain explicit formulas of the inf-convolution with respect to the left Lambda VaRs, the right Lambda VaRs, and a mixed collection of the left and right Lambda VaRs. Section 5 focuses on the construction of optimal allocations and asymptotically optimal allocations of the inf-convolution of several Lambda VaRs. In Section 6, we consider the inf-convolution of Lambda VaRs constrained to comonotonic allocations. Section 7 contains some concluding remarks. The proofs of some lemmas and propositions appearing in the previous sections are relegated to Appendices A–D.

2. Properties of Lambda VaRs

2.1. Definitions of Lambda VaRs

Let $ X\in {L}^{0}$ with distribution function $ {F}_{X}$ . The (ordinary) left-VaR of $ X$ at confidence level $ \alpha \in [\mathrm{0,1}]$ is defined as

\begin{align*} \,\mathrm{VaR}_{\alpha }^{-}(X)={F}_{X}^{-1}\!\left(\alpha \right)=\inf\{x\in \mathbb{R}\,:\,{F}_{X}(x)\ge \alpha \}=\sup\{x\in \mathbb{R}\,:\,{F}_{X}(x) \lt \alpha \},\end{align*}

and the (ordinary) right-VaR of $ X$ at confidence level $ \alpha \in [\mathrm{0,1}]$ is defined as

\begin{align*} \,\mathrm{VaR}_{\alpha }^{+}(X)=\inf\{x\in \mathbb{R}\,:\,{F}_{X}(x) \gt \alpha \}=\sup\{x\in \mathbb{R}\,:\,{F}_{X}(x)\le \alpha \}.\end{align*}

Here and henceforth, we use the convention that $ \mathrm{inf}\,\mathrm{\varnothing }=+\mathrm{\infty }$ and $ \mathrm{sup}\,\mathrm{\varnothing }=-\mathrm{\infty }$ . For the role of the left quantile ( $\mathrm{VaR}_{\alpha }^{-}$ ) and the right quantile ( $\mathrm{VaR}_{\alpha }^{+}$ ) with $ \alpha \in \left(\mathrm{0,1}\right]$ as risk measures, see the discussion in [Reference Acerbi and Tasche2] and [Reference Liu, Mao, Wang and Wei21, Remark 5].
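
As a concrete numerical illustration (our own, not part of the original paper), the following Python sketch computes the empirical left and right VaRs of a sample; the helper names var_left and var_right are hypothetical, and alpha is assumed to lie in $ (0,1)$ .

\begin{verbatim}
import numpy as np

def var_left(sample, alpha):
    # left VaR (lower quantile): inf{x : F(x) >= alpha} for the
    # empirical distribution F of the sample, alpha in (0,1)
    xs = np.sort(sample)
    return xs[int(np.ceil(alpha * len(xs))) - 1]

def var_right(sample, alpha):
    # right VaR (upper quantile): inf{x : F(x) > alpha}
    xs = np.sort(sample)
    return xs[min(int(np.floor(alpha * len(xs))), len(xs) - 1)]

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
print(var_left(x, 0.99), var_right(x, 0.99))  # both near 2.326
\end{verbatim}

The two quantiles differ only at levels $ \alpha $ where $ {F}_{X}$ has a flat part, which happens for at most countably many $ \alpha $ .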

Next, we recall the definition of Lambda VaRs from Bellini and Peri [Reference Bellini and Peri5], which are generalizations of ordinary VaRs.

Definition 1. Let $ X\in {L}^{0}$ with distribution function $ {F}_{X}$ , and let $ \Lambda \,:\,\mathbb{R}\to [\mathrm{0,1}]$ . The Lambda VaRs of $ X$ or $ {F}_{X}$ are defined as follows:

\begin{align*} \,\mathrm{VaR}_{\Lambda }^{-}(X)=\inf\{x\in \mathbb{R}\,:\,{F}_{X}(x)\ge \Lambda (x)\},\end{align*}
\begin{align*} \,\mathrm{VaR}_{\Lambda }^{+}(X)=\inf\{x\in \mathbb{R}\,:\,{F}_{X}(x) \gt \Lambda (x)\},\end{align*}

and

\begin{align*} \widetilde{{\mathrm{VaR}}}_{\Lambda }^{-}(X)=\sup\{x\in \mathbb{R}\,:\,{F}_{X}(x) \lt \Lambda (x)\},\end{align*}
\begin{align*} \widetilde{{\mathrm{VaR}}}_{\Lambda }^{+}(X)=\sup\{x\in \mathbb{R}\,:\,{F}_{X}(x)\le \Lambda (x)\}.\end{align*}

$\mathrm{VaR}_{\Lambda }^{\kappa }(X)$ and $ \widetilde{{\mathrm{VaR}}}_{\Lambda }^{\kappa }(X)$ are also denoted by $\mathrm{VaR}_{\Lambda }^{\kappa }\!\left({F}_{X}\right)$ and $ \widetilde{{\mathrm{VaR}}}_{\Lambda }^{\kappa }\!\left({F}_{X}\right)$ , where $ \kappa \in \{-,+\}$ .

It is known from [Reference Bellini and Peri5] that $ \widetilde{{\mathrm{VaR}}}_{\Lambda }^{-}(X)=\,\mathrm{VaR}_{\Lambda }^{-}(X)$ and $ \widetilde{{\mathrm{VaR}}}_{\Lambda }^{+}(X)=\,\mathrm{VaR}_{\Lambda }^{+}(X)$ for $ X\in {L}^{0}$ when $ \Lambda $ is decreasing. In this paper, “increasing” and “decreasing” are used in the weak sense. Thus, Lambda VaRs reduce from four to two. In the sequel, we only consider the left and the right Lambda VaRs, $\mathrm{VaR}_{\Lambda }^{-}$ and $\mathrm{VaR}_{\Lambda }^{+}$ .

Instead of a constant confidence level $ \lambda $ in the definition of $\mathrm{VaR}_{\lambda }$ , the function $ \Lambda $ adds flexibility in modeling the tail behavior of risks. When $ \Lambda $ is decreasing, the properties of Lambda VaRs closely resemble those of the usual VaRs. The financial interpretation of a decreasing $ \Lambda $ is well illustrated by a simple two-level Lambda VaR [Reference Bellini and Peri5, Example 2.7].
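
To make Definition 1 concrete, here is a small Python sketch (our own illustration) that approximates $\mathrm{VaR}_{\Lambda }^{-}$ and $\mathrm{VaR}_{\Lambda }^{+}$ on a grid for an empirical sample; the two-level $ \Lambda $ below is a hypothetical example in the spirit of the two-level Lambda VaR mentioned above.

\begin{verbatim}
import numpy as np

def lambda_var(sample, Lam, grid, right=False):
    # grid approximation of VaR_Lam^-(X) = inf{x : F(x) >= Lam(x)} and of
    # VaR_Lam^+(X) = inf{x : F(x) > Lam(x)} for the empirical F of the sample
    xs = np.sort(sample)
    F = np.searchsorted(xs, grid, side="right") / len(xs)   # F on the grid
    hit = (F > Lam(grid)) if right else (F >= Lam(grid))
    return grid[np.argmax(hit)] if hit.any() else np.inf    # inf(empty) = +inf

# hypothetical two-level probability loss function: a stricter level (0.99)
# below x = 1 and a looser one (0.95) above; decreasing, as assumed below
Lam = lambda t: np.where(t < 1.0, 0.99, 0.95)
rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
g = np.linspace(-5.0, 5.0, 10_001)
print(lambda_var(x, Lam, g), lambda_var(x, Lam, g, right=True))
# both near 1.645: F crosses Lam only after the level drops to 0.95
\end{verbatim}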

2.2. Basic properties of Lambda VaRs

We collect some basic properties of Lambda VaRs from [Reference Bellini and Peri5]. Throughout, let $ \Lambda \,:\,\mathbb{R}\to [\mathrm{0,1}]$ be decreasing to avoid pathological cases, and let $ {\mathcal{M}}_{1}$ denote the set of probability measures on $ (\mathbb{R},\mathcal{B}(\mathbb{R}))$ . Then

  1. (B1) $\mathrm{VaR}_{\Lambda }^{-}$ and $\mathrm{VaR}_{\Lambda }^{+}$ are finite if and only if $ \Lambda \not\equiv 0$ and $ \Lambda \not\equiv 1$ .

  2. (B2) If $ {\Lambda }_{1}(x)={\Lambda }_{2}(x)$ at their common points of continuity, or $ {\Lambda }_{1}(x)={\Lambda }_{2}(x)$ almost everywhere with respect to the Lebesgue measure, then $\mathrm{VaR}_{{\Lambda }_{1}}^{\kappa }=\,\mathrm{VaR}_{{\Lambda }_{2}}^{\kappa }$ on $ {L}^{0}$ for $ \kappa \in \{-,+\}$ .

  3. (B3) For $ \kappa \in \{-,+\}$ , $\mathrm{VaR}_{\Lambda }^{\kappa }$ is quasi-concave on $ {\mathcal{M}}_{1}$ , that is,

    \begin{align*} \,\mathrm{VaR}_{\Lambda }^{\kappa }\big(\alpha {F}_{1}+(1-\alpha ){F}_{2}\big)\ge \min\big\{\mathrm{VaR}_{\Lambda }^{\kappa }({F}_{1}),\,\mathrm{VaR}_{\Lambda }^{\kappa }({F}_{2})\big\}\end{align*}
    for any $ {F}_{1},{F}_{2}\in {\mathcal{M}}_{1}$ and $ 0 \lt \alpha \lt 1$ .
  4. (B4) For $ \kappa \in \{-,+\}$ , $\mathrm{VaR}_{\Lambda }^{\kappa }\!\left(F\right)$ has the “convex level set” (CxLS) property. A risk measure $ \rho \,:\,{\mathcal{M}}_{1}\to \mathbb{R}$ is said to have the CxLS property if for any $ {F}_{1},{F}_{2}\in {\mathcal{M}}_{1}$ , $ \alpha \in \left(\mathrm{0,1}\right)$ and $ \gamma \in \mathbb{R}$ , it holds that

    \begin{align*} \rho \!\left({F}_{1}\right)=\rho \!\left({F}_{2}\right)=\gamma \Rightarrow \rho \big(\alpha {F}_{1}+(1-\alpha ){F}_{2}\big)=\gamma .\end{align*}
  5. (B5) $\mathrm{VaR}_{\Lambda }^{-}$ is weakly lower semi-continuous, i.e. if $ {F}_{n}\stackrel{d}{\to }F$ for $ {F}_{n},F\in {\mathcal{M}}_{1}$ , then

    \begin{align*} \underset{n\to \mathrm{\infty }}{\mathrm{lim}\,\mathrm{inf}}\,\mathrm{VaR}_{\Lambda }^{-}\!\left({F}_{n}\right)\ge \,\mathrm{VaR}_{\Lambda }^{-}\!\left(F\right).\end{align*}

    $\mathrm{VaR}_{\Lambda }^{+}$ is weakly upper semi-continuous, i.e. if $ {F}_{n}\stackrel{d}{\to }F$ for $ {F}_{n},F\in {\mathcal{M}}_{1}$ , then

    \begin{align*} \underset{n\to \mathrm{\infty }}{\mathrm{lim\,sup}}\,\mathrm{VaR}_{\Lambda }^{+}\!\left({F}_{n}\right)\le \,\mathrm{VaR}_{\Lambda }^{+}\!\left(F\right).\end{align*}

Some further properties of the Lambda VaRs are presented in the following propositions, whose proofs are postponed to Appendix A.

Proposition 1. For any $ X\in {L}^{0}$ and $ \kappa \in \{-,+\}$ , we have

(2.1) \begin{align} \,\mathrm{VaR}_{\Lambda }^{\kappa }\!\left(\lambda X\right) & \ge \lambda \,\mathrm{VaR}_{\Lambda }^{\kappa }(X), \quad 0 \lt \lambda \lt 1;\nonumber\\[4pt] \mathrm{VaR}_{\Lambda }^{\kappa }\!\left(\lambda X\right) & \le \lambda \,\mathrm{VaR}_{\Lambda }^{\kappa }(X),\quad \lambda \gt 1.\end{align}

Consequently, $\mathrm{VaR}_{\Lambda }^{\kappa }\!\left(\lambda X\right)/\lambda $ is decreasing in $ \lambda \in (0,\mathrm{\infty })$ for any fixed $ X\in {L}^{0}$ .

Han et al. [Reference Han, Wang, Wang and Xia16] in their Remark 3.1 showed that Lambda VaRs are not star-shaped but quasi-star-shaped. Proposition 1 states that the Lambda VaRs possess a “reverse star-shape” property.

Proposition 2. $ \mathrm{VaR}_{\Lambda }^{-}$ or $ \mathrm{VaR}_{\Lambda }^{+}$ is translation invariant on $ {L}^{0}$ if and only if $ \Lambda $ is a constant.

Proposition 3. Let $ \kappa \in \{-,+\}$ . If $ \mathrm{VaR}_{\Lambda }^{\kappa }$ is positively homogeneous, i.e. $ \mathrm{VaR}_{\Lambda }^{\kappa }\!\left(\lambda X\right)=\lambda \mathrm{VaR}_{\Lambda }^{\kappa }(X)$ for all $ X\in {L}^{0}$ and $ \lambda \in (0,\infty )$ , then $ \Lambda $ is constant on each of the intervals $ (0,\infty )$ and $ ({-}\infty ,0)$ , that is, there exist $ 1\ge {\alpha }_{1}\ge {\alpha }_{2}\ge {\alpha }_{3}\ge 0$ such that

(2.2) \begin{align} \Lambda (x)={\alpha }_{1}{1}_{({-}\mathrm{\infty },0)}(x)+{\alpha }_{2}{1}_{\left\{0\right\}}(x)+{\alpha }_{3}{1}_{(0,\mathrm{\infty })}(x).\end{align}

Next, we give three lemmas concerning properties of Lambda VaRs, which will be used in this paper. The first one will be used repeatedly in this paper. The second and the third ones give alternative representations of the Lambda VaRs in terms of the usual VaRs. Here and in the sequel, $ \overline{\Lambda }=1-\Lambda $ , and $ \Lambda (x{-})$ and $ \Lambda (x{+})$ denote the left and right limits of function $ \Lambda $ at point $ x$ , respectively.

Lemma 1. For $ X\in {L}^{0}$ and $ x\in \mathbb{R}$ , we have

(2.3) \begin{align} \mathbb{P}(X \gt x)\le \overline{\Lambda }(x{+})\iff \,\mathrm{VaR}_{\Lambda }^{-}(X)\le x,\end{align}
(2.4) \begin{align} \mathbb{P}(X \gt x) \lt \overline{\Lambda }(x{+})\Rightarrow \,\mathrm{VaR}_{\Lambda }^{+}(X)\le x,\quad\ \end{align}
(2.5) \begin{align} \mathbb{P}(X\ge x) \gt \overline{\Lambda }(x{+})\Rightarrow \,\mathrm{VaR}_{\Lambda }^{-}(X)\ge x,\quad\ \end{align}
(2.6) \begin{align} \mathbb{P}(X\ge x)\ge \overline{\Lambda }(x{-})\iff \,\mathrm{VaR}_{\Lambda }^{+}(X)\ge x.\end{align}

Lemma 2. [Reference Han, Wang, Wang and Xia16, Proposition 3.1] If $ \Lambda $ is not identically $ 0$ , that is, $ \Lambda ({-}\infty ) \gt 0$ , then

\begin{align*} \,\mathrm{VaR}_{\Lambda }^{-}(X)=\underset{y\in \mathbb{R}}{\mathrm{i}\mathrm{n}\mathrm{f}}\big\{\mathrm{VaR}_{\Lambda (y)}^{-}(X)\vee y\big\},\quad X\in {L}^{0}.\end{align*}

Lemma 3. If $ \Lambda $ is not identically $ 0$ , that is, $ \Lambda ({-}\infty ) \gt 0$ , then

(2.7) \begin{align} \mathrm{VaR}_{\Lambda }^{+}(X)=\underset{y\in \mathbb{R}}{\mathrm{i}\mathrm{n}\mathrm{f}}\big\{\mathrm{VaR}_{\Lambda (y)}^{+}(X)\vee y\big\},\quad X\in {L}^{0}.\end{align}
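
Lemmas 2 and 3 reduce a Lambda VaR to an infimum of capped ordinary VaRs, which is easy to test numerically. A minimal self-contained check (our own illustration; all names are hypothetical):

\begin{verbatim}
import numpy as np

# numerical check of Lemma 2 for a two-level Lambda and X ~ N(0,1):
#   VaR_Lam^-(X) = inf_y { VaR_{Lam(y)}^-(X) v y }
rng = np.random.default_rng(1)
xs = np.sort(rng.normal(size=200_000))
q = lambda a: xs[int(np.ceil(a * len(xs))) - 1]   # empirical left quantile
Lam = lambda t: 0.99 if t < 1.0 else 0.95         # decreasing two-level Lambda

rhs = min(max(q(Lam(y)), y) for y in np.linspace(-5.0, 5.0, 2_001))

g = np.linspace(-5.0, 5.0, 10_001)
F = np.searchsorted(xs, g, side="right") / len(xs)
lhs = g[np.argmax(F >= np.where(g < 1.0, 0.99, 0.95))]   # direct definition
print(lhs, rhs)   # both near 1.645, the ordinary 0.95-quantile
\end{verbatim}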

2.3. Worst-case Lambda VaR under model uncertainty

Let $ \mathcal{P}$ be the set of all probability measures that are absolutely continuous with respect to $ \mathbb{P}$ , where $ \mathbb{P}$ is a common benchmark for all agents. For any $ Q\in \mathcal{P}$ , let $\mathrm{VaR}_{\Lambda }^{-,Q}$ and $\mathrm{VaR}_{\Lambda }^{+,Q}$ be the $\mathrm{VaR}_{\Lambda }^{-}$ and $\mathrm{VaR}_{\Lambda }^{+}$ evaluated under the probability measure $ Q$ instead of $ \mathbb{P}$ . We consider the worst-case Lambda VaR risk measures

\begin{align*} {\overline{\mathrm{VaR}}}_{\Lambda }^{-,\mathcal{Q}}=\sup_{Q\in \mathcal{Q}}\,\mathrm{VaR}_{\Lambda }^{-,Q}\quad\text{and}\quad{\overline{\mathrm{VaR}}}_{\Lambda }^{+,\mathcal{Q}}=\sup_{Q\in \mathcal{Q}}\,\mathrm{VaR}_{\Lambda }^{+,Q},\end{align*}

where $ \mathcal{Q}$ is a subset of $ \mathcal{P}$ describing model uncertainty. We call $ \mathcal{Q}$ an uncertainty set of probability measures. A particular choice of $ \mathcal{Q}$ is induced by likelihood ratios: the set of probability measures whose Radon–Nikodym derivatives with respect to $ \mathbb{P}$ do not exceed a constant, i.e.

\begin{align*} {\mathcal{P}}_{\beta }=\left\{Q\in \mathcal{P}\,:\,\frac{\mathrm{}\mathrm{d}Q}{\mathrm{}\mathrm{d}\mathbb{P}}\le \frac{1}{\beta }\right\}\,\mathrm{for}\,\beta \in \left(\mathrm{0,1}\right].\end{align*}

Liu et al. [Reference Liu, Mao, Wang and Wei21] considered the special cases $\mathrm{VaR}_{\lambda }^{-}$ and $\mathrm{VaR}_{\lambda }^{+}$ with $ \Lambda \equiv \lambda \in \left(\mathrm{0,1}\right)$ under uncertainty set $ {\mathcal{P}}_{\beta }$ , and obtained that

\begin{align*} {\overline{\mathrm{VaR}}}_{\lambda }^{-,{\mathcal{P}}_{\beta }}=\,\mathrm{VaR}_{1-(1-\lambda )\beta }^{-},\quad {\overline{\mathrm{VaR}}}_{\lambda }^{+,{\mathcal{P}}_{\beta }}=\,\mathrm{VaR}_{1-(1-\lambda )\beta }^{+}.\end{align*}

Proposition 4. Let $ \Lambda \,:\,\mathbb{R}\to [\mathrm{0,1}]$ be decreasing. For $ \beta \in \left(\mathrm{0,1}\right]$ , define $ {\Lambda }_{\beta }=1-\beta \overline{\Lambda }$ . Then

(2.8) \begin{align} {\overline{\mathrm{VaR}}}_{\Lambda }^{+,{\mathcal{P}}_{\beta }}(X)=\,\mathrm{VaR}_{{\Lambda }_{\beta }}^{+}(X),\quad X\in {L}^{0}.\end{align}

Furthermore, if $ \Lambda \gt 0$ , then

(2.9) \begin{align} {\overline{\mathrm{VaR}}}_{\Lambda }^{-,{\mathcal{P}}_{\beta }}(X)=\,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X),\quad X\in {L}^{0}.\end{align}

Proof. We give the proof for the left Lambda VaR since the proof for the right Lambda VaR is similar. First, note that for any given $ X\in {L}^{0}$ and $ Q\in {\mathcal{P}}_{\beta }$ , we have $ Q(X \gt x)\le \mathbb{P}(X \gt x)/\beta $ for any $ x\in \mathbb{R}$ and, hence,

\begin{align*} \,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X) & =\inf\{x\,:\,\mathbb{P}(X \gt x)\le \overline{{\Lambda }_{\beta }}(x)\}=\inf\!\left\{x\,:\,\frac{1}{\beta }\mathbb{P}(X \gt x)\le \overline{\Lambda }(x)\right\}\\ & \ge \inf\{x\,:\,Q(X \gt x)\le \overline{\Lambda }(x)\}=\,\mathrm{VaR}_{\Lambda }^{-,Q}(X).\end{align*}

Thus,

(2.10) \begin{align} {\overline{\mathrm{VaR}}}_{\Lambda }^{-,{\mathcal{P}}_{\beta }}(X)\le \,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X),\quad X\in {L}^{0}.\end{align}

To prove the reverse inequality of (2.10), we choose a special $ {Q}_{0}\in \mathcal{P}_{\beta }$ such that $ \mathrm{d}{Q}_{0}/\mathrm{d}\mathbb{P}=(1/\beta ){1}_{\{{U}_{X} \gt 1-\beta \}}$ , where $ {U}_{X}\sim U\!\left(\mathrm{0,1}\right)$ is such that $ X={F}_{X}^{-1}\!\left({U}_{X}\right)$ , a.s. Then

\begin{align*} \,\mathrm{VaR}_{\Lambda }^{-,{Q}_{0}}(X) & =\inf\{x\,:\,{Q}_{0}(X\le x)\ge \Lambda (x)\}\\ & =\inf\{x\,:\,\mathbb{P}(X\le x,{U}_{X} \gt 1-\beta )\ge \beta \Lambda (x)\}\\ & =\inf\{x\,:\,\mathbb{P}(1-\beta \lt {U}_{X}\le {F}_{X}(x))\ge \beta \Lambda (x)\}\\ & =\inf\{x\,:\,\max\{{F}_{X}(x)-1+\beta ,0\}\ge \beta \Lambda (x)\}.\end{align*}

Since $ \Lambda \gt 0$ , it follows that

(2.11) \begin{align} \,\mathrm{VaR}_{\Lambda }^{-,{Q}_{0}}(X)=\inf\{x\,:\,{F}_{X}(x)\ge 1-\beta \overline{\Lambda }(x)\}=\,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X).\end{align}

Therefore, $ {\overline{\mathrm{VaR}}}_{\Lambda }^{-,{\mathcal{P}}_{\beta }}(X)\ge \,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X)$ for $ X\in {L}^{0}$ . This proves (2.9).

Remark 1. Eq. (2.11) cannot be true without the assumption $ \Lambda \gt 0$ . A counterexample is as follows. Let $ \Lambda (x)={1}_{({-}\mathrm{\infty },2]}(x)$ , $ X\sim U\!\left(\mathrm{0,4}\right)$ under the probability measure $ \mathbb{P}$ , and set $ \beta =1/4$ . Choose $ {Q}_{0}\in {\mathcal{P}}_{\beta }$ such that $ \mathrm{d}{Q}_{0}/\mathrm{d}\mathbb{P}=4\,{1}_{\{{U}_{X} \gt 3/4\}}$ , that is, $ X\sim U\!\left(\mathrm{3,4}\right)$ under the probability measure $ {Q}_{0}$ . Then $\mathrm{VaR}_{\Lambda }^{-,{Q}_{0}}(X)=2$ . However, $\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X)=3 \gt {\mathrm{VaR}}_{\Lambda }^{-,{Q}_{0}}(X)$ . Thus, (2.11) does not hold in this case.
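
The counterexample in Remark 1 can also be reproduced numerically. The following Python sketch (our own illustration; all names are hypothetical) estimates $\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X)$ under $ \mathbb{P}$ and $\mathrm{VaR}_{\Lambda }^{-,{Q}_{0}}(X)$ under the reweighted measure $ {Q}_{0}$ .

\begin{verbatim}
import numpy as np

# Remark 1: Lam = 1 on (-inf, 2], 0 on (2, inf); X ~ U(0,4) under P; beta = 1/4
rng = np.random.default_rng(2)
u = rng.uniform(size=400_000)                  # U_X ~ U(0,1)
x = 4.0 * u                                    # X = F_X^{-1}(U_X)
Lam = lambda t: np.where(t <= 2.0, 1.0, 0.0)
beta = 0.25
Lam_beta = lambda t: 1.0 - beta * (1.0 - Lam(t))   # Lam_beta = 1 - beta*(1 - Lam)

g = np.linspace(-1.0, 5.0, 6_001)
F_P = np.searchsorted(np.sort(x), g, side="right") / len(x)
print(g[np.argmax(F_P >= Lam_beta(g))])        # VaR_{Lam_beta}^-(X), near 3

# under Q0 with dQ0/dP = 4 * 1{U_X > 3/4}, X is U(3,4)
w = 4.0 * (u > 0.75) / len(x)                  # scenario weights, summing to 1
order = np.argsort(x)
cumw = np.cumsum(w[order])                     # Q0(X <= t) at sorted sample points
pos = np.searchsorted(x[order], g, side="right")
F_Q = np.concatenate(([0.0], cumw))[pos]
print(g[np.argmax(F_Q >= Lam(g))])             # VaR_Lam^{-,Q0}(X), near 2 < 3
\end{verbatim}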

Further properties of Lambda VaRs under model uncertainty induced by Wasserstein metrics can be found in Xia [Reference Xia29].

3. Inf-convolution of real functions

In order to study the inf-convolution of Lambda VaRs, we introduce the following inf-convolution of real functions. We restrict ourselves to bounded and decreasing functions.

Definition 2. Let $ {\Lambda }_{i}\,:\,\mathbb{R}\to \mathbb{R}$ be a bounded and decreasing function for each $ i\in [m]$ . The inf-convolution of $ {\Lambda }_{1},\dots ,{\Lambda }_{m}$ is denoted by $ \stackrel{m}{\underset{i=1}{\oslash }}{\Lambda }_{i}(y)$ , defined as

(3.1) \begin{align} \stackrel{m}{\underset{i=1}{\oslash }}\!{\Lambda }_{i}(y)\,:\!=\,\underset{{y}_{1},\dots ,{y}_{m}\in \mathbb{R},\sum _{i=1}^{m} {y}_{i}=y}{\mathrm{i}\mathrm{n}\mathrm{f}}\!\left\{1-\sum _{i=1}^{m} \overline{{\Lambda }_{i}}\!\left({y}_{i}\right)\right\}.\end{align}

Throughout, we denote $ {\Lambda }^{\mathrm{*}}\left(y\right)=\stackrel{m}{\underset{i=1}{\oslash }}{\Lambda }_{i}(y)$ .

It is easy to see that $ {\Lambda }^{\mathrm{*}}(y)$ is also decreasing and that

(3.2) \begin{align} \overline{{\Lambda }^{\mathrm{*}}}(y)=\overline{\stackrel{m}{\underset{i=1}{\oslash }}{\Lambda }_{i}}(y)={\mathrm{sup}}_{{y}_{1},\dots ,{y}_{m}\in \mathbb{R},\sum _{i=1}^{m} {y}_{i}=y}\sum _{i=1}^{m} \overline{{\Lambda }_{i}}\!\left({y}_{i}\right).\end{align}

That is, $ \overline{{\Lambda }^{\mathrm{*}}}$ is the sup-convolution of $ \overline{{\Lambda }_{1}},\dots ,\overline{{\Lambda }_{m}}$ . The next proposition justifies the simple fact that the inf-convolution of $ m$ functions can be seen as the repeated application of the inf-convolution of two functions. In the expression $ {\Lambda }_{1}\oslash {\Lambda }_{2}\oslash \dots \oslash {\Lambda }_{m}$ below, the convention is to perform the operations $ \oslash $ from left to right.

Proposition 5. Let $ {\Lambda }_{i}\,:\,\mathbb{R}\to \mathbb{R}$ be bounded and decreasing for each $ i\in [m]$ . For any $ y\in \mathbb{R}$ , we have $ {\oslash }_{i=1}^{m}{\Lambda }_{i}(y)={\Lambda }_{1}\oslash {\Lambda }_{2}\oslash \dots \oslash {\Lambda }_{m}(y)$ , where $ \Lambda_1\oslash\Lambda_2(y)={\oslash }_{i=1}^{2}{\Lambda }_{i}(y)$ .
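
Numerically, the infimum in (3.1) for two functions can be approximated by maximizing $ \overline{{\Lambda }_{1}}(y_1)+\overline{{\Lambda }_{2}}(y-y_1)$ over a grid of $ y_1$ ; by Proposition 5, the $ m$ -fold inf-convolution then follows by iterating this pairwise step. A Python sketch (our own illustration; the two-level functions are hypothetical):

\begin{verbatim}
import numpy as np

def conv2(Lam1, Lam2, y, grid):
    # grid approximation of (Lam1 (/) Lam2)(y)
    #   = 1 - sup_{y1 + y2 = y} [ (1 - Lam1)(y1) + (1 - Lam2)(y2) ]
    # caveat: if the sup is only approached as y1 -> +-inf (property (P2)
    # of Proposition 7 below), a bounded grid merely approximates it
    return 1.0 - np.max((1.0 - Lam1(grid)) + (1.0 - Lam2(y - grid)))

L1 = lambda t: np.where(t < 0.0, 0.99, 0.95)   # bar(L1) jumps 0.01 -> 0.05 at 0
L2 = lambda t: np.where(t < 1.0, 0.98, 0.90)   # bar(L2) jumps 0.02 -> 0.10 at 1
g = np.linspace(-50.0, 50.0, 100_001)
for y in (-1.0, 0.0, 1.0, 2.0):
    print(y, conv2(L1, L2, y, g))
# both tails are at their largest only when y1 >= 0 and y - y1 >= 1, so the
# result is 1 - 0.15 = 0.85 for y >= 1 and 1 - 0.11 = 0.89 for y < 1
\end{verbatim}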

Several further properties of inf-convolution of real functions are listed in the following propositions, whose proofs are presented in Appendix B. The first proposition, Proposition 6, will be used repeatedly to prove other results in this paper, which gives the expressions of $ {\Lambda }^{\mathrm{*}}$ at positive infinity and negative infinity. We denote $ \Lambda ({+}\mathrm{\infty })={\mathrm{l}\mathrm{i}\mathrm{m}}_{x\uparrow \mathrm{\infty }}\Lambda (x)$ and $ \Lambda ({-}\mathrm{\infty })={\mathrm{l}\mathrm{i}\mathrm{m}}_{x\downarrow -\mathrm{\infty }}\Lambda (x)$ for any decreasing function $ \Lambda $ .

Proposition 6. Let $ {\Lambda }_{i}\,:\,\mathbb{R}\to \mathbb{R}$ be bounded and decreasing for $ i\in [m]$ . Then

\begin{align*} {\Lambda }^{\mathrm{*}}({-}\mathrm{\infty }) & =\underset{1\le i\le m}{\mathrm{min}}\bigg(1-\overline{{\Lambda }_{i}}({-}\mathrm{\infty })-\sum _{j\ne i} \overline{{\Lambda }_{j}}({+}\mathrm{\infty })\bigg),\\ {\Lambda }^{\mathrm{*}}({+}\mathrm{\infty }) & =1-\sum _{i=1}^{m} \overline{{\Lambda }_{i}}({+}\mathrm{\infty }).\end{align*}

Proposition 7. Let $ {\Lambda }_{i}\,:\,\mathbb{R}\to \mathbb{R}$ be bounded, right-continuous and decreasing for $ i\in [m]$ . For any $ y\in \mathbb{R}$ , $ {\Lambda }^{*}(y)$ has either one of the following properties:

  1. (P1) There exists $ ({y}_{1},\dots ,{y}_{m})\in {\mathbb{R}}^{m}$ such that $ {\sum }_{i=1}^{m} {y}_{i}=y$ and $ \overline{{\Lambda }^{\mathrm{*}}}(y)={\sum }_{i=1}^{m} \overline{{\Lambda }_{i}}\!\left({y}_{i}\right)$ .

  2. (P2) There exists a sequence $ \{({y}_{1n},\dots ,{y}_{mn})\}_{n\in \mathbb{N}}$ such that

    \begin{align*} \sum _{i=1}^{m} {y}_{in}=y\ \ and\ \ \sum _{i=1}^{m} \overline{{\Lambda }_{i}}\!\left({y}_{in}\right)\to \overline{{\Lambda }^{\mathrm{*}}}(y)\ as\ n\to \mathrm{\infty },\end{align*}
    where $\{(y_{1n}, \dots, y_{mn})\}_{n\in{\mathbb{N}}}$ does not have a cluster point in ${\mathbb{R}}^m$ . In this case, $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({-}\infty)$ , and $\sum_{i=1}^m\overline{{\Lambda}_i} (y_i) < \overline{{\Lambda}^{\ast}} (y)$ whenever $\sum_{i=1}^m y_i=y$ .

Furthermore, if ${{\rm{\Lambda }}^{\rm{*}}}\!\left( {{y_0}} \right)$ has property $(\mathrm{P}_2)$ , then so does ${{\rm{\Lambda }}^{\rm{*}}}(x)$ for any $x \lt {y_0}$ .

The next proposition gives sufficient conditions on $\left\{ {{{\rm{\Lambda }}_i}} \right\}$ under which ${{\rm{\Lambda }}^{\rm{*}}}$ is right-continuous or continuous.

Proposition 8. (Continuity.) Let ${\Lambda}_i\,:\, {\mathbb{R}}\to{\mathbb{R}}$ be bounded and decreasing for $i \in [m]$ .

  1. (1) If ${{\rm{\Lambda }}_i}$ is continuous for some $i$ , then so is ${{\rm{\Lambda }}^{\rm{*}}}$ .

  2. (2) If all ${{\rm{\Lambda }}_i}$ are right-continuous, then so is ${{\rm{\Lambda }}^{\rm{*}}}$ .

Proposition 9 gives a sufficient condition under which Property $(\mathrm{P}_1)$ holds. The condition is that the right tail of each ${{\rm{\Lambda }}_i}$ is a constant. The special case of ${{\rm{\Lambda }}^{\rm{*}}}$ being constant is investigated in Proposition 10.

Proposition 9. Let ${\Lambda _i}$ be bounded, right-continuous and decreasing for $i \in [m]$ . If, for each $i \in [m]$ , ${\Lambda _i}\!\left( {{y_i}} \right) = {\Lambda _i}({+}\infty)$ for some $y_i\in\mathbb{R}$ , then for any $x\in \mathbb{R}$ , there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum^m_{i=1} x_i=x$ and $\overline{\Lambda ^{\ast}} (x) = \sum_{i=1}^m \overline{\Lambda _i}(x_i)$ .

Proposition 10. Let ${\Lambda _i}$ be bounded, right-continuous and decreasing for each $i \in [m]$ .

  1. (1) ${{\rm{\Lambda }}^{\rm{*}}}$ is constant if and only if at least one ${{\rm{\Lambda }}_i}$ is constant.

  2. (2) Let ${\Lambda}^{\ast}$ be a constant function. Then $\overline{{\Lambda}^{\ast}}(x)>\sum_{i=1}^m \overline{{\Lambda}_i}(x_i)$ for any $(x_1, \ldots, x_m)\in{\mathbb{R}}^m$ with $x=\sum^m_{i=1} x_i$ if and only if there exists ${\Lambda}_{i_0}$ such that ${\Lambda}_{i_0}(y)>{\Lambda}_{i_0}({+}{\infty})$ for any $y\in {\mathbb{R}}$ .

In view of property (B2) in Subsection 2.2, we always assume that all ${{\rm{\Lambda }}_i}$ are right-continuous in the subsequent sections.

4. Inf-convolution of several Lambda VaRs

Theorem 1. Let $\Lambda_i\,:\, {\mathbb{R}} \rightarrow (0, 1]$ be decreasing for $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2). If ${\Lambda ^*}({-}\infty) \gt 0$ , then

(4.1) \begin{align}\mathop{\square}\limits_{i=1}^{m} {\rm VaR}_{\Lambda_i}^-(X) \ge {\rm VaR}_{\Lambda^{\ast}}^-(X),\quad X\in L^0.\end{align}

The proof of Theorem 1 requires the following lemma, which was pointed out to us by an anonymous referee.

Lemma 4. For ${\lambda _i} \in [{0,1}]$ and $(y_1, \ldots, y_m)\in \mathbb{R}^m$ , we have

(4.2) \begin{equation} \inf_{(X_1,\dots, X_m)\in \mathbb{A}_m(X)}\!\left\{\sum_{i=1}^m {\rm VaR}^-_{\lambda_i}(X_i)\vee y_i\right\}= \mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\lambda_i}(X)\vee\sum_{i=1}^m y_i,\quad X\in L^0.\end{equation}

Proof of Theorem 1. By Lemma 2, for $X \in {L^0}$ , we have

(4.3) \begin{align} \mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda _i}^-(X) &= \inf_{(X_1,\dots, X_m)\in\mathbb{A}_m(X)} \sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_i)\notag \\ &=\inf_{(X_1,\dots, X_m)\in\mathbb{A}_m(X)} \sum_{i=1}^m \inf_{y_i\in\mathbb{R}} \!\left\{{\rm VaR}^-_{\Lambda _i(y_i)}(X_i)\vee y_i\right\}\notag\\ &=\inf_{(y_1,\dots, y_m)\in\mathbb{R}^m}\ \inf_{(X_1,\dots,X_m) \in \mathbb{A}_m(X)}\!\left\{\sum_{i=1}^m {\rm VaR}^-_{\Lambda _i(y_i)}(X_i)\vee y_i\right\} \notag\\ &=\inf_{(y_1,\dots,y_m)\in\mathbb{R}^m}\!\left\{ \mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\Lambda_i(y_i)}(X)\vee\sum_{i=1}^m y_i\right\} \end{align}
(4.4) \begin{align} = \inf_{(y_1,\dots,y_m)\in\mathbb{R}^m}\!\left\{{\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda_i}(y_i)}(X)\vee\sum_{i=1}^m y_i\right\},\end{align}

where (4.3) follows from Lemma 4, and (4.4) follows from the fact

\begin{align*} \mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\Lambda _i(y_i)}(X) = {\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\end{align*}

by Corollary 2 in [Reference Embrechts, Liu and Wang12]. Here, we use the convention that ${\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)={\rm VaR}^-_{0}(X)=-\infty$ when $1-\sum_{i=1}^m\overline{\Lambda _i}(y_i) <0$ . Furthermore, by the definition of ${{\rm{\Lambda }}^{\rm{*}}}$ , we have

(4.5) \begin{align}& \inf_{(y_1, \dots, y_m)\in\mathbb{R}^m} \!\left\{{\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda_i}(y_i)}(X)\vee \sum_{i=1}^m y_i \right\} \notag\\ & \hskip 1.5cm =\inf_{y\in\mathbb{R}}\ \inf_{(y_1, \dots, y_m)\in\mathbb{R}^m, \sum_{i=1 }^m y_i=y} \!\left\{{\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee y\right\}\notag \\ & \hskip 1.5cm \ge \inf_{y\in\mathbb{R}}\!\left\{{\rm VaR}^-_{\Lambda ^{\ast}(y)}(X)\vee y\right\} \end{align}
(4.6) \begin{align} = {\rm VaR}^-_{\Lambda^{\ast}}(X),\end{align}

where (4.5) holds since $\Lambda ^{\ast}(y)\le 1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)$ for any $(y_1,\dots, y_m)\in\mathbb{R}^m$ with $\sum_{i=1}^m y_i= y$ , and (4.6) follows from Lemma 2.

The equality in (4.1) of Theorem 1 does not hold without further assumptions, as shown by the following counterexample. We will investigate sufficient conditions on ${{\rm{\Lambda }}^{\rm{*}}}$ in Theorems 4 and 5, under which the equality in (4.1) is true.

Example 1. Let $X \in {L^0}$ , with ${\mathbb{P}}(X=0)={\mathbb{P}}(X=1)=1/2$ , and let ${{\rm{\Lambda }}_1}(x) \equiv 3/4$ and

\begin{align*}{{\rm{\Lambda }}_2}(x) = \frac{1}{{4\pi }}\left( {\frac{\pi }{2} - {\rm{arctan}}(x)} \right) + \frac{3}{4}.\end{align*}

By Proposition 10, it follows that ${{\rm{\Lambda }}^{\rm{*}}} \equiv 1/2$ , and $\overline {{{\rm{\Lambda }}^{\rm{*}}}} \left( {x + y} \right) \gt \overline {{{\rm{\Lambda }}_1}} (x) + \overline {{{\rm{\Lambda }}_2}} (y)$ for all $(x,y)\in\mathbb{R}^2$ . Thus, ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X) = 0$ . We claim that, for any $(X_1,X_2)\in \mathbb{A}_2(X)$ ,

\begin{align*}{\rm{VaR}}_{{{\rm{\Lambda }}_1}}^ -( {{X_1}}) + {\rm{VaR}}_{{{\rm{\Lambda }}_2}}^ - ({{X_2}}) \ge 1 \gt {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X).\end{align*}

In fact, if this is not true, there exists $(Y_1,Y_2)\in \mathbb{A}_2(X)$ such that ${y_1} + {y_2} \lt 1$ , where ${y_1} = {\rm{VaR}}_{{{\rm{\Lambda }}_1}}^ - ({{Y_1}})$ and ${y_2} = {\rm{VaR}}_{{{\rm{\Lambda }}_2}}^ - ({{Y_2}})$ . By Lemma 1, we have $ {\mathbb{P}}(Y_1>y_1)\le\overline{\Lambda _1}(y_1)$ and ${\mathbb{P}}(Y_2>y_2) \le\overline{\Lambda _2}(y_2)$ . Thus,

\begin{align*} \frac {1}{2}\le{\mathbb{P}}(X>y_1+y_2)\le\sum^2_{i=1}{\mathbb{P}}(Y_i>y_i)\le\sum^2_{i=1}\overline{\Lambda _i}(y_i) < \frac {1}{2},\end{align*}

which is a contradiction.
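
The strict gap driving Example 1 is easy to verify numerically (our own check): $ \overline{{\Lambda }_{1}}(x)+\overline{{\Lambda }_{2}}(y)$ stays strictly below $ \overline{{\Lambda }^{\mathrm{*}}}(x+y)=1/2$ for all finite arguments, approaching $ 1/2$ only as $ y\to +\infty $ .

\begin{verbatim}
import numpy as np

# Example 1: bar(L1) = 1/4 and bar(L2)(y) = 1/4 - (pi/2 - arctan y)/(4 pi)
Lbar2 = lambda y: 0.25 - (np.pi / 2.0 - np.arctan(y)) / (4.0 * np.pi)
y = np.linspace(-1e6, 1e6, 1_000_001)
print((0.25 + Lbar2(y)).max())   # about 0.4999999, strictly below 1/2
\end{verbatim}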

Theorem 2. Let ${\Lambda}_i\,:\, {\mathbb{R}} \rightarrow (0, 1)$ be decreasing for $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2). If ${\Lambda ^*}({-}\infty) \gt 0$ , then

(4.7) \begin{align}\mathop{\square}\limits_{i=1}^{m} {\rm VaR}_{\Lambda_i}^+(X) = {\rm VaR}_{\Lambda^{\ast}}^+(X),\quad X\in L^0.\end{align}

The proof of Theorem 2 requires the following lemma, Lemma 5, whose proof is similar to that of Lemma 4 by using cash invariance of VaR and Theorem 1 in [Reference Liu, Mao, Wang and Wei21].

Lemma 5. Let ${\lambda _i} \in [{0,1}]$ and ${\kappa _i} \in \left\{ { - , + } \right\}$ for $i \in [m]$ . For any $(y_1, \ldots, y_m)\in \mathbb{R}^m$ , we have

(4.8) \begin{align}\inf_{(X_1,\dots, X_m)\in \mathbb{A}_m(X)}\!\left\{\sum_{i=1}^m {\rm VaR}^{\kappa_i}_{\lambda_i}(X_i)\vee y_i\right\}= \mathop{\Box}\limits_{i=1}^m {\rm VaR}^{\kappa_i}_{\lambda_i}(X)\vee\sum_{i=1}^m y_i,\quad X\in L^0.\end{align}

Here, the ${\kappa _i}$ and the ${\lambda _i}$ are chosen to avoid the appearance of ${\rm{VaR}}_0^ - \square{\rm{VaR}}_1^ + $ in (4.8).

Proof of Theorem 2. The proof is similar to that of Theorem 1. By Lemma 3, for $X \in {L^0}$ , we have

(4.9) \begin{align}\mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda _i}^+(X) &= \inf_{(X_1,\dots, X_m)\in\mathbb{A}_m(X)} \sum_{i=1}^m{\rm VaR}_{\Lambda _i}^+(X_i) \notag \\ & =\inf_{(X_1,\dots, X_m)\in\mathbb{A}_m(X)} \sum_{i=1}^m \inf_{y_i\in\mathbb{R}} \!\left\{{\rm VaR}^+_{\Lambda _i(y_i)}(X_i)\vee y_i\right\} \notag\\ &=\inf_{(y_1,\dots,y_m)\in\mathbb{R}^m}\ \inf_{(X_1,\dots, X_m)\in \mathbb{A}_m(X)}\!\left\{\sum_{i=1}^m {\rm VaR}^+_{\Lambda _i(y_i)}(X_i)\vee y_i\right\} \notag\\ &=\inf_{(y_1,\dots,y_m)\in\mathbb{R}^m}\ \left\{\mathop{\Box}\limits_{i=1}^m {\rm VaR}^+_{\Lambda _i(y_i)}(X)\vee\sum_{i=1}^m y_i \right\} \end{align}
(4.10) \begin{align} &\qquad =\inf_{(y_1,\dots,y_m)\in\mathbb{R}^m} \!\left\{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee\sum_{i=1}^m y_i \right\} ,\end{align}

where (4.9) follows from Lemma 5, and (4.10) follows from Theorem 1 of [Reference Liu, Mao, Wang and Wei21], which implies that

(4.11) \begin{align} \mathop{\Box}\limits_{i=1}^m {\rm VaR}^+_{\Lambda _i(y_i)}(X)={\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X), \quad X\in L^0,\end{align}

since ${{\rm{\Lambda }}_i}\!\left( {{y_i}} \right) \lt 1$ for $i \in [m]$ . Here we use the convention that ${\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X) =-\infty$ when $1-\sum_{i=1}^m\overline{\Lambda_i}(y_i)<0$ . Note that

(4.12) \begin{align} \inf_{(y_1,\dots,y_m)\in\mathbb{R}^m} & \left \{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee \sum_{i=1}^m y_i \right\} \notag\\ &=\inf_{y\in\mathbb{R}}\ \inf_{(y_1,\dots,y_m)\in\mathbb{R}^m, \sum_{i=1}^m y_i=y} \left\{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee y\right\} \notag\\ &= \inf_{y\in\mathbb{R}} \!\left\{{\rm VaR}^+_{\Lambda ^{\ast}(y)}(X)\vee y\right\} \end{align}
(4.13) \begin{align} = {\rm VaR}^+_{\Lambda^{\ast}}(X),\end{align}

where (4.13) is due to Lemma 3, and (4.12) follows since

(4.14) \begin{align} \inf\!\left\{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\,:\,\ (y_1,\dots,y_m)\in\mathbb{R}^m, \sum_{i=1}^m y_i=y \right\} ={\rm VaR}^+_{\Lambda ^{\ast}(y)}(X).\end{align}

More detail on (4.14) is given as follows. Denote by LHS the left-hand side of (4.14). Obviously, ${\rm{LHS}} \ge {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}(y)}^ + (X)$ since ${\rm{VaR}}_\lambda ^ + $ is increasing in $\lambda $ and $1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)\ge \Lambda ^{\ast}(y)$ . On the other hand, note that ${\rm{VaR}}_\lambda ^ + $ is right-continuous in $\lambda $ . By (3.2), there exists a sequence $\{(y_{1n}, \ldots, y_{mn})\}_{n\in{\mathbb{N}}}$ satisfying $y=\sum^m_{i=1} y_{in}$ and $1-\sum_{i=1}^m\overline{{\Lambda}_i}(y_{in}) \searrow {\Lambda}^{\ast}(y)$ as $n \to \infty $ . Thus, LHS attains the lower bound ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}(y)}^ + (X)$ in the limit. Therefore, (4.14) is true.

It should be pointed out that (4.14) does not hold for ${\rm{Va}}{{\rm{R}}^ - }$ because ${\rm{Va}}{{\rm{R}}^ - }$ is left-continuous but not right-continuous in $\lambda $ . This is the reason why the equality in (4.1) cannot be expected without additional conditions. In Theorem 2, it is required that ${{\rm{\Lambda }}_i} \lt 1$ for all $i \in [m]$ . If ${{\rm{\Lambda }}_{{i_0}}} \equiv 1$ and ${{\rm{\Lambda }}_{{j_0}}} \lt 1$ for some ${i_0},{j_0} \in [m]$ , then $\Box_{i=1}^m {\rm VaR}^+_{{\Lambda}_i(y_i)}(X) =+{\infty}> {\rm VaR}^+_{1-\sum_{i=1}^m\overline{{\Lambda}_i}(y_i)}(X)$ for $X \in {L^0}$ , violating (4.11), and hence (4.7) does not hold.

An explicit formula of the inf-convolution is also obtained in Theorem 3 for the case of a mixed collection of left and right Lambda VaRs. Its proof can be found in Appendix C.

Theorem 3. Let ${\Lambda}_i\,:\, {\mathbb{R}} \rightarrow (0, 1]$ be decreasing and ${\kappa _i} \in \left\{ { - , + } \right\}$ for $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2), with ${\Lambda ^*}({-}\infty) \gt 0$ . If ${\kappa _i} = + $ for at least one $i$ , and ${\Lambda _j} \lt 1$ whenever ${\kappa _j} = + $ , then

(4.15) \begin{align}\mathop{\square}\limits_{i=1}^{m} {\rm VaR}_{\Lambda_i}^{\kappa_i}(X) = {\rm VaR}_{\Lambda^{\ast}}^+(X),\quad X\in L^0.\end{align}

As a special consequence of Theorems 1, 2 and 3, we obtain the following inf-convolution formulas for ordinary VaRs.

Corollary 1. (1) [Reference Embrechts, Liu and Wang12, Corollary 2] For ${\lambda _1},{\lambda _2} \in [{0,1}]$ such that $\lambda = {\lambda _1} + {\lambda _2} - 1 \gt 0$ , we have

\begin{align*}{\rm{VaR}}_{{\lambda _1}}^ - \square{\rm{VaR}}_{{\lambda _2}}^ - (X) = {\rm{VaR}}_\lambda ^ - (X),\quad X \in {L^0}.\end{align*}

(2) [Reference Liu, Mao, Wang and Wei21, Theorem 1] For ${\lambda _1},{\lambda _2} \in [{0,1})$ such that $\lambda = {\lambda _1} + {\lambda _2} - 1 \ge 0$ , we have

\begin{align*}{\rm{VaR}}_{{\lambda _1}}^ + \square{\rm{VaR}}_{{\lambda _2}}^ + (X) = {\rm{VaR}}_\lambda ^ + (X),\quad X \in {L^0}.\end{align*}

(3) [Reference Liu, Mao, Wang and Wei21, Theorem 1] For ${\lambda _1} \in [{0,1}],{\lambda _2} \in [{0,1})$ such that $\lambda = {\lambda _1} + {\lambda _2} - 1 \ge 0$ , we have

\begin{align*}{\rm{VaR}}_{{\lambda _1}}^ - \square{\rm{VaR}}_{{\lambda _2}}^ + (X) = {\rm{VaR}}_\lambda ^ + (X),\quad X \in {L^0}.\end{align*}
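
Part (1) can be checked by brute force on a small discrete example (our own toy illustration; the grid restricts the allocations searched, so the minimum can only overestimate the true infimum, yet it already attains the predicted value).

\begin{verbatim}
import numpy as np
from itertools import product

def var_left(sample, alpha):
    # empirical left VaR, repeated here for self-containment
    xs = np.sort(sample)
    return xs[int(np.ceil(alpha * len(xs))) - 1]

X = np.array([0.0, 1.0, 2.0, 3.0])        # four equally likely scenarios
grid = np.arange(-1.0, 3.5, 0.5)          # candidate values of X1 per scenario
best = min(
    var_left(np.array(x1), 0.75) + var_left(X - np.array(x1), 0.75)
    for x1 in product(grid, repeat=4)
)
print(best, var_left(X, 0.5))   # both equal 1.0, as part (1) predicts
\end{verbatim}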

Theorems 1, 2 and 3 are established under the assumption ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \gt 0$ . At the end of this section, we present further results on the inf-convolution of Lambda VaRs under the assumption ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \le 0$ . All proofs are postponed to Appendix C. The proofs of Propositions 11, 12 and 13 are based on Lemmas 6 and 7.

Proposition 11. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow [0,1]$ be decreasing for each $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2) with ${\Lambda ^*}({-}\infty) \le 0$ .

  1. (1) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \lt 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) = - \infty $ for $X \in {L^0}$ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) = 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) = {\rm{min}}\!\left\{ {{\rm{sup}}\,L,{\rm{ess\mbox{-}inf}}(X)} \right\},$ where

    \begin{align*}L\,:\!=\, \left\{x\in\mathbb{R}\,:\,\Lambda ^{\ast}(x)=0\ \text{and there is no}\ (x_1,\dots,x_m)\in\mathbb{R}^m\ \text{such that}\ \sum_{i=1}^mx_i = x\ \text{and}\ \sum_{i=1}^m\overline{\Lambda _i}(x_i)=1\right\}.\end{align*}

Proposition 12. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow [0,1)$ be decreasing for each $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2) with ${\Lambda ^*}({-}\infty) \le 0$ .

  1. (1) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \lt 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) = - \infty $ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) = 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) = {\rm{min}}\!\left\{ {{\rm{sup}}\,T,{\rm{ess\mbox{-}inf}}(X)} \right\}$ , where $T =\{x\in\mathbb{R}\,:\, \Lambda ^{\ast}(x) = 0\}$ .

Proposition 13. Let ${\Lambda}_i\,:\, {\mathbb{R}} \rightarrow [0, 1]$ be decreasing and ${\kappa _i} \in \left\{ { - , + } \right\}$ for $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2) with ${\Lambda ^*}({-}\infty) \le 0$ . Assume that ${\kappa _i} = + $ for at least one $i$ , and ${\Lambda _j} \lt 1$ whenever ${\kappa _j} = + $ .

  1. (1) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \lt 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^{{\kappa _i}}(X) = - \infty $ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) = 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^{{\kappa _i}}(X) = {\rm{min}}\!\left\{ {{\rm{sup}}\,T,{\rm{ess\mbox{-}inf}}(X)} \right\}$ , where $T =\{x\in{\mathbb{R}}\,:\, {\Lambda}^{\ast}(x) = 0\}$ .

Lemma 6. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow [0,1]$ be decreasing for each $i \in [m]$ , and let $X \in {L^0}$ .

  1. (1) If $X \ge {x_0}$ , a.s., with ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) = 0$ , and if there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i= x_0$ and $\sum_{i=1}^m\overline{\Lambda _i}(x_i)=1$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) \ge {x_0}$ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \le 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) \le {\rm{ess\mbox{-}inf}}(X)$ .

Lemma 7. Let ${\Lambda}_i\,:\, {\mathbb{R}} \to [0, 1)$ be decreasing for $i \in [m]$ , and let $X \in {L^0}$ .

  1. (1) If $X\ge x_0\in{\mathbb{R}}$ , a.s., and ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) = 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) \ge {x_0}$ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \le 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) \le {\rm{ess\mbox{-}inf}}(X)$ .

5. Optimal risk sharing for Lambda VaRs

From the proof of Theorem 1 and Example 1, it is known that whether ${{\rm{\Lambda }}^{\rm{*}}}$ satisfies property $(\mathrm{P}_1)$ in Proposition 7 plays an important role in establishing the equality of (4.1). In this section, we consider the case ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \gt 0$ , and study optimal allocations of inf-convolution for several Lambda VaRs according to whether ${{\rm{\Lambda }}^{\rm{*}}}$ satisfies $(\mathrm{P}_1)$ or $(\mathrm{P}_2)$ in Proposition 7.

5.1. Left Lambda VaRs

Theorem 4. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow (0, 1]$ be decreasing for $i \in [m]$ with ${\Lambda ^*}({-}\infty) \gt 0$ . For any $X \in {L^0}$ , denote ${x_0} = \mathrm{VaR}_{{\Lambda ^*}}^ - (X)$ . If there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda_i}(x_i)$ , then

(5.1) \begin{align}\mathop {\mathop \square\limits_{i = 1} }\limits^m {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) = {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X)\end{align}

Moreover, there exists an optimal allocation $(X_1,\dots,X_m)\in \mathbb{A}_m(X)$ satisfying ${x_i} = {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - ({X_i})$ for $i \in [m]$ .

Proof. Note that $x_0={\rm VaR}^-_{{\Lambda}^{\ast}}(X)\in{\mathbb{R}}$ since ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \gt 0$ . First, assume that $\overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}}) = 0$ . By the definition of ${{\rm{\Lambda }}^{\rm{*}}}$ , ${{\rm{\Lambda }}_i} \equiv 1$ for all $i \in [m]$ and, hence, ${{\rm{\Lambda }}^{\rm{*}}} \equiv 1$ . In view of Theorem 1, we conclude that (5.1) holds and $\left( {X,0, \ldots ,0} \right)$ is an optimal allocation of $X$ .

Next, consider the case ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \lt 1$ . We will construct an optimal allocation of $X$ directly. Note that $\{ X \lt {x_0}\} $ , $\left\{ {X = {x_0}} \right\}$ and $\{ X \gt {x_0}\} $ constitute a partition of ${\rm{\Omega }}$ . Construct an allocation ${\textbf{X}}{ \in \mathbb{A}_m}(X)$ as follows: On the set $\{ X \lt {x_0}\} $ , define ${X_k} = {x_k}$ for $k \in [{m - 1}]$ and $X_m= X-\sum_{i=1}^{m-1}x_i$ . On the set $\left\{ {X = {x_0}} \right\}$ , define ${X_j} = {x_j}$ for $j \in [m]$ . On the set $\{ X \gt {x_0}\} $ , let $\left\{ {{C_1}, \ldots ,{C_m}} \right\}$ be a partition of $\{ X \gt {x_0}\} $ , satisfying that

\begin{align*} {\mathbb{P}}(C_j)={\mathbb{P}}(X>x_0)\cdot \frac{\overline{\Lambda _j}(x_j)}{\sum_{i=1}^m \overline{\Lambda _i}(x_i)},\quad j\in [m]. \end{align*}

Then, define ${X_j} = X - {x_0} + {x_j}$ on ${C_j}$ and ${X_j} = {x_j}$ on $\{ X \gt x_0\} \backslash {C_j}$ for $j \in [m]$ . Therefore, $\boldsymbol{{X}}\in \mathbb{A}_m(X)$ has the following representation:

(5.2) \begin{align}X_k &= x_k+(X-x_0)\,1_{C_k},\quad k\in [m-1], \nonumber\\ X_m &= X-\sum_{i=1}^{m-1}X_i = x_m + (X-x_0)\, 1_{C_m} + (X-x_0)\, 1_{\{X\le x_0\}}.\end{align}

By Lemma 1 and Proposition 8, ${x_0} = {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X)$ implies that ${\mathbb{P}}(X>x_0)\le \overline{\Lambda ^{\ast}}(x_0)$ . Also, by construction, it follows that

(5.3) \begin{align}{\mathbb{P}}(X_j > x_j) ={\mathbb{P}}(C_j)=\overline{\Lambda _j}(x_j) \cdot \frac{{\mathbb{P}}(X>x_0)}{\overline{\Lambda ^{\ast}}(x_0)}\le \overline{\Lambda _j}(x_j),\quad j\in [m],\end{align}

implying ${\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ - ( {{X_j}}) \le {x_j}$ . Thus, $\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_i) \le x_0 = {\rm VaR}_{\Lambda ^{\ast}}^-(X)$ . By Theorem 1, we conclude that (5.1) holds and ${\boldsymbol{{X}}}$ is an optimal allocation of $X$ .
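
For equally likely scenarios, the construction (5.2) can be carried out mechanically. The following Python sketch (our own illustration; theorem4_allocation is a hypothetical helper) builds the allocation under the theorem's hypotheses, i.e. assuming that the supplied $ {x_0}$ , $ (x_1,\dots,x_m)$ and tail weights satisfy $ \sum_{i=1}^m x_i=x_0$ and $ {\mathbb{P}}(X \gt x_0)\le \sum_{i=1}^m\overline{\Lambda _i}(x_i)$ .

\begin{verbatim}
import numpy as np

def theorem4_allocation(x, x0, xi, pbar):
    # scenario-based version of the allocation (5.2):
    #   x    -- equally likely scenarios of X
    #   x0   -- VaR_{Lam*}^-(X)
    #   xi   -- (x_1, ..., x_m) with sum(xi) == x0
    #   pbar -- (bar(Lam_1)(x_1), ..., bar(Lam_m)(x_m)), sizing the sets C_j
    m, n = len(xi), len(x)
    tail = np.flatnonzero(x > x0)                 # scenarios in {X > x0}
    cuts = np.round(np.cumsum(pbar) / np.sum(pbar) * len(tail)).astype(int)
    X = np.tile(np.asarray(xi, dtype=float)[:, None], (1, n))
    start = 0
    for j in range(m):                            # X_j = x_j + (X - x0) on C_j
        idx = tail[start:cuts[j]]
        X[j, idx] += x[idx] - x0
        start = cuts[j]
    X[m - 1] += np.where(x <= x0, x - x0, 0.0)    # X_m absorbs {X <= x0}
    return X

rng = np.random.default_rng(3)
x = rng.normal(size=10_000)                       # P(X > 1) ~ 0.159 <= 0.2
alloc = theorem4_allocation(x, x0=1.0, xi=(0.4, 0.6), pbar=(0.1, 0.1))
print(np.allclose(alloc.sum(axis=0), x))          # rows sum to X scenario-wise
print([(alloc[j] > xj).mean() for j, xj in enumerate((0.4, 0.6))])
# each frequency is about 0.08 <= bar(Lam_j)(x_j) = 0.1, matching (5.3)
\end{verbatim}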

Theorem 4 states that Property $(\mathrm{P}_1)$ in Proposition 7 is a sufficient condition for (5.1). In Theorem 5, we show that, under Property $(\mathrm{P}_2)$ in Proposition 7, ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X) = {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X)$ is a necessary and sufficient condition for (5.1) to hold. We will consider the inf-convolution of left Lambda VaRs in Theorem 10 when ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X) \lt {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X)$ .

Theorem 5. Let ${\Lambda}_i\,:\, {\mathbb{R}}\to (0, 1]$ be decreasing for $i\in [m]$ with $ {\Lambda}^{\ast}({-}{\infty})\gt 0$ . For any $ X\in L^0$ , denote $ x_0={\rm VaR}^-_{{\Lambda}^{\ast}}(X)$ . If there does not exist $ (x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $ \sum_{i=1}^mx_i=x_0$ and $ \overline{{\Lambda}^{\ast}}(x_0)= \sum_{i=1}^m \overline{{\Lambda}_i}(x_i)$ , then (5.1) holds if and only if $ {\rm VaR}^-_{{\Lambda^\ast}}(X)={\rm VaR}^+_{{\Lambda^\ast}}(X)$ . Furthermore, under the condition $ {\rm VaR}^-_{{\Lambda}^{\ast}}(X)={\rm VaR}^+_{{\Lambda^\ast}}(X)$ , if $ {\mathbb{P}}(X \gt x_0) \lt \overline{{\Lambda}^{\ast}}(x_0)$ , an optimal allocation of $ X$ exists; if $ {\mathbb{P}}(X \gt x_0) =\overline{{\Lambda}^{\ast}}(x_0)$ , no optimal allocation of $X$ exists, while there exists a sequence of asymptotically optimal allocations.

Proof. Necessity: We prove it by contradiction. Assume on the contrary that ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X) \lt {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X)$ . Under this assumption, from (5.1) it follows that there exists an allocation $(X_1,\dots,X_m)\in\mathbb{A}_m(X)$ such that $\sum_{i=1}^my_i =y <{\rm VaR}^+_{\Lambda ^{\ast}}(X)$ , where ${y_j} = {\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ - ( {{X_j}})$ for $j \in [m]$ . Note that

\begin{align*} \left\{X\ge{\rm VaR}^+_{\Lambda ^{\ast}}(X) \right\} \subset \left \{X>\sum^m_{i=1} y_i\right \}\subset \bigcup_{i=1}^m\!\left\{X_i > y_i\right\}\!.\end{align*}

By Lemma 1, we have ${\mathbb{P}}(X_j > y_j) \le \overline{{\Lambda}_j}(y_j)$ for $j \in [m]$ . Hence we obtain that

(5.4) \begin{align}{\mathbb{P}}\big(X \ge {\rm VaR}^+_{\Lambda ^{\ast}}(X) \big) \le \sum_{i=1}^m{\mathbb{P}}(X_i > y_i)\le \sum^m_{i=1}\overline{\Lambda _i}(y_i).\end{align}

By Proposition 7, we have ${{\rm{\Lambda }}^{\rm{*}}}(y) = {{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) = {{\rm{\Lambda }}^{\rm{*}}}({-}\infty)$ and $\sum_{i=1}^m\overline{\Lambda _i}(y_i)<\overline{\Lambda ^{\ast}}(x_0)$ . On the other hand, note that ${\mathbb{P}}(X>x_0)\ge \overline{\Lambda ^{\ast}}(x_0)$ ; otherwise, if ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , then ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X) \le {x_0}$ , a contradiction. Therefore,

\begin{align*} {\mathbb{P}}\big(X\ge{\rm VaR}^+_{\Lambda ^{\ast}}(X) \big) &={\mathbb{P}}\big(X>{\rm VaR}^-_{\Lambda ^{\ast}}(X) \big)={\mathbb{P}}(X>x_0) \ge \overline{\Lambda ^{\ast}}(x_0)> \sum_{i=1}^m\overline{\Lambda _i}(y_i),\end{align*}

which contradicts (5.4). This proves the necessity.

Sufficiency: Suppose that ${\rm VaR}^-_{\Lambda ^{\ast}}(X)={\rm VaR}^+_{\Lambda ^{\ast}}(X)$ . First, we consider the case ${\mathbb{P}}(X > x_0) < \overline{\Lambda^{\ast}}(x_0)$ . In this case, there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^mx_i=x_0$ and $\sum_{i=1}^m \overline{\Lambda _i}(x_i)\in\left({\mathbb{P}}(X>x_0), \overline{\Lambda ^{\ast}}(x_0)\right)$ . Let $\boldsymbol{{X}}\in \mathbb{A}_m(X)$ be as defined by (5.2). Then, ${\mathbb{P}}(X_j>x_j)={\mathbb{P}}(C_j)< \overline{\Lambda _j}(x_j)$ for $j \in [m]$ , implying ${\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ + ( {{X_j}}) \le {x_j}$ for $j \in [m]$ . Thus,

\begin{align*} \mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\Lambda _i}(X)\le \sum_{i=1}^m{\rm VaR}^-_{\Lambda _i}(X_i)\le \sum_{i=1}^mx_i = {\rm VaR}^-_{\Lambda ^{\ast}}(X).\end{align*}

This, together with Theorem 1, implies our desired statement (5.1). Moreover, ${\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ - ( {{X_j}}) = {x_j}$ for $j \in [m]$ , and ${\boldsymbol{{X}}}$ is an optimal allocation of $X$ .

Next, consider the case ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ . In this case we show that no optimal allocation exists, but that there exists a sequence of allocations $(X_{1n},\dots,X_{mn})\in \mathbb{A}_m(X)$, $n\in {\mathbb{N}}$ , such that $\sum_{i=1}^m{\rm VaR}^-_{\Lambda _i}(X_{in})\to x_0$ .

Assume on the contrary that there exists an optimal allocation of $X$ , say, ${\boldsymbol{{X}}} = \left( {{X_1}, \ldots ,{X_m}} \right)$ . Denote ${x_j}\,:\!=\, {\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ - ( {{X_j}})$ for $j \in [m]$ . Then we have $\sum_{i=1}^m x_i= x_0$ and

(5.5) \begin{align}\overline{\Lambda ^{\ast}}(x_0)={\mathbb{P}}(X>x_0) \le \sum_{i=1}^m{\mathbb{P}}(X_i>x_i)\le \sum_{i=1}^m\overline{\Lambda _i}(x_i).\end{align}

However, by Proposition 7, $\sum_{i=1}^m\overline{\Lambda _i}(x_i)<\overline{\Lambda ^{\ast}}(x_0)$ , which contradicts (5.5). Therefore, no optimal allocation exists. In order to find a sequence of admissible allocations of $X$ approaching the lower bound of the inf-convolution, we consider the following two cases.

Case 1: Suppose that ${\mathbb{P}}(X> x_0+{\epsilon})<{\mathbb{P}}(X>x_0)$ for any ${\epsilon}>0$ . Denote $\delta_n={\mathbb{P}}(X>x_0+1/n)$ . There exists a sequence $\{(x_{1n},\dots,x_{mn})\}_{n\in{\mathbb{N}}}$ such that $\sum_{i=1}^m x_{in}= x_0$ and $\sum_{i=1}^m\overline{{\Lambda}_i}(x_{in})\in \left(\delta_n,\overline{{\Lambda}^\ast}(x_0) \right)$ . In an atomless probability space, let $\{C_{1n},\dots, C_{mn}\}$ be a partition of $\left\{X >x_0+1/n \right\}$ , satisfying

\begin{align*} {\mathbb{P}}(C_{jn}) =\delta_n\frac{\overline{\Lambda _j}(x_{jn})}{\sum_{i=1}^m\overline{\Lambda _i}(x_{in})},\quad j\in [m].\end{align*}

Define

(5.6) \begin{align}X_{kn} = x_{kn} + \frac{1}{nm} + \Big(X - x_0 - \frac{1}{n}\Big)\, 1_{C_{kn}},\quad k \in [m-1],\end{align}

and $X_{mn} = X-\sum_{i=1}^{m-1} X_{in}$ . Then,

\begin{align*} {\mathbb{P}}\Big(X_{jn}> x_{jn}+\frac{1}{mn}\Big) = {\mathbb{P}}(C_{jn}) = \delta_n\cdot \frac{\overline{\Lambda _j}(x_{jn})}{ \sum_{i=1}^m\overline{\Lambda _i}(x_{in})} <\overline{\Lambda _j}(x_{jn}),\end{align*}

implying ${\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ - ({X_{jn}}) \le {x_{jn}} + 1/\left( {mn} \right)$ . Thus,

(5.7) \begin{align}\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_{in}) \le \sum_{i=1}^m x_{in}+\frac{1}{n} = x_0 +\frac{1}{n}.\end{align}

Case 2: Suppose that ${\mathbb{P}}(X>x_0+{\epsilon}_0)={\mathbb{P}}(X>x_0)$ for some ${\epsilon _0} \gt 0$ . In this case, from ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , it follows that ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \gt {{\rm{\Lambda }}^{\rm{*}}}\!\left( {{x_0} + \epsilon } \right)$ for any $\epsilon \gt 0$ . By Proposition 7, there exists a sequence $\{(x_{1n},\ldots,x_{mn})\}_{n\in{\mathbb{N}}}$ such that $\sum_{i=1}^m x_{in} = x_0+1/{n}$ and $\sum_{i=1}^m \overline{\Lambda _i}(x_{in})= \overline{\Lambda ^{\ast}}(x_0+1/n)$ . Then,

\begin{align*} \overline{\Lambda _1}\Big(x_{1n}-\frac{1}{n}\Big) + \sum_{i=2}^m \overline{\Lambda _i}(x_{in}) \le \overline{\Lambda ^{\ast}}(x_0).\end{align*}

In an atomless probability space, let $\left( {{C_{1n}}, \ldots ,{C_{mn}}} \right)$ be a partition of the set $\{ X \gt {x_0}\} $ , satisfying

\begin{align*} {\mathbb{P}}(C_{1n})=\overline{\Lambda _1}(x_{1n})-\frac {1}{2}\bigg[\sum_{i=1}^m\overline{\Lambda _i}(x_{in})-\overline{\Lambda ^{\ast}}(x_0)\bigg] =\overline{\Lambda _1}(x_{1n})-\frac {1}{2}\bigg[\overline{\Lambda ^{\ast}}\Big(x_0+\frac {1}{n}\Big) -\overline{\Lambda ^{\ast}}(x_0)\bigg],\end{align*}

and

\begin{align*} {\mathbb{P}}(C_{kn}) =\left(\overline{\Lambda ^{\ast}}(x_0) - {\mathbb{P}}(C_{1,n})\right) \frac{\overline{\Lambda _k}(x_{kn})}{\sum_{i=2}^{m}\overline{\Lambda _i}(x_{in})}, \quad k=2, \ldots, m.\end{align*}

It is easy to see that ${\mathbb{P}}(C_{1n})+\sum_{i=2}^m\overline{\Lambda _i}(x_{in})> \overline{\Lambda ^{\ast}}(x_0)$ , which implies that

\begin{align*} {\mathbb{P}}(C_{1n})\in \left(\overline{\Lambda _1}\Big (x_{1n}-\frac {1}{n}\Big ), \overline{\Lambda _1}(x_{1n}) \right)\end{align*}

and ${\mathbb{P}}(C_{kn})\lt \overline{{\Lambda}_k}(x_{kn})$ for $k\in [m]$ . Construct a sequence of admissible allocations $(X_{1n},\dots,X_{mn})\in \mathbb{A}_m(X)$ as follows:

(5.8) \begin{align}X_{1n} & = x_{1n}-\frac{1}{n} + (X-x_0)\, 1_{C_{1n}},\nonumber\\ X_{jn} &= x_{jn} + (X-x_0)\, 1_{C_{jn}},\quad j=2, \ldots, m-1,\\ X_{mn} &= X-\sum_{i=1}^{m-1} X_{in}.\nonumber\end{align}

Note that ${\mathbb{P}}(X_{1n}> x_{1n}-1/n)= {\mathbb{P}}(C_{1n})<\overline{{\Lambda}_1}(x_{1n})$ and ${\mathbb{P}}(X_{kn}> x_{kn}) = {\mathbb{P}}(C_{kn}) < \overline{{\Lambda}_k}(x_{kn})$ for $k\ge 2$ . Hence, ${\mathrm{VaR}}_{{\Lambda}_i}^-(X_{in})\le x_{in}$ for $i\in [m]$ . Therefore,

(5.9) \begin{align}\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_{in}) \le \sum_{i=1}^m x_{in}= x_0 + \frac{1}{n}.\end{align}

In view of Theorem 1, the desired statement follows from (5.7) and (5.9).

Corollary 2 in Embrechts et al. [Reference Embrechts, Liu and Wang12] is a special case of Theorem 4 with ${{\rm{\Lambda }}_i} \equiv {\lambda _i}$ for $i \in [m]$ satisfying ${\Lambda}^\ast\equiv \sum^m_{i=1}{\lambda}_i -m+1>0$ . Also, Proposition 9 gives a sufficient condition on the ${{\rm{\Lambda }}_i}$ under which property $(\mathrm{P}_1)$ of Proposition 7 holds. An immediate consequence of Theorem 4 is the following corollary.

Corollary 2. Let ${\Lambda}_i\,:\,{\mathbb{R}} \rightarrow (0, 1)$ be decreasing for $i \in [m],$ with ${\Lambda ^*}({-}\infty) \gt 0$ . If for any $j \in [m]$ there exists $x_j\in{\mathbb{R}}$ such that ${\Lambda _j}\!\left( {{x_j}} \right) = {\Lambda _j}({+}\infty)$ , then $\mathop {\mathop \square\limits_{i = 1} }\limits^m \mathrm{VaR}_{{\Lambda _i}}^ - (X) = \mathrm{VaR}_{{\Lambda ^*}}^ - (X),$ for which an optimal allocation exists.

5.2. Right Lambda VaRs

Theorem 6. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow (0,1)$ be decreasing for $i \in [m],$ with ${\Lambda ^*}({-}\infty) \gt 0$ . For any $X \in {L^0}$ , denote ${x_0}\,:\!=\, \mathrm{VaR}_{{\Lambda ^*}}^ + (X)$ . If there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda _i}(x_i)$ , then

(5.10) \begin{align}\mathop {\mathop \square\limits_{i = 1} }\limits^m {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) = {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X).\end{align}

Furthermore,

  1. (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , then an optimal allocation exists.

  2. (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , and ${\mathbb{P}}(X>x_0+{\epsilon})<{\mathbb{P}}(X>x_0)$ for any $\epsilon \gt 0$ , then an optimal allocation exists.

  3. (3) Suppose that ${\mathbb{P}}(X>x_0)=\overline{{\Lambda}^\ast}(x_0)$ and ${\mathbb{P}}(X>x_0+{\epsilon}_0)={\mathbb{P}}(X>x_0)$ for some ${\epsilon _0} \gt 0$ and that ${{\rm{\Lambda }}^{\rm{*}}}\!\left( {{x_0} + \epsilon } \right) \lt {{\rm{\Lambda }}^{\rm{*}}}({{x_0}})$ for any $\epsilon \gt 0$ .

    • If ${{\rm{\Lambda }}_j}\!\left( {{x_j} + \epsilon } \right) \lt {{\rm{\Lambda }}_j}\!\left( {{x_j}} \right)$ for any $\epsilon \gt 0$ and $j \in [m]$ , then an optimal allocation exists.

    • If, for any $(y_1,\dots,y_m)\in\mathbb{R}^m$ satisfying $\sum_{i=1}^m y_i=x_0$ and $\sum_{i=1}^m\overline{\Lambda _i}(y_i)= \overline{\Lambda ^{\ast}}(x_0)$ , there always exists some ${\tau _0} \gt 0$ such that ${{\rm{\Lambda }}_k}\!\left( {{y_k}} \right) = {{\rm{\Lambda }}_k}\!\left( {{y_k} + {\tau _0}} \right)$ for some $k \in [m]$ , then no optimal allocation exists.

Moreover, if an optimal allocation exists, then there exists $(X_1,\dots,X_m)\in \mathbb{A}_m(X)$ such that ${\mathrm{VaR}}_{{\Lambda}_i}^+(X_i)=x_i$ for $i\in [m]$ . If no optimal allocation exists, then there exists a sequence of allocations $(X_{1n},\dots, X_{mn})\in \mathbb{A}_m(X)$ such that ${\mathrm{VaR}}^{+}_{{\Lambda}_j}(X_{jn})\to x_j$ as $n\to{\infty}$ for $j\in [m]$ , and $\sum_{i=1}^m {\mathrm{VaR}}^+_{{\Lambda}_i}(X_{in})\to x_0$ .

Theorem 7. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow (0,1)$ be decreasing for $i \in [m],$ with ${\Lambda ^*}({-}\infty) \gt 0$ . For any $X \in {L^0}$ , denote ${x_0}\,:\!=\, \mathrm{VaR}_{{\Lambda ^*}}^ + (X)$ . If there does not exist $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda _i}(x_i)$ , then (5.10) holds. Furthermore,

  1. (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , an optimal allocation exists.

  2. (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , no optimal allocation exists, while there exists a sequence of asymptotically optimal allocations.

In Theorems 6 and 7, the range of the ${{\rm{\Lambda }}_i}$ cannot be weakened from $(0,1)$ to $(0,1]$, as the following counterexample shows.

Example 2. Let $\Lambda_1(x) = 1_{\{x < 2\}} + (4/5)\, 1_{\{x \ge 2\}}$ and $\Lambda_2(x) = (4/5)\, 1_{\{x < 0\}} + (1/2)\, 1_{\{x \ge 0\}}$. From (3.2), it follows that $\Lambda^{\ast}(x) = (1/2)\, 1_{\{x < 2\}} + (3/10)\, 1_{\{x \ge 2\}}$. Let $X$ be a $(0,1)$-uniformly distributed random variable. Then ${\rm VaR}^+_{\Lambda^{\ast}}(X) = x_0 = 1/2$, ${\mathbb{P}}(X>x_0)=\overline{{\Lambda}^\ast}(x_0)$, and $\overline{\Lambda^{\ast}}(1/2) = \overline{\Lambda_1}(1/2) + \overline{\Lambda_2}(0)$. If the conclusion of Theorem 6 held here, there would exist an allocation $(X_1,X_2) \in \mathbb{A}_2(X)$ with ${\rm VaR}^+_{\Lambda_1}(X_1) = 1/2$. However, from the definition of ${\rm VaR}^+_{\Lambda_1}$, it follows that ${\rm VaR}^+_{\Lambda_1}(Y) \ge 2$ for any random variable $Y \in L^0$, a contradiction. Thus, the conclusion of Theorem 6 fails once the range $(0,1)$ is relaxed to $(0,1]$.
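The numbers in Example 2 are easy to confirm numerically. The sketch below is ours, not part of the paper's argument: it approximates $\mathrm{VaR}^+_\Lambda(Y)=\inf\{x\,:\,{\mathbb{P}}(Y\le x)>\Lambda(x)\}$ by the first point of a grid at which the distribution function exceeds $\Lambda$ (the helper name `var_plus` and the grid resolution are arbitrary choices).

```python
import numpy as np

# Grid approximation of VaR^+_Lambda(X) = inf{x : P(X <= x) > Lambda(x)}.
def var_plus(cdf, lam, grid):
    hits = grid[[cdf(x) > lam(x) for x in grid]]
    return hits[0] if hits.size else np.inf

F    = lambda x: min(max(x, 0.0), 1.0)   # CDF of X ~ U(0,1)
lam1 = lambda x: 1.0 if x < 2 else 0.8   # Lambda_1 of Example 2
lamS = lambda x: 0.5 if x < 2 else 0.3   # Lambda^* of Example 2

grid = np.arange(-1.0, 4.0, 1e-4)
print(var_plus(F, lamS, grid))  # ~0.5 = x_0 = VaR^+_{Lambda^*}(X)
# Lambda_1 = 1 on (-infty, 2), and P(Y <= x) > 1 is impossible there, so
# VaR^+_{Lambda_1}(Y) >= 2 for every Y:
print(var_plus(F, lam1, grid))  # ~2.0, although X <= 1 almost surely
```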

5.3. Mixed Lambda VaRs

Theorem 8. Let ${\Lambda}_i\,:\, {\mathbb{R}}\to (0, 1]$ be decreasing for $i \in [m],$ with ${\Lambda ^*}({-}\infty) \gt 0$ , and let ${\kappa _i} \in \left\{ { - , + } \right\}$ for $i \in [m]$ such that $K\,:\!=\, \left\{ {j\,:\,{\kappa _j} = + ,j \in [m]} \right\} \ne \emptyset $ . Assume that ${\Lambda _j} \lt 1$ for $j \in K$ . For any $X \in {L^0}$ , denote ${x_0} = \mathrm{VaR}_{{\Lambda ^*}}^ + (X)$ . If there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)= \sum_{i=1}^m \overline{\Lambda _i}(x_i)$ , then

(5.11) \begin{align}\mathop {\mathop \square\limits_{i = 1} }\limits^m {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^{{\kappa _i}}(X) = {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X).\end{align}

Furthermore,

  1. (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , then an optimal allocation exists.

  2. (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , and ${\mathbb{P}}(X>x_0+{\epsilon})<{\mathbb{P}}(X>x_0)$ for any $\epsilon \gt 0$ , then an optimal allocation exists.

  3. (3) Suppose that ${\mathbb{P}}(X>x_0)=\overline{{\Lambda}^\ast}(x_0)$ and ${\mathbb{P}}(X>x_0+{\epsilon}_0)={\mathbb{P}}(X>x_0)$ for some ${\epsilon _0} \gt 0$ , and that ${{\rm{\Lambda }}^{\rm{*}}}\!\left( {{x_0} + \epsilon } \right) \lt {{\rm{\Lambda }}^{\rm{*}}}({{x_0}})$ for any $\epsilon \gt 0$ .

    • If ${{\rm{\Lambda }}_j}\!\left( {{x_j} + \epsilon } \right) \lt {{\rm{\Lambda }}_j}\!\left( {{x_j}} \right)$ for any $\epsilon \gt 0$ and $j \in K$ , then an optimal allocation exists.

    • If, for any $(y_1,\dots,y_m)\in\mathbb{R}^m$ satisfying $\sum_{i=1}^m y_i=x_0$ and $\sum_{i=1}^m\overline{\Lambda _i}(y_i)= \overline{\Lambda ^{\ast}}(x_0)$ , there always exists some ${\tau _0} \gt 0$ such that ${{\rm{\Lambda }}_k}\!\left( {{y_k}} \right) = {{\rm{\Lambda }}_k}\!\left( {{y_k} + {\tau _0}} \right)$ for some $k \in [m]$ , then no optimal allocation exists.

Moreover, if an optimal allocation exists, then there exists $(X_1,\dots,X_m)\in \mathbb{A}_m(X)$ such that ${\rm{VaR}}_{{{\rm{\Lambda }}_i}}^{{\kappa _i}}({X_i}) = {x_i}$ for $i \in [m]$ . If no optimal allocation exists, then there exists a sequence of allocations $(X_{1n},\dots, X_{mn})\in \mathbb{A}_m(X)$ such that ${\rm{VaR}}_{{{\rm{\Lambda }}_j}}^{{\kappa _j}}({X_{jn}}) \to {x_j}$ as $n \to \infty $ for $j \in [m]$ and $\sum_{i=1}^m \mathrm{VaR}^{\kappa_i}_{{\Lambda}_i}(X_{in})\to x_0$ as $n \to \infty $ .

Theorem 9. Let the ${\Lambda _i}$ be the same as those in Theorem 8. For any $X \in {L^0}$ , denote ${x_0} = \mathrm{VaR}_{{\Lambda ^*}}^ + (X)$ . If there does not exist $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda _i}(x_i)$ , then (5.11) holds. Furthermore,

  1. (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , an optimal allocation exists.

  2. (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , no optimal allocation exists, while there exists a sequence of asymptotically optimal allocations.

In Theorem 5, we established (5.1) under the assumptions ${\rm VaR}^-_{\Lambda^{\ast}}(X) = {\rm VaR}^+_{\Lambda^{\ast}}(X)$ and $(\mathrm{P}_2)$ in Proposition 7. What is the explicit formula for $\square_{i=1}^m{\rm VaR}^-_{\Lambda_i}(X)$ under the assumptions ${\rm VaR}^-_{\Lambda^{\ast}}(X) \lt {\rm VaR}^+_{\Lambda^{\ast}}(X)$ and $(\mathrm{P}_2)$? By an argument similar to that in the proof of Theorem 7, we have the next result.

Theorem 10. Let ${\Lambda}_i\,:\, {\mathbb{R}}\to (0, 1]$ be decreasing for $i\in [m]$ with ${\Lambda}^\ast({-}{\infty})>0$ . For any $X\in L^0$ , denote $x_0=\mathrm{VaR}^-_{{\Lambda}^\ast}(X)$ . Assume that there does not exist $(x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $\sum_{i=1}^mx_i=x_0$ and $\overline{{\Lambda}^\ast}(x_0)= \sum_{i=1}^m \overline{{\Lambda}_i}(x_i)$ . If $\mathrm{VaR}^-_{{\Lambda}^\ast}(X) < \mathrm{VaR}^+_{{\Lambda}^\ast}(X)$ , then

(5.12) \begin{align}\mathop {\mathop \square\limits_{i = 1} }\limits^m {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) = {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X).\end{align}

Furthermore,

  1. (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , an optimal allocation exists.

  2. (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , no optimal allocation exists, while there exists a sequence of asymptotically optimal allocations.

6. Comonotonic inf-convolution of Lambda VaRs

In this section, we consider the inf-convolution of Lambda VaRs constrained to comonotonic allocations. Comonotonicity, an extremal form of positive dependence, has been introduced and widely used in economics, financial mathematics, and actuarial science over the last two decades. The formal definition and its characterization can be found in Dhaene et al. [Reference Dhaene, Denuit, Goovaerts, Kaas, Tang and Vynche10, Reference Dhaene, Denuit, Goovaerts, Kaas and Vyncke11]. Random variables ${X_1}, \ldots ,{X_m}$ are said to be comonotonic if there exist a random variable $Z$ and increasing functions ${g_1}, \ldots ,{g_m}$ such that ${X_i} = {g_i}\!\left( Z \right)$ almost surely for $i \in [m]$. Comonotonicity of more than two random variables is equivalent to pairwise comonotonicity. In the sequel, when ${X_1}, \ldots ,{X_m}$ are comonotonic, we write $X_i/\!/\sum^m_{k=1} X_k$ for $i \in [m]$.
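As a small illustration of this definition (a toy sketch of our own, with arbitrary increasing transforms), comonotonic random variables can be generated from a single driver $Z$, and pairwise comonotonicity can be checked empirically:

```python
import numpy as np

# X_i = g_i(Z) for increasing g_i, hence X_1, X_2, X_3 are comonotonic.
rng = np.random.default_rng(0)
Z = rng.uniform(size=100_000)
X1, X2, X3 = Z, Z**2, np.exp(Z)

# Pairwise comonotonicity: increments never have opposite signs.
i, j = rng.integers(Z.size, size=(2, 10_000))
assert np.all((X1[i] - X1[j]) * (X2[i] - X2[j]) >= 0)
assert np.all((X2[i] - X2[j]) * (X3[i] - X3[j]) >= 0)
```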

It is well known that ordinary VaRs are comonotonically additive on ${L^0}$; that is, the VaR of a sum of comonotonic random variables is simply the sum of the VaRs of the summands [Reference Dhaene, Denuit, Goovaerts, Kaas, Tang and Vynche10, Theorem 5]. However, this property fails for Lambda VaRs. In the next proposition, we prove that Lambda VaRs are comonotonically subadditive on $L_ + ^0$. The property of comonotonic subadditivity was first proposed by Song and Yan [Reference Song and Yan24] and further investigated in [Reference Song and Yan25].

Proposition 14. Let ${\Lambda}\,:\, {\mathbb{R}}_+\to [0,1]$ be decreasing, and let ${X_1}$ and ${X_2}$ be nonnegative comonotonic random variables. Then

(6.1) \begin{align}{\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1} + {X_2}} \right) \le {\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) + {\rm{VaR}}_{\rm{\Lambda }}^ - ({{X_2}})\end{align}

and

(6.2) \begin{align}{\rm{VaR}}_{\rm{\Lambda }}^ + \left( {{X_1} + {X_2}} \right) \le {\rm{VaR}}_{\rm{\Lambda }}^ + \left( {{X_1}} \right) + {\rm{VaR}}_{\rm{\Lambda }}^ + ({{X_2}}).\end{align}

Proof. Denote $x = {\rm VaR}^\kappa_\Lambda(X_1 + X_2)$ for $\kappa \in \{-,+\}$ and set $X = X_1 + X_2$. Without loss of generality, assume that $0 \lt x \lt \infty$. First note that $x \gt \mathrm{ess\mbox{-}sup}(X)$ can occur only for ${\rm VaR}^+_\Lambda$, in which case $x = \sup\{t\,:\,\Lambda(t) = 1\} \gt \mathrm{ess\mbox{-}sup}(X)$. Then ${\rm VaR}^+_\Lambda(X_i) = x$ for $i = 1,2$ and, hence, ${\rm VaR}^+_\Lambda(X_1) + {\rm VaR}^+_\Lambda(X_2) = 2x \gt x = {\rm VaR}^+_\Lambda(X_1 + X_2)$. That is, (6.2) holds when $x \gt \mathrm{ess\mbox{-}sup}(X)$.

Next, assume that $x \le \mathrm{ess\mbox{-}sup}(X)$. Then there exist $\lambda \in [\Lambda(x+),\Lambda(x-)]$ and $\alpha \in [0,1]$ such that ${\rm VaR}^\alpha_\lambda(X_1+X_2) = x$, where ${\rm VaR}^\alpha_\lambda = (1-\alpha){\rm VaR}^-_\lambda + \alpha {\rm VaR}^+_\lambda$. Since $X_1$ and $X_2$ are comonotonic, it follows that ${\rm VaR}^\alpha_\lambda(X_1+X_2) = {\rm VaR}^\alpha_\lambda(X_1) + {\rm VaR}^\alpha_\lambda(X_2)$. Denote $x_1 = {\rm VaR}^\alpha_\lambda(X_1)$ and $x_2 = {\rm VaR}^\alpha_\lambda(X_2)$. Now consider two cases.

Case 1. Consider the right Lambda VaR, and assume that ${x_i} \lt x$ for $i = 1,2$ (otherwise, (6.2) is trivial). Then, for any $\epsilon \gt 0$ , $ {\mathbb{P}}(X_1\le x_1-{\epsilon}) \le {\lambda}\le {\Lambda}(x{-})\le {\Lambda} (x_1-{\epsilon}),$ implying ${\rm{VaR}}_{\rm{\Lambda }}^ + \left( {{X_1}} \right) \ge {x_1} - \epsilon $ . Since $\epsilon $ is arbitrary, we have ${\rm{VaR}}_{\rm{\Lambda }}^ + \left( {{X_1}} \right) \ge {x_1}$ . Similarly, ${\rm{VaR}}_{\rm{\Lambda }}^ + ({{X_2}}) \ge {x_2}$ . So we get (6.2) when ${x_i} \lt x$ for $i = 1,2$ .

Case 2. Consider the left Lambda VaR, and assume that ${x_i} \lt x$ for $i = 1,2$ .

  1. (i) If ${\rm{\Lambda }}(x{-}) \gt \lambda $ , then ${\mathbb{P}}(X_1\le x_1-{\epsilon}) \le {\lambda}<{\Lambda}(x{-})\le {\Lambda} (x_1-{\epsilon})$ for any $\epsilon \gt 0$ , implying ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) \ge {x_1} - \epsilon $ and, hence, ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) \ge {x_1}$ . Similarly, we have ${\rm{VaR}}_{\rm{\Lambda }}^ - ({{X_2}}) \ge {x_2}$ . Therefore, we conclude (6.1) when ${\rm{\Lambda }}(x{-}) \gt \lambda $ .

  2. (ii) If ${\rm{\Lambda }}(x{-}) = \lambda $ and ${\rm{\Lambda }}({x-\epsilon}) \gt {\rm{\Lambda }}(x{-})$ for any $\epsilon \gt 0$ , then ${\mathbb{P}}(X_1\le x_1-{\epsilon}) \le \lambda \le \Lambda (x{-})< \Lambda (x_1-{\epsilon})$ , implying ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) \ge {x_1}$ . Similarly, we have ${\rm{VaR}}_{\rm{\Lambda }}^ - ({{X_2}}) \ge {x_2}$ . We also obtain (6.1) in subcase (ii).

  3. (iii) If ${\rm{\Lambda }}(x{-}) = \lambda $ and ${\rm{\Lambda }}\!\left( {x - {\epsilon _0}} \right) = {\rm{\Lambda }}(x{-})$ for some ${\epsilon _0} \gt 0$ , it follows from the definition of ${\rm{VaR}}_{\rm{\Lambda }}^ - $ that ${\mathbb{P}}(X_1+X_2\le x-{\epsilon})<{\lambda}$ for any $\epsilon \gt 0$ . This implies $\alpha = 0$ , i.e. $x = {\rm{VaR}}_\lambda ^ - \left( {{X_1} + {X_2}} \right) = {x_1} + {x_2}$ . Also, since ${\mathbb{P}}(X_1\le x_1-{\epsilon})<\lambda\le \Lambda (x_1-{\epsilon})$ , we have ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) \ge {x_1}$ . Similarly, ${\rm{VaR}}_{\rm{\Lambda }}^ - ({{X_2}}) \ge {x_2}$ . Again, we conclude (6.1) in subcase (iii).

This completes the proof of the proposition.
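The inequality (6.1) can also be checked empirically. The following Monte Carlo sketch is our own illustration; the decreasing $\Lambda$, the comonotonic pair, the sample size, and the grid are arbitrary choices.

```python
import numpy as np

# Empirical check of the comonotonic subadditivity (6.1) on one example.
rng = np.random.default_rng(1)
Z = np.sort(rng.uniform(size=200_000))
X1, X2, S = Z, Z**2, Z + Z**2     # nonnegative, comonotonic, all sorted

lam  = lambda x: 0.5 / (1.0 + x)  # a decreasing Lambda on R_+
grid = np.arange(0.0, 2.5, 1e-3)

def var_minus(sorted_sample, lam, grid):
    # inf{x >= 0 : F(x) >= Lambda(x)}, with F the empirical CDF
    F = lambda x: np.searchsorted(sorted_sample, x, side="right") / sorted_sample.size
    hits = grid[[F(x) >= lam(x) for x in grid]]
    return hits[0] if hits.size else np.inf

v1, v2, v12 = (var_minus(W, lam, grid) for W in (X1, X2, S))
print(v12, "<=", v1 + v2)         # ~0.46 <= ~0.37 + ~0.18, as (6.1) asserts
```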

Remark 2. Proposition 14 cannot be true without the assumption that the risks lie in $L_ + ^0$. Counterexamples are as follows.

  1. (1) Let ${\rm{\Lambda }}(x) = \left( {1/2} \right) \cdot {1_{\left( { - \infty ,a} \right)}}(x)$ , and $X = Y \sim U\!\left( {a/2 - 1,a/2 + 1} \right)$ with $a \lt 0$ . Then ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {X + Y} \right) = {\rm{VaR}}_{\rm{\Lambda }}^ - (X) = {\rm{VaR}}_{\rm{\Lambda }}^ - (Y) = a$ , since $F_X(x) \lt 1/2$ and $F_{X+Y}(x) \lt 1/2$ for all $x \lt a$ , while $\Lambda$ vanishes on $[a,+\infty)$ . So we get that ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {X + Y} \right) = a \gt 2a = {\rm{VaR}}_{\rm{\Lambda }}^ - (X) + {\rm{VaR}}_{\rm{\Lambda }}^ - (Y)$ , violating (6.1).

  2. (2) Let ${\rm{\Lambda }}(x) = {1_{\left( { - \infty ,c} \right)}}(x)$ , and $X = Y \sim U\!\left( {a,b} \right)$ with $a \lt b \lt c \lt 0$ . Then ${\rm{VaR}}_{\rm{\Lambda }}^ + (X) = {\rm{VaR}}_{\rm{\Lambda }}^ + (Y) = c$ , and ${\rm{VaR}}_{\rm{\Lambda }}^ + \left( {X + Y} \right) = c$ . So we get that ${\rm{VaR}}_{\rm{\Lambda }}^ + \left( {X + Y} \right) \gt {\rm{VaR}}_{\rm{\Lambda }}^ + (X) + {\rm{VaR}}_{\rm{\Lambda }}^ + (Y)$ , violating (6.2).
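Counterexample (2) is easy to confirm numerically; the sketch below is our own grid-based check with the arbitrary choices $a=-3$, $b=-2$, $c=-1$.

```python
import numpy as np

# Remark 2(2): Lambda = 1 on (-infty, c) and 0 on [c, infty), with a < b < c < 0.
a, b, c = -3.0, -2.0, -1.0
lam  = lambda x: 1.0 if x < c else 0.0
F_X  = lambda x: np.clip((x - a) / (b - a), 0.0, 1.0)            # X ~ U(a, b)
F_XY = lambda x: np.clip((x - 2 * a) / (2 * (b - a)), 0.0, 1.0)  # X + Y ~ U(2a, 2b)

grid = np.arange(-8.0, 1.0, 1e-3)
var_plus = lambda F: grid[[F(x) > lam(x) for x in grid]][0]

print(var_plus(F_X), var_plus(F_XY))  # both ~ c = -1
# VaR^+(X+Y) = c > 2c = VaR^+(X) + VaR^+(Y): (6.2) fails for these risks.
```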

In view of Remark 2, we restrict ourselves to considering nonnegative random variables in $L_ + ^0$ . For $X \in L_ + ^0$ , we define the set of comonotonic allocations as

\begin{align*} \mathbb{A}_m^{c+}(X)=\big \{ (X_1, \ldots, X_m)\in \mathbb{A}^+_m(X)\,:\,\ X_i\in L^0_+, \ X_i/\!/X,\, i\in [m]\big \}.\end{align*}

The constrained (comonotonic) inf-convolution of risk measures ${\rho _1}, \ldots ,{\rho _m}$ is defined as

\begin{align*} \overset{m}{\underset{i=1}{\boxplus}} \rho_i (X)=\inf\!\left\{ \sum^m_{i=1} \rho_i(X_i)\,:\, (X_1, \ldots, X_m)\in \mathbb{A}_m^{c+}(X)\right \}.\end{align*}

An $m$ -tuple $(X_1, \ldots, X_m)\in\mathbb{A}_m^{c+}(X)$ is said to be an optimal constrained allocation of $X$ for $\left( {{\rho _1}, \ldots ,{\rho _m}} \right)$ if $\sum^m_{i=1}\rho_i(X_i)= \boxplus^m_{i=1} \rho_i (X)$ .

Theorem 11. Let ${\Lambda}_i\,:\, {\mathbb{R}}_+\to [0,1]$ be decreasing for $i \in [m]$ , and set $\Lambda = \mathrm{min}_{1 \le i \le m}{\Lambda _i},$ with $m \ge 2$ . Then

(6.3) \begin{align}\overset{m}{\underset{i=1}{\boxplus}} {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) = {\rm{VaR}}_{\rm{\Lambda }}^ - (X),{\rm{\;\;\;\;}}X \in L_ + ^0.\end{align}

If, in addition, ${{\rm{\Lambda }}_i} \lt 1$ for each $i$ , then

(6.4) \begin{align}\overset{m}{\underset{i=1}{\boxplus}} {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) = {\rm{VaR}}_{\rm{\Lambda }}^ + (X),{\rm{\;\;\;\;}}X \in L_ + ^0.\end{align}

Proof. First, consider the case of the left Lambda VaR. Choose any $\boldsymbol{{X}}\in \mathbb{A}_m^{c+}(X)$ . Since ${\rm{\Lambda }} \le {{\rm{\Lambda }}_i}$ for each $i$ , and ${\rm{VaR}}_{\rm{\Lambda }}^ - $ is increasing in ${\rm{\Lambda }}$ , it follows that ${\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - ({X_i}) \ge {\rm{VaR}}_{\rm{\Lambda }}^ - ({X_i})$ for each $i$ . Then $ \sum^m_{i=1} \mathrm{VaR}^-_{{\Lambda}_i}(X_i) \ge \sum^m_{i=1} \mathrm{VaR}^-_{{\Lambda}}(X_i) \ge \mathrm{VaR}^-_{\Lambda} (X),$ where the second inequality follows from Proposition 14. Thus, we have

(6.5) \begin{align}\overset{m}{\underset{i=1}{\boxplus}} {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) \ge {\rm{VaR}}_{\rm{\Lambda }}^ - (X).\end{align}

To prove the reverse inequality of (6.5), we set $k = {\rm{argmi}}{{\rm{n}}_{1 \le i \le m}}{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X)$ , that is, $\inf\{ x\in \mathbb{R}_+\,:\, F_X(x)\ge \Lambda _k(x) \}\le\inf\{ x\in \mathbb{R}_+\,:\, F_X(x)\ge \Lambda _i(x) \}$ for $i \ne k$ . Then

\begin{align*} {\rm VaR}^-_{\Lambda _k}(X) &= \inf\{ x\in \mathbb{R}_+\,:\, F_X(x)\ge \Lambda _k(x) \}\\ &= \inf\{ x\in \mathbb{R}_+\,:\, F_X(x)\ge \Lambda _j(x)\ \hbox{for\ some}\ j\in [m] \}\\ &= \inf\!\left\{ x\in \mathbb{R}_+\,:\, F_X(x)\ge \min_{1\le j\le m} \Lambda _j(x)\right \} = {\rm VaR}^-_{\Lambda }(X).\end{align*}

Now choose ${X_k} = X$ and ${X_i} = 0$ for all $i \ne k$ . Obviously, ${\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - ({X_i}) = 0$ for $i \ne k$ . So we have $ \sum^m_{i=1} \mathrm{VaR}^-_{{\Lambda}_i}(X_i)= \mathrm{VaR}^-_{{\Lambda}_k}(X)=\mathrm{VaR}^-_{{\Lambda}}(X),$ implying that $\overset{m}{\underset{i=1}{\boxplus}} {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) \le {\rm{VaR}}_{\rm{\Lambda }}^ - (X)$ . This proves (6.3).

The proof for the right Lambda VaR is similar, observing that ${{\rm{\Lambda }}_i} \lt 1$ implies ${\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (0) = 0$ .
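The argument is constructive: the infimum is attained by giving the whole risk to the agent whose Lambda VaR of $X$ is smallest. The following sketch is our own toy instance of (6.3) with $m=2$; the step functions and the uniform loss are arbitrary.

```python
import numpy as np

# Theorem 11 with m = 2: boxplus of VaR^-_{Lambda_i} equals VaR^-_{min_i Lambda_i}.
lam1 = lambda x: np.where(x < 0.3, 0.9, 0.4)
lam2 = lambda x: np.where(x < 0.6, 0.7, 0.2)
lam  = lambda x: np.minimum(lam1(x), lam2(x))

F = lambda x: np.clip(x, 0.0, 1.0)  # X ~ U(0,1), a nonnegative risk
grid = np.arange(0.0, 1.5, 1e-4)

def var_minus(lamfun):
    hits = grid[F(grid) >= lamfun(grid)]
    return hits[0] if hits.size else np.inf

print(var_minus(lam))   # 0.4 = VaR^-_Lambda(X), the right-hand side of (6.3)
# The allocations (X, 0) and (0, X) cost VaR^-_{Lambda_1}(X) and VaR^-_{Lambda_2}(X):
print(min(var_minus(lam1), var_minus(lam2)))   # 0.4 as well
```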

Remark 3. One may wonder whether ${\rm VaR}^-_{\Lambda_1} \boxplus {\rm VaR}^+_{\Lambda_2}(X) = {\rm VaR}^+_{\Lambda}(X)$ for $X \in L_ + ^0,$ with ${\rm{\Lambda }} = {\rm{min}}\!\left\{ {{{\rm{\Lambda }}_1},{{\rm{\Lambda }}_2}} \right\}$ . However, this is not true, as shown by the following counterexample. Let ${X_1} = {X_2}\sim B\!\left( {1,1/2} \right)$ , $X = {X_1} + {X_2}$ , and define ${{\rm{\Lambda }}_1} = {{\rm{\Lambda }}_2} \equiv 1/2$ . Then ${\rm{VaR}}_{1/2}^ + (X) = 2,$ ${\rm{VaR}}_{1/2}^ - \left( {{X_1}} \right) = 0$ , and ${\rm{VaR}}_{1/2}^ + ({{X_2}}) = 1$ . Therefore, ${\rm VaR}^-_{\Lambda_1} \boxplus {\rm VaR}^+_{\Lambda_2}(X) \lt {\rm{VaR}}_{\rm{\Lambda }}^ + (X)$ .
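The counterexample can be confirmed with the exact distribution functions; in the sketch below (ours), the grid is only a computational device.

```python
import numpy as np

# Remark 3: X1 = X2 ~ Bernoulli(1/2), X = X1 + X2 in {0, 2}, Lambda_i = 1/2.
F_X  = lambda x: np.where(x < 0, 0.0, np.where(x < 2, 0.5, 1.0))
F_X1 = lambda x: np.where(x < 0, 0.0, np.where(x < 1, 0.5, 1.0))

grid = np.arange(-0.5, 3.0, 1e-3)
var_plus  = lambda F: grid[F(grid) >  0.5][0]  # inf{x : F(x) > 1/2}
var_minus = lambda F: grid[F(grid) >= 0.5][0]  # inf{x : F(x) >= 1/2}

print(var_plus(F_X))    # 2.0 = VaR^+_{1/2}(X)
print(var_minus(F_X1))  # 0.0 = VaR^-_{1/2}(X1)
print(var_plus(F_X1))   # 1.0 = VaR^+_{1/2}(X2)
# 0 + 1 = 1 < 2: the mixed comonotonic inf-convolution is strictly smaller.
```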

For optimal comonotonic allocations, see Jouini et al. [Reference Jouini, Schachermayer and Touzi19] for law-determined convex risk measures, and Cui et al. [Reference Cui, Yang and Wu8] for general distortion risk measures in the context of designing optimal reinsurance contracts. Embrechts et al. [Reference Embrechts, Liu and Wang12] obtained explicit formulas for comonotonic inf-convolutions of distortion risk measures, including RVaR and ES. Wang and Zitikis [Reference Wang and Zitikis28] provided analytical solutions to the inf-convolution of VaRs under weak comonotonicity constraints on the dependence structure of admissible allocations; weak comonotonicity interpolates between (strong) comonotonicity and the absence of any dependence restriction. Liu et al. [Reference Liu, Mao, Wang and Wei21] considered the comonotonic inf-convolution of tail risk measures.

The next proposition gives a connection between the comonotonic inf-convolution and the unconstrained inf-convolution of Lambda VaRs.

Proposition 15. Let ${\Lambda}_i\,:\, {\mathbb{R}}_+ \rightarrow (0, 1]$ be decreasing for each $i \in [m]$ .

(1) At least $m - 1$ of the ${{\rm{\Lambda }}_i}$ are equal to $1$ if and only if

(6.6) \begin{align}\overset{m}{\underset{i=1}{\boxplus}} {\rm VaR}^-_{\Lambda_i}(X) = \mathop{\square}\limits_{i=1}^m {\rm VaR}^-_{\Lambda_i}(X) \quad \text{for all}\ X \in L_+^0.\end{align}

(2) If ${{\rm{\Lambda }}_i} \equiv 1$ for some $i \in [m]$ , then

(6.7) \begin{align}\overset{m}{\underset{i=1}{\boxplus}} {\rm VaR}^+_{\Lambda_i}(X) = \mathop{\square}\limits_{i=1}^m {\rm VaR}^+_{\Lambda_i}(X) \quad \text{for all}\ X \in L_+^0.\end{align}

Proof. (1) The necessity is trivial. For the sufficiency, by Theorem 11, Eq. (6.6) holds if and only if ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X) = {\rm{VaR}}_{{\rm{min}}\left\{ {{{\rm{\Lambda }}_1}, \ldots ,{{\rm{\Lambda }}_m}} \right\}}^ - (X)$ for all $X \in L_ + ^0$ or, equivalently, ${{\rm{\Lambda }}^{\rm{*}}} = {\rm{mi}}{{\rm{n}}_{1 \le i \le m}}{{\rm{\Lambda }}_i}$ almost everywhere with respect to the Lebesgue measure. So we have $\overline{\Lambda ^{\ast}}({+}{\infty}) = \sum_{i=1}^m\overline{\Lambda _i}({+}{\infty}) = \max_{1\le i\le m} \overline{\Lambda _i}({+}{\infty})$ , implying that at least $m - 1$ of the ${{\rm{\Lambda }}_i}$ are equal to $1$ .

(2) The proof follows immediately since both sides of (6.7) are infinite when ${{\rm{\Lambda }}_i} \equiv 1$ for some $i$ .

7. Concluding remarks

This paper is based on a PhD thesis [Reference Xia29]. We give a thorough discussion of a risk sharing problem in which $m$ agents are equipped with respective risk measures ${\rm VaR}^{\kappa_1}_{\Lambda_1}, {\rm VaR}^{\kappa_2}_{\Lambda_2}, \ldots, {\rm VaR}^{\kappa_m}_{\Lambda_m}$, where $\kappa_i \in \{-,+\}$ and ${\Lambda}_i\,:\, {\mathbb{R}}\to [0,1]$ is decreasing and right-continuous for each $i$. We obtain explicit formulas for the inf-convolution, together with optimal allocations, of a random variable in $L^0$ under different sets of assumptions.

During the revision of this paper, we learned that Liu [Reference Liu22] also studied the risk sharing problem among multiple agents using ${\rm VaR}^-_{\Lambda_1}, \ldots, {\rm VaR}^-_{\Lambda_m}$ as their preferences when the ${{\rm{\Lambda }}_i}$ are all decreasing and right-continuous, or all increasing and left-continuous. However, when the ${{\rm{\Lambda }}_i}$ are all decreasing and right-continuous, the explicit formulas for the inf-convolution with respect to left Lambda VaRs and the constructions of optimal allocations in the two papers are different. Our approach is based on the inf-convolution of decreasing functions, introduced and investigated in Section 3. Moreover, Liu [Reference Liu22] investigated the inf-convolution of two risk measures: (i) ${\rm{VaR}}_{\rm{\Lambda }}^ - $ and one law-invariant monotone risk measure without cash-additivity; and (ii) $\widetilde{\mathrm{VaR}}_{{\Lambda}}^-$ and one risk measure that is consistent with second-order stochastic dominance. In cases (i) and (ii), no assumption on the monotonicity of ${\rm{\Lambda }}$ is imposed.

Appendix A. Proofs of results in Section 2

Proof of Proposition 1. It suffices to prove (2.1) since the other inequality is equivalent to (2.1). For $\lambda \in \left( {0,1} \right)$ , we have

\begin{align*}{\rm{VaR}}_{\rm{\Lambda }}^ + \left( {\lambda X} \right) & = {\rm{inf}}\!\left\{ {t\,:\,{F_X}\!\left( {\frac{t}{\lambda }} \right) \gt {\rm{\Lambda }}\!\left( t \right)} \right\} = \lambda {\rm{inf}}\!\left\{ {x\,:\,{F_X}(x) \gt {\rm{\Lambda }}\!\left( {\lambda x} \right)} \right\}\\ & \ge \lambda {\rm{inf}}\!\left\{ {x\,:\,{F_X}(x) \gt {\rm{\Lambda }}(x)} \right\} = \lambda {\rm{VaR}}_{\rm{\Lambda }}^ + (X),\end{align*}

where the inequality follows since ${\rm{\Lambda }}$ is decreasing. The proof for ${\rm{VaR}}_{\rm{\Lambda }}^ - $ is similar and, hence, omitted.

Proof of Proposition 2. We prove only the necessity. Assume on the contrary that ${\rm{\Lambda }}$ is not constant. Then there exist ${x_1} \lt {x_2}$ such that ${\rm{\Lambda }}(x_1) \gt {\rm{\Lambda }}\!\left( {{x_2}} \right)$ . Now choose ${x_0} \lt {x_1}$ , and let $X \in {L^0}$ with distribution function ${F_X}$ such that ${F_X}({{x_0}}) = \left[ {{\rm{\Lambda }}(x_1) + {\rm{\Lambda }}\!\left( {{x_2}} \right)} \right]/2$ . Since ${F_X}$ is right-continuous at the point ${x_0}$ , there exists $\epsilon \gt 0$ such that ${x_0} + \epsilon \lt {x_1}$ and $F_X\!\left( {{x_0} + \epsilon } \right) \lt {\rm{\Lambda }}(x_1) \le {\rm{\Lambda }}\!\left( {{x_0} + \epsilon } \right)$ , which implies ${\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) \ge {x_0} + \epsilon \gt {x_0}$ , where $\kappa \in \left\{ { - , + } \right\}$ . Note that ${\mathbb{P}}(X+x_2-x_0\le x_2)=F_X(x_0) > \Lambda (x_2)$ . Thus, ${\rm{VaR}}_{\rm{\Lambda }}^\kappa \!\left( {X + {x_2} - {x_0}} \right) \le {x_2}$ . Moreover, by the assumed translation invariance of ${\rm{VaR}}_{\rm{\Lambda }}^\kappa $ , we have ${\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) \le {x_0}$ . This contradicts ${\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) \gt {x_0}$ .

Proof of Proposition 3. The proof consists of the following three steps:

  1. (1) Suppose that there exist $0 \lt {x_1} \lt {x_2}$ such that ${\rm{\Lambda }}(x_1) \gt {\rm{\Lambda }}\!\left( {{x_2}} \right)$ . Choose $0 \lt {x_0} \lt {x_1}$ , and let $X \in {L^0}$ , with distribution function ${F_X}$ satisfying ${F_X}({{x_0}}) = \left[ {{\rm{\Lambda }}(x_1) + {\rm{\Lambda }}\!\left( {{x_2}} \right)} \right]/2$ . From the proof of Proposition 2, it follows that ${\rm{VaR}}_{\rm{\Lambda }}^{\kappa}(X) \gt {x_0}$ , where $\kappa \in \left\{ { - , + } \right\}$ . On the other hand, ${\mathbb{P}}\!\left ( (x_2/x_0) X \le x_2\right )={\mathbb{P}}(X\le x_0)> \Lambda (x_2)$ , which implies ${\rm{VaR}}_{\rm{\Lambda }}^\kappa (({x_2}/{x_0})X) \le {x_2}$ . Moreover, by positive homogeneity of ${\rm{VaR}}_{\rm{\Lambda }}^\kappa $ , we obtain that $\left( {{x_2}/{x_0}} \right){\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) \le {x_2}$ and, hence, ${\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) \le {x_0}$ , a contradiction. This means that ${\rm{\Lambda }}$ is constant on $\left( {0,\infty } \right)$ .

  2. (2) Suppose that there exist ${x_1} \lt {x_2} \lt 0$ such that ${\rm{\Lambda }}(x_1) \gt {\rm{\Lambda }}\!\left( {{x_2}} \right)$ . Choose ${x_0} \in \left( {{x_2},0} \right)$ , and let $X \in {L^0}$ with distribution function ${F_X}$ satisfying ${F_X}({{x_0}}) = \left[ {{\rm{\Lambda }}(x_1) + {\rm{\Lambda }}\!\left( {{x_2}} \right)} \right]/2$ . Obviously, we have ${\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) \le {x_0}$ by the definition of Lambda VaRs. Since ${F_X}$ is right-continuous at point ${x_0}$ , there exists $\epsilon \gt 0$ such that ${x_0} + \epsilon \lt 0$ and $ {\mathbb{P}}\big(\frac {x_1}{x_0+{\epsilon}} X \le x_1\big) ={\mathbb{P}}(X\le x_0+{\epsilon}) \lt {\Lambda} (x_1), $ which implies ${\rm{VaR}}_{\rm{\Lambda }}^\kappa \big( {\frac{{{x_1}}}{{{x_0} + \epsilon }}X}\big) = \frac{{{x_1}}}{{{x_0} + \epsilon }}{\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) \ge {x_1}$ . Thus, ${\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) \ge {x_0} + \epsilon $ , leading to a contradiction. Therefore, ${\rm{\Lambda }}$ is also constant on $\left( { - \infty ,0} \right)$ .

  3. (3) From the previous discussions, it follows that ${\rm{\Lambda }}$ has the representation (2.2) with $1 \ge {\alpha _1} \ge {\alpha _2} \ge {\alpha _3} \ge 0$ . If ${\rm{\Lambda }}$ is of the form (2.2), it can be checked that for any $X \in {L^0}$ ,

    \begin{align*}{\rm{VaR}}_{\rm{\Lambda }}^\kappa (X) = \left\{ {\begin{array}{l@{\quad}l} {0,} & {{F_X}(0) \in \left( {{\alpha _3},{\alpha _1}} \right),}\\[8pt] {{\rm{max}}\!\left\{ {0,{\rm{VaR}}_{{\alpha _3}}^\kappa (X)} \right\},} & {{F_X}(0) \le {\alpha _3} \lt {\alpha _1},}\\[8pt] {{\rm{min}}\!\left\{ {0,{\rm{VaR}}_{{\alpha _1}}^\kappa (X)} \right\},} & {{F_X}(0) \ge {\alpha _1} \gt {\alpha _3},}\\[8pt] {{\rm{VaR}}_{{\alpha _1}}^\kappa (X),} & {{\alpha _1} = {\alpha _3}.}\end{array}} \right.\end{align*}

    Thus, ${\rm{VaR}}_{\rm{\Lambda }}^\kappa $ is positively homogeneous on ${L^0}$ .

This completes the proof of the proposition.

Proof of Lemma 1. We prove only (2.3) and (2.6); the proofs of (2.4) and (2.5) are similar.

Necessity of (2.3): If ${\mathbb{P}}(X>x)\le \overline{\Lambda }(x{+})$ , then ${\mathbb{P}}(X\le x)\ge \Lambda (x{+})$ , which implies ${\mathbb{P}}(X\le x+{\epsilon})\ge {\Lambda}(x+{\epsilon})$ for any $\epsilon \gt 0$ since ${\rm{\Lambda }}\!\left( t \right)$ is decreasing in $t$ . Thus, ${\rm{VaR}}_{\rm{\Lambda }}^ - (X) \le x + \epsilon $ . Setting $\epsilon \to 0$ , we get ${\rm{VaR}}_{\rm{\Lambda }}^ - (X) \le x$ .

Sufficiency of (2.3): If ${\rm{VaR}}_{\rm{\Lambda }}^ - (X) \le x$ , then ${\mathbb{P}}(X\le x+{\epsilon})\ge {\Lambda}(x+{\epsilon})$ for any $\epsilon \gt 0$ . Setting $\epsilon \to 0$ , it follows that ${\mathbb{P}}(X\le x)\ge \Lambda (x{+})$ .

Necessity of (2.6): If ${\mathbb{P}}(X\ge x)\ge \overline{\Lambda }(x{-})$ , then ${\mathbb{P}}(X< x) \le \Lambda (x{-})$ . Thus, ${\mathbb{P}}(X\le x-{\epsilon})\le \Lambda (x-{\epsilon})$ for any $\epsilon \gt 0$ , which implies ${\rm{VaR}}_{\rm{\Lambda }}^ + (X) \ge x - \epsilon $ . Setting $\epsilon \to 0$ yields ${\rm{VaR}}_{\rm{\Lambda }}^ + (X) \ge x$ .

Sufficiency of (2.6): If ${\rm{VaR}}_{\rm{\Lambda }}^ + (X) \ge x$ , then ${\mathbb{P}}(X\le x-{\epsilon})\le {\Lambda}(x-{\epsilon})$ for any $\epsilon \gt 0$ . Setting $\epsilon \to 0$ , it follows that ${\mathbb{P}}(X<x)\le \Lambda (x{-})$ , that is, ${\mathbb{P}}(X\ge x)\ge \overline{\Lambda }(x{-})$ .

Proof of Lemma 3. The proof is similar to that of Proposition 3.1 in Han et al. [Reference Han, Wang, Wang and Xia16]. If ${\rm{\Lambda }} \equiv 1$ , then both sides of (2.7) are infinite and, thus, (2.7) holds trivially. Next, we consider the case ${\Lambda} \not\equiv 1$ . Since ${\mathbb{P}}(X\le x)>{\lambda}$ for $x \in {\mathbb{R}}$ and $\lambda \in [{0,1})$ implies ${\rm{VaR}}_\lambda ^ + (X) \le x$ , it follows that

\begin{align*} {\rm VaR}_\Lambda ^+(X) &= \inf\{x\in \mathbb{R}\,:\, {\mathbb{P}}(X\le x) > \Lambda (x)\} \ge \inf\{x\in \mathbb{R} \,:\, {\rm VaR}_{\Lambda (x)}^+(X)\le x\}\\ & \ge \inf\{{\rm VaR}_{\Lambda (x)}^+(X) \vee x\,:\,{\rm VaR}_{\Lambda (x)}^+(X)\le x \} \ge \inf_{x\in\mathbb{R} } \left\{{\rm VaR}^+_{\Lambda(x)}(X)\vee x\right\}.\end{align*}

To prove (2.7), it suffices to prove that for any $x\in{\mathbb{R}}$ ,

(A.1) \begin{align}{\rm{VaR}}_{\rm{\Lambda }}^ + (X) \le {\rm{VaR}}_{{\rm{\Lambda }}(x)}^ + (X) \vee x.\end{align}

It is trivial for $x \ge {\rm{VaR}}_{\rm{\Lambda }}^ + (X)$ . If $x \lt {\rm{VaR}}_{\rm{\Lambda }}^ + (X)$ , then ${\mathbb{P}}(X\le x)\le {\Lambda}(x)$ . Thus, for any $a \in \big( {x,{\rm{VaR}}_{\rm{\Lambda }}^ + (X)}\big)$ , we have ${\mathbb{P}}(X\le a) \le {\Lambda} (a) \le {\Lambda}(x)$ since ${\rm{\Lambda }}\!\left( t \right)$ is decreasing, which implies ${\rm{VaR}}_{{\rm{\Lambda }}(x)}^ + (X) \ge a$ . Therefore, (A.1) follows since $a$ is chosen arbitrarily. This completes the proof of the lemma.
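Representation (2.7) reduces $\mathrm{VaR}^+_\Lambda$ to ordinary right quantiles, which makes it easy to test numerically. The sketch below uses our own choices of $\Lambda$ and $X$, together with the fact that $\mathrm{VaR}^+_\lambda(X)=\lambda$ for $X\sim U(0,1)$ and $\lambda\in[0,1)$.

```python
import numpy as np

# Check of (2.7): VaR^+_Lambda(X) = inf_x { VaR^+_{Lambda(x)}(X) v x }.
lam = lambda x: np.clip(0.8 - 0.3 * x, 0.1, 0.8)  # decreasing, (0,1)-valued
F   = lambda x: np.clip(x, 0.0, 1.0)              # X ~ U(0,1)

grid = np.arange(-1.0, 3.0, 1e-4)
lhs  = grid[F(grid) > lam(grid)][0]               # VaR^+_Lambda(X) directly

# For U(0,1), VaR^+_{lam(x)}(X) = lam(x), so the right-hand side of (2.7) is
rhs = np.min(np.maximum(lam(grid), grid))
print(lhs, rhs)                                   # both ~ 8/13 = 0.6153...
```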

Appendix B. Proofs of results in Section 3

Proof of Proposition 5. It suffices to prove that $\overline { \oslash _{i = 1}^3{{\rm{\Lambda }}_i}} = \overline {\left( {{{\rm{\Lambda }}_1} \oslash {{\rm{\Lambda }}_2}} \right) \oslash {{\rm{\Lambda }}_3}} $ . Denote ${{\rm{\Lambda }}_{12}} = {{\rm{\Lambda }}_1} \oslash {{\rm{\Lambda }}_2}$ . First, for any given $y\in \mathbb{R}$ , in view of (3.2), there exists a sequence $\{(y_{1n}, y_{2n}, y_{3n})\}_{n\in{\mathbb{N}}}$ such that $\sum^3_{i=1} y_{in}=y$ and $\overline {{{\rm{\Lambda }}_1}} \!\left( {{y_{1n}}} \right) + \overline {{{\rm{\Lambda }}_2}} \!\left( {{y_{2n}}} \right) + \overline {{{\rm{\Lambda }}_3}} \!\left( {{y_{3n}}} \right) \to \overline { \oslash _{i = 1}^3{{\rm{\Lambda }}_i}} (y)$ as $n \to \infty $ . Also, it follows from (3.2) that

\begin{align*}\overline {{{\rm{\Lambda }}_1}} \!\left( {{y_{1n}}} \right) + \overline {{{\rm{\Lambda }}_2}} \!\left( {{y_{2n}}} \right) + \overline {{{\rm{\Lambda }}_3}} \!\left( {{y_{3n}}} \right) \le \overline {{{\rm{\Lambda }}_{12}}} \!\left( {{y_{1n}} + {y_{2n}}} \right) + \overline {{{\rm{\Lambda }}_3}} \!\left( {{y_{3n}}} \right) \le \overline {{{\rm{\Lambda }}_{12}} \oslash {{\rm{\Lambda }}_3}} (y).\end{align*}

Letting $n \to \infty $ yields that

(B.1) \begin{align}\overline { \oslash _{i = 1}^3{{\rm{\Lambda }}_i}} (y) \le \overline {{{\rm{\Lambda }}_{12}} \oslash {{\rm{\Lambda }}_3}} (y).\end{align}

Next, we prove that the reverse inequality of (B.1) also holds. For fixed $y\in \mathbb{R}$ , there exists a sequence $\{(z_n, z_{3n})\}_{n\in{\mathbb{N}}}$ such that ${z_n} + {z_{3n}} = y$ and $\overline {{{\rm{\Lambda }}_{12}}} ({z_n}) + \overline {{{\rm{\Lambda }}_3}} \!\left( {{z_{3n}}} \right) \to \overline {{{\rm{\Lambda }}_{12}} \oslash {{\rm{\Lambda }}_3}} (y)$ as $n \to \infty $ . Also, for each $n$ , there exists a sequence $\{(z_{1,n_j}, z_{2,n_j})\}_{j\in{\mathbb{N}}}$ such that ${z_{1,{n_j}}} + {z_{2,{n_j}}} = {z_n}$ and $\overline {{{\rm{\Lambda }}_1}} \big({z_{1,{n_j}}}\big) + \overline {{{\rm{\Lambda }}_2}} \big({{z_{2,{n_j}}}}\big) \to \overline {{{\rm{\Lambda }}_{12}}} ({z_n})$ as $j \to \infty $ . Note that

\begin{align*}\overline {{{\rm{\Lambda }}_1}} \big({z_{1,{n_j}}}\big) + \overline {{{\rm{\Lambda }}_2}} \big({{z_{2,{n_j}}}}\big) + \overline {{{\rm{\Lambda }}_3}} \!\left( {{z_{3,n}}} \right) \le \overline { \oslash _{i = 1}^3{{\rm{\Lambda }}_i}} (y).\end{align*}

First letting $j \to \infty $ and then $n \to \infty $ , we have $\overline {{{\rm{\Lambda }}_{12}} \oslash {{\rm{\Lambda }}_3}} (y) \le \overline { \oslash _{i = 1}^3{{\rm{\Lambda }}_i}} (y)$ . Combining with (B.1), we conclude $\overline { \oslash _{i = 1}^3{{\rm{\Lambda }}_i}} = \overline {\left( {{{\rm{\Lambda }}_1} \oslash {{\rm{\Lambda }}_2}} \right) \oslash {{\rm{\Lambda }}_3}} $ .

Proof of Proposition 6. (1) For any sequence $\{(y_{1n}, \ldots, y_{mn})\}_{n\in{\mathbb{N}}}$ such that $\sum^m_{i=1} y_{in}=y_n$ and ${y_n} \to - \infty $ as $n \to \infty $ , there exist a subsequence $\left\{ {{n_k}} \right\}$ and some $i \in [m]$ such that ${y_{i,{n_k}}} \to - \infty $ as $k \to \infty $ . Setting $k \to \infty $ in the following inequality

\begin{align*} \overline{\Lambda _i} (y_{i,n_k} )+\sum_{j\ne i} \overline{\Lambda _j} (y_{j,n_k})\le \overline{\Lambda _i} (y_{i,n_k} )+ \sum_{j\ne i} \overline{\Lambda _j} ({+}\infty),\end{align*}

we have

(B.2) \begin{align}\overline{\Lambda ^{\ast}}({-}{\infty})\le \max_{1\le i\le m} \bigg (\overline{\Lambda _i} ({-}{\infty})+\sum_{j\ne i} \overline{\Lambda _j} ({+}{\infty})\bigg ).\end{align}

On the other hand, to prove the reverse inequality of (B.2), assume without loss of generality that

\begin{align*} \max_{1\le i\le m} \bigg (\overline{\Lambda _i} ({-}{\infty})+\sum_{j\ne i} \overline{\Lambda _j} ({+}{\infty})\bigg ) = \overline{\Lambda _1} ({-}\infty) + \sum_{j=2}^m \overline{\Lambda _j} ({+}\infty).\end{align*}

Choose $\left( {{x_{1n}}, \ldots ,{x_{mn}}} \right) = \left( { - nm,n, \ldots ,n} \right)$ . Then $\overline{\Lambda ^{\ast}} ({-}n)\ge \overline{\Lambda _1}({-}nm) + \sum_{j=2}^m \overline{\Lambda _j}(n)$ . Thus, the reverse inequality of (B.2) follows by setting $n \to + \infty $ .

(2) For any $(y_1,\ldots,y_m)\in\mathbb{R}^m$ , we have $\sum_{i=1}^m\overline{\Lambda _i} ({+}{\infty}) \ge \sum_{i=1}^m \overline{\Lambda _i}(y_i)$ , implying that $\overline{\Lambda ^{\ast}} (y) \le \sum_{i=1}^m\overline{\Lambda _i} ({+}{\infty})$ . Thus, $\overline{\Lambda ^{\ast}} ({+}\infty) \le\sum_{i=1}^m\overline{\Lambda _i} ({+}{\infty})$ . On the other hand, choosing ${y_1} = \cdots = {y_m} = y/m$ , we get that $\overline{{\Lambda}^\ast} ({+}\infty)\ge \sum_{i=1}^m\overline{{\Lambda}_i} ({+}{\infty})$ by setting $y \to \infty $ . The desired result follows.

To prove Propositions 7 and 8, we need the following lemma.

Lemma 8. For $m = 2$ and any $y\in{\mathbb{R}}$ , $\Lambda^\ast(y)$ has one of the following representations:

  1. (1) there exists $(x_1, x_2)\in{\mathbb{R}}^2$ such that ${x_1} + {x_2} = y$ and $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \overline {{{\rm{\Lambda }}_1}} \!\left( {{x_1} - } \right) + \overline {{{\rm{\Lambda }}_2}} ({x_2}{+})$ ;

  2. (2) there exists $(x_1, x_2)\in{\mathbb{R}}^2$ such that ${x_1} + {x_2} = y$ and $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \overline {{{\rm{\Lambda }}_1}} \!\left( {{x_1} + } \right) + \overline {{{\rm{\Lambda }}_2}} \!\left( {{x_2} - } \right)$ ;

  3. (3) there exists $(x_1, x_2)\in{\mathbb{R}}^2$ such that ${x_1} + {x_2} = y$ and $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \overline {{{\rm{\Lambda }}_1}} (x_1) + \overline {{{\rm{\Lambda }}_2}} \!\left( {{x_2}} \right)$ ;

  4. (4) $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \left( {\overline {{{\rm{\Lambda }}_1}} ({+}\infty) + \overline {{{\rm{\Lambda }}_2}} ({-}\infty)} \right) \vee \left( {\overline {{{\rm{\Lambda }}_1}} ({-}\infty) + \overline {{{\rm{\Lambda }}_2}} ({+}\infty)} \right) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({-}\infty)$ .

Furthermore, ${{\rm{\Lambda }}^{\rm{*}}}(y)$ has one of the first three representations when ${{\rm{\Lambda }}^{\rm{*}}}(y) \lt {{\rm{\Lambda }}^{\rm{*}}}({-}\infty)$ . Additionally, if both ${{\rm{\Lambda }}_1}$ and ${{\rm{\Lambda }}_2}$ are right-continuous, then only Case (3) occurs.

Proof. For given $y\in\mathbb{R}$ , there exists a sequence $\{(y_{1n}, y_{2n})\}_{n\in{\mathbb{N}}}$ such that ${y_{1n}} + {y_{2n}} = y$ and $\overline {{{\rm{\Lambda }}_1}} \!\left( {{y_{1n}}} \right) + \overline {{{\rm{\Lambda }}_2}} \!\left( {{y_{2n}}} \right) \to \overline {{{\rm{\Lambda }}^{\rm{*}}}} (y)$ as $n \to \infty $ . Two cases arise. First, suppose that $\{(y_{1n}, y_{2n})\}_{n\in{\mathbb{N}}}$ has a convergent subsequence with a finite limit point; that is, there exists $\{ {{n_j}}\}$ such that ${y_{1,{n_j}}} \to {x_1}$ and ${y_{2,{n_j}}} \to {x_2}$ as $j \to \infty $ . This leads to $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \overline {{{\rm{\Lambda }}_1}} \!\left( {{x_1} - } \right) + \overline {{{\rm{\Lambda }}_2}} ({x_2}{+})$ or $\overline {{{\rm{\Lambda }}_1}} \!\left( {{x_1} + } \right) + \overline {{{\rm{\Lambda }}_2}} \!\left( {{x_2} - } \right)$ or $\overline {{{\rm{\Lambda }}_1}} (x_1) + \overline {{{\rm{\Lambda }}_2}} \left( {{x_2}} \right)$ . If ${{\rm{\Lambda }}_1}$ and ${{\rm{\Lambda }}_2}$ are right-continuous, then $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \overline {{{\rm{\Lambda }}_1}} (x_1) + \overline {{{\rm{\Lambda }}_2}} \left( {{x_2}} \right)$ .

Second, if $\{(y_{1n}, y_{2n})\}_{n\in{\mathbb{N}}}$ does not have a cluster point, then there exists $\{ {{n_j}}\}$ such that ${y_{1,{n_j}}} \to + \infty {\rm{\;}}({-}\infty)$ and ${y_{2,{n_j}}} \to - \infty {\rm{\;}}({+}\infty)$ as $j \to \infty $ . This leads to Case (4). This proves the lemma.

Proof of Proposition 7. If ${{\rm{\Lambda }}^{\rm{*}}}$ does not satisfy $(\mathrm{P}_1)$ , then there exists a sequence $\{(y_{1n},\dots, y_{mn})\}_{n\in{\mathbb{N}}}$ such that $\sum_{i=1}^m y_{in}=y$ and $\sum_{i=1}^m \overline{{\Lambda}_i} (y_{in})\to \overline{{\Lambda}^{\ast}}(y)$ as $n \to \infty $ . Moreover, $\{y_{in}\}_{n\in{\mathbb{N}}}$ does not have a cluster point for some $i$ . Without loss of generality, assume that ${y_{1,{n_j}}} \to - \infty $ along a subsequence $\{n_j\}$ as $j \to \infty $ . Note that $\overline {{{\rm{\Lambda }}_i}} \left( {{y_{in}}} \right) \le \overline {{{\rm{\Lambda }}_i}} ({+}\infty)$ for $i \ge 2$ . This implies that $\overline{\Lambda ^{\ast}}(y) \le \overline{\Lambda _1} ({-}{\infty})+\sum_{i=2}^m\overline{\Lambda _i} ({+}{\infty})$ . On the other hand, by Proposition 6, we have

\begin{align*} \overline{\Lambda ^{\ast}}(y)\ge \overline{\Lambda ^{\ast}}({-}{\infty}) =\max_{1\le i\le m} \Bigg (\overline{\Lambda _i} ({-}{\infty})+\sum_{j\ne i} \overline{\Lambda _j} ({+}{\infty})\Bigg ).\end{align*}

Therefore, $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({-}\infty)$ .

Now suppose that ${{\rm{\Lambda }}^{\rm{*}}}\left( {{y_0}} \right)$ has Property $(\mathrm{P}_2)$ . Assume on the contrary that there exists ${x_0} \lt {y_0}$ such that ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}})$ satisfies $(\mathrm{P}_1)$ , that is, there exists $(y_1, \dots, y_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m y_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda _i}(y_i)$ . Then

\begin{align*} \overline{\Lambda ^{\ast}}(y_0)\ge \overline{\Lambda _1} (y_1+y_0-x_0)+\sum_{i=2}^m\overline{\Lambda _i} (y_i) \ge \overline{\Lambda ^{\ast}} (x_0).\end{align*}

However, $\overline {{{\rm{\Lambda }}^{\rm{*}}}} \left( {{y_0}} \right) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}}) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({-}\infty)$ . So we get $\overline{\Lambda ^{\ast}}(y_0)= \overline{\Lambda _1} (y_1+y_0-x_0)+\sum_{i=2}^m\overline{\Lambda _i} (y_i)$ . This means that ${{\rm{\Lambda }}^{\rm{*}}}\left( {{y_0}} \right)$ does not satisfy $(\mathrm{P}_2)$ , which is a contradiction. The desired result follows.

Proof of Proposition 8. We give the proof only for $m = 2$ since, in view of Proposition 5, the proof of the general case $m \ge 3$ follows by induction.

(1) For $m = 2$ , assume without loss of generality that ${{\rm{\Lambda }}_1}$ is continuous. By Lemma 8, for any $x\in \mathbb{R}$ , one of the following two cases holds for ${{\rm{\Lambda }}^{\rm{*}}}(x)$ :

Case 1.1. There exists $(x_1,x_2)\in{\mathbb{R}}^2$ such that ${x_1} + {x_2} = x$ and $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (x) = \overline {{{\rm{\Lambda }}_1}} (x_1) + \overline {{{\rm{\Lambda }}_2}} ({x_2}{+})$ .

Case 1.2. $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (x) = \left( {\overline {{{\rm{\Lambda }}_1}} ({+}\infty) + \overline {{{\rm{\Lambda }}_2}} ({-}\infty)} \right) \vee \left( {\overline {{{\rm{\Lambda }}_1}} ({-}\infty) + \overline {{{\rm{\Lambda }}_2}} ({+}\infty)} \right) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({-}\infty)$ .

If ${{\rm{\Lambda }}^{\rm{*}}}$ is not continuous, there exists some $x_0\in{\mathbb{R}}$ such that ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \gt {{\rm{\Lambda }}^{\rm{*}}}({x_0}{+})$ or ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \lt {{\rm{\Lambda }}^{\rm{*}}}({{x_0}-})$ .

First, we prove that ${{\rm{\Lambda }}^{\rm{*}}}$ is right-continuous by contradiction. Assume on the contrary that there exists some $x_0\in{\mathbb{R}}$ such that ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \gt {{\rm{\Lambda }}^{\rm{*}}}({x_0}{+})$ . Choose $\{x_n\}_{n\in{\mathbb{N}}}$ such that ${x_n} \to {x_0} + $ . By Lemma 8, ${{\rm{\Lambda }}^{\rm{*}}}({{x_n}}) \lt {{\rm{\Lambda }}^{\rm{*}}}({-}\infty)$ and Case 1.1 applies, i.e. there exists a sequence $\{(x_{1n}, x_{2n})\}_{n\in{\mathbb{N}}}$ such that ${x_{1n}} + {x_{2n}} = {x_n}$ and $\overline {{{\rm{\Lambda }}_1}} \left( {{x_{1n}}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{x_{2n}} + } \right) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_n}})$ . If $\{x_{1n}\}_{n\in{\mathbb{N}}}$ has a cluster point, there exists $\left\{ {{n_k}} \right\}$ such that ${x_{1,{n_k}}} \to {y_1}$ and ${x_{2,{n_k}}} \to {y_2}$ as $k \to \infty $ , where ${y_1} + {y_2} = {x_0}$ . Thus,

\begin{align*}\overline {{{\rm{\Lambda }}^{\rm{*}}}} ({x_0}{+}) = \mathop {{\rm{lim}}}\limits_{k \to \infty } \overline {{{\rm{\Lambda }}_1}} \left( {{x_{1,{n_k}}}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{x_{2,{n_k}}} + } \right) = \overline {{{\rm{\Lambda }}_1}} \left( {{y_1}} \right) + \overline {{{\rm{\Lambda }}_2}} ({y_2}{+}).\end{align*}

By (3.2), it follows that $\overline {{{\rm{\Lambda }}_1}} \left( {{y_1}} \right) + \overline {{{\rm{\Lambda }}_2}} ({y_2}{+}) \le \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}})$ . This contradicts with ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \gt {{\rm{\Lambda }}^{\rm{*}}}({x_0}{+})$ . If $\{x_{1n}\}_{n\in{\mathbb{N}}}$ does not have a cluster point, then there exists $\left\{ {{n_k}} \right\}$ such that ${x_{1,{n_k}}} \to + \infty {\rm{\;}}({-}\infty)$ and ${x_{2,{n_k}}} \to - \infty {\rm{\;}}({+}\infty)$ as $k \to \infty $ . Thus,

\begin{align*}\overline {{{\rm{\Lambda }}^{\rm{*}}}} ({x_0}{+}) & = \mathop {{\rm{lim}}}\limits_{j \to \infty } \overline {{{\rm{\Lambda }}_1}} \left( {{x_{1,{n_j}}}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{x_{2,{n_j}}} + } \right)\\[4pt] & = \left( {\overline {{{\rm{\Lambda }}_1}} ({+}\infty) + \overline {{{\rm{\Lambda }}_2}} ({-}\infty)} \right) \vee \left( {\overline {{{\rm{\Lambda }}_1}} ({-}\infty) + \overline {{{\rm{\Lambda }}_2}} ({+}\infty)} \right) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({-}\infty),\end{align*}

implying that ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) = {{\rm{\Lambda }}^{\rm{*}}}({x_0}{+})$ by the monotonicity of ${{\rm{\Lambda }}^{\rm{*}}}$ . This leads to a contradiction and proves right-continuity of ${{\rm{\Lambda }}^{\rm{*}}}$ .

Next, we prove that ${{\rm{\Lambda }}^{\rm{*}}}$ is left-continuous, also by contradiction. Assume that there exists $x_0\in{\mathbb{R}}$ such that ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}-}) \gt {{\rm{\Lambda }}^{\rm{*}}}({{x_0}})$ . Denote $\epsilon = {{\rm{\Lambda }}^{\rm{*}}}({{x_0}-}) - {{\rm{\Lambda }}^{\rm{*}}}({{x_0}})$ . By Lemma 8, there exists $(x_1,x_2)\in{\mathbb{R}}^2$ satisfying ${x_1} + {x_2} = {x_0}$ and $\overline {{{\rm{\Lambda }}_1}} (x_1) + \overline {{{\rm{\Lambda }}_2}} ({x_2}{+}) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}})$ . Since ${{\rm{\Lambda }}_1}$ is continuous, it follows that $\overline {{{\rm{\Lambda }}_1}} (x) \gt \overline {{{\rm{\Lambda }}_1}} (x_1) - \epsilon /2$ for $x \in \left( {{x_1} - \delta ,{x_1}} \right)$ , where $\delta \gt 0$ is small enough. Then

\begin{align*}\overline {{{\rm{\Lambda }}^{\rm{*}}}} \left( {{x_0} - \frac{\delta }{2}} \right) & \ge \overline {{{\rm{\Lambda }}_1}} \left( {{x_1} - \frac{\delta }{2}} \right) + \overline {{{\rm{\Lambda }}_2}} ({x_2}{+})\\ & \gt \overline {{{\rm{\Lambda }}_1}} (x_1) - \frac{\epsilon }{2} + \overline {{{\rm{\Lambda }}_2}} ({x_2}{+}) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}}) - \frac{\epsilon }{2},\end{align*}

which implies that $\epsilon = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}}) - \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}-}) \gt \epsilon /2 \gt \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}}) - \overline {{{\rm{\Lambda }}^{\rm{*}}}} \left( {{x_0} - \delta /2} \right)$ . This contradicts ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}-}) \le {{\rm{\Lambda }}^{\rm{*}}}\left( {{x_0} - \delta /2} \right)$ and proves Part (1).

(2) Assume on the contrary that ${{\rm{\Lambda }}^{\rm{*}}}$ is not right-continuous, i.e. ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \gt {{\rm{\Lambda }}^{\rm{*}}}({x_0}{+})$ for some $x_0\in\mathbb{R}$ . Choose ${x_n} \to {x_0} + $ . By Lemma 8 and the right-continuity of ${{\rm{\Lambda }}_1}$ and ${{\rm{\Lambda }}_2}$ , it follows that ${{\rm{\Lambda }}^{\rm{*}}}({{x_n}}) \lt {{\rm{\Lambda }}^{\rm{*}}}({-}\infty)$ and that there exists $(x_{1n}, x_{2n})\in {\mathbb{R}}^2$ such that ${x_{1n}} + {x_{2n}} = {x_n}$ and $\overline {{{\rm{\Lambda }}_1}} \left( {{x_{1n}}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{x_{2n}}} \right) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_n}})$ .

If $\{x_{1n}\}_{n\in{\mathbb{N}}}$ has a cluster point, then there exists $\left\{ {{n_k}} \right\}$ such that ${x_{1,{n_k}}} \to {y_1}$ and ${x_{2,{n_k}}} \to {y_2}$ as $k \to \infty $ , where ${y_1} + {y_2} = {x_0}$ . Now, two cases arise.

Case 2.1 Suppose that ${x_{1,{n_k}}} \to {y_1} + $ and ${x_{2,{n_k}}} \to {y_2} + $ as $k \to \infty $ . In this case,

\begin{align*}\overline {{{\rm{\Lambda }}^{\rm{*}}}} ({x_0}{+}) & = \mathop {{\rm{lim}}}\limits_{k \to \infty } \overline {{{\rm{\Lambda }}_1}} \left( {{x_{1,{n_k}}}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{x_{2,{n_k}}}} \right)\\ & = \overline {{{\rm{\Lambda }}_1}} \left( {{y_1} + } \right) + \overline {{{\rm{\Lambda }}_2}} ({y_2}{+}) = \overline {{{\rm{\Lambda }}_1}} \left( {{y_1}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{y_2}} \right),\end{align*}

where the last equality follows from the right-continuity of ${{\rm{\Lambda }}_1}$ and ${{\rm{\Lambda }}_2}$ . Moreover, from the definition of ${{\rm{\Lambda }}^{\rm{*}}}$ , we have $\overline {{{\rm{\Lambda }}_1}} \left( {{y_1}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{y_2}} \right) \le \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}})$ . Thus, $\overline {{{\rm{\Lambda }}^{\rm{*}}}} ({x_0}{+}) \le \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}})$ . This contradicts ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \gt {{\rm{\Lambda }}^{\rm{*}}}({x_0}{+})$ .

Case 2.2 Suppose that ${x_{1,{n_k}}} \to {y_1} + $ and ${x_{2,{n_k}}} \to {y_2} - $ as $k \to \infty $ . In this case,

\begin{align*}\overline {{{\rm{\Lambda }}^{\rm{*}}}} ({x_0}{+}) & = \mathop {{\rm{lim}}}\limits_{k \to \infty } \overline {{{\rm{\Lambda }}_1}} \left( {{x_{1,{n_k}}}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{x_{2,{n_k}}}} \right)\\ & = \overline {{{\rm{\Lambda }}_1}} \left( {{y_1} + } \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{y_2} - } \right) = \overline {{{\rm{\Lambda }}_1}} \left( {{y_1}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{y_2} - } \right),\end{align*}

and $\overline {{{\rm{\Lambda }}_1}} \left( {{y_1}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{y_2} - } \right) \le \overline {{{\rm{\Lambda }}_1}} \left( {{y_1}} \right) + \overline {{{\rm{\Lambda }}_2}} \left( {{y_2}} \right) \le \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}}).$ Thus, $\overline {{{\rm{\Lambda }}^{\rm{*}}}} ({x_0}{+}) \le \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}})$ . This is also a contradiction.

If $\{x_{1n}\}_{n\in{\mathbb{N}}}$ does not have a cluster point, a similar argument to that of Part (1) yields the desired result.

Proof of Proposition 9. If ${{\rm{\Lambda }}^{\rm{*}}}(x) \lt {{\rm{\Lambda }}^{\rm{*}}}({-}\infty)$ , the desired result follows from Proposition 7. Now assume that ${{\rm{\Lambda }}^{\rm{*}}}(x) = {{\rm{\Lambda }}^{\rm{*}}}({-}\infty)$ . Note that for any $k \in [m]$ ,

(B.3) \begin{align}\overline{\Lambda ^{\ast}}(x)\ge \overline{\Lambda _k} \Bigg (x-\sum_{i\ne k} y_i\Bigg ) +\sum_{i\ne k} ^m\overline{\Lambda _i}(y_i) \ge \overline{\Lambda _k} ({-}{\infty}) +\sum_{i\ne k} \overline{\Lambda _i} ({+}{\infty}).\end{align}

By Proposition 6, it follows from (B.3) that there exists some ${k_0}$ such that $\overline{\Lambda _{k_0}} \big (x-\sum_{i\ne k_0} y_i\big ) = \overline{\Lambda _{k_0}} ({-}{\infty})$ , and the equality in (B.3) holds for $k = {k_0}$ . Therefore, $\overline{{\Lambda}^\ast}(x)=\sum_{i=1}^m \overline{{\Lambda}_i}(x_i)$ holds by choosing $x_{k_0}= x-\sum_{i\ne k_0} y_i$ and ${x_i} = {y_i}$ for $i \ne {k_0}$ with $\sum_{i=1}^m x_i=x$ .

Proof of Proposition 10. (1) Necessity. Since ${{\rm{\Lambda }}^{\rm{*}}}$ is constant, ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) = {{\rm{\Lambda }}^{\rm{*}}}({+}\infty)$ . By Proposition 6, there exists ${k_0} \in [m]$ such that

\begin{align*} \Lambda ^{\ast}({-}{\infty}) = 1-\overline{\Lambda _{k_0}} ({-}{\infty}) -\sum_{j\ne k_0} \overline{\Lambda _j} ({+}{\infty}), \qquad \Lambda ^{\ast}({+}{\infty}) = 1-\sum_{i=1}^m\overline{\Lambda _i} ({+}{\infty}),\end{align*}

implying that ${{\rm{\Lambda }}_{{k_0}}}({-}\infty) = {{\rm{\Lambda }}_{{k_0}}}({+}\infty)$ , i.e. ${{\rm{\Lambda }}_{{k_0}}}$ is constant.

Sufficiency. Suppose that ${{\rm{\Lambda }}_{{i_0}}} \equiv c$ . By Proposition 6, we have $\overline{\Lambda ^{\ast}} ({+}{\infty})=1-c +\sum_{j\ne i_0}\overline{\Lambda_j}({+}{\infty})$ . Also, $\overline{\Lambda ^{\ast}} ({-}{\infty}) \ge 1-c+ \sum_{j\ne i_0}\overline{\Lambda _j} ({+}{\infty}) =\overline{\Lambda ^{\ast} } ({+}{\infty})$ . Therefore, $\overline{\Lambda ^{\ast}}(x)\equiv 1-c + \sum_{j\ne i_0}\overline{\Lambda _j} ({+}{\infty})$ for all $x\in{\mathbb{R}}$ .

(2) Since ${{\rm{\Lambda }}^{\rm{*}}}$ is constant, by part (1) there exists ${k_0}$ such that ${{\rm{\Lambda }}_{{k_0}}}$ is constant. Without loss of generality, assume ${k_0} = 1$ .

Sufficiency. Suppose that ${{\rm{\Lambda }}_{{i_0}}}(y) \gt {{\rm{\Lambda }}_{{i_0}}}({+}\infty)$ for some ${i_0}$ and all $y\in\mathbb{R}$ . Then, for any $(x_1,\dots,x_m)\in \mathbb{R}^m$ with $x=\sum^m_{i=1} x_i$ , we have $\sum^m_{i=1} \overline{{\Lambda}_i}(x_i) < \sum_{i=1}^m \overline{{\Lambda}_i} ({+}{\infty})=\overline{{\Lambda}^\ast}({+}{\infty})$ . Thus, $\overline{\Lambda ^{\ast}} (x) =\overline{\Lambda ^{\ast}} ({+}\infty)> \sum_{i=1}^m\overline{\Lambda _i} (x_i)$ .

Necessity. Assume on the contrary that for all ${{\rm{\Lambda }}_i}$ , there exists $x_i\in{\mathbb{R}}$ such that ${{\rm{\Lambda }}_i}\left( {{x_i}} \right) = {{\rm{\Lambda }}_i}({+}\infty)$ . Since ${{\rm{\Lambda }}_1}$ is constant, it follows that for any $x\in\mathbb{R}$ ,

\begin{align*} \overline{\Lambda _1} \Bigg (x-\sum_{i=2}^m x_i\Bigg )+\sum_{i=2}^m\overline{\Lambda _i}(x_i) =\sum^m_{i=1} \overline{\Lambda _i}({+}{\infty})=\overline{\Lambda ^{\ast}}({+}{\infty})=\overline{\Lambda ^{\ast}}(x).\end{align*}

This contradicts the assumption that $\overline{{\Lambda}^\ast}(x)>\sum_{i=1}^m\overline{{\Lambda}_i}(y_i)$ with $y_1=x-\sum^m_{i=2} x_i$ and ${y_k} = {x_k}$ for $k \ge 2$ . This proves the desired result.

Appendix C. Proofs of results in Section 4

Proof of Lemma 4. If ${\lambda _{{i_0}}} = 0$ for some ${i_0} \in [m]$ , then ${\rm VaR}^-_{\lambda_{i_0}}(Y) = - \infty $ for every $Y \in {L^0}$ . Thus, the right-hand side (RHS) of (4.2) is $\sum^m_{i=1} y_i$ . Note that the left-hand side (LHS) of (4.2) is larger than or equal to $\sum^m_{i=1} y_i$ and that the lower bound can be attained by choosing $X_{i_0}=X-\sum_{j\ne i_0} y_j$ and ${X_j} = {y_j}$ for $j \ne {i_0}$ . Thus, (4.2) holds for this special case. So we assume that ${\lambda _i} \in \left( {0,1} \right]$ for $i \in [m]$ .

So we assume in the rest of the proof that $\lambda_i \in (0,1]$ for $i \in [m]$. If $\Box_{i=1}^m{\rm VaR}^-_{\lambda_i}(X)\le \sum_{i=1}^m y_i$, by cash invariance of ${\rm{VaR}}$ and Corollary 2 in [12], there exists an optimal allocation $(X_1, \dots, X_m)\in\mathbb{A}_m(X)$ for $\Box_{i = 1}^m{\rm VaR}_{\lambda_i}^-(X)$ such that ${\rm VaR}_{\lambda_i}^-(X_i) \le y_i$ for $i \in [m]$. Thus, $\sum_{i=1}^m {\rm VaR}^-_{\lambda_i}(X_i)\vee y_i= \sum_{i=1}^m y_i$, attaining the lower bound of the LHS. So, (4.2) holds in this case.

If $\Box_{i=1}^m {\rm VaR}^-_{\lambda_i}(X) > \sum_{i=1}^m y_i$, also by cash invariance of ${\rm{VaR}}$ and Corollary 2 in [12], there exists an optimal allocation $(X_1, \dots, X_m)\in\mathbb{A}_m(X)$ for $\Box_{i = 1}^m{\rm VaR}_{\lambda_i}^-(X)$ such that ${\rm VaR}_{\lambda_i}^-(X_i) \gt y_i$ for $i \in [m]$. Thus, we have

\begin{align*}\sum_{i = 1}^m {\rm VaR}_{\lambda_i}^-(X_i) \vee y_i = \sum_{i = 1}^m {\rm VaR}_{\lambda_i}^-(X_i) = \mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\lambda_i}^-(X),\end{align*}

implying that

(C.1) \begin{align}{\rm LHS} \le \mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\lambda_i}^-(X).\end{align}

Next, we show that the reverse inequality of (C.1) is also true. To this end, for any other allocation $(Y_1, \dots, Y_m)\in \mathbb{A}_m(X)$, denote $K = \{ k\in[m] \,:\,{\rm VaR}_{\lambda_k}^-(Y_k) \lt y_k\}$. Then,

\begin{align*} \sum_{i=1}^m {\rm VaR}^-_{\lambda_i}(Y_i)\vee y_i &=\sum_{i\in K}y_i+\sum_{i\in [m]\setminus K} {\rm VaR}^-_{\lambda_i}(Y_i)\\ &\ge \sum_{i\in K} y_i+ \mathop{\Box}\limits_{i=1}^m{\rm VaR}^-_{\lambda_i}(X)-\sum_{i\in K}{\rm VaR}^-_{\lambda_i}(Y_i) \ge \mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\lambda_i}(X),\end{align*}

where the first inequality follows from the fact that $\sum_{i=1}^m \mathrm{VaR}^-_{{\lambda}_i}(Y_i)\ge \Box_{i=1}^m \mathrm{VaR}^-_{{\lambda}_i}(X)$ . Thus, the reverse inequality of (C.1) holds. This completes the proof.

Proof of Theorem 3. The proof is similar to those of Theorems 1 and 2. By Lemmas 2, 3 and 5, we have

(C.2) \begin{align} \mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda _i}^{\kappa_i}(X) &= \inf_{(X_1,\dots,X_m)\in \mathbb{A}_m(X)} \sum_{i=1}^m{\rm VaR}_{\Lambda _i}^{\kappa_i}(X_i)\notag \\ &=\inf_{(X_1,\dots,X_m) \in\mathbb{A}_m(X)} \sum_{i=1}^m \inf_{y_i\in\mathbb{R}} \left\{{\rm VaR}^{\kappa_i}_{\Lambda _i(y_i)}(X_i)\vee y_i\right\}\notag\\ & =\inf_{y_1,\dots,y_m\in\mathbb{R}} \inf_{(X_1,\dots,X_m)\in \mathbb{A}_m(X)}\left\{\sum_{i=1}^m {\rm VaR}^{\kappa_i}_{\Lambda _i(y_i)}(X_i)\vee y_i\right\} \notag\\ & =\inf_{y_1,\dots, y_m\in\mathbb{R}}\left\{ \mathop{\Box}\limits_{i=1}^m {\rm VaR}^{\kappa_i}_{\Lambda _i(y_i)}(X)\vee\sum_{i=1}^m y_i \right\}\notag\\ &=\inf_{y_1,\dots,y_m\in\mathbb{R}} \left\{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee\sum_{i=1}^m y_i \right\} \\ &={\rm VaR}_{\Lambda ^{\ast}}^+(X),\notag\end{align}

where (C.2) follows from Theorem 1 of [21]. The rest of the proof is the same as that of Theorem 2 and is, hence, omitted.

Proof of Proposition 11. (1) Suppose that $\Lambda^{\ast}({-}\infty) \lt 0$. Then, for any $x \lt \mathrm{ess\mbox{-}inf} (X)$, there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x$ and $\sum_{i=1}^m\overline{\Lambda _i}(x_i)>1$. Let $\{A_1, \ldots, A_m\}$ be a partition of $\Omega$, satisfying

\begin{align*} {\mathbb{P}}(A_i)= \frac{\overline{\Lambda _i}(x_i)}{\sum_{j=1}^m\overline{\Lambda _j}(x_j) },\quad i\in [m].\end{align*}

Define $X_j=(X-x) 1_{A_j}+x_j$ for $j\in [m-1]$, and $X_m =X-\sum_{j=1}^{m-1}X_j$. Then ${\mathbb{P}}(X_i > x_i) \le {\mathbb{P}}(A_i) < \overline{{\Lambda}_i}(x_i)$, implying $\mathrm{VaR}_{{\Lambda}_i}^+(X_i)\le x_i$ for $i\in [m]$. Thus, $\sum_{i=1}^m \mathrm{VaR}_{{\Lambda}_i}^-(X_i) \le \sum_{i=1}^m x_i=x$. This proves part (1) by letting $x\to -{\infty}$.
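Here the remainder coordinate has the same form as the others: since $\{A_1,\dots,A_m\}$ partitions $\Omega$, we have $\sum_{j=1}^{m-1}1_{A_j}=1-1_{A_m}$ and $\sum_{j=1}^{m-1}x_j=x-x_m$, whence

\begin{align*} X_m = X-\sum_{j=1}^{m-1}\big[(X-x)1_{A_j}+x_j\big] = X-(X-x)(1-1_{A_m})-(x-x_m) = (X-x)1_{A_m}+x_m,\end{align*}

so the bound ${\mathbb{P}}(X_m>x_m)\le{\mathbb{P}}(A_m)$ holds by the same reasoning as for $j\in[m-1]$.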

(2) First, consider the case $L=\emptyset$. For any $x<{\mathrm{ess\mbox{-}inf}} (X)$, we have $\overline{{\Lambda}^\ast}(x)\ge 1$. If $\overline{{\Lambda}^\ast}(x)=1$, then $x\not\in L$ and, hence, there exists $(x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $\sum_{i=1}^m x_i=x$ and $\sum_{i=1}^m\overline{{\Lambda}_i}(x_i)=1$. Therefore, in either case we can choose $(x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $\sum_{i=1}^mx_i = x$ and $\sum_{i=1}^m\overline{{\Lambda}_i}(x_i)\ge 1$. Construct $(X_1,\ldots,X_m)$ as in part (1). Then, ${\mathbb{P}}(X_j > x_j) \le {\mathbb{P}}(A_j)\le \overline{{\Lambda}_j}(x_j)$ for $j\in [m]$, implying $\mathrm{VaR}_{{\Lambda}_j}^-(X_j)\le x_j$. Therefore, $\sum_{i=1}^m \mathrm{VaR}_{{\Lambda}_i}^-(X_i)\le x$ and, letting $x\to-\infty$, $\Box_{i=1}^m\mathrm{VaR}_{{\Lambda}_i}^-(X)=-{\infty}$.

Next, consider $L\ne\emptyset$ . Two subcases arise.

Subcase 1: Suppose ${\rm{sup}}\,L \lt \mathrm{ess\mbox{-}inf}(X)$. By Lemma 6, we get $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^-(X) \ge {\rm{sup}}\,L$. For any $x \in ({\rm{sup}}\,L, {\mathrm{ess\mbox{-}inf}}(X))$, there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x$ and $\sum_{i=1}^m\overline{{\Lambda}_i}(x_i) \ge 1$. Construct $\boldsymbol{{X}}\in \mathbb{A}_m(X)$ as in part (1). Similarly, we have ${\rm VaR}_{\Lambda_j}^-(X_j) \le x_j$ for $j \in [m]$. Thus, $\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_i)\le x$, yielding $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^-(X) \le {\rm{sup}}\,L$ by letting $x \searrow {\rm{sup}}\,L$. Therefore, $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^-(X) = {\rm{sup}}\,L$.

Subcase 2: Suppose ${\rm{sup}}\,L \ge \mathrm{ess\mbox{-}inf}(X)$. By part (1) of Lemma 6, we get that $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^-(X) \ge \mathrm{ess\mbox{-}inf}(X)$. Also, by part (2) of Lemma 6, $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^-(X) \le \mathrm{ess\mbox{-}inf}(X)$. Thus,

\begin{align*}\mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda_i}^-(X) = \mathrm{ess\mbox{-}inf}(X) = \min\{{\rm{sup}}\,L, \mathrm{ess\mbox{-}inf}(X)\}.\end{align*}

This completes the proof of the proposition.

Proof of Proposition 12. (1) The proof is similar to that of part (1) of Proposition 11.

(2) First, consider $T = \emptyset$. Then, for any $x \lt \mathrm{ess\mbox{-}inf}(X)$, we have $\Lambda^{\ast}(x) \lt 0$. Thus, there exists $(x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $x =\sum_{i=1}^m x_i$ and $\sum_{i=1}^m\overline{{\Lambda}_i}(x_i)> 1$. Construct ${\boldsymbol{{X}}}$ as in the proof of part (1) of Proposition 11. Then ${\mathbb{P}}(X_j > x_j) \le {\mathbb{P}}(A_j) < \overline{{\Lambda}_j}(x_j)$ for $j \in [m]$, implying ${\rm VaR}_{\Lambda_j}^+(X_j) \le x_j$. Letting $x\to-\infty$ therefore yields $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X) = -\infty$.

Next, consider $T \ne \emptyset $ . Two subcases arise.

Subcase 1: Suppose ${\rm{sup}}\,T \lt \mathrm{ess\mbox{-}inf}(X)$. For any $x \in ({\rm{sup}}\,T, {\mathrm{ess\mbox{-}inf}}(X))$, there exists $(x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $\sum_{i=1}^m x_i=x$ and $\sum_{i = 1}^m \overline{\Lambda_i}(x_i) \gt 1$. Construct $\boldsymbol{{X}}\in \mathbb{A}_m(X)$ as in part (1). Similarly, we have ${\rm VaR}_{\Lambda_j}^+(X_j) \le x_j$ for $j \in [m]$. Thus, $\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^+(X_i)\le x$, yielding $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X) \le {\rm{sup}}\,T$ by letting $x \searrow {\rm{sup}}\,T$. On the other hand, by Lemma 7, for any $x \in T$, we have $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X) \ge x$. Thus, $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X) \ge {\rm{sup}}\,T$. This proves part (2) in Subcase 1.

Subcase 2: Suppose ${\rm{sup}}\,T \ge {\mathrm{ess\mbox{-}inf}}(X)$. By part (1) of Lemma 7, we get that $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X) \ge {\mathrm{ess\mbox{-}inf}}(X)$. Also, by part (2) of Lemma 7, $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X) \le {\mathrm{ess\mbox{-}inf}}(X)$. Thus, $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X) = {\mathrm{ess\mbox{-}inf}}(X) = \min\{{\rm{sup}}\,T, {\mathrm{ess\mbox{-}inf}}(X)\}$. This completes the proof of the proposition.

Proof of Proposition 13. The proof is similar to that of Proposition 12.

Proof of Lemma 6. (1) Assume on the contrary that $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^-(X) \lt x_0$. Then there exists $(X_1,\dots, X_m)\in \mathbb{A}_m(X)$ such that $\sum_{i = 1}^m {\rm VaR}_{\Lambda_i}^-(X_i) \lt x_0$. Denote $x_i = {\rm VaR}_{\Lambda_i}^-(X_i)$ for $i \in [m]$. Since $\Lambda_i$ is right-continuous, by Lemma 1 it follows that ${\mathbb{P}}(X_i> x_i ) \le \overline{{\Lambda}_i}(x_i)$. Thus, $\sum_{i=1}^m {\mathbb{P}}(X_i > x_i ) \le \sum_{i=1}^m\overline{{\Lambda}_i}(x_i) \le \overline{{\Lambda}^\ast}(x_0)=1.$ However, $1={\mathbb{P}}(X\ge x_0) \le \sum_{i=1}^m {\mathbb{P}}(X_i>x_i)$. This leads to a contradiction.

(2) For any $x \gt \mathrm{ess\mbox{-}inf}(X)$, ${\mathbb{P}}(X>x)<1$. From the definition of $\Lambda^{\ast}$ and its monotonicity, it follows that

\begin{align*} \overline{\Lambda ^{\ast}}(x)=\sup_{y_1+\dots+y_m=x} \sum_{i=1}^m \overline{\Lambda _i}(y_i) \ge \overline{\Lambda ^{\ast}}({-}{\infty}) \ge 1,\end{align*}

implying that there exists $(x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $\sum_{i=1}^m x_i=x$ and $\sum_{i=1}^m\overline{{\Lambda}_i}(x_i)>$ ${\mathbb{P}}(X>x)$. Let $\{A_1, \ldots, A_m\}$ be a partition of the set $\{ X \gt x\}$ satisfying

\begin{align*} {\mathbb{P}}(A_i) ={\mathbb{P}}(X>x)\cdot \frac{\overline{\Lambda _i}(x_i)}{\sum_{j=1}^m\overline{\Lambda _j}(x_j)}, \quad i\in [m],\end{align*}

and define $X_m = X -\sum_{j=1}^{m-1} X_j$, where $X_j = (X - x) 1_{A_j} + x_j$ for $j \in [m - 1]$. Obviously, $(X_1, \ldots, X_m)\in\mathbb{A}_m(X)$ and ${\mathbb{P}}(X_i > x_i) \le {\mathbb{P}}(A_i) < \overline{{\Lambda}_i}(x_i)$ for $i \in [m]$, implying ${\rm VaR}_{\Lambda_i}^-(X_i) \le x_i$ for $i \in [m]$. So we get $\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_i) \le x$. The desired result now follows by setting $x \downarrow \mathrm{ess\mbox{-}inf}(X)$.
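The proportional-partition construction above is easy to exercise numerically. The following is a minimal sketch, not an implementation from the paper: the law of $X$, the level $x$, the split points $x_i$, and the weights $w_i$ standing in for the values $\overline{\Lambda_i}(x_i)$ are all illustrative assumptions. The sanity checks are that the constructed $(X_1,\dots,X_m)$ sums to $X$ and that each empirical exceedance probability ${\mathbb{P}}(X_j>x_j)$ is bounded by ${\mathbb{P}}(A_j)$, as used in the proof.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative assumptions (not from the paper): X, x, the x_i with
# x_1 + ... + x_m = x, and weights w_i standing in for bar{Lambda}_i(x_i),
# chosen so that sum(w) > P(X > x).
n, m = 10**6, 3
X = 1.0 + rng.exponential(size=n)      # ess-inf(X) = 1
x = 1.5                                # P(X > 1.5) = exp(-0.5), approx. 0.61
x_parts = np.array([0.7, 0.5, 0.3])    # hypothetical x_1, x_2, x_3
w = np.array([0.5, 0.3, 0.4])          # hypothetical bar{Lambda}_i(x_i)

# Partition {X > x} into A_1,...,A_m with P(A_i) proportional to w_i.
exceed = X > x
labels = rng.choice(m, size=n, p=w / w.sum())
A = [exceed & (labels == j) for j in range(m)]

# X_j = (X - x) 1_{A_j} + x_j for j < m; X_m is the remainder, so the
# allocation sums to X by construction.
Xs = [np.where(A[j], X - x, 0.0) + x_parts[j] for j in range(m - 1)]
Xs.append(X - sum(Xs))
assert np.allclose(sum(Xs), X)

# Each exceedance probability P(X_j > x_j) is bounded by
# P(A_j) = P(X > x) * w_j / sum(w) < w_j, as claimed in the proof.
tol = 1e-9  # guard against floating-point noise in the remainder coordinate
for j in range(m):
    print(f"P(X_{j+1} > x_{j+1}) = {np.mean(Xs[j] > x_parts[j] + tol):.4f} "
          f"<= P(A_{j+1}) = {np.mean(A[j]):.4f} < w_{j+1} = {w[j]:.2f}")
```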

Proof of Lemma 7. (1) Assume on the contrary that $\Box_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X) \lt x_0$. Then there exists $(X_1, \dots, X_m)\in \mathbb{A}_m(X)$ such that $\sum_{i=1}^{m}{\rm VaR}_{\Lambda _i}^{+}(X_i) <x_0$. Denote $x_i = {\rm VaR}_{\Lambda_i}^+(X_i)$ for $i \in [m]$, and set ${\epsilon}=x_0-\sum_{i=1}^m x_i$. Since $\Lambda^{\ast}(x_0) = 0$, it follows that $\sum_{i=1}^m\overline{\Lambda _i}(x_i)\le 1$. Since $x_1+{\epsilon}/2+\sum_{i=2}^m x_i <x_0$, we have

\begin{align*} \overline{\Lambda _1}\Big(x_1+\frac{{\epsilon}}{2}\Big) + \sum_{i=2}^{m}\overline{\Lambda _i}(x_i) \le \overline{\Lambda ^{\ast}}(x_0)=1.\end{align*}

By Lemma 1 and right-continuity of $\Lambda_i$, it follows that ${\mathbb{P}}(X_i>x_i) \le \overline{\Lambda _i}(x_i)$ and ${\mathbb{P}}(X_1> x_1+{\epsilon}/2) < \overline{\Lambda _1}(x_1+ {\epsilon}/2)$. Thus,

\begin{align*} 1 ={\mathbb{P}}\Bigg (X >\sum_{i=1}^m x_i+\frac{{\epsilon}}{2}\Bigg ) &\le {\mathbb{P}}\Big(X_1 > x_1+\frac{{\epsilon}}{2} \Big)+\sum_{i=2}^m {\mathbb{P}}(X_i > x_i ) \\ & < \overline{\Lambda _1}\Big(x_1+\frac{{\epsilon}}{2}\Big) +\sum_{i=2}^{m}\overline{\Lambda _i}(x_i)\le 1,\end{align*}

which is a contradiction. This proves part (1).

(2) The proof is the same as that of part (2) of Lemma 6.

Appendix D. Proofs of results in Section 5

Proof of Theorem 6. Eq. (5.10) follows from Theorem 2. We focus on constructing optimal and asymptotically optimal allocations in the different cases.

(1) Suppose that ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$. Let $\boldsymbol{{X}} \in\mathbb{A}_m(X)$ be constructed as in the proof of Theorem 4. It is easy to see that ${\mathbb{P}}(X_j>x_j)<\overline{\Lambda _j}(x_j)$, implying ${\rm VaR}_{\Lambda_j}^+(X_j) \le x_j$ for $j \in [m]$. By (2.6) in Lemma 1, we have

\begin{align*} {\mathbb{P}}(X_j<x_j)\le {\mathbb{P}}(X<x_0) \le \Lambda (x_0{-})\le \Lambda _j(x_j{-}), \end{align*}

implying ${\rm VaR}_{\Lambda_j}^+(X_j) \ge x_j$ for $j \in [m]$, where the last inequality follows from the fact that $\Lambda(y) \le \Lambda_j(y)$ for any $y\in{\mathbb{R}}$. Thus, ${\rm VaR}_{\Lambda_j}^+(X_j) = x_j$ for $j \in [m]$, i.e. ${\boldsymbol{{X}}}$ is an optimal allocation of $X$.

(2) Suppose that ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$, and ${\mathbb{P}}(X>x_0+{\epsilon})<{\mathbb{P}}(X>x_0)$ for any $\epsilon \gt 0$. Let $\{B_n\}_{n\in{\mathbb{N}}}$ be a partition of $\{ X \gt x_0\}$, defined by $B_1 = \{ X \gt x_0 + 1\}$ and

\begin{align*}B_n = \left\{ x_0 + \frac{1}{n} \lt X \le x_0 + \frac{1}{n - 1}\right\},\quad n \ge 2.\end{align*}

For $k \ge 1$, let $\{B_{k1}, \ldots, B_{km}\}$ be a partition of $B_k$, satisfying

\begin{align*} {\mathbb{P}}(B_{kj}) = {\mathbb{P}}(B_k)\cdot \frac{\overline{\Lambda _j}(x_j)}{\sum_{i=1}^m \overline{\Lambda _i}(x_i)},\quad j\in [m].\end{align*}

Denote $C_j=\bigcup_{k\ge 1} B_{kj}$ for $j \in [m]$. Thus, $\{C_1, \ldots, C_m\}$ constitutes a partition of $\{ X \gt x_0\}$. Construct an allocation of $X$ as follows:

(D.1) \begin{align} X_i = x_i + (X-x_0)\, 1_{C_i},\ i\in [m-1];\quad X_m = X-\sum_{j=1}^{m-1} X_j.\end{align}

Note that ${\mathbb{P}}(X_j>x_j)={\mathbb{P}}(C_j) =\overline{{\Lambda}_j}(x_j)$ for $j \in [m]$ and that ${\mathbb{P}}(X_j>x_j+{\epsilon})< {\mathbb{P}}(C_j) = \overline{{\Lambda}_j}(x_j)$ for any $\epsilon \gt 0$. Thus, ${\rm VaR}_{\Lambda_j}^+(X_j) = x_j$ for $j \in [m]$. This proves part (2).
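The identity ${\mathbb{P}}(C_j)=\overline{\Lambda_j}(x_j)$ used above can be checked directly: since $\{B_k\}_{k\ge 1}$ partitions $\{X>x_0\}$ and, in this case, ${\mathbb{P}}(X>x_0)=\overline{\Lambda^{\ast}}(x_0)=\sum_{i=1}^m\overline{\Lambda_i}(x_i)$ (assuming, as here, that $(x_1,\dots,x_m)$ attains the supremum defining $\overline{\Lambda^{\ast}}(x_0)$), we have

\begin{align*} {\mathbb{P}}(C_j)=\sum_{k\ge 1}{\mathbb{P}}(B_{kj}) =\frac{\overline{\Lambda _j}(x_j)}{\sum_{i=1}^m \overline{\Lambda _i}(x_i)}\sum_{k\ge 1}{\mathbb{P}}(B_k) =\frac{\overline{\Lambda _j}(x_j)}{\overline{\Lambda ^{\ast}}(x_0)}\,{\mathbb{P}}(X>x_0) =\overline{\Lambda _j}(x_j).\end{align*}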

(3) First, suppose that $\Lambda_j(x_j + \epsilon) \lt \Lambda_j(x_j)$ for any $\epsilon \gt 0$ and all $j\in[m]$. Let $\boldsymbol{{X}}\in\mathbb{A}_m(X)$ be defined as in (D.1). Then ${\mathbb{P}}(X_j > x_j) = \overline{{\Lambda}_j}(x_j) < \overline{{\Lambda}_j}(x_j+{\epsilon}),$ implying ${\rm VaR}_{\Lambda_j}^+(X_j) = x_j$ for $j \in [m]$. Thus, ${\boldsymbol{{X}}}$ is an optimal allocation of $X$.

Next, consider the second half of part (3). We prove that no optimal allocation exists by contradiction. Assume on the contrary that there exists an optimal allocation $\boldsymbol{{X}}\in\mathbb{A}_m(X)$. Denote $y_j = {\rm VaR}_{\Lambda_j}^+(X_j)$ for $j \in [m]$, satisfying $\sum^m_{i=1} y_i=x_0$. By the assumption of part (3), there exists $k$, say, $k = 1$, such that $\Lambda_k(y_k) = \Lambda_k(y_k + \tau_0)$. Denote ${\epsilon _1} = \min\{\epsilon_0, \tau_0\}/2$. Then ${\mathbb{P}}(X_1 >y_1+{\epsilon}_1) <\overline{{\Lambda}_1}(y_1+{\epsilon}_1) = \overline{{\Lambda}_1}(y_1),$ and ${\mathbb{P}}(X_k>y_k) \le\overline{{\Lambda}_k}(y_k)$ for $k \ge 2$. Hence,

\begin{align*} {\mathbb{P}}(X > x_0) & = {\mathbb{P}}(X > x_0+{\epsilon}_1) \le {\mathbb{P}}(X_1>y_1+{\epsilon}_1) +\sum_{i=2}^m {\mathbb{P}}(X_i>y_i)\\ & < \sum_{i=1}^m \overline{\Lambda _i}(y_i) =\overline{\Lambda ^{\ast}}(x_0),\end{align*}

which contradicts the assumption ${\mathbb{P}}(X>x_0)=\overline{{\Lambda}^\ast}(x_0)$.

It remains to construct a sequence of asymptotically optimal allocations. Let $(X_{1n},\dots,X_{mn})\in\mathbb{A}_m(X)$ be as defined by (5.8). By a similar argument to that of part (2) in Theorem 5, we have

\begin{align*} {\mathbb{P}}\Big(X_{1n} > x_{1n}-\frac{1}{n}\Big) <\overline{\Lambda_1}(x_{1n}),\quad {\mathbb{P}}(X_{kn} > x_{kn}) < \overline{\Lambda_k}(x_{kn}),\ k\ge 2,\end{align*}

implying that ${\rm VaR}_{\Lambda_i}^+(X_{in}) \le x_{in}$ for $i \in [m]$. Hence, $\sum_{i=1}^m{\rm VaR}_{{\Lambda}_i}^+(X_{in}) \le \sum_{i=1}^m x_{in}=x_0 + 1/n$. By Theorem 2, the desired statement follows by letting $n \to + \infty$.
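In other words, reading $x_0$ as the value ${\rm VaR}_{\Lambda^{\ast}}^+(X)$ from (5.10) under the assumptions of part (3), the asymptotic optimality of $(X_{1n},\dots,X_{mn})$ is the squeeze

\begin{align*} \mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda_i}^+(X) \le \sum_{i=1}^m{\rm VaR}_{{\Lambda}_i}^+(X_{in}) \le x_0+\frac{1}{n} \longrightarrow x_0 = \mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda_i}^+(X) \qquad (n\to+\infty).\end{align*}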

Proof of Theorem 7. (1) Suppose that ${\mathbb{P}}(X>x_0) <\overline{\Lambda ^{\ast}}(x_0)$. In this case, there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $x_0=\sum_{i=1}^{m}x_i$ and $\sum_{i=1}^m \overline{\Lambda _i}(x_i)\in\left({\mathbb{P}}(X>x_0), \overline{\Lambda ^{\ast}}(x_0) \right)$. Let $\boldsymbol{{X}}\in\mathbb{A}_m(X)$ be as defined by (5.2). Then ${\mathbb{P}}(X_j>x_j) = {\mathbb{P}}(C_j) < \overline{\Lambda _j}(x_j)$, implying that ${\rm VaR}_{\Lambda_j}^+(X_j) \le x_j$ for $j \in [m]$ and

\begin{align*} \mathop{\Box}\limits_{i=1}^m{\rm VaR}_{\Lambda _i}^+(X) \le\sum_{i=1}^{m}{\rm VaR}^+_{\Lambda _i}(X_i)\le \sum_{i=1}^m x_i=x_0.\end{align*}

In view of Theorem 2, we conclude that ${\boldsymbol{{X}}}$ is an optimal allocation of $X$ with ${\rm VaR}_{\Lambda_j}^+(X_j) = x_j$ for $j \in [m]$.

(2) First, we show that no optimal allocation exists by contradiction. Assume on the contrary that there exists an optimal allocation $\boldsymbol{{X}}\in \mathbb{A}_m(X)$. Denote $x_j = {\rm VaR}_{\Lambda_j}^+(X_j)$ for $j \in [m]$. Then $\sum_{i=1}^m x_i= x_0$ and $\overline{{\Lambda}^\ast}(x_0)={\mathbb{P}}(X>x_0) \le\sum_{i=1}^m{\mathbb{P}}(X_i>x_i)\le \sum_{i=1}^m\overline{{\Lambda}_i}(x_i)$. However, by Proposition 7, it follows that $\sum_{i=1}^m\overline{\Lambda _i}(x_i) <\overline{\Lambda ^{\ast}}(x_0)$, a contradiction.

Next, we turn to constructing a sequence of asymptotically optimal allocations. We consider two subcases.

Subcase 1: Suppose that ${\mathbb{P}}(X>x_0+{\epsilon})< {\mathbb{P}}(X>x_0)$ for any $\epsilon \gt 0$. Let $(X_{1n}, \ldots, X_{mn})\in \mathbb{A}_m(X)$ be defined by (5.6). Then

\begin{align*}{\mathbb{P}}\Big(X_{jn} > x_{jn}+\frac{1}{mn}\Big) <\overline{{\Lambda}_j}(x_{jn}),\quad j \in [m],\end{align*}

implying ${\rm VaR}_{\Lambda_j}^+(X_{jn}) \le x_{jn} + 1/(mn)$ for $j \in [m]$. Thus, $\sum_{i=1}^m{\rm VaR}_{{\Lambda}_i}^+(X_{in}) \le \sum_{i=1}^m x_{in}+1/n= x_0 +1/n$.

Subcase 2: Suppose that ${\mathbb{P}}(X> x_0+{\epsilon}_0)= {\mathbb{P}}(X>x_0)$ for some ${\epsilon _0} \gt 0$. Then $\Lambda^{\ast}(x_0 + \epsilon) \lt \Lambda^{\ast}(x_0)$ for any $\epsilon \gt 0$. Construct $(X_{1n},\dots,X_{mn})\in\mathbb{A}_m(X)$ as in (5.8). Similarly, we have $\sum_{i=1}^m{\rm VaR}_{\Lambda_i}^+(X_{in}) \le x_0 +1/n$.

By Theorem 2, the desired statement follows by letting $n \to + \infty $ .

Proof of Theorem 8. We prove only part (3); the proofs of parts (1) and (2) are the same as those of Theorem 6. We consider two subcases.

Subcase 1: Suppose that $\Lambda_i(x_i + \epsilon) \lt \Lambda_i(x_i)$ for any $\epsilon \gt 0$ and $i \in K$. Let $(X_1,\dots,X_m)\in\mathbb{A}_m(X)$ be defined by (D.1). Then, ${\mathbb{P}}(X_i>x_i)= \overline{{\Lambda}_i}(x_i) < \overline{{\Lambda}_i}(x_i+{\epsilon})$ for $i \in K$, and ${\mathbb{P}}(X_i>x_i)= \overline{{\Lambda}_i}(x_i)$ for $i \notin K$. Thus, ${\rm VaR}_{\Lambda_j}^{\kappa_j}(X_j) \le x_j$ for all $j \in [m]$, implying $\sum_{i=1}^m {\rm VaR}_{\Lambda _i}^{\kappa_i}(X_i) = {\rm VaR}_{\Lambda ^{\ast}}^+(X)$.

Subcase 2: Suppose that for any $(y_1,\dots,y_m)\in\mathbb{R}^m$ satisfying $\sum_{i=1}^m y_i=x_0$ and $\sum_{i=1}^m\overline{\Lambda _i}(y_i)= \overline{\Lambda ^{\ast}}(x_0)$, there always exists some $\tau_0 \gt 0$ such that $\Lambda_k(y_k) = \Lambda_k(y_k + \tau_0)$ for some $k \in [m]$. By a similar argument to that in the proof of Theorem 6 (3), we can construct a sequence of allocations $(X_{1n},\dots,X_{mn}) \in \mathbb{A}_m(X)$ satisfying $\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^{\kappa_i}(X_{in})\rightarrow x_0.$

Proof of Theorem 9. The proof is similar to that of Theorem 7.

Acknowledgements

The authors thank the Editors and the two anonymous referees for constructive comments on an earlier version of this paper. In particular, the results in Section 3 on inf-convolution of real functions are largely inspired by the received comments.

Funding information

T. Hu acknowledges financial support from the National Natural Science Foundation of China (Nos. 72332007 and 12371476).

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Acciaio, B., Optimal risk sharing with non-monotone monetary functionals, Finance Stoch. 11 (2007), 267–289.
Acerbi, C. and Tasche, D., On the coherence of expected shortfall, J. Bank. Finance 26 (2002), 1487–1503.
Artzner, P., Delbaen, F., Eber, J. M. and Heath, D., Coherent measures of risk, Math. Finance 9 (1999), 203–228.
Barrieu, P. and El Karoui, N., Inf-convolution of risk measures and optimal risk transfer, Finance Stoch. 9 (2005), 269–298.
Bellini, F. and Peri, I., An axiomatization of Λ-quantiles, SIAM J. Financial Math. 13 (2022), SC26–SC38.
Burzoni, M., Peri, I. and Ruffo, C. M., On the properties of the Lambda Value at Risk: robustness, elicitability and consistency, Quant. Finance 17 (2017), 1735–1743.
Corbetta, J. and Peri, I., Backtesting Lambda Value at Risk, Eur. J. Finance 24 (2018), 1075–1087.
Cui, W., Yang, J. and Wu, L., Optimal reinsurance minimizing the distortion risk measure under general reinsurance premium principles, Insurance Math. Econom. 53 (2013), 74–85.
Davis, M. H. A., Verification of internal risk measure estimates, Stat. Risk Model. 33 (2016), 67–93.
Dhaene, J., Denuit, M., Goovaerts, M. J., Kaas, R., Tang, Q. and Vyncke, D., Risk measures and comonotonicity: a review, Stoch. Models 22 (2006), 573–606.
Dhaene, J., Denuit, M., Goovaerts, M. J., Kaas, R. and Vyncke, D., The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31 (2002), 3–33.
Embrechts, P., Liu, H. and Wang, R., Quantile-based risk sharing, Oper. Res. 66 (2018), 936–949.
Filipović, D. and Svindland, G., Optimal capital and risk allocations for law- and cash-invariant convex functions, Finance Stoch. 12 (2008), 423–439.
Föllmer, H. and Schied, A., Stochastic finance: An introduction in discrete time, fourth edition (Berlin: Walter de Gruyter, 2016).
Frittelli, M., Maggis, M. and Peri, I., Risk measures on $\mathcal{P}(\mathbb{R})$ and Value at Risk with probability/loss function, Math. Finance 24 (2014), 442–463.
Han, X., Wang, Q., Wang, R. and Xia, J., Cash-subadditive risk measures without quasi-convexity, (2021), arXiv:2110.12198.
Hitaj, A., Mateus, C. and Peri, I., Lambda Value at Risk and regulatory capital: a dynamic approach to tail risk, Risks 6 (2018), Article number 17.
Ince, A., Peri, I. and Pesenti, S., Risk contributions of Lambda quantiles, Quant. Finance 22 (2022), 1871–1891.
Jouini, E., Schachermayer, W. and Touzi, N., Optimal risk sharing for law invariant monetary utility functions, Math. Finance 18 (2008), 269–292.
Lauzier, J.-G., Lin, L. and Wang, R., Risk sharing, measuring variability, and distortion riskmetrics, (2023), arXiv:2302.04034v1.
Liu, F., Mao, T., Wang, R. and Wei, L., Inf-convolution and optimal allocations for tail risk measures, Math. Oper. Res. 47 (2022), 2494–2519.
Liu, P., Risk sharing with Lambda Value-at-Risk, Math. Oper. Res. (2024), doi:10.1287/moor.2023.0246.
Liu, P., Wang, R. and Wei, L., Is the inf-convolution of law-invariant preferences law-invariant? Insurance Math. Econom. 91 (2020), 144–154.
Song, Y. and Yan, J., The representations of two types of functions on ${L^{\infty} }\left( {\Omega ,{\mathcal{F}}} \right) \,\,\textrm{and}\,\,{L^{\infty} }\left( {\Omega ,{\mathcal{F}}, \mathbb{P}} \right)$, Science in China, Series A: Mathematics 49 (2006), 1376–1382.
Song, Y. and Yan, J., Risk measures with comonotonic subadditivity or convexity and respecting stochastic orders, Insurance Math. Econom. 45 (2009), 459–465.
Tsanakas, A., To split or not to split: capital allocation with convex risk measures, Insurance Math. Econom. 44 (2009), 268–277.
Wang, R. and Wei, Y., Characterizing optimal allocations in quantile-based risk sharing, Insurance Math. Econom. 93 (2020), 288–300.
Wang, R. and Zitikis, R., Weak comonotonicity, Eur. J. Oper. Res. 282 (2020), 386–397.
Xia, Z., Inf-convolution and optimal allocations for mixed-VaRs and Lambda VaRs, PhD thesis, University of Science and Technology of China, Hefei (2023).