
Optimal risk sharing for lambda value-at-risk

Published online by Cambridge University Press:  29 July 2024

Zichao Xia*
Affiliation:
University of Science and Technology of China
Taizhong Hu*
Affiliation:
University of Science and Technology of China
*Postal address: International Institute of Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China.

Abstract

A new risk measure, the Lambda Value-at-Risk (Lambda VaR), was proposed in the literature as a theoretically motivated generalization of the ordinary VaR. Motivated by recent developments in risk sharing problems for the VaR and other risk measures, we study the optimization of risk sharing for the Lambda VaR. Explicit formulas for the inf-convolution and sum-optimal allocations are obtained with respect to the left Lambda VaRs, the right Lambda VaRs, or a mixed collection of the left and right Lambda VaRs. The inf-convolution of Lambda VaRs constrained to comonotonic allocations is investigated. An explicit formula for the worst-case Lambda VaR under model uncertainty induced by likelihood ratios is also given.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $ (\mathrm{\Omega },\mathcal{F},\mathbb{P})$ be an atomless probability space, and let $ {L}^{0}$ be the set of all random variables defined on $ (\mathrm{\Omega },\mathcal{F},\mathbb{P})$ . Let $ \mathcal{X}$ be a convex cone of random variables in $ {L}^{0}$ , and let $ {L}^{k}$ be the set of all random variables with finite $ k$ th moments, where $ k \gt 0$ . For any $ X\in {L}^{0}$ , a positive (negative) value of $ X$ represents a financial loss (profit). A risk measure is a functional $ \rho \,:\,\mathcal{X}\to ({-}\mathcal{\infty },+\mathcal{\infty }]$ ; see [Reference Artzner, Delbaen, Eber and Heath3, Reference Föllmer and Schied14]. In a risk sharing problem, there are $ m$ agents equipped with respective risk measures $ {\rho }_{1},\dots ,{\rho }_{m}$ . Let $ X\in \mathcal{X}$ denote the total risk, which is shared by the $ m$ agents: $ X$ is split into an allocation $ ({X}_{1},\dots ,{X}_{m})\in {\mathbb{A}}_{m}(X)$ among the $ m$ agents, where $ {\mathbb{A}}_{m}(X)$ is the set of all possible allocations of $ X$ , defined as

\begin{align*} {\mathbb{A}}_{m}(X)=\left\{({X}_{1},\dots ,{X}_{m})\in {\mathcal{X}}^{m}\,:\,\mathrm{}\mathrm{}\sum _{j=1}^{m} {X}_{j}=X\right\}.\end{align*}

The inf-convolution of risk measures $ {\rho }_{1},\dots ,{\rho }_{m}$ is the mapping $ {\square}_{i=1}^{m}{\rho }_{i}\,:\,\mathcal{X}\to ({-}\mathcal{\infty },\mathcal{\infty }]$ , defined as

\begin{align*} \stackrel{m}{\underset{i=1}{\square}}{\rho }_{i}(X)=\mathrm{i}\mathrm{n}\mathrm{f}\!\left\{\sum _{i=1}^{m} {\rho }_{i}\!\left({X}_{i}\right):\,\mathrm{}\mathrm{}({X}_{1},\dots ,{X}_{m})\in {\mathbb{A}}_{m}(X)\right\},\quad X\in \mathcal{X}.\end{align*}

An $ m$ -tuple $ ({X}_{1},\dots ,{X}_{m})\in {\mathbb{A}}_{m}(X)$ is an optimal (also termed sum-optimal) allocation of $ X$ for $ ({\rho }_{1},\dots ,{\rho }_{m})$ if $ {\square}_{i=1}^{m}{\rho }_{i}(X)={\sum }_{i=1}^{m} {\rho }_{i}\!\left({X}_{i}\right)$ . A sequence of allocations $ ({X}_{1n},\dots ,{X}_{mn})\in {\mathbb{A}}_{m}(X)$ , $ n\in \mathbb{N}$ , is asymptotically optimal if $ {\sum }_{i=1}^{m} {\rho }_{i}\!\left({X}_{in}\right)\to {\square}_{i=1}^{m}{\rho }_{i}(X)$ as $ n\to \mathrm{\infty }$ . An allocation $ ({X}_{1},\dots ,{X}_{m})\in {\mathbb{A}}_{m}(X)$ is Pareto-optimal if for any $ ({Y}_{1},\dots ,{Y}_{m})\in {\mathbb{A}}_{m}(X)$ , $ {\rho }_{i}\!\left({Y}_{i}\right)\le {\rho }_{i}\!\left({X}_{i}\right)$ for all $ i\in [m]$ implies $ {\rho }_{i}\!\left({Y}_{i}\right)={\rho }_{i}\!\left({X}_{i}\right)$ for all $ i\in [m]$ , where $ [m]=\{1,\dots ,m\}$ . It is shown in Proposition 1 of [Reference Embrechts, Liu and Wang12] that sum-optimality is equivalent to Pareto-optimality for monetary risk measures. For non-monetary risk measures, it is easy to see that sum-optimality implies Pareto-optimality.

Liu et al. [Reference Liu, Wang and Wei23] investigated conditions under which the inf-convolution possesses the property of law invariance. For more on inf-convolution in the case of convex risk measures, see [Reference Acciaio1], [Reference Barrieu and El Karoui4], [Reference Filipović and Svindland13], [Reference Jouini, Schachermayer and Touzi19], and [Reference Tsanakas26], among others.

Embrechts et al. [Reference Embrechts, Liu and Wang12], Liu et al. [Reference Liu, Mao, Wang and Wei21], and Wang and Wei [Reference Wang and Wei27] studied the optimization of risk sharing for non-convex risk measures, for example, the Value-at-Risk (VaR) and the Range-Value-at-Risk (RVaR). Explicit formulas of the inf-convolution and Pareto-optimal allocations were obtained with respect to the left VaRs, the right VaRs, or a mixed collection of the left and right VaRs for $ m\ge 2$ . Formal definitions of the left and right VaRs are given in Subsection 2.1. More precisely, for $ m=2$ , Embrechts et al. [Reference Embrechts, Liu and Wang12] proved that

(1.1) \begin{align} \,\mathrm{VaR}_{{\lambda }_{1}}^{-}\square\,\mathrm{VaR}_{{\lambda }_{2}}^{-}(X)=\,\mathrm{VaR}_{\lambda }^{-}(X),\quad X\in {L}^{0},\end{align}

for $ {\lambda }_{1},{\lambda }_{2}\in [\mathrm{0,1}]$ such that $ \lambda ={\lambda }_{1}+{\lambda }_{2}-1 \gt 0$ . Liu et al. [Reference Liu, Mao, Wang and Wei21] considered the case of a mixed collection of the left and right VaRs, and proved that

(1.2) \begin{align} \,\mathrm{VaR}_{{\lambda }_{1}}^{+}\square\,\mathrm{VaR}_{{\lambda }_{2}}^{+}(X)=\,\mathrm{VaR}_{\lambda }^{+}(X),\quad X\in {L}^{0},\end{align}

for $ {\lambda }_{1},{\lambda }_{2}\in [\mathrm{0,1})$ such that $ \lambda ={\lambda }_{1}+{\lambda }_{2}-1\ge 0$ , and that

(1.3) \begin{align} \,\mathrm{VaR}_{{\lambda }_{1}}^{-}\square\,\mathrm{VaR}_{{\lambda }_{2}}^{+}(X)=\,\mathrm{VaR}_{\lambda }^{+}(X),\quad X\in {L}^{0},\end{align}

for $ {\lambda }_{1}\in [\mathrm{0,1}],{\lambda }_{2}\in [\mathrm{0,1})$ such that $ \lambda ={\lambda }_{1}+{\lambda }_{2}-1\ge 0$ . More recently, Lauzier et al. [Reference Lauzier, Lin and Wang20] investigated the problem of sharing risk among agents with preferences modeled by a general class of comonotonic additive and law-based distortion riskmetrics that need not be either monotone or convex, and explicitly solved for Pareto-optimal allocations among agents using the Gini deviation, the mean-median deviation, or the inter-quantile difference as the relevant variability measures.
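As a quick numerical illustration of (1.1) (not part of the formal development), the Python sketch below computes left VaRs of an equiprobable four-state loss with $ {\lambda }_{1}={\lambda }_{2}=0.75$ , so $ \lambda =0.5$ , and exhibits one allocation whose total risk equals $\mathrm{VaR}_{\lambda }^{-}(X)$ . The helper function and the example are our own illustrative choices; the code exhibits an allocation attaining the value predicted by (1.1), but of course does not verify the infimum over all allocations.

```python
import numpy as np

def var_left(x, alpha):
    # Left quantile inf{t : F(t) >= alpha} for an equiprobable sample.
    xs = np.sort(np.asarray(x, dtype=float))
    k = int(np.ceil(alpha * len(xs))) - 1     # smallest index with F >= alpha
    return xs[max(k, 0)]

# Four equally likely states; lambda_1 = lambda_2 = 0.75, so lambda = 0.5.
X = np.array([0.0, 1.0, 2.0, 3.0])
lam1 = lam2 = 0.75
lam = lam1 + lam2 - 1.0                       # 0.5

# Allocation in which agent 1 takes the loss on the worst state only.
X1 = np.where(X == 3.0, X, 0.0)               # (0, 0, 0, 3)
X2 = X - X1                                   # (0, 1, 2, 0)

lhs = var_left(X1, lam1) + var_left(X2, lam2) # 0 + 1 = 1
rhs = var_left(X, lam)                        # VaR_{0.5}^-(X) = 1
print(lhs, rhs)
```

Splitting the extreme loss off to one agent is the mechanism behind such allocations: each agent's left VaR at a high level ignores the small-probability tail piece assigned to the other agent.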

The Lambda Value-at-Risk (Lambda VaR) was proposed by Frittelli et al. [Reference Frittelli, Maggis and Peri15] as a generalization of the usual VaR. The formal definitions of the left and right Lambda VaRs are given in Section 2 (Definition 1). The Lambda VaRs are not monetary risk measures, as can be seen from Proposition 2. One naturally wonders whether an explicit formula also holds for the inf-convolution of Lambda VaR agents. In this paper, we generalize the formulas (1.1)–(1.3) in several directions within the context of the Lambda VaRs.

The novelty of the Lambda VaR lies in the function $ \Lambda $ , called a ‘probability/loss function’, which can change and adjust according to the profits and losses of a risk variable. The Lambda VaR can thus discriminate between risk variables that share the same VaR at level $ \lambda $ but have different tail behavior. The function $ \Lambda $ can be either increasing or decreasing in [Reference Frittelli, Maggis and Peri15]. Burzoni et al. [Reference Burzoni, Peri and Ruffo6] focused on the conditions under which the Lambda VaR is robust, elicitable and consistent in the sense of [Reference Davis9]. Hitaj et al. [Reference Hitaj, Mateus and Peri17] applied the Lambda VaR in financial risk management as an alternative to the VaR for assessing capital requirements, and their findings show that Lambda VaR estimates are able to capture the tail risk and react to market fluctuations significantly faster than the VaR and expected shortfall. Corbetta and Peri [Reference Corbetta and Peri7] proposed three backtesting methodologies and assessed the accuracy of the Lambda VaR from different points of view. Ince et al. [Reference Ince, Peri and Pesenti18] presented a novel treatment of the Lambda VaR on subsets of $ {\mathbb{R}}^{n}$ , and derived risk contributions of individual assets to the overall portfolio risk, measured via the Lambda VaR of the portfolio composition.

The rest of this paper is organized as follows. In Section 2, we provide the formal definitions of the Lambda VaR, collect some basic properties of the Lambda VaR and derive explicit formulas for worst-case Lambda VaRs under model uncertainty induced by likelihood ratios. In Section 3, we introduce the inf-convolution of decreasing functions, and study its detailed properties. These properties will be used in Sections 4 and 5. In Section 4, we obtain explicit formulas of the inf-convolution with respect to the left Lambda VaRs, the right Lambda VaRs and a mixed collection of the left and right Lambda VaRs. Section 5 focuses on the construction of optimal allocations and asymptotically optimal allocations of inf-convolution of several Lambda VaRs. In Section 6, we consider inf-convolution of Lambda VaRs constrained to comonotonic allocations. Section 7 contains some concluding remarks. The proofs of some lemmas and propositions appearing in the previous sections are relegated to Appendices A–D.

2. Properties of Lambda VaRs

2.1. Definitions of Lambda VaRs

Let $ X\in {L}^{0}$ with distribution function $ {F}_{X}$ . The (ordinary) left-VaR of $ X$ at confidence level $ \alpha \in [\mathrm{0,1}]$ is defined as

\begin{align*} \,\mathrm{VaR}_{\alpha }^{-}(X)={F}_{X}^{-1}\!\left(\alpha \right)=\mathrm{i}\mathrm{n}\mathrm{f}\{x\in \mathbb{R}\,:\,{F}_{X}(x)\ge \alpha \}=\mathrm{sup}\{x\in \mathbb{R}\,:\,{F}_{X}(x) \lt \alpha \},\end{align*}

and the (ordinary) right-VaR of $ X$ at confidence level $ \alpha \in [\mathrm{0,1}]$ is defined as

\begin{align*} \,\mathrm{VaR}_{\alpha }^{+}(X)=\mathrm{i}\mathrm{n}\mathrm{f}\{x\in \mathbb{R}\,:\,{F}_{X}(x) \gt \alpha \}=\mathrm{sup}\{x\in \mathbb{R}\,:\,{F}_{X}(x)\le \alpha \}.\end{align*}

Here and henceforth, we use the convention that $ \mathrm{inf}\,\mathrm{\varnothing }=+\mathrm{\infty }$ and $ \mathrm{sup}\,\mathrm{\varnothing }=-\mathrm{\infty }$ . For the role of the left quantile ( $\mathrm{VaR}_{\alpha }^{-}$ ) and the right quantile ( $\mathrm{VaR}_{\alpha }^{+}$ ) with $ \alpha \in \left(\mathrm{0,1}\right]$ as risk measures, see the discussion in [Reference Acerbi and Tasche2] and [Reference Liu, Mao, Wang and Wei21, Remark 5].
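The two quantiles differ exactly at flat parts of $ {F}_{X}$ and coincide elsewhere; at an atom carrying the level $ \alpha $ the gap is visible already in a two-point example. A minimal sketch (the helper functions are our own, written for a discrete distribution given as a value-to-probability map):

```python
def var_left(atoms, alpha):
    # inf{x : F(x) >= alpha} for a discrete distribution {value: prob}
    F = 0.0
    for v, p in sorted(atoms.items()):
        F += p
        if F >= alpha - 1e-12:        # small tolerance for float arithmetic
            return v
    return float('inf')               # convention: inf(empty set) = +infinity

def var_right(atoms, alpha):
    # inf{x : F(x) > alpha}
    F = 0.0
    for v, p in sorted(atoms.items()):
        F += p
        if F > alpha + 1e-12:
            return v
    return float('inf')

atoms = {0: 0.5, 1: 0.5}              # a fair Bernoulli loss
print(var_left(atoms, 0.5), var_right(atoms, 0.5))   # 0 1
```

At $ \alpha =0.5$ the left VaR returns 0 while the right VaR returns 1, since $ {F}_{X}$ sits exactly at the level 0.5 on the interval $ [0,1)$ .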

Next, we recall the definition of Lambda VaRs from Bellini and Peri [Reference Bellini and Peri5], which are generalizations of ordinary VaRs.

Definition 1. Let $ X\in {L}^{0}$ with distribution function $ {F}_{X}$ , and let $ \Lambda \,:\,\mathbb{R}\to [\mathrm{0,1}]$ . The Lambda VaRs of $ X$ or $ {F}_{X}$ are defined as follows:

\begin{align*} \,\mathrm{VaR}_{\Lambda }^{-}(X)=\mathrm{i}\mathrm{n}\mathrm{f}\{x\in \mathbb{R}\,:\,{F}_{X}(x)\ge \Lambda (x)\},\end{align*}
\begin{align*} \,\mathrm{VaR}_{\Lambda }^{+}(X)=\mathrm{i}\mathrm{n}\mathrm{f}\{x\in \mathbb{R}\,:\,{F}_{X}(x) \gt \Lambda (x)\},\end{align*}

and

\begin{align*} \widetilde{{\mathrm{VaR}}}_{\Lambda }^{-}(X)=\mathrm{sup}\{x\in \mathbb{R}\,:\,{F}_{X}(x) \lt \Lambda (x)\},\end{align*}
\begin{align*} \widetilde{{\mathrm{VaR}}}_{\Lambda }^{+}(X)=\mathrm{sup}\{x\in \mathbb{R}\,:\,{F}_{X}(x)\le \Lambda (x)\}.\end{align*}

$\mathrm{VaR}_{\Lambda }^{\kappa }(X)$ and $ \widetilde{{\mathrm{VaR}}}_{\Lambda }^{\kappa }(X)$ are also denoted by $\mathrm{VaR}_{\Lambda }^{\kappa }\!\left({F}_{X}\right)$ and $ \widetilde{{\mathrm{VaR}}}_{\Lambda }^{\kappa }\!\left({F}_{X}\right)$ , where $ \kappa \in \{-,+\}$ .

It is known from [Reference Bellini and Peri5] that $ \widetilde{{\mathrm{VaR}}}_{\Lambda }^{-}(X)=\,\mathrm{VaR}_{\Lambda }^{-}(X)$ and $ \widetilde{{\mathrm{VaR}}}_{\Lambda }^{+}(X)=\,\mathrm{VaR}_{\Lambda }^{+}(X)$ for $ X\in {L}^{0}$ when $ \Lambda $ is decreasing. In this paper, “increasing” and “decreasing” are used in the weak sense. Thus, the four Lambda VaRs reduce to two. In the sequel, we only consider the left and the right Lambda VaRs, $\mathrm{VaR}_{\Lambda }^{-}$ and $\mathrm{VaR}_{\Lambda }^{+}$ .

Instead of a constant confidence level $ \lambda $ as in the definition of $\mathrm{VaR}_{\lambda }$ , the function $ \Lambda $ adds flexibility in modeling the tail behavior of risks. When $ \Lambda $ is decreasing, the properties of Lambda VaRs closely resemble those of the usual VaRs. The financial interpretation of the assumption of a decreasing $ \Lambda $ is well illustrated by a simple two-level Lambda VaR [Reference Bellini and Peri5, Example 2.7].
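For a discrete loss and a right-continuous step function $ \Lambda $ , the difference $ {F}_{X}-\Lambda $ is an increasing right-continuous step function, so the infimum defining $\mathrm{VaR}_{\Lambda }^{-}$ (when finite) is attained at an atom of $ X$ or a breakpoint of $ \Lambda $ . The sketch below exploits this; the two-level $ \Lambda $ , in the spirit of [Reference Bellini and Peri5, Example 2.7], and the helper name are our own illustrative choices.

```python
def lambda_var_left(atoms, Lam, breakpoints):
    # VaR_Lambda^-(X) = inf{x : F_X(x) >= Lambda(x)}.
    # Exact for a discrete X and a right-continuous step Lambda: the
    # infimum is then attained at an atom of X or a breakpoint of Lambda.
    F = lambda x: sum(p for v, p in atoms.items() if v <= x)
    for c in sorted(set(atoms) | set(breakpoints)):
        if F(c) >= Lam(c):
            return c
    return float('inf')

# Two-level probability/loss function: demand confidence 0.99 for small
# losses, but only 0.95 once the loss reaches 100.
Lam = lambda x: 0.99 if x < 100 else 0.95
atoms = {50: 0.97, 200: 0.03}

print(lambda_var_left(atoms, Lam, [100]))   # 100
```

Here the ordinary VaRs are $\mathrm{VaR}_{0.95}^{-}(X)=50$ and $\mathrm{VaR}_{0.99}^{-}(X)=200$ , while the Lambda VaR settles at the breakpoint 100: the confidence level adapts to the size of the loss.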

2.2. Basic properties of Lambda VaRs

We collect some basic properties of Lambda VaRs from [Reference Bellini and Peri5]. Throughout, let $ \Lambda \,:\,\mathbb{R}\to [\mathrm{0,1}]$ be decreasing to avoid pathological cases, and let $ {\mathcal{M}}_{1}$ denote the set of probability measures on $ (\mathbb{R},\mathcal{B}(\mathbb{R}))$ . Then

  1. (B1) $\mathrm{VaR}_{\Lambda }^{-}$ and $\mathrm{VaR}_{\Lambda }^{+}$ are finite if and only if $ \Lambda \not\equiv 0$ and $ \Lambda \not\equiv 1$ .

  2. (B2) If $ {\Lambda }_{1}(x)={\Lambda }_{2}(x)$ at their common points of continuity, or $ {\Lambda }_{1}(x)={\Lambda }_{2}(x)$ almost everywhere with respect to the Lebesgue measure, then $\mathrm{VaR}_{{\Lambda }_{1}}^{\kappa }=\,\mathrm{VaR}_{{\Lambda }_{2}}^{\kappa }$ on $ {L}^{0}$ for $ \kappa \in \{-,+\}$ .

  3. (B3) For $ \kappa \in \{-,+\}$ , $\mathrm{VaR}_{\Lambda }^{\kappa }$ is quasi-concave on $ {\mathcal{M}}_{1}$ , that is,

    \begin{align*} \,\mathrm{VaR}_{\Lambda }^{\kappa }(\alpha {F}_{1}+(1-\alpha ){F}_{2})\ge \mathrm{min}\big\{\mathrm{VaR}_{\Lambda }^{\kappa }({F}_{1}),\,\mathrm{VaR}_{\Lambda }^{\kappa }({F}_{2})\big\}\end{align*}
    for any $ {F}_{1},{F}_{2}\in {\mathcal{M}}_{1}$ and $ 0 \lt \alpha \lt 1$ .
  4. (B4) For $ \kappa \in \{-,+\}$ , $\mathrm{VaR}_{\Lambda }^{\kappa }\!\left(F\right)$ has the “convex level set” (CxLS) property. A risk measure $ \rho \,:\,{\mathcal{M}}_{1}\to \mathbb{R}$ is said to have the CxLS property if for any $ {F}_{1},{F}_{2}\in {\mathcal{M}}_{1}$ , $ \alpha \in \left(\mathrm{0,1}\right)$ and $ \gamma \in \mathbb{R}$ , it holds that

    \begin{align*} \rho \!\left({F}_{1}\right)=\rho \!\left({F}_{2}\right)=\gamma \Rightarrow \rho (\alpha {F}_{1}+(1-\alpha ){F}_{2})=\gamma .\end{align*}
  5. (B5) $\mathrm{VaR}_{\Lambda }^{-}$ is weakly lower semi-continuous, i.e. if $ {F}_{n}\stackrel{d}{\to }F$ for $ {F}_{n},F\in {\mathcal{M}}_{1}$ , then

    \begin{align*} \underset{n\to \mathrm{\infty }}{\mathrm{lim}\,\mathrm{inf}}\,\mathrm{VaR}_{\Lambda }^{-}\!\left({F}_{n}\right)\ge \,\mathrm{VaR}_{\Lambda }^{-}\!\left(F\right).\end{align*}

    $\mathrm{VaR}_{\Lambda }^{+}$ is weakly upper semi-continuous, i.e. if $ {F}_{n}\stackrel{d}{\to }F$ for $ {F}_{n},F\in {\mathcal{M}}_{1}$ , then

    \begin{align*} \underset{n\to \mathrm{\infty }}{\mathrm{lim\,sup}}\,\mathrm{VaR}_{\Lambda }^{+}\!\left({F}_{n}\right)\le \,\mathrm{VaR}_{\Lambda }^{+}\!\left(F\right).\end{align*}

Some further properties of the Lambda VaRs are presented in the following propositions, whose proofs are postponed to Appendix A.

Proposition 1. For any $ X\in {L}^{0}$ and $ \kappa \in \{-,+\}$ , we have

(2.1) \begin{align} \,\mathrm{VaR}_{\Lambda }^{\kappa }\!\left(\lambda X\right) & \ge \lambda \mathrm{}\,\mathrm{VaR}_{\Lambda }^{\kappa }(X), \quad 0 \lt \lambda \lt 1;\nonumber\\[4pt] \mathrm{VaR}_{\Lambda }^{\kappa }\!\left(\lambda X\right) & \le \lambda \mathrm{}\,\mathrm{VaR}_{\Lambda }^{\kappa }(X),\quad \lambda \gt 1.\end{align}

Consequently, $\mathrm{VaR}_{\Lambda }^{\kappa }\!\left(\lambda X\right)/\lambda $ is decreasing in $ \lambda \in (0,\mathrm{\infty })$ for any fixed $ X\in {L}^{0}$ .

Han et al. [Reference Han, Wang, Wang and Xia16] in their Remark 3.1 showed that Lambda VaRs are not star-shaped but quasi-star-shaped. Proposition 1 states that the Lambda VaRs possess a “reverse star-shape” property.
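The decreasing-in- $ \lambda $ consequence of Proposition 1 can be observed numerically. The sketch below (our own helper, exact for a discrete loss and a right-continuous step $ \Lambda $ ; the distribution and $ \Lambda $ are arbitrary illustrative choices) evaluates $\mathrm{VaR}_{\Lambda }^{-}\!\left(\lambda X\right)/\lambda $ for three values of $ \lambda $ :

```python
def lambda_var_left(atoms, Lam, breakpoints):
    # inf{x : F(x) >= Lambda(x)}, exact for discrete X and step Lambda
    F = lambda x: sum(p for v, p in atoms.items() if v <= x)
    for c in sorted(set(atoms) | set(breakpoints)):
        if F(c) >= Lam(c):
            return c
    return float('inf')

Lam = lambda x: 0.99 if x < 100 else 0.95
base = {50: 0.97, 200: 0.03}

ratios = []
for lam in [0.5, 1.0, 2.0]:
    scaled = {lam * v: p for v, p in base.items()}       # law of lam * X
    ratios.append(lambda_var_left(scaled, Lam, [100]) / lam)

print(ratios)   # [200.0, 100.0, 50.0] -- decreasing in lam
```

The Lambda VaR of $ \lambda X$ stays pinned at the breakpoint 100 for all three scalings, so the ratio decreases in $ \lambda $ exactly as the "reverse star-shape" property predicts.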

Proposition 2. $ \mathrm{VaR}_{\Lambda }^{-}$ or $ \mathrm{VaR}_{\Lambda }^{+}$ is translation invariant on $ {L}^{0}$ if and only if $ \Lambda $ is a constant.

Proposition 3. Let $ \kappa \in \{-,+\}$ . If $ \mathrm{VaR}_{\Lambda }^{\kappa }$ is positively homogeneous, i.e. $ \mathrm{VaR}_{\Lambda }^{\kappa }\!\left(\lambda X\right)$ $ =\lambda \mathrm{VaR}_{\Lambda }^{\kappa }(X)$ for all $ X\in {L}^{0}$ and $ \lambda \in (0,\infty )$ , then $ \Lambda $ is constant on intervals $ (0,\infty )$ and $ ({-}\infty ,0)$ , respectively, that is, there exist $ 1\ge {\alpha }_{1}\ge {\alpha }_{2}\ge {\alpha }_{3}\ge 0$ such that

(2.2) \begin{align} \Lambda (x)={\alpha }_{1}{1}_{({-}\mathrm{\infty },0)}(x)+{\alpha }_{2}{1}_{\left\{0\right\}}(x)+{\alpha }_{3}{1}_{(0,\mathrm{\infty })}(x).\end{align}

Next, we give three lemmas concerning properties of Lambda VaRs. The first will be used repeatedly throughout the paper; the second and third give alternative representations of the Lambda VaRs in terms of the usual VaRs. Here and in the sequel, $ \overline{\Lambda }=1-\Lambda $ , and $ \Lambda (x{-})$ and $ \Lambda (x{+})$ denote the left and right limits of the function $ \Lambda $ at the point $ x$ , respectively.

Lemma 1. For $ X\in {L}^{0}$ and $ x\in \mathbb{R}$ , we have

(2.3) \begin{align} \mathbb{P}(X \gt x)\le \overline{\Lambda }(x{+})\iff \,\mathrm{VaR}_{\Lambda }^{-}(X)\le x,\end{align}
(2.4) \begin{align} \mathbb{P}(X \gt x) \lt \overline{\Lambda }(x{+})\Rightarrow \,\mathrm{VaR}_{\Lambda }^{+}(X)\le x,\quad\ \end{align}
(2.5) \begin{align} \mathbb{P}(X\ge x) \gt \overline{\Lambda }(x{+})\Rightarrow \,\mathrm{VaR}_{\Lambda }^{-}(X)\ge x,\quad\ \end{align}
(2.6) \begin{align} \mathbb{P}(X\ge x)\ge \overline{\Lambda }(x{-})\iff \,\mathrm{VaR}_{\Lambda }^{+}(X)\ge x.\end{align}

Lemma 2. [Reference Han, Wang, Wang and Xia16, Proposition 3.1] If $ \Lambda \!\left(t\right)$ is not constantly $ 0$ , that is, $ \Lambda ({-}\infty ) \gt 0$ , then

\begin{align*} \,\mathrm{VaR}_{\Lambda }^{-}(X)=\underset{y\in \mathbb{R}}{\mathrm{i}\mathrm{n}\mathrm{f}}\big\{\mathrm{VaR}_{\Lambda (y)}^{-}(X)\vee y\big\},\quad X\in {L}^{0}.\end{align*}

Lemma 3. If $ \Lambda \!\left(t\right)$ is not constantly $ 0$ , that is, $ \Lambda ({-}\infty ) \gt 0$ , then

(2.7) \begin{align} \mathrm{VaR}_{\Lambda }^{+}(X)=\underset{y\in \mathbb{R}}{\mathrm{i}\mathrm{n}\mathrm{f}}\big\{\mathrm{VaR}_{\Lambda (y)}^{+}(X)\vee y\big\},\quad X\in {L}^{0}.\end{align}
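Representation (2.7) can be checked numerically in a small example. In the sketch below (the helpers are our own, exact for a discrete loss and a step $ \Lambda $ ), the infimum over $ y$ on the right-hand side is approximated by a scan over a grid that contains the breakpoint of $ \Lambda $ :

```python
def var_right(atoms, alpha):
    # ordinary right VaR: inf{x : F(x) > alpha}
    F = 0.0
    for v, p in sorted(atoms.items()):
        F += p
        if F > alpha + 1e-12:
            return v
    return float('inf')

def lambda_var_right(atoms, Lam, breakpoints):
    # inf{x : F(x) > Lambda(x)}, exact for discrete X and step Lambda
    F = lambda x: sum(p for v, p in atoms.items() if v <= x)
    for c in sorted(set(atoms) | set(breakpoints)):
        if F(c) > Lam(c):
            return c
    return float('inf')

Lam = lambda y: 0.9 if y < 1 else 0.5
atoms = {0: 0.6, 2: 0.4}

lhs = lambda_var_right(atoms, Lam, [1])                  # direct definition: 1
ys = [k / 10 for k in range(-20, 31)]                    # grid for the inf over y
rhs = min(max(var_right(atoms, Lam(y)), y) for y in ys)  # representation (2.7)
print(lhs, rhs)
```

For $ y \lt 1$ the inner term is $\mathrm{VaR}_{0.9}^{+}(X)\vee y=2$ , while for $ y\ge 1$ it is $ 0\vee y=y$ ; the infimum is therefore attained at $ y=1$ , matching the direct computation.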

2.3. Worst-case Lambda VaR under model uncertainty

Let $ \mathcal{P}$ be the set of all probability measures that are absolutely continuous with respect to $ \mathbb{P}$ , where $ \mathbb{P}$ is a common benchmark for all agents. For any $ Q\in \mathcal{P}$ , let $\mathrm{VaR}_{\Lambda }^{-,Q}$ and $\mathrm{VaR}_{\Lambda }^{+,Q}$ be the $\mathrm{VaR}_{\Lambda }^{-}$ and $\mathrm{VaR}_{\Lambda }^{+}$ evaluated under the probability measure $ Q$ instead of $ \mathbb{P}$ . We consider the worst-case Lambda VaR risk measures

\begin{align*} {\overline{\mathrm{VaR}}}_{\Lambda }^{-,\mathcal{Q}}={\mathrm{sup}}_{Q\in \mathcal{Q}}\,\mathrm{VaR}_{\Lambda }^{-,Q}\,\mathrm{and}\,{\overline{\mathrm{VaR}}}_{\Lambda }^{+,\mathcal{Q}}={\mathrm{sup}}_{Q\in \mathcal{Q}}\,\mathrm{VaR}_{\Lambda }^{+,Q},\end{align*}

where $ \mathcal{Q}$ is a subset of $ \mathcal{P}$ describing model uncertainty. We call $ \mathcal{Q}$ an uncertainty set of probability measures. A particular choice of $ \mathcal{Q}$ is induced by likelihood ratios: the set of probability measures whose Radon–Nikodym derivatives with respect to $ \mathbb{P}$ do not exceed a constant, i.e.

\begin{align*} {\mathcal{P}}_{\beta }=\left\{Q\in \mathcal{P}\,:\,\frac{\mathrm{}\mathrm{d}Q}{\mathrm{}\mathrm{d}\mathbb{P}}\le \frac{1}{\beta }\right\}\,\mathrm{for}\,\beta \in \left(\mathrm{0,1}\right].\end{align*}

Liu et al. [Reference Liu, Mao, Wang and Wei21] considered the special cases $\mathrm{VaR}_{\lambda }^{-}$ and $\mathrm{VaR}_{\lambda }^{+}$ with $ \Lambda \equiv \lambda \in \left(\mathrm{0,1}\right)$ under uncertainty set $ {\mathcal{P}}_{\beta }$ , and obtained that

\begin{align*} {\overline{\mathrm{VaR}}}_{\lambda }^{-,{\mathcal{P}}_{\beta }}=\,\mathrm{VaR}_{1-(1-\lambda )\beta }^{-},\,\mathrm{}\mathrm{}\mathrm{}{\overline{\mathrm{VaR}}}_{\lambda }^{+,{\mathcal{P}}_{\beta }}=\,\mathrm{VaR}_{1-(1-\lambda )\beta }^{+}.\end{align*}
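This constant- $ \Lambda $ reduction can be checked numerically by evaluating the left VaR under the extremal measure that reweights the upper $ \beta $ -tail by $ 1/\beta $ (the measure $ {Q}_{0}$ used in the proof of Proposition 4 below). The sketch uses a discretized uniform loss and our own weighted-quantile helper:

```python
import numpy as np

def var_left_weighted(values, probs, alpha):
    # inf{x : F(x) >= alpha} for a weighted discrete distribution
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    F = np.cumsum(np.asarray(probs, float)[order])
    return v[np.searchsorted(F, alpha - 1e-9)]   # first index with F >= alpha

n = 100
x = np.arange(1, n + 1) / n            # discretized U(0,1) loss
p = np.full(n, 1.0 / n)

beta, lam = 0.25, 0.6

# Extremal Q0 in P_beta: all mass on the top beta-tail, density 1/beta there.
q = np.where(x > 1 - beta, p / beta, 0.0)

worst = var_left_weighted(x, q, lam)                     # VaR_lam^- under Q0
direct = var_left_weighted(x, p, 1 - (1 - lam) * beta)   # VaR at level 0.9 under P
print(worst, direct)                                     # both 0.9
```

Conditioning on the top quarter of outcomes pushes the $ 0.6$ -quantile from $ 0.6$ up to $ 1-(1-0.6)\cdot 0.25=0.9$ , in agreement with the displayed formula.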

Proposition 4. Let $ \Lambda \,:\,\mathbb{R}\to [\mathrm{0,1}]$ be decreasing. For $ \beta \in \left(\mathrm{0,1}\right]$ , define $ {\Lambda }_{\beta }=1-\beta \overline{\Lambda }$ . Then

(2.8) \begin{align} {\overline{\mathrm{VaR}}}_{\Lambda }^{+,{\mathcal{P}}_{\beta }}(X)=\,\mathrm{VaR}_{{\Lambda }_{\beta }}^{+}(X),\quad X\in {L}^{0}.\end{align}

Furthermore, if $ \Lambda \gt 0$ , then

(2.9) \begin{align} {\overline{\mathrm{VaR}}}_{\Lambda }^{-,{\mathcal{P}}_{\beta }}(X)=\,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X),\quad X\in {L}^{0}.\end{align}

Proof. We give the proof for the left Lambda VaR since the proof for the right Lambda VaR is similar. First, note that for any given $ X\in {L}^{0}$ and $ Q\in {\mathcal{P}}_{\beta }$ , we have $ Q(X \gt x)\le \mathbb{P}(X \gt x)/\beta $ for any $ x\in \mathbb{R}$ and, hence,

\begin{align*} \,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X) & =\mathrm{i}\mathrm{n}\mathrm{f}\{x\,:\,\mathbb{}\mathbb{}\mathbb{P}(X \gt x)\le \overline{{\Lambda }_{\beta }}(x)\}=\mathrm{i}\mathrm{n}\mathrm{f}\!\left\{x\,:\,\mathrm{}\mathrm{}\frac{1}{\beta }\mathbb{P}(X \gt x)\le \overline{\Lambda }(x)\right\}\\ & \ge \mathrm{i}\mathrm{n}\mathrm{f}\{x\,:\,\mathrm{}\mathrm{}Q(X \gt x)\le \overline{\Lambda }(x)\}=\,\mathrm{VaR}_{\Lambda }^{-,Q}(X).\end{align*}

Thus,

(2.10) \begin{align} {\overline{\mathrm{VaR}}}_{\Lambda }^{-,{\mathcal{P}}_{\beta }}(X)\le \,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X),\quad X\in {L}^{0}.\end{align}

To prove the reverse inequality of (2.10), we choose a special $ {Q}_{0}\in \mathcal{P}_{\beta }$ such that $ \mathrm{}\mathrm{d}{Q}_{0}/\mathrm{}\mathrm{d}\mathbb{P}=(1/\beta ){1}_{\{{U}_{X} \gt 1-\beta \}}$ , where $ {U}_{X}\sim U\!\left(\mathrm{0,1}\right)$ such that $ X={F}_{X}^{-1}\!\left({U}_{X}\right)$ , a.s. Then

\begin{align*} \,\mathrm{VaR}_{\Lambda }^{-,{Q}_{0}}(X) & =\mathrm{i}\mathrm{n}\mathrm{f}\{x\,:\,{Q}_{0}(X\le x)\ge \Lambda (x)\}\\ & =\mathrm{i}\mathrm{n}\mathrm{f}\{x\,:\,\mathbb{P}(X\le x,{U}_{X} \gt 1-\beta )\ge \beta \Lambda (x)\}\\ & =\mathrm{i}\mathrm{n}\mathrm{f}\{x\,:\,\mathbb{P}(1-\beta \lt {U}_{X}\le {F}_{X}(x))\ge \beta \Lambda (x)\}\\ & =\mathrm{i}\mathrm{n}\mathrm{f}\{x\,:\,\mathrm{m}\mathrm{a}\mathrm{x}\{{F}_{X}(x)-1+\beta ,0\}\ge \beta \Lambda (x)\}.\end{align*}

Since $ \Lambda \gt 0$ , it follows that

(2.11) \begin{align} \,\mathrm{VaR}_{\Lambda }^{-,{Q}_{0}}(X)=\mathrm{i}\mathrm{n}\mathrm{f}\{x\,:\,{F}_{X}(x)\ge 1-\beta \overline{\Lambda }(x)\}=\,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X).\end{align}

Therefore, $ {\overline{\mathrm{VaR}}}_{\Lambda }^{-,{\mathcal{P}}_{\beta }}(X)\ge \,\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X)$ for $ X\in {L}^{0}$ . This proves (2.9).

Remark 1. Equation (2.11) need not hold without the assumption $ \Lambda \gt 0$ . A counterexample is as follows. Let $ \Lambda (x)={1}_{({-}\mathrm{\infty },2]}(x)$ , $ X\sim U\!\left(\mathrm{0,4}\right)$ under probability measure $ \mathbb{P}$ , and set $ \beta =1/4$ . Choose $ {Q}_{0}\in {\mathcal{P}}_{\beta }$ such that $ \mathrm{d}{Q}_{0}/\mathrm{d}\mathbb{P}=4\mathrm{}{1}_{\{{U}_{X} \gt 3/4\}}$ , that is, $ X\sim U\!\left(\mathrm{3,4}\right)$ under probability measure $ {Q}_{0}$ . Then $\mathrm{VaR}_{\Lambda }^{-,{Q}_{0}}(X)=2$ . However, $\mathrm{VaR}_{{\Lambda }_{\beta }}^{-}(X)=3 \gt {\mathrm{VaR}}_{\Lambda }^{-,{Q}_{0}}(X)$ . Thus, (2.11) does not hold in this case.
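The counterexample can be reproduced numerically from the distribution functions involved. The sketch below scans a grid for the infima (so the first value is accurate only up to the grid step; the infimum defining $\mathrm{VaR}_{\Lambda }^{-,{Q}_{0}}(X)$ is not attained):

```python
import numpy as np

F_P  = lambda x: min(max(x / 4.0, 0.0), 1.0)   # X ~ U(0, 4) under P
F_Q0 = lambda x: min(max(x - 3.0, 0.0), 1.0)   # X ~ U(3, 4) under Q0
Lam  = lambda x: 1.0 if x <= 2 else 0.0        # Lambda = 1_{(-inf, 2]}
beta = 0.25
Lam_b = lambda x: 1.0 - beta * (1.0 - Lam(x))  # Lambda_beta = 1 - beta(1 - Lambda)

grid = np.linspace(0.0, 4.0, 4001)             # step 0.001

# VaR_Lambda^{-,Q0}(X): first grid point with F_Q0 >= Lambda. The infimum
# is 2 (not attained), since F_Q0 = 0 = Lambda just to the right of 2.
var_q0 = next(x for x in grid if F_Q0(x) >= Lam(x))
# VaR_{Lambda_beta}^-(X): first grid point with F_P >= Lambda_beta, which is 3.
var_lb = next(x for x in grid if F_P(x) >= Lam_b(x))

print(var_q0, var_lb)   # approximately 2 and 3
```

The strict gap between the two values is exactly the failure of (2.11) described in the remark.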

Further properties of Lambda VaRs under model uncertainty induced by Wasserstein metrics can be found in Xia [Reference Xia29].

3. Inf-convolution of real functions

In order to study the inf-convolution of Lambda VaRs, we introduce the following inf-convolution of real functions. We restrict attention to bounded, decreasing functions.

Definition 2. Let $ {\Lambda }_{i}\,:\,\mathbb{R}\to \mathbb{R}$ be a bounded and decreasing function for each $ i\in [m]$ . The inf-convolution of $ {\Lambda }_{1},\dots ,{\Lambda }_{m}$ is denoted by $ \stackrel{m}{\underset{i=1}{\oslash }}{\Lambda }_{i}(y)$ , defined as

(3.1) \begin{align} \stackrel{m}{\underset{i=1}{\oslash }}\!{\Lambda }_{i}(y)\,:\!=\,\underset{{y}_{1},\dots ,{y}_{m}\in \mathbb{R},\sum _{i=1}^{m} {y}_{i}=y}{\mathrm{i}\mathrm{n}\mathrm{f}}\!\left\{1-\sum _{i=1}^{m} \overline{{\Lambda }_{i}}\!\left({y}_{i}\right)\right\}.\end{align}

Throughout, we denote $ {\Lambda }^{\mathrm{*}}\left(y\right)=\stackrel{m}{\underset{i=1}{\oslash }}{\Lambda }_{i}(y)$ .

It is easy to see that $ {\Lambda }^{\mathrm{*}}(y)$ is also decreasing and that

(3.2) \begin{align} \overline{{\Lambda }^{\mathrm{*}}}(y)=\overline{\stackrel{m}{\underset{i=1}{\oslash }}{\Lambda }_{i}}(y)={\mathrm{sup}}_{{y}_{1},\dots ,{y}_{m}\in \mathbb{R},\sum _{i=1}^{m} {y}_{i}=y}\sum _{i=1}^{m} \overline{{\Lambda }_{i}}\!\left({y}_{i}\right).\end{align}

That is, $ \overline{{\Lambda }^{\mathrm{*}}}$ is the sup-convolution of $ \overline{{\Lambda }_{1}},\dots ,\overline{{\Lambda }_{m}}$ . The next proposition records the simple fact that the inf-convolution of $ m$ functions can be obtained by repeated application of the inf-convolution of two functions. In the expression $ {\Lambda }_{1}\oslash {\Lambda }_{2}\oslash \cdots \oslash {\Lambda }_{m}$ below, the convention is to perform the operations $ \oslash $ from left to right.

Proposition 5. Let $ {\Lambda }_{i}\,:\,\mathbb{R}\to \mathbb{R}$ be bounded and decreasing for each $ i\in [m]$ . For any $ y\in \mathbb{R}$ , we have $ {\oslash }_{i=1}^{m}{\Lambda }_{i}(y)={\Lambda }_{1}\oslash {\Lambda }_{2}\oslash \cdots \oslash {\Lambda }_{m}(y)$ , where $ \Lambda_1\oslash\Lambda_2(y)={\oslash }_{i=1}^{2}{\Lambda }_{i}(y)$ .
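Definition 2, via the sup-convolution form (3.2), can be explored numerically. The sketch below evaluates $ \overline{{\Lambda }^{\mathrm{*}}}$ on a grid for two two-level functions of our own choosing; because the functions are steps whose common breakpoint 0 lies on the grid, the grid maximum coincides with the hand computation.

```python
import numpy as np

# Two two-level probability/loss functions (decreasing, right-continuous):
Lam1 = lambda x: 0.9 if x < 0 else 0.8     # bar(Lam1): 0.1 then 0.2
Lam2 = lambda x: 0.95 if x < 0 else 0.85   # bar(Lam2): 0.05 then 0.15

grid = np.arange(-10.0, 10.5, 0.5)         # the breakpoint 0 lies on the grid

def bar_conv(y):
    # sup-convolution form (3.2):
    # bar(Lam*)(y) = sup over y1 + y2 = y of bar(Lam1)(y1) + bar(Lam2)(y2)
    return max((1 - Lam1(y1)) + (1 - Lam2(y - y1)) for y1 in grid)

# Hand computation: for y >= 0 take y1, y2 >= 0, giving 0.2 + 0.15 = 0.35;
# for y < 0 one argument must be negative, giving 0.2 + 0.05 = 0.25.
print(bar_conv(1.0), bar_conv(-1.0))
```

Equivalently, $ {\Lambda }^{\mathrm{*}}$ takes the values $ 0.65$ for $ y\ge 0$ and $ 0.75$ for $ y \lt 0$ : the inf-convolution is again a two-level decreasing function, with its breakpoint at the sum of the individual breakpoints.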

Several further properties of the inf-convolution of real functions are listed in the following propositions, whose proofs are presented in Appendix B. The first, Proposition 6, gives the expressions of $ {\Lambda }^{\mathrm{*}}$ at positive and negative infinity and will be used repeatedly to prove other results in this paper. We denote $ \Lambda ({+}\mathrm{\infty })={\mathrm{l}\mathrm{i}\mathrm{m}}_{x\uparrow \mathrm{\infty }}\Lambda (x)$ and $ \Lambda ({-}\mathrm{\infty })={\mathrm{l}\mathrm{i}\mathrm{m}}_{x\downarrow -\mathrm{\infty }}\Lambda (x)$ for any decreasing function $ \Lambda $ .

Proposition 6. Let $ {\Lambda }_{i}\,:\,\mathbb{R}\to \mathbb{R}$ be bounded and decreasing for $ i\in [m]$ . Then

\begin{align*} {\Lambda }^{\mathrm{*}}({-}\mathrm{\infty }) & =\underset{1\le i\le m}{\mathrm{min}}\bigg(1-\overline{{\Lambda }_{i}}({-}\mathrm{\infty })-\sum _{j\ne i} \overline{{\Lambda }_{j}}({+}\mathrm{\infty })\bigg),\\ {\Lambda }^{\mathrm{*}}({+}\mathrm{\infty }) & =1-\sum _{i=1}^{m} \overline{{\Lambda }_{i}}({+}\mathrm{\infty }).\end{align*}

Proposition 7. Let $ {\Lambda }_{i}\,:\,\mathbb{R}\to \mathbb{R}$ be bounded, right-continuous and decreasing for $ i\in [m]$ . For any $ y\in \mathbb{R}$ , $ {\Lambda }^{*}(y)$ has either one of the following properties:

  1. (P1) There exists $ ({y}_{1},\dots ,{y}_{m})\in {\mathbb{R}}^{m}$ such that $ {\sum }_{i=1}^{m} {y}_{i}=y$ and $ \overline{{\Lambda }^{\mathrm{*}}}(y)={\sum }_{i=1}^{m} \overline{{\Lambda }_{i}}\!\left({y}_{i}\right)$ .

  2. (P2) There exists a sequence $ \{({y}_{1n},\dots ,{y}_{mn})\}_{n\in \mathbb{N}}$ such that

    \begin{align*} \sum _{i=1}^{m} {y}_{in}=y\ \ and\ \ \sum _{i=1}^{m} \overline{{\Lambda }_{i}}\!\left({y}_{in}\right)\to \overline{{\Lambda }^{\mathrm{*}}}(y)\ as\ n\to \mathrm{\infty },\end{align*}
    where $\{(y_{1n}, \dots, y_{mn})\}_{n\in{\mathbb{N}}}$ does not have a cluster point in ${\mathbb{R}}^m$ . In this case, $\overline {{{\rm{\Lambda }}^{\rm{*}}}} (y) = \overline {{{\rm{\Lambda }}^{\rm{*}}}} ({-}\infty)$ , and $\sum_{i=1}^m\overline{{\Lambda}_i} (y_i) < \overline{{\Lambda}^{\ast}} (y)$ whenever $\sum_{i=1}^m y_i=y$ .

Furthermore, if ${{\rm{\Lambda }}^{\rm{*}}}\!\left( {{y_0}} \right)$ has property $(\mathrm{P}_2)$ , then so does ${{\rm{\Lambda }}^{\rm{*}}}(x)$ for any $x \lt {y_0}$ .

The next proposition gives sufficient conditions on $\left\{ {{{\rm{\Lambda }}_i}} \right\}$ under which ${{\rm{\Lambda }}^{\rm{*}}}$ is right-continuous or continuous.

Proposition 8. (Continuity.) Let ${\Lambda}_i\,:\, {\mathbb{R}}\to{\mathbb{R}}$ be bounded and decreasing for $i \in [m]$ .

  1. (1) If ${{\rm{\Lambda }}_i}$ is continuous for some $i$ , then so is ${{\rm{\Lambda }}^{\rm{*}}}$ .

  2. (2) If all ${{\rm{\Lambda }}_i}$ are right-continuous, then so is ${{\rm{\Lambda }}^{\rm{*}}}$ .

Proposition 9 gives a sufficient condition under which Property $(\mathrm{P}_1)$ holds. The condition is that the right tail of each ${{\rm{\Lambda }}_i}$ is a constant. The special case of ${{\rm{\Lambda }}^{\rm{*}}}$ being constant is investigated in Proposition 10.

Proposition 9. Let ${\Lambda _i}$ be bounded, right-continuous and decreasing for $i \in [m]$ . If, for each $i \in [m]$ , ${\Lambda _i}\!\left( {{y_i}} \right) = {\Lambda _i}({+}\infty)$ for some $y_i\in\mathbb{R}$ , then for any $x\in \mathbb{R}$ , there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum^m_{i=1} x_i=x$ and $\overline{\Lambda ^{\ast}} (x) = \sum_{i=1}^m \overline{\Lambda _i}(x_i)$ .

Proposition 10. Let ${\Lambda _i}$ be bounded, right-continuous and decreasing for each $i \in [m]$ .

  1. (1) ${{\rm{\Lambda }}^{\rm{*}}}$ is constant if and only if at least one ${{\rm{\Lambda }}_i}$ is constant.

  2. (2) Let ${\Lambda}^{\ast}$ be a constant function. Then $\overline{{\Lambda}^{\ast}}(x)>\sum_{i=1}^m \overline{{\Lambda}_i}(x_i)$ for any $(x_1, \ldots, x_m)\in{\mathbb{R}}^m$ with $x=\sum^m_{i=1} x_i$ if and only if there exists ${\Lambda}_{i_0}$ such that ${\Lambda}_{i_0}(y)>{\Lambda}_{i_0}({+}{\infty})$ for any $y\in {\mathbb{R}}$ .

In view of property (B2) in Subsection 2.2, we always assume that all ${{\rm{\Lambda }}_i}$ are right-continuous in the next sections.

4. Inf-convolution of several Lambda VaRs

Theorem 1. Let $\Lambda_i\,:\, {\mathbb{R}} \rightarrow (0, 1]$ be decreasing for $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2). If ${\Lambda ^*}({-}\infty) \gt 0$ , then

(4.1) \begin{align}\mathop{\Box}\limits_{i=1}^{m} {\rm VaR}_{\Lambda_i}^{-}(X) \ge {\rm VaR}_{\Lambda^{*}}^{-}(X),\quad X \in {L^0}.\end{align}

The proof of Theorem 1 requires the following lemma, which was pointed out to us by an anonymous referee.

Lemma 4. For ${\lambda _i} \in [{0,1}]$ and $(y_1, \ldots, y_m)\in \mathbb{R}^m$ , we have

(4.2) \begin{equation} \inf_{(X_1,\dots, X_m)\in \mathbb{A}_m(X)}\!\left\{\sum_{i=1}^m \left({\rm VaR}^-_{\lambda_i}(X_i)\vee y_i\right)\right\}= \left(\mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\lambda_i}(X)\right)\vee\sum_{i=1}^m y_i,\quad X\in L^0.\end{equation}

Proof of Theorem 1. By Lemma 2, for $X \in {L^0}$ , we have

(4.3) \begin{align} \mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda _i}^-(X) &= \inf_{(X_1,\dots, X_m)\in\mathbb{A}_m(X)} \sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_i)\notag \\ &=\inf_{(X_1,\dots, X_m)\in\mathbb{A}_m(X)} \sum_{i=1}^m \inf_{y_i\in\mathbb{R}} \!\left\{{\rm VaR}^-_{\Lambda _i(y_i)}(X_i)\vee y_i\right\}\notag\\ &=\inf_{(y_1,\dots, y_m)\in\mathbb{R}^m}\ \inf_{(X_1,\dots,X_m) \in \mathbb{A}_m(X)}\!\left\{\sum_{i=1}^m {\rm VaR}^-_{\Lambda _i(y_i)}(X_i)\vee y_i\right\} \notag\\ &=\inf_{(y_1,\dots,y_m)\in\mathbb{R}^m}\!\left\{ \mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\Lambda_i(y_i)}(X)\vee\sum_{i=1}^m y_i\right\} \end{align}
(4.4) \begin{align}\qquad = \inf_{(y_1,\dots,y_m)\in\mathbb{R}^m}\!\left\{{\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda_i}(y_i)}(X)\vee\sum_{i=1}^m y_i\right\},\end{align}

where (4.3) follows from Lemma 4, and (4.4) follows from the fact

\begin{align*} \mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\Lambda _i(y_i)}(X) = {\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\end{align*}

by Corollary 2 in [Reference Embrechts, Liu and Wang12]. Here, we use the convention that ${\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)={\rm VaR}^-_{0}(X)=-\infty$ when $1-\sum_{i=1}^m\overline{\Lambda _i}(y_i) <0$ . Furthermore, by the definition of ${{\rm{\Lambda }}^{\rm{*}}}$ , we have

(4.5) \begin{align}& \inf_{(y_1, \dots, y_m)\in\mathbb{R}^m} \!\left\{{\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda_i}(y_i)}(X)\vee \sum_{i=1}^m y_i \right\} \notag\\ & \hskip 1.5cm =\inf_{y\in\mathbb{R}}\ \inf_{(y_1, \dots, y_m)\in\mathbb{R}^m, \sum_{i=1 }^m y_i=y} \!\left\{{\rm VaR}^-_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee y\right\}\notag \\ & \hskip 1.5cm \ge \inf_{y\in\mathbb{R}}\!\left\{{\rm VaR}^-_{\Lambda ^{\ast}(y)}(X)\vee y\right\} \end{align}
(4.6) \begin{align} = {\rm VaR}^-_{\Lambda^{\ast}}(X),\qquad\qquad\qquad\qquad\quad\qquad\ \ \end{align}

where (4.5) holds since $\Lambda ^{\ast}(y)\le 1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)$ for any $(y_1,\dots, y_m)\in\mathbb{R}^m$ with $\sum_{i=1}^m y_i= y$ , and (4.6) follows from Lemma 2.

The equality in (4.1) of Theorem 1 does not hold without further assumptions, as shown by the following counterexample. We will investigate sufficient conditions on ${{\rm{\Lambda }}^{\rm{*}}}$ in Theorems 4 and 5, under which the equality in (4.1) is true.

Example 1. Let $X \in {L^0}$ , with ${\mathbb{P}}(X=0)={\mathbb{P}}(X=1)=1/2$ , and let ${{\rm{\Lambda }}_1}(x) \equiv 3/4$ and

\begin{align*}{{\rm{\Lambda }}_2}(x) = \frac{1}{{4\pi }}\left( {\frac{\pi }{2} - {\rm{arctan}}(x)} \right) + \frac{3}{4}.\end{align*}

By Proposition 10, it follows that ${{\rm{\Lambda }}^{\rm{*}}} \equiv 1/2$ , and $\overline {{{\rm{\Lambda }}^{\rm{*}}}} \left( {x + y} \right) \gt \overline {{{\rm{\Lambda }}_1}} (x) + \overline {{{\rm{\Lambda }}_2}} (y)$ for all $(x,y)\in\mathbb{R}^2$ . Thus, ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X) = 0$ . We claim that, for any $(X_1,X_2)\in \mathbb{A}_2(X)$ ,

\begin{align*}{\rm{VaR}}_{{{\rm{\Lambda }}_1}}^ -( {{X_1}}) + {\rm{VaR}}_{{{\rm{\Lambda }}_2}}^ - ({{X_2}}) \ge 1 \gt {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X).\end{align*}

In fact, if this is not true, there exists $(Y_1,Y_2)\in \mathbb{A}_2(X)$ such that ${y_1} + {y_2} \lt 1$ , where ${y_1} = {\rm VaR}^-_{\Lambda_1}(Y_1)$ and ${y_2} = {\rm VaR}^-_{\Lambda_2}(Y_2)$ . By Lemma 1, we have $ {\mathbb{P}}(Y_1>y_1)\le\overline{\Lambda _1}(y_1)$ and ${\mathbb{P}}(Y_2>y_2) \le\overline{\Lambda _2}(y_2)$ . Thus,

\begin{align*} \frac {1}{2}\le{\mathbb{P}}(X>y_1+y_2)\le\sum^2_{i=1}{\mathbb{P}}(Y_i>y_i)\le\sum^2_{i=1}\overline{\Lambda _i}(y_i) < \frac {1}{2},\end{align*}

which is a contradiction.
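The strict gap in Example 1 is easy to check numerically. The sketch below (helper names are ours; this is only a sanity check, not part of the proof) verifies that $\overline{\Lambda_1}(x)+\overline{\Lambda_2}(y)$ stays strictly below $\overline{\Lambda^{\ast}}(x+y)=1/2$ on a grid, while the gap vanishes as $y\to+\infty$:

```python
import math

# bar(Lambda) denotes 1 - Lambda, as in the text
def bar_Lambda1(x):
    return 1 - 3/4                      # Lambda_1 ≡ 3/4

def bar_Lambda2(y):
    # 1 - Lambda_2(y) with Lambda_2(y) = (1/(4*pi))*(pi/2 - arctan(y)) + 3/4
    return 1 - ((math.pi/2 - math.atan(y)) / (4*math.pi) + 3/4)

# The sum of the two tails never reaches bar(Lambda^*) = 1/2 on a grid ...
ys = [k/10 - 50 for k in range(1001)]
gaps = [0.5 - (bar_Lambda1(0.0) + bar_Lambda2(y)) for y in ys]
assert min(gaps) > 0

# ... yet the gap vanishes as y -> +infinity, so Lambda^* ≡ 1/2
assert 0.5 - (bar_Lambda1(0.0) + bar_Lambda2(1e9)) < 1e-6
```

Since $\overline{\Lambda_1}$ is constant, only the second argument needs to be scanned; the supremum $1/2$ is approached but never attained, which is exactly why no finite split $(y_1,y_2)$ can absorb the tail mass ${\mathbb{P}}(X>y_1+y_2)=1/2$.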

Theorem 2. Let ${\Lambda}_i\,:\, {\mathbb{R}} \rightarrow (0, 1)$ be decreasing for $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2). If ${\Lambda ^*}({-}\infty) \gt 0$ , then

(4.7) \begin{align}\mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda_i}^+(X) = {\rm VaR}_{\Lambda^{\ast}}^+(X), \quad X \in L^0.\end{align}

The proof of Theorem 2 requires the following lemma, whose proof is similar to that of Lemma 4, using the cash invariance of VaR and Theorem 1 in [Reference Liu, Mao, Wang and Wei21].

Lemma 5. Let ${\lambda _i} \in [{0,1}]$ and ${\kappa _i} \in \left\{ { - , + } \right\}$ for $i \in [m]$ . For any $(y_1, \ldots, y_m)\in \mathbb{R}^m$ , we have

(4.8) \begin{align}\inf_{(X_1,\dots, X_m)\in \mathbb{A}_m(X)}\!\left\{\sum_{i=1}^m {\rm VaR}^{\kappa_i}_{\lambda_i}(X_i)\vee y_i\right\}= \mathop{\Box}\limits_{i=1}^m {\rm VaR}^{\kappa_i}_{\lambda_i}(X)\vee\sum_{i=1}^m y_i,\quad X\in L^0.\end{align}

Here, the ${\kappa _i}$ and the ${\lambda _i}$ are chosen to avoid the appearance of ${\rm{VaR}}_0^ - \square{\rm{VaR}}_1^ + $ in (4.8).

Proof of Theorem 2. The proof is similar to that of Theorem 1. By Lemma 3, for $X \in {L^0}$ , we have

(4.9) \begin{align}\mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda _i}^+(X) &= \inf_{(X_1,\dots, X_m)\in\mathbb{A}_m(X)} \sum_{i=1}^m{\rm VaR}_{\Lambda _i}^+(X_i) \notag \\ & =\inf_{(X_1,\dots, X_m)\in\mathbb{A}_m(X)} \sum_{i=1}^m \inf_{y_i\in\mathbb{R}} \!\left\{{\rm VaR}^+_{\Lambda _i(y_i)}(X_i)\vee y_i\right\} \notag\\ &=\inf_{(y_1,\dots,y_m)\in\mathbb{R}^m}\ \inf_{(X_1,\dots, X_m)\in \mathbb{A}_m(X)}\!\left\{\sum_{i=1}^m {\rm VaR}^+_{\Lambda _i(y_i)}(X_i)\vee y_i\right\} \notag\\ &=\inf_{(y_1,\dots,y_m)\in\mathbb{R}^m}\ \left\{\mathop{\Box}\limits_{i=1}^m {\rm VaR}^+_{\Lambda _i(y_i)}(X)\vee\sum_{i=1}^m y_i \right\} \end{align}
(4.10) \begin{align} &\qquad =\inf_{(y_1,\dots,y_m)\in\mathbb{R}^m} \!\left\{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee\sum_{i=1}^m y_i \right\} ,\end{align}

where (4.9) follows from Lemma 5, and (4.10) follows from Theorem 1 of [Reference Liu, Mao, Wang and Wei21], which implies that

(4.11) \begin{align} \mathop{\Box}\limits_{i=1}^m {\rm VaR}^+_{\Lambda _i(y_i)}(X)={\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X), \quad X\in L^0,\end{align}

since ${{\rm{\Lambda }}_i}\!\left( {{y_i}} \right) \lt 1$ for $i \in [m]$ . Here we use the convention that ${\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X) =-\infty$ when $1-\sum_{i=1}^m\overline{\Lambda_i}(y_i)<0$ . Note that

(4.12) \begin{align} \inf_{(y_1,\dots,y_m)\in\mathbb{R}^m} & \left \{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee \sum_{i=1}^m y_i \right\} \notag\\ &=\inf_{y\in\mathbb{R}}\ \inf_{(y_1,\dots,y_m)\in\mathbb{R}^m, \sum_{i=1}^m y_i=y} \left\{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\vee y\right\} \notag\\ &= \inf_{y\in\mathbb{R}} \!\left\{{\rm VaR}^+_{\Lambda ^{\ast}(y)}(X)\vee y\right\} \end{align}
(4.13) \begin{align} \,\,= {\rm VaR}^+_{\Lambda^{\ast}}(X),\qquad\qquad\qquad\qquad\qquad\qquad\end{align}

where (4.13) is due to Lemma 3, and (4.12) follows since

(4.14) \begin{align} \inf\!\left\{{\rm VaR}^+_{1-\sum_{i=1}^m\overline{\Lambda _i}(y_i)}(X)\,:\,\ (y_1,\dots,y_m)\in\mathbb{R}^m, \sum_{i=1}^m y_i=y \right\} ={\rm VaR}^+_{\Lambda ^{\ast}(y)}(X).\end{align}

We now justify (4.14) in more detail. Denote by LHS the left-hand side of (4.14). Obviously, ${\rm LHS} \ge {\rm VaR}^+_{\Lambda^{\ast}(y)}(X)$ since ${\rm VaR}^+_\lambda$ is increasing in $\lambda$ and $1-\sum_{i=1}^m\overline{\Lambda_i}(y_i)\ge \Lambda^{\ast}(y)$ . On the other hand, note that ${\rm VaR}^+_\lambda$ is right-continuous in $\lambda$ . By (3.2), there exists a sequence $\{(y_{1n}, \ldots, y_{mn})\}_{n\in{\mathbb{N}}}$ satisfying $y=\sum^m_{k=1} y_{kn}$ and $1-\sum_{i=1}^m\overline{\Lambda_i}(y_{in}) \searrow \Lambda^{\ast}(y)$ as $n \to \infty$ . Thus, the lower bound ${\rm VaR}^+_{\Lambda^{\ast}(y)}(X)$ is attained by LHS in the limit. Therefore, (4.14) is true.

It should be pointed out that (4.14) does not hold for ${\rm VaR}^-$ because ${\rm VaR}^-_\lambda$ is left-continuous but not right-continuous in $\lambda$ . This is why the equality in (4.1) cannot be expected without additional conditions. In Theorem 2, it is required that $\Lambda_i \lt 1$ for all $i \in [m]$ . If $\Lambda_{i_0} \equiv 1$ and $\Lambda_{j_0} \lt 1$ for some $i_0, j_0 \in [m]$ , then $\Box_{i=1}^m {\rm VaR}^+_{{\Lambda}_i(y_i)}(X) =+{\infty}> {\rm VaR}^+_{1-\sum_{i=1}^m\overline{{\Lambda}_i}(y_i)}(X)$ for $X \in {L^0}$ , violating (4.11), and hence (4.7) does not hold.

An explicit formula of the inf-convolution is also obtained in Theorem 3 for the case of a mixed collection of left and right Lambda VaRs. Its proof can be found in Appendix C.

Theorem 3. Let ${\Lambda}_i\,:\, {\mathbb{R}} \rightarrow (0, 1]$ be decreasing and ${\kappa _i} \in \left\{ { - , + } \right\}$ for $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2), with ${\Lambda ^*}({-}\infty) \gt 0$ . If ${\kappa _i} = + $ for at least one $i$ , and ${\Lambda _j} \lt 1$ whenever ${\kappa _j} = + $ , then

(4.15) \begin{align}\mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda_i}^{\kappa_i}(X) = {\rm VaR}_{\Lambda^{\ast}}^+(X), \quad X \in L^0.\end{align}

As a direct consequence of Theorems 1, 2 and 3, we obtain the following inf-convolution formulas for ordinary VaRs:

Corollary 1. (1) [Reference Embrechts, Liu and Wang12, Corollary 2] For ${\lambda _1},{\lambda _2} \in [{0,1}]$ such that $\lambda = {\lambda _1} + {\lambda _2} - 1 \gt 0$ , we have

\begin{align*}{\rm{VaR}}_{{\lambda _1}}^ - \square{\rm{VaR}}_{{\lambda _2}}^ - (X) = {\rm{VaR}}_\lambda ^ - (X),{\rm{\;\;\;\;}}X \in {L^0}.\end{align*}

(2) [Reference Liu, Mao, Wang and Wei21, Theorem 1] For ${\lambda _1},{\lambda _2} \in [{0,1})$ such that $\lambda = {\lambda _1} + {\lambda _2} - 1 \ge 0$ , we have

\begin{align*}{\rm{VaR}}_{{\lambda _1}}^ + \square{\rm{VaR}}_{{\lambda _2}}^ + (X) = {\rm{VaR}}_\lambda ^ + (X),{\rm{\;\;\;\;}}X \in {L^0}.\end{align*}

(3) [Reference Liu, Mao, Wang and Wei21, Theorem 1] For ${\lambda _1} \in [{0,1}],{\lambda _2} \in [{0,1})$ such that $\lambda = {\lambda _1} + {\lambda _2} - 1 \ge 0$ , we have

\begin{align*}{\rm{VaR}}_{{\lambda _1}}^ - \square{\rm{VaR}}_{{\lambda _2}}^ + (X) = {\rm{VaR}}_\lambda ^ + (X),{\rm{\;\;\;\;}}X \in {L^0}.\end{align*}
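For readers who wish to experiment, the first identity of Corollary 1 can be checked by simulation: split the tail of $X$ between the two agents in proportion to their tail budgets $1-\lambda_1$ and $1-\lambda_2$ , in the spirit of the constructions of Section 5. The sketch below uses our own helper names and conventions (the empirical left quantile at level $\lambda$ plays the role of ${\rm VaR}^-_\lambda$ ); it is an illustration, not a proof:

```python
import math
import random

random.seed(0)
n = 200_000
lam1, lam2 = 0.9, 0.95
lam = lam1 + lam2 - 1                     # = 0.85 > 0

xs = [random.random() for _ in range(n)]  # X ~ Uniform(0, 1)

def var_minus(sample, level):
    # empirical left quantile: inf{x : P(X <= x) >= level}
    s = sorted(sample)
    return s[max(math.ceil(level * len(s)) - 1, 0)]

x0 = var_minus(xs, lam)                   # ≈ lam for Uniform(0, 1)
x1, x2 = 0.5, x0 - 0.5                    # any split of x0 works here

# give agent 1 a slice C1 of the tail {X > x0} with P(C1) = 1 - lam1;
# agent 2 absorbs the remaining exceedances
budget = round((1 - lam1) * n)
X1, X2, used = [], [], 0
for x in xs:
    if x > x0 and used < budget:
        X1.append(x1 + (x - x0)); used += 1
    else:
        X1.append(x1)
    X2.append(x - X1[-1])

total = var_minus(X1, lam1) + var_minus(X2, lam2)
assert abs(total - var_minus(xs, lam)) < 0.01
```

With this split, agent 1 exceeds $x_1$ with probability exactly $1-\lambda_1$ and agent 2 exceeds $x_2$ with probability about $1-\lambda_2$ , so the sum of the two VaRs matches ${\rm VaR}^-_{\lambda}(X)$ up to Monte Carlo error.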

Theorems 1, 2 and 3 are established under the assumption $\Lambda^{\ast}({-}\infty) \gt 0$ . At the end of this section, we present further results on the inf-convolution of Lambda VaRs under the assumption $\Lambda^{\ast}({-}\infty) \le 0$ . All proofs are postponed to Appendix C. The proofs of Propositions 11, 12 and 13 are based on Lemmas 6 and 7.

Proposition 11. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow [0,1]$ be decreasing for each $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2) with ${\Lambda ^*}({-}\infty) \le 0$ .

  1. (1) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \lt 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) = - \infty $ for $X \in {L^0}$ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) = 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) = {\rm{min}}\!\left\{ {{\rm{sup}}\,L,{\rm{ess\mbox{-}inf}}(X)} \right\},$ where

    \begin{align*}L\,:\!=\, \left\{x\in\mathbb{R}\,:\,\Lambda^{\ast}(x)=0,\ \nexists\, (x_1,\dots,x_m)\in\mathbb{R}^m \text{ such that } \sum_{i=1}^m x_i = x \text{ and } \sum_{i=1}^m\overline{\Lambda_i}(x_i)=1\right\}.\end{align*}

Proposition 12. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow [0,1)$ be decreasing for each $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2) with ${\Lambda ^*}({-}\infty) \le 0$ .

  1. (1) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \lt 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) = - \infty $ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) = 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) = {\rm{min}}\!\left\{ {{\rm{sup}}\,T,{\rm{ess\mbox{-}inf}}(X)} \right\}$ , where $T =\{x\in\mathbb{R}\,:\, \Lambda ^{\ast}(x) = 0\}$ .

Proposition 13. Let ${\Lambda}_i\,:\, {\mathbb{R}} \rightarrow [0, 1]$ be decreasing and ${\kappa _i} \in \left\{ { - , + } \right\}$ for $i \in [m]$ , and let ${\Lambda ^*}$ be defined by (3.2) with ${\Lambda ^*}({-}\infty) \le 0$ . Assume that ${\kappa _i} = + $ for at least one $i$ , and ${\Lambda _j} \lt 1$ whenever ${\kappa _j} = + $ .

  1. (1) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \lt 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^{{\kappa _i}}(X) = - \infty $ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) = 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^{{\kappa _i}}(X) = {\rm{min}}\!\left\{ {{\rm{sup}}\,T,{\rm{ess\mbox{-}inf}}(X)} \right\}$ , where $T =\{x\in{\mathbb{R}}\,:\, {\Lambda}^{\ast}(x) = 0\}$ .

Lemma 6. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow [0,1]$ be decreasing for each $i \in [m]$ , and let $X \in {L^0}$ .

  1. (1) If $X \ge {x_0}$ , a.s., with ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) = 0$ , and if there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i= x_0$ and $\sum_{i=1}^m\overline{\Lambda _i}(x_i)=1$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) \ge {x_0}$ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \le 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - (X) \le {\rm{ess\mbox{-}inf}}(X)$ .

Lemma 7. Let ${\Lambda}_i\,:\, {\mathbb{R}} \to [0, 1)$ be decreasing for $i \in [m]$ , and let $X \in {L^0}$ .

  1. (1) If $X\ge x_0\in{\mathbb{R}}$ , a.s., and ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) = 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) \ge {x_0}$ .

  2. (2) If ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \le 0$ , then $\square_{i = 1}^m{\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ + (X) \le {\rm{ess\mbox{-}inf}}(X)$ .

5. Optimal risk sharing for Lambda VaRs

From the proof of Theorem 1 and Example 1, it is known that whether ${{\rm{\Lambda }}^{\rm{*}}}$ satisfies property $(\mathrm{P}_1)$ in Proposition 7 plays an important role in establishing the equality of (4.1). In this section, we consider the case ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \gt 0$ , and study optimal allocations of inf-convolution for several Lambda VaRs according to whether ${{\rm{\Lambda }}^{\rm{*}}}$ satisfies $(\mathrm{P}_1)$ or $(\mathrm{P}_2)$ in Proposition 7.

5.1. Left Lambda VaRs

Theorem 4. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow (0, 1]$ be decreasing for $i \in [m]$ with ${\Lambda ^*}({-}\infty) \gt 0$ . For any $X \in {L^0}$ , denote ${x_0} = \mathrm{VaR}_{{\Lambda ^*}}^ - (X)$ . If there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda_i}(x_i)$ , then

(5.1) \begin{align}\mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda_i}^-(X) = {\rm VaR}_{\Lambda^{\ast}}^-(X).\end{align}

Moreover, there exists an optimal allocation $(X_1,\dots,X_m)\in \mathbb{A}_m(X)$ satisfying ${x_i} = {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^ - ({X_i})$ for $i \in [m]$ .

Proof. Note that $x_0={\rm VaR}^-_{{\Lambda}^{\ast}}(X)\in{\mathbb{R}}$ since ${{\rm{\Lambda }}^{\rm{*}}}({-}\infty) \gt 0$ . First, assume that $\overline {{{\rm{\Lambda }}^{\rm{*}}}} ({{x_0}}) = 0$ . By the definition of ${{\rm{\Lambda }}^{\rm{*}}}$ , ${{\rm{\Lambda }}_i} \equiv 1$ for all $i \in [m]$ and, hence, ${{\rm{\Lambda }}^{\rm{*}}} \equiv 1$ . In view of Theorem 1, we conclude that (5.1) holds and $\left( {X,0, \ldots ,0} \right)$ is an optimal allocation of $X$ .

Next, consider the case $\Lambda^{\ast}(x_0) \lt 1$ . We will construct an optimal allocation of $X$ directly. Note that $\{X \lt x_0\}$ , $\{X = x_0\}$ and $\{X \gt x_0\}$ constitute a partition of $\Omega$ . Construct an allocation $\boldsymbol{X}\in \mathbb{A}_m(X)$ as follows. On the set $\{X \lt x_0\}$ , define $X_k = x_k$ for $k \in [m-1]$ and $X_m= X-\sum_{i=1}^{m-1}x_i$ . On the set $\{X = x_0\}$ , define $X_j = x_j$ for $j \in [m]$ . On the set $\{X \gt x_0\}$ , let $\{C_1, \ldots, C_m\}$ be a partition of $\{X \gt x_0\}$ satisfying

\begin{align*} {\mathbb{P}}(C_j)={\mathbb{P}}(X>x_0)\cdot \frac{\overline{\Lambda _j}(x_j)}{\sum_{i=1}^m \overline{\Lambda _i}(x_i)},\quad j\in [m]. \end{align*}

Then, define $X_j = X - x_0 + x_j$ on $C_j$ and $X_j = x_j$ on $\{X \gt x_0\}\backslash C_j$ for $j \in [m]$ . Therefore, $\boldsymbol{{X}}\in \mathbb{A}_m(X)$ has the following representation:

(5.2) \begin{align}X_k &= x_k+(X-x_0)\,1_{C_k},\quad k\in [m-1], \nonumber\\ X_m &= X-\sum_{i=1}^{m-1}X_i = x_m + (X-x_0)\, 1_{C_m} + (X-x_0)\, 1_{\{X\le x_0\}}.\end{align}

By Lemma 1 and Proposition 8, ${x_0} = {\rm VaR}^-_{\Lambda^{\ast}}(X)$ implies that ${\mathbb{P}}(X>x_0)\le \overline{\Lambda^{\ast}}(x_0)$ . Also, by construction, it follows that

(5.3) \begin{align}{\mathbb{P}}(X_j > x_j) ={\mathbb{P}}(C_j)=\overline{\Lambda _j}(x_j) \cdot \frac{{\mathbb{P}}(X>x_0)}{\overline{\Lambda ^{\ast}}(x_0)}\le \overline{\Lambda _j}(x_j),\quad j\in [m],\end{align}

implying ${\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ - ( {{X_j}}) \le {x_j}$ . Thus, $\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_i) \le x_0 = {\rm VaR}_{\Lambda ^{\ast}}^-(X)$ . By Theorem 1, we conclude that (5.1) holds and ${\boldsymbol{{X}}}$ is an optimal allocation of $X$ .
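The construction (5.2) can be illustrated numerically. In the sketch below we take $m=2$ , $\Lambda_1$ a step function that attains its limit (so that the split condition of Theorem 4 holds, with $\Lambda^{\ast}\equiv 0.75$ , $x_0=0.75$ , $x_1=0$ , $x_2=0.75$ ) and $\Lambda_2$ constant; the grid-search approximation of the Lambda VaR and all helper names are ours:

```python
import bisect
import random

random.seed(1)
n = 100_000
xs = [random.random() for _ in range(n)]         # X ~ Uniform(0, 1)

def bar1(x): return 1 - (0.9 if x < 0 else 0.8)  # bar(Lambda_1), step function
def bar2(x): return 1 - 0.95                     # bar(Lambda_2), constant
def bar_star(x): return 1 - 0.75                 # bar(Lambda^*): Lambda^* ≡ 0.75

def lambda_var_minus(sorted_sample, bar_Lam, lo=-1.0, hi=2.0, step=0.005):
    # crude grid approximation of inf{x : P(X > x) <= bar_Lam(x)}
    N = len(sorted_sample)
    for k in range(int((hi - lo) / step) + 1):
        x = lo + k * step
        if (N - bisect.bisect_right(sorted_sample, x)) / N <= bar_Lam(x):
            return x
    return hi

x0 = 0.75                  # VaR^-_{Lambda^*}(X) for Uniform(0, 1)
x1, x2 = 0.0, 0.75         # split with bar1(x1) + bar2(x2) = bar_star(x0)

# partition the tail {X > x0}: C1 receives the fraction bar1(x1)/bar_star(x0)
budget = round(n * 0.25 * (bar1(x1) / bar_star(x0)))
X1, X2, used = [], [], 0
for x in xs:
    if x > x0 and used < budget:
        X1.append(x1 + (x - x0)); used += 1
    else:
        X1.append(x1)
    X2.append(x - X1[-1])

v1 = lambda_var_minus(sorted(X1), bar1)    # ≈ x1
v2 = lambda_var_minus(sorted(X2), bar2)    # ≈ x2
assert abs(v1 + v2 - x0) < 0.03
```

Up to sampling and grid error, the two Lambda VaRs land at the split points $x_1$ and $x_2$ , and their sum recovers ${\rm VaR}^-_{\Lambda^{\ast}}(X)=0.75$ , as (5.3) predicts.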

Theorem 4 states that Property $(\mathrm{P}_1)$ in Proposition 7 is a sufficient condition for (5.1). In Theorem 5, we show that, under Property $(\mathrm{P}_2)$ in Proposition 7, ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X) = {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X)$ is a necessary and sufficient condition for (5.1) to hold. We will consider the inf-convolution of left Lambda VaRs in Theorem 10 when ${\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ - (X) \lt {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X)$ .

Theorem 5. Let ${\Lambda}_i\,:\, {\mathbb{R}}\to (0, 1]$ be decreasing for $i\in [m]$ with $ {\Lambda}^{\ast}({-}{\infty})\gt 0$ . For any $ X\in L^0$ , denote $ x_0={\rm VaR}^-_{{\Lambda}^{\ast}}(X)$ . If there does not exist $ (x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $ \sum_{i=1}^mx_i=x_0$ and $ \overline{{\Lambda}^{\ast}}(x_0)= \sum_{i=1}^m \overline{{\Lambda}_i}(x_i)$ , then (5.1) holds if and only if $ {\rm VaR}^-_{{\Lambda^\ast}}(X)={\rm VaR}^+_{{\Lambda^\ast}}(X)$ . Furthermore, under the condition $ {\rm VaR}^-_{{\Lambda}^{\ast}}(X)={\rm VaR}^+_{{\Lambda^\ast}}(X)$ : if $ {\mathbb{P}}(X \gt x_0) \lt \overline{{\Lambda}^{\ast}}(x_0)$ , an optimal allocation of $ X$ exists; if $ {\mathbb{P}}(X \gt x_0) =\overline{{\Lambda}^{\ast}}(x_0)$ , no optimal allocation of $X$ exists, while there exists a sequence of asymptotically optimal allocations.

Proof. Necessity: We prove it by contradiction. Assume on the contrary that ${\rm VaR}^-_{\Lambda^{\ast}}(X) \lt {\rm VaR}^+_{\Lambda^{\ast}}(X)$ . Under this assumption, it follows from (5.1) that there exist an allocation $(X_1,\dots,X_m)\in\mathbb{A}_m(X)$ and $(y_1,\dots,y_m)\in\mathbb{R}^m$ satisfying $\sum_{i=1}^my_i =y <{\rm VaR}^+_{\Lambda^{\ast}}(X)$ , where ${y_j} = {\rm VaR}^-_{\Lambda_j}(X_j)$ for $j \in [m]$ . Note that

\begin{align*} \left\{X\ge{\rm VaR}^+_{\Lambda ^{\ast}}(X) \right\} \subset \left \{X>\sum^m_{i=1} y_i\right \}\subset \bigcup_{i=1}^m\!\left\{X_i > y_i\right\}\!.\end{align*}

By Lemma 1, we have ${\mathbb{P}}(X_j > y_j) \le \overline{{\Lambda}_j}(y_j)$ for $j \in [m]$ . Hence we obtain that

(5.4) \begin{align}{\mathbb{P}}\big(X \ge {\rm VaR}^+_{\Lambda ^{\ast}}(X) \big) \le \sum_{i=1}^m{\mathbb{P}}(X_i > y_i)\le \sum^m_{i=1}\overline{\Lambda _i}(y_i).\end{align}

By Proposition 7, we have $\Lambda^{\ast}(y) = \Lambda^{\ast}(x_0) = \Lambda^{\ast}({-}\infty)$ and $\sum_{i=1}^m\overline{\Lambda _i}(y_i)<\overline{\Lambda ^{\ast}}(x_0)$ . On the other hand, note that ${\mathbb{P}}(X>x_0)\ge \overline{\Lambda ^{\ast}}(x_0)$ ; otherwise, if ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , then ${\rm VaR}^+_{\Lambda^{\ast}}(X) \le x_0$ , a contradiction. Therefore,

\begin{align*} {\mathbb{P}}\big(X\ge{\rm VaR}^+_{\Lambda ^{\ast}}(X) \big) &={\mathbb{P}}\big(X>{\rm VaR}^-_{\Lambda ^{\ast}}(X) \big)={\mathbb{P}}(X>x_0) \ge \overline{\Lambda ^{\ast}}(x_0)> \sum_{i=1}^m\overline{\Lambda _i}(y_i),\end{align*}

which contradicts (5.4). This proves the necessity.

Sufficiency: Suppose that ${\rm VaR}^-_{\Lambda ^{\ast}}(X)={\rm VaR}^+_{\Lambda ^{\ast}}(X)$ . First, we consider the case ${\mathbb{P}}(X > x_0) < \overline{\Lambda^{\ast}}(x_0)$ . In this case, there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^mx_i=x_0$ and $\sum_{i=1}^m \overline{\Lambda _i}(x_i)\in\left({\mathbb{P}}(X>x_0), \overline{\Lambda ^{\ast}}(x_0)\right)$ . Let $\boldsymbol{{X}}\in \mathbb{A}_m(X)$ be as defined by (5.2). Then, ${\mathbb{P}}(X_j>x_j)={\mathbb{P}}(C_j)< \overline{\Lambda _j}(x_j)$ for $j \in [m]$ , implying ${\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ + ( {{X_j}}) \le {x_j}$ for $j \in [m]$ . Thus,

\begin{align*} \mathop{\Box}\limits_{i=1}^m {\rm VaR}^-_{\Lambda _i}(X)\le \sum_{i=1}^m{\rm VaR}^-_{\Lambda _i}(X_i)\le \sum_{i=1}^mx_i = {\rm VaR}^-_{\Lambda ^{\ast}}(X).\end{align*}

This, together with Theorem 1, implies our desired statement (5.1). Moreover, ${\rm VaR}^-_{\Lambda_j}(X_j) = x_j$ for $j \in [m]$ , and ${\boldsymbol{{X}}}$ is an optimal allocation of $X$ .

Next, consider the case ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ . In this case, we show that no optimal allocation exists, but that there exists a sequence of allocations $(X_{1n},\dots,X_{mn})\in \mathbb{A}_m(X)$ , $n\in {\mathbb{N}}$ , such that $\sum_{i=1}^m{\rm VaR}^-_{\Lambda _i}(X_{in})\to x_0$ as $n\to\infty$ .

Assume on the contrary that there exists an optimal allocation of $X$ , say, ${\boldsymbol{{X}}} = \left( {{X_1}, \ldots ,{X_m}} \right)$ . Denote ${x_j}\,:\!=\, {\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ - ( {{X_j}})$ for $j \in [m]$ . Then we have $\sum_{i=1}^m x_i= x_0$ and

(5.5) \begin{align}\overline{\Lambda ^{\ast}}(x_0)={\mathbb{P}}(X>x_0) \le \sum_{i=1}^m{\mathbb{P}}(X_i>x_i)\le \sum_{i=1}^m\overline{\Lambda _i}(x_i).\end{align}

However, by Proposition 7, $\sum_{i=1}^m\overline{\Lambda _i}(x_i)<\overline{\Lambda ^{\ast}}(x_0)$ , which contradicts (5.5). Therefore, no optimal allocation exists. In order to find a sequence of admissible allocations of $X$ approaching the lower bound of the inf-convolution, we consider the following two cases.

Case 1: Suppose that ${\mathbb{P}}(X> x_0+{\epsilon})<{\mathbb{P}}(X>x_0)$ for any ${\epsilon}>0$ . Denote $\delta_n={\mathbb{P}}(X>x_0+1/n)$ . There exists a sequence $\{(x_{1n},\dots,x_{mn})\}_{n\in{\mathbb{N}}}$ such that $\sum_{i=1}^m x_{in}= x_0$ and $\sum_{i=1}^m\overline{{\Lambda}_i}(x_{in})\in \left(\delta_n,\overline{{\Lambda}^\ast}(x_0) \right)$ . In an atomless probability space, let $\{C_{1n},\dots, C_{mn}\}$ be a partition of $\left\{X >x_0+1/n \right\}$ , satisfying

\begin{align*} {\mathbb{P}}(C_{jn}) =\delta_n\frac{\overline{\Lambda _j}(x_{jn})}{\sum_{i=1}^m\overline{\Lambda _i}(x_{in})},\quad j\in [m].\end{align*}

Define

(5.6) \begin{align}X_{kn} = x_{kn} + \frac{1}{nm} + \Big(X - x_0 - \frac{1}{n}\Big)\, 1_{C_{kn}},\quad k \in [m-1],\end{align}

and $X_{mn} = X-\sum_{i=1}^{m-1} X_{in}$ . Then,

\begin{align*} {\mathbb{P}}\Big(X_{jn}> x_{jn}+\frac{1}{mn}\Big) = {\mathbb{P}}(C_{jn}) = \delta_n\cdot \frac{\overline{\Lambda _j}(x_{jn})}{ \sum_{i=1}^m\overline{\Lambda _i}(x_{in})} <\overline{\Lambda _j}(x_{jn}),\end{align*}

implying ${\rm{VaR}}_{{{\rm{\Lambda }}_j}}^ - ({X_{jn}}) \le {x_{jn}} + 1/\left( {mn} \right)$ . Thus,

(5.7) \begin{align}\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_{in}) \le \sum_{i=1}^m x_{in}+\frac{1}{n} = x_0 +\frac{1}{n}.\end{align}

Case 2: Suppose that ${\mathbb{P}}(X>x_0+{\epsilon}_0)={\mathbb{P}}(X>x_0)$ for some ${\epsilon _0} \gt 0$ . In this case, from ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , it follows that ${{\rm{\Lambda }}^{\rm{*}}}({{x_0}}) \gt {{\rm{\Lambda }}^{\rm{*}}}\!\left( {{x_0} + \epsilon } \right)$ for any $\epsilon \gt 0$ . By Proposition 7, there exists a sequence $\{(x_{1n},\ldots,x_{mn})\}_{n\in{\mathbb{N}}}$ such that $\sum_{i=1}^m x_{in} = x_0+1/{n}$ and $\sum_{i=1}^m \overline{\Lambda _i}(x_{in})= \overline{\Lambda ^{\ast}}(x_0+1/n)$ . Then,

\begin{align*} \overline{\Lambda _1}\Big(x_{1n}-\frac{1}{n}\Big) + \sum_{i=2}^m \overline{\Lambda _i}(x_{in}) \le \overline{\Lambda ^{\ast}}(x_0).\end{align*}

In an atomless probability space, let $\left( {{C_{1n}}, \ldots ,{C_{mn}}} \right)$ be a partition of the set $\{ X \gt {x_0}\} $ , satisfying

\begin{align*} {\mathbb{P}}(C_{1n})=\overline{\Lambda _1}(x_{1n})-\frac {1}{2}\bigg[\sum_{i=1}^m\overline{\Lambda _i}(x_{in})-\overline{\Lambda ^{\ast}}(x_0)\bigg] =\overline{\Lambda _1}(x_{1n})-\frac {1}{2}\bigg[\overline{\Lambda ^{\ast}}\Big(x_0+\frac {1}{n}\Big) -\overline{\Lambda ^{\ast}}(x_0)\bigg],\end{align*}

and

\begin{align*} {\mathbb{P}}(C_{kn}) =\left(\overline{\Lambda^{\ast}}(x_0) - {\mathbb{P}}(C_{1n})\right) \frac{\overline{\Lambda_k}(x_{kn})}{\sum_{i=2}^{m}\overline{\Lambda_i}(x_{in})}, \quad k=2, \ldots, m.\end{align*}

It is easy to see that ${\mathbb{P}}(C_{1n})+\sum_{i=2}^m\overline{\Lambda _i}(x_{in})> \overline{\Lambda ^{\ast}}(x_0)$ , which implies that

\begin{align*} {\mathbb{P}}(C_{1n})\in \left(\overline{\Lambda _1}\Big (x_{1n}-\frac {1}{n}\Big ), \overline{\Lambda _1}(x_{1n}) \right)\end{align*}

and ${\mathbb{P}}(C_{kn})\lt \overline{{\Lambda}_k}(x_{kn})$ for $k\in [m]$ . Construct a sequence of admissible allocations $(X_{1n},\dots,X_{mn})\in \mathbb{A}_m(X)$ as follows:

(5.8) \begin{align}X_{1n} & = x_{1n}-\frac{1}{n} + (X-x_0)\, 1_{C_{1n}},\nonumber\\ X_{jn} &= x_{jn} + (X-x_0)\, 1_{C_{jn}},\quad j=2, \ldots, m-1,\\ X_{mn} &= X-\sum_{i=1}^{m-1} X_{in}.\nonumber\end{align}

Note that ${\mathbb{P}}(X_{1n}> x_{1n}-1/n)= {\mathbb{P}}(C_{1n})<\overline{{\Lambda}_1}(x_{1n})$ and ${\mathbb{P}}(X_{kn}> x_{kn}) = {\mathbb{P}}(C_{kn}) < \overline{{\Lambda}_k}(x_{kn})$ for $k\ge 2$ . Hence, ${\mathrm{VaR}}_{{\Lambda}_i}^-(X_{in})\le x_{in}$ for $i\in [m]$ . Therefore,

(5.9) \begin{align}\sum_{i=1}^m{\rm VaR}_{\Lambda _i}^-(X_{in}) \le \sum_{i=1}^m x_{in}= x_0 + \frac{1}{n}.\end{align}

In view of Theorem 1, we conclude our desired statement from (5.7) and (5.9).

Corollary 2 in Embrechts et al. [Reference Embrechts, Liu and Wang12] is a special case of Theorem 4 with ${{\rm{\Lambda }}_i} \equiv {\lambda _i}$ for $i \in [m]$ satisfying ${\Lambda}^\ast\equiv \sum^m_{i=1}{\lambda}_i -m+1>0$ . Also, Proposition 9 gives a sufficient condition on the ${{\rm{\Lambda }}_i}$ under which property $(\mathrm{P}_1)$ of Proposition 7 holds. An immediate consequence of Theorem 4 is the following corollary.

Corollary 2. Let ${\Lambda}_i\,:\,{\mathbb{R}} \rightarrow (0, 1)$ be decreasing for $i \in [m]$ , with ${\Lambda ^*}({-}\infty) \gt 0$ . If for any $j \in [m]$ there exists $x_j\in{\mathbb{R}}$ such that ${\Lambda _j}\!\left( {{x_j}} \right) = {\Lambda _j}({+}\infty)$ , then $\mathop{\Box}\limits_{i=1}^m \mathrm{VaR}^-_{\Lambda_i}(X) = \mathrm{VaR}^-_{\Lambda^{\ast}}(X)$ , for which an optimal allocation exists.

5.2. Right Lambda VaRs

Theorem 6. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow (0,1)$ be decreasing for $i \in [m],$ with ${\Lambda ^*}({-}\infty) \gt 0$ . For any $X \in {L^0}$ , denote ${x_0}\,:\!=\, \mathrm{VaR}_{{\Lambda ^*}}^ + (X)$ . If there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda _i}(x_i)$ , then

(5.10) \begin{align}\mathop{\Box}\limits_{i=1}^m {\rm VaR}_{\Lambda_i}^+(X) = {\rm VaR}_{\Lambda^{\ast}}^+(X).\end{align}

Furthermore,

  1. (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , then an optimal allocation exists.

  2. (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , and ${\mathbb{P}}(X>x_0+{\epsilon})<{\mathbb{P}}(X>x_0)$ for any $\epsilon \gt 0$ , then an optimal allocation exists.

  3. (3) Suppose that ${\mathbb{P}}(X>x_0)=\overline{{\Lambda}^\ast}(x_0)$ and ${\mathbb{P}}(X>x_0+{\epsilon}_0)={\mathbb{P}}(X>x_0)$ for some ${\epsilon _0} \gt 0$ and that ${{\rm{\Lambda }}^{\rm{*}}}\!\left( {{x_0} + \epsilon } \right) \lt {{\rm{\Lambda }}^{\rm{*}}}({{x_0}})$ for any $\epsilon \gt 0$ .

    • If ${{\rm{\Lambda }}_j}\!\left( {{x_j} + \epsilon } \right) \lt {{\rm{\Lambda }}_j}\!\left( {{x_j}} \right)$ for any $\epsilon \gt 0$ and $j \in [m]$ , then an optimal allocation exists.

    • If, for any $(y_1,\dots,y_m)\in\mathbb{R}^m$ satisfying $\sum_{i=1}^m y_i=x_0$ and $\sum_{i=1}^m\overline{\Lambda _i}(y_i)= \overline{\Lambda ^{\ast}}(x_0)$ , there always exists some ${\tau _0} \gt 0$ such that ${{\rm{\Lambda }}_k}\!\left( {{y_k}} \right) = {{\rm{\Lambda }}_k}\!\left( {{y_k} + {\tau _0}} \right)$ for some $k \in [m]$ , then no optimal allocation exists.

Moreover, if an optimal allocation exists, then there exists $(X_1,\dots,X_m)\in \mathbb{A}_m(X)$ such that ${\mathrm{VaR}}_{{\Lambda}_i}^+(X_i)=x_i$ for $i\in [m]$ . If no optimal allocation exists, then there exists a sequence of allocations $(X_{1n},\dots, X_{mn})\in \mathbb{A}_m(X)$ such that ${\mathrm{VaR}}^{+}_{{\Lambda}_j}(X_{jn})\to x_j$ as $n\to{\infty}$ for $j\in [m]$ , and $\sum_{i=1}^m {\mathrm{VaR}}^+_{{\Lambda}_i}(X_{in})\to x_0$ .

Theorem 7. Let ${\Lambda}_i\,:\, {\mathbb{R}}\rightarrow (0,1)$ be decreasing for $i \in [m],$ with ${\Lambda ^*}({-}\infty) \gt 0$ . For any $X \in {L^0}$ , denote ${x_0}\,:\!=\, \mathrm{VaR}_{{\Lambda ^*}}^ + (X)$ . If there does not exist $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda _i}(x_i)$ , then (5.10) holds. Furthermore,

  1. (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , an optimal allocation exists.

  2. (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ , no optimal allocation exists, while there exists a sequence of asymptotically optimal allocations.

In Theorems 6 and 7, the range of the $\Lambda_i$ cannot be relaxed from $(0,1)$ to $(0,1]$ , as shown by the following counterexample.

Example 2. Let $\Lambda_1(x) = 1_{\{x \lt 2\}} + (4/5)\, 1_{\{x \ge 2\}}$ and $\Lambda_2(x) = (4/5)\, 1_{\{x \lt 0\}} + (1/2)\, 1_{\{x \ge 0\}}$ . From (3.2), it follows that $\Lambda^{\ast}(x) = (1/2)\, 1_{\{x \lt 2\}} + (3/10)\, 1_{\{x \ge 2\}}$ . Let $X$ be a $(0,1)$ -uniformly distributed random variable. Then ${\rm VaR}^+_{\Lambda^{\ast}}(X) = x_0 = 1/2$ , ${\mathbb{P}}(X>x_0)=\overline{{\Lambda}^\ast}(x_0)$ , and $\overline{\Lambda^{\ast}}(1/2) = \overline{\Lambda_1}(1/2) + \overline{\Lambda_2}(0)$ . If the conclusion of Theorem 6 held in this setting, there would exist $(X_1,X_2) \in \mathbb{A}_2(X)$ with ${\rm VaR}^+_{\Lambda_1}(X_1) = 1/2$ . However, from the definition of ${\rm VaR}^+_{\Lambda_1}$ , it follows that ${\rm VaR}^+_{\Lambda_1}(Y) \ge 2$ for any random variable $Y \in L^0$ . This is a contradiction. Thus, the conclusion of Theorem 6 fails in this case.
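The step function $\Lambda^{\ast}$ claimed in Example 2 can be confirmed by brute force, assuming (as in (3.2) for $m=2$ ) that $\Lambda^{\ast}(x)$ is the infimum of $\Lambda_1(x_1)+\Lambda_2(x-x_1)-1$ over all splits; the grid minimization and helper names below are ours:

```python
def Lam1(x): return 1.0 if x < 2 else 4/5
def Lam2(x): return 4/5 if x < 0 else 1/2

def Lam_star(x, step=0.25, width=50):
    # brute-force version of (3.2) for m = 2:
    # Lambda^*(x) = inf over x1 + x2 = x of Lam1(x1) + Lam2(x2) - 1
    return min(Lam1(-width + k*step) + Lam2(x - (-width + k*step)) - 1
               for k in range(int(2*width/step) + 1))

assert abs(Lam_star(1.9) - 1/2) < 1e-9   # Lambda^*(x) = 1/2 for x < 2
assert abs(Lam_star(2.0) - 3/10) < 1e-9  # Lambda^*(x) = 3/10 for x >= 2
```

For $x<2$ the best split keeps $x_1<2$ and $x-x_1\ge 0$ , giving $1+1/2-1=1/2$ ; for $x\ge 2$ the split $x_1\ge 2$ , $x-x_1\ge 0$ becomes feasible and gives $4/5+1/2-1=3/10$ , matching the step function in the example.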

5.3. Mixed Lambda VaRs

Theorem 8. Let ${\Lambda}_i\,:\, {\mathbb{R}}\to (0, 1]$ be decreasing for $i \in [m],$ with ${\Lambda ^*}({-}\infty) \gt 0$ , and let ${\kappa _i} \in \left\{ { - , + } \right\}$ for $i \in [m]$ such that $K\,:\!=\, \left\{ {j\,:\,{\kappa _j} = + ,j \in [m]} \right\} \ne \emptyset $ . Assume that ${\Lambda _j} \lt 1$ for $j \in K$ . For any $X \in {L^0}$ , denote ${x_0} = \mathrm{VaR}_{{\Lambda ^*}}^ + (X)$ . If there exists $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)= \sum_{i=1}^m \overline{\Lambda _i}(x_i)$ , then

(5.11) \begin{align}\mathop {\mathop \square\limits_{i = 1} }\limits^m {\rm{VaR}}_{{{\rm{\Lambda }}_i}}^{{\kappa _i}}(X) = {\rm{VaR}}_{{{\rm{\Lambda }}^{\rm{*}}}}^ + (X).\end{align}

Furthermore,

  1. (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$ , then an optimal allocation exists.

  (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$ and ${\mathbb{P}}(X>x_0+{\epsilon})<{\mathbb{P}}(X>x_0)$ for any $\epsilon \gt 0$, then an optimal allocation exists.

  (3) Suppose that ${\mathbb{P}}(X>x_0)=\overline{{\Lambda}^\ast}(x_0)$ and ${\mathbb{P}}(X>x_0+{\epsilon}_0)={\mathbb{P}}(X>x_0)$ for some ${\epsilon}_0 \gt 0$, and that ${\Lambda}^{*}(x_0+\epsilon) \lt {\Lambda}^{*}(x_0)$ for any $\epsilon \gt 0$.

    • If ${\Lambda}_j(x_j+\epsilon) \lt {\Lambda}_j(x_j)$ for any $\epsilon \gt 0$ and $j \in K$, then an optimal allocation exists.

    • If, for any $(y_1,\dots,y_m)\in\mathbb{R}^m$ satisfying $\sum_{i=1}^m y_i=x_0$ and $\sum_{i=1}^m\overline{\Lambda _i}(y_i)= \overline{\Lambda ^{\ast}}(x_0)$, there always exists some ${\tau}_0 \gt 0$ such that ${\Lambda}_k(y_k) = {\Lambda}_k(y_k + {\tau}_0)$ for some $k \in [m]$, then no optimal allocation exists.

Moreover, if an optimal allocation exists, then there exists $(X_1,\dots,X_m)\in \mathbb{A}_m(X)$ such that $\mathrm{VaR}_{{\Lambda}_i}^{\kappa_i}(X_i) = x_i$ for $i \in [m]$. If no optimal allocation exists, then there exists a sequence of allocations $(X_{1n},\dots, X_{mn})\in \mathbb{A}_m(X)$ such that $\mathrm{VaR}_{{\Lambda}_j}^{\kappa_j}(X_{jn}) \to x_j$ as $n \to \infty$ for $j \in [m]$ and $\sum_{i=1}^m \mathrm{VaR}^{\kappa_i}_{{\Lambda}_i}(X_{in})\to x_0$ as $n \to \infty$.
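The existence hypothesis of Theorem 8, namely that some split $(x_1,\dots,x_m)$ of $x_0$ attains $\overline{\Lambda^{*}}(x_0) = \sum_{i=1}^m \overline{\Lambda_i}(x_i)$, can be checked by a direct search when the $\Lambda_i$ are simple step functions. The sketch below uses two hypothetical step functions satisfying $\Lambda_i < 1$ and a candidate value $x_0$; all numerical choices are illustrative and not taken from the paper.

```python
import numpy as np

# Hypothetical decreasing step functions with values strictly below 1.
Lam1 = lambda x: np.where(x < 1.0, 0.8, 0.5)
Lam2 = lambda x: np.where(x < 0.0, 0.9, 0.6)

x0 = 0.4                              # candidate value of VaR^+_{Lambda*}(X)
x1 = np.linspace(-3.0, 3.0, 60_001)   # search grid for the split x0 = x1 + x2
vals = (1 - Lam1(x1)) + (1 - Lam2(x0 - x1))

best = vals.max()                     # sup of bar-Lambda_1(x1) + bar-Lambda_2(x2)
split = x1[vals.argmax()]             # one attaining split (need not be unique)
print(round(best, 6))                 # 0.6: the supremum is attained here
print(split, round(x0 - split, 6))
```

When the supremum over splits is attained, the hypothesis holds and the theorem yields an explicit optimal (or asymptotically optimal) allocation; when it is not attained, Theorem 9 applies instead.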

Theorem 9. Let the ${\Lambda _i}$ be the same as those in Theorem 8. For any $X \in {L^0}$ , denote ${x_0} = \mathrm{VaR}_{{\Lambda ^*}}^ + (X)$ . If there does not exist $(x_1,\dots,x_m)\in\mathbb{R}^m$ such that $\sum_{i=1}^m x_i=x_0$ and $\overline{\Lambda ^{\ast}}(x_0)=\sum_{i=1}^m \overline{\Lambda _i}(x_i)$ , then (5.11) holds. Furthermore,

  (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$, then an optimal allocation exists.

  (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$, then no optimal allocation exists, but there exists a sequence of asymptotically optimal allocations.

In Theorem 5, we established (5.1) under the assumption $\mathrm{VaR}_{{\Lambda}^{*}}^-(X) = \mathrm{VaR}_{{\Lambda}^{*}}^+(X)$ and $(\mathrm{P}_2)$ in Proposition 7. What, then, is the explicit formula for $\square_{i = 1}^m \mathrm{VaR}_{{\Lambda}_i}^-(X)$ under the assumption $\mathrm{VaR}_{{\Lambda}^{*}}^-(X) \lt \mathrm{VaR}_{{\Lambda}^{*}}^+(X)$ and $(\mathrm{P}_2)$? By an argument similar to that in the proof of Theorem 7, we obtain the next result.

Theorem 10. Let ${\Lambda}_i\,:\, {\mathbb{R}}\to (0, 1]$ be decreasing for $i\in [m]$ with ${\Lambda}^\ast({-}{\infty})>0$ . For any $X\in L^0$ , denote $x_0=\mathrm{VaR}^-_{{\Lambda}^\ast}(X)$ . Assume that there does not exist $(x_1,\dots,x_m)\in{\mathbb{R}}^m$ such that $\sum_{i=1}^mx_i=x_0$ and $\overline{{\Lambda}^\ast}(x_0)= \sum_{i=1}^m \overline{{\Lambda}_i}(x_i)$ . If $\mathrm{VaR}^-_{{\Lambda}^\ast}(X) < \mathrm{VaR}^+_{{\Lambda}^\ast}(X)$ , then

(5.12) \begin{align}\mathop{\square}\limits_{i=1}^{m} \mathrm{VaR}_{{\Lambda}_i}^{-}(X) = \mathrm{VaR}_{{\Lambda}^{*}}^{+}(X).\end{align}

Furthermore,

  (1) If ${\mathbb{P}}(X>x_0)<\overline{\Lambda ^{\ast}}(x_0)$, then an optimal allocation exists.

  (2) If ${\mathbb{P}}(X>x_0)=\overline{\Lambda ^{\ast}}(x_0)$, then no optimal allocation exists, but there exists a sequence of asymptotically optimal allocations.

6. Comonotonic inf-convolution of Lambda VaRs

In this section, we consider the inf-convolution of Lambda VaRs constrained to comonotonic allocations. Comonotonicity, an extremal form of positive dependence, has been widely used in economics, financial mathematics, and actuarial science over the last two decades. The formal definition and its characterization can be found in Dhaene et al. [Reference Dhaene, Denuit, Goovaerts, Kaas, Tang and Vynche10, Reference Dhaene, Denuit, Goovaerts, Kaas and Vyncke11]. Random variables $X_1, \ldots, X_m$ are said to be comonotonic if there exist a random variable $Z$ and increasing functions $g_1, \ldots, g_m$ such that $X_i = g_i(Z)$ almost surely for $i \in [m]$. Comonotonicity of more than two random variables is equivalent to pairwise comonotonicity. In the sequel, when $X_1, \ldots, X_m$ are comonotonic, we write $X_i/\!/\sum^m_{k=1} X_k$ for $i \in [m]$.

It is well known that ordinary VaRs possess comonotonic additivity on ${L^0}$; that is, the VaR of a sum of comonotonic random variables is the sum of the VaRs of the marginal distributions [Reference Dhaene, Denuit, Goovaerts, Kaas, Tang and Vynche10, Theorem 5]. However, this property fails for Lambda VaRs. In the next proposition, we prove that Lambda VaRs possess comonotonic subadditivity on $L_ + ^0$. The property of comonotonic subadditivity was first proposed by Song and Yan [Reference Song and Yan24] and further investigated in Song and Yan [Reference Song and Yan25].
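The comonotonic additivity of ordinary VaRs recalled above is easy to see at the level of samples: if $X_1$ and $X_2$ are increasing transforms of the same factor $Z$, sorting the sum sorts both coordinates at once, so order statistics add and empirical quantiles of the sum split exactly. A small sketch (the transforms and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)

# A comonotonic pair: both coordinates are increasing functions of Z.
x1 = np.exp(z)
x2 = 2.0 * z + 5.0

# Empirical quantiles (the sample analogue of ordinary VaR) are additive
# for comonotonic samples, at every level p.
for p in (0.5, 0.9, 0.99):
    q_sum = np.quantile(x1 + x2, p)
    q_split = np.quantile(x1, p) + np.quantile(x2, p)
    assert abs(q_sum - q_split) < 1e-8
print("comonotonic additivity holds on the sample")
```

For Lambda VaRs this additivity breaks down because the level at which the quantile is evaluated depends on the threshold itself; only the subadditivity of Proposition 14 survives.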

Proposition 14. Let ${\Lambda}\,:\, {\mathbb{R}}_+\to [0,1]$ be decreasing, and let ${X_1}$ and ${X_2}$ be nonnegative comonotonic random variables. Then

(6.1) \begin{align}{\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1} + {X_2}} \right) \le {\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) + {\rm{VaR}}_{\rm{\Lambda }}^ - ({{X_2}})\end{align}

and

(6.2) \begin{align}{\rm{VaR}}_{\rm{\Lambda }}^ + \left( {{X_1} + {X_2}} \right) \le {\rm{VaR}}_{\rm{\Lambda }}^ + \left( {{X_1}} \right) + {\rm{VaR}}_{\rm{\Lambda }}^ + ({{X_2}}).\end{align}
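Before turning to the proof, inequalities (6.1) and (6.2) can be sanity-checked numerically. The sketch below approximates the right Lambda VaR on a grid under the convention $\mathrm{VaR}^+_{\Lambda}(X) = \sup\{x \,:\, \mathbb{P}(X \le x) \le \Lambda(x)\}$ (an assumption about the definition); the step function $\Lambda$, the comonotonic pair, and the sample size are illustrative choices.

```python
import numpy as np

def var_plus(sample, Lam, grid):
    """Grid approximation of sup{x : P(X <= x) <= Lambda(x)}."""
    cdf = np.searchsorted(np.sort(sample), grid, side="right") / len(sample)
    ok = grid[cdf <= Lam(grid)]
    return ok.max() if ok.size else -np.inf

rng = np.random.default_rng(1)
z = rng.uniform(size=50_000)
x1, x2 = z**2, np.sqrt(z)                    # nonnegative and comonotonic

Lam = lambda x: np.where(x < 1.0, 0.9, 0.6)  # a decreasing step function
grid = np.linspace(0.0, 3.0, 30_001)

lhs = var_plus(x1 + x2, Lam, grid)
rhs = var_plus(x1, Lam, grid) + var_plus(x2, Lam, grid)
print(lhs <= rhs)   # True: the subadditivity inequality (6.2)
```

Note that the inequality is typically strict: the marginal Lambda VaRs are evaluated where $\Lambda$ is still large, while the sum crosses into the region where $\Lambda$ has dropped.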

Proof. Write $x = \mathrm{VaR}_{\Lambda}^\kappa(X_1 + X_2)$ for $\kappa \in \{ -, + \}$, and set $X = X_1 + X_2$. Without loss of generality, assume that $0 \lt x \lt \infty$. First note that $x \gt \mathrm{ess\mbox{-}sup}(X)$ can occur only for $\mathrm{VaR}_{\Lambda}^+$, in which case $x = \sup\{t\,:\,\Lambda(t) = 1\} \gt \mathrm{ess\mbox{-}sup}(X)$. Thus, $\mathrm{VaR}_{\Lambda}^+(X_i) = x$ for $i = 1,2$ and, hence, $\mathrm{VaR}_{\Lambda}^+(X_1) + \mathrm{VaR}_{\Lambda}^+(X_2) = 2x \gt x = \mathrm{VaR}_{\Lambda}^+(X_1 + X_2)$. That is, (6.2) holds when $x \gt \mathrm{ess\mbox{-}sup}(X)$.

Next, assume that $x \le \mathrm{ess\mbox{-}sup}(X)$. Then there exist $\lambda \in [\Lambda(x+), \Lambda(x-)]$ and $\alpha \in [0,1]$ such that $\mathrm{VaR}_\lambda^\alpha(X_1 + X_2) = x$, where $\mathrm{VaR}_\lambda^\alpha = (1-\alpha)\mathrm{VaR}_\lambda^- + \alpha \mathrm{VaR}_\lambda^+$. Since $X_1$ and $X_2$ are comonotonic, it follows that $\mathrm{VaR}_\lambda^\alpha(X_1 + X_2) = \mathrm{VaR}_\lambda^\alpha(X_1) + \mathrm{VaR}_\lambda^\alpha(X_2)$. Write $x_1 = \mathrm{VaR}_\lambda^\alpha(X_1)$ and $x_2 = \mathrm{VaR}_\lambda^\alpha(X_2)$. Now consider two cases.

Case 1. Consider the right Lambda VaR, and assume that ${x_i} \lt x$ for $i = 1,2$ (otherwise, (6.2) is trivial). Then, for any $\epsilon \gt 0$ , $ {\mathbb{P}}(X_1\le x_1-{\epsilon}) \le {\lambda}\le {\Lambda}(x{-})\le {\Lambda} (x_1-{\epsilon}),$ implying ${\rm{VaR}}_{\rm{\Lambda }}^ + \left( {{X_1}} \right) \ge {x_1} - \epsilon $ . Since $\epsilon $ is arbitrary, we have ${\rm{VaR}}_{\rm{\Lambda }}^ + \left( {{X_1}} \right) \ge {x_1}$ . Similarly, ${\rm{VaR}}_{\rm{\Lambda }}^ + ({{X_2}}) \ge {x_2}$ . So we get (6.2) when ${x_i} \lt x$ for $i = 1,2$ .

Case 2. Consider the left Lambda VaR, and assume that ${x_i} \lt x$ for $i = 1,2$ .

  (i) If ${\rm{\Lambda }}(x{-}) \gt \lambda$, then ${\mathbb{P}}(X_1\le x_1-{\epsilon}) \le {\lambda}<{\Lambda}(x{-})\le {\Lambda} (x_1-{\epsilon})$ for any $\epsilon \gt 0$, implying ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) \ge {x_1} - \epsilon $ and, hence, ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) \ge {x_1}$. Similarly, we have ${\rm{VaR}}_{\rm{\Lambda }}^ - ({{X_2}}) \ge {x_2}$. Therefore, we conclude (6.1) when ${\rm{\Lambda }}(x{-}) \gt \lambda $.

  (ii) If ${\rm{\Lambda }}(x{-}) = \lambda $ and ${\rm{\Lambda }}({x-\epsilon}) \gt {\rm{\Lambda }}(x{-})$ for any $\epsilon \gt 0$, then ${\mathbb{P}}(X_1\le x_1-{\epsilon}) \le \lambda \le \Lambda (x{-})< \Lambda (x_1-{\epsilon})$, implying ${\rm{VaR}}_{\rm{\Lambda }}^ - \left( {{X_1}} \right) \ge {x_1}$. Similarly, we have ${\rm{VaR}}_{\rm{\Lambda }}^ - ({{X_2}}) \ge {x_2}$. We also obtain (6.1) in subcase (ii).

  (iii) If