
Measuring the suboptimality of dividend controls in a Brownian risk model

Published online by Cambridge University Press:  07 June 2023

Julia Eisenberg*
Affiliation:
Technische Universität Wien
Paul Krühner*
Affiliation:
Wirtschaftsuniversität Wien
*Postal address: Wiedner Hauptstraße 8, 1040 Wien, Austria. Email address: jeisenbe@fam.tuwien.ac.at
**Postal address: Welthandelsplatz 1, 1020 Wien, Austria. Email address: peisenbe@wu.ac.at

Abstract

We consider an insurance company modelling its surplus process by a Brownian motion with drift. Our target is to maximise the expected exponential utility of discounted dividend payments, given that the dividend rates are bounded by some constant. The utility function destroys the linearity and the time-homogeneity of the problem considered. The value function depends not only on the surplus, but also on time. Numerical considerations suggest that the optimal strategy, if it exists, is of a barrier type with a nonlinear barrier. In the related article of Grandits et al. (Scand. Actuarial J. 2, 2007), it has been observed that standard numerical methods break down in certain parameter cases, and no closed-form solution has been found. For these reasons, we offer a new method allowing one to estimate the distance from an arbitrary smooth-enough function to the value function. Applying this method, we investigate the goodness of the most obvious suboptimal strategies—payout at the maximal rate, and constant barrier strategies—by measuring the distance from their performance functions to the value function.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

A company’s dividend payments are among the most important factors investors consider when deciding whether to invest in the firm. Furthermore, dividends serve as a sort of litmus test, indicating the financial health of the company. Indeed, the reputation, and consequently the commercial success, of a company with a long record of dividend payments will be negatively affected if the company drops the payments. On the other hand, new companies can strengthen their position by declaring dividends. For the sake of fairness, it should be mentioned that there are also some serious arguments against dividend payouts; for example, for tax reasons it might be advantageous to withhold dividend payments. A discussion of the pros and cons of dividend distributions is beyond the scope of the present manuscript. We refer to surveys on the topic by Avanzi [Reference Avanzi6] or Albrecher and Thonhauser [Reference Albrecher and Thonhauser4].

Because of its importance, the value of expected discounted dividends has long been, and still remains, one of the most popular risk measures in the actuarial literature. Many papers have been written on maximising expected discounted dividends, modelling the entire surplus of an insurance company by a Brownian motion, a compound Poisson process, or a general Lévy process, over an infinite or finite time horizon. The papers of Gerber [Reference Gerber13], Bühlmann [Reference Bühlmann10], Azcue and Muler [Reference Azcue and Muler7], and Albrecher and Thonhauser [Reference Albrecher and Thonhauser3] contain just a few of the results obtained since de Finetti’s groundbreaking paper [Reference De Finetti11]. Shreve, Lehoczky and Gaver [Reference Shreve, Lehoczky and Gaver20] considered the problem for a general diffusion process, where the drift and the volatility fulfil some special conditions. Asmussen and Taksar [Reference Asmussen and Taksar5] modelled the surplus process via a Brownian motion with drift and found the optimal strategy to be a constant barrier.

All the papers mentioned above deal with linear dividend payments, in the sense that the lump-sum payments or dividend rates are not skewed by a utility function. Hubalek and Schachermayer [Reference Hubalek and Schachermayer16] apply various utility functions to the dividend rates before accumulation. Their result differs significantly from the classical result described in Asmussen and Taksar [Reference Asmussen and Taksar5].

An interesting question is to consider the expected ‘present utility’ of the discounted dividend payments. This means the utility function will be applied to the value of the accumulated discounted dividend payments up to ruin. In this way, one considers as a risk measure the utility of the present value of dividends. The dividend payments are not attributed to a specific owner (the shareholders); they serve as the only cash-flow stream used to evaluate the company’s financial health. Therefore, the present utility of the accumulated payments accounts for the company’s risk aversion by exercising a dividend payment strategy. The fact that the considerations are stopped at ruin indicates that the negative surplus is considered as a high risk. A higher utility of the present value of future dividend payments makes the company more attractive for potential investors. Early ruin will of course lead to a smaller utility of the present value of dividends. Thus, the event of ruin is a technical feature and does not mean that the company actually goes out of business.

For strategic and reputational reasons, some big companies (such as Munich Re; see [Reference Gould14]) do not reduce their dividends even during periods of crisis. Recently, researchers have started to investigate the problem of non-decreasing dividend payments; some examples are Albrecher et al. [Reference Albrecher, Azcue and Muler1, Reference Albrecher, Bäuerle and Bladt2]. In this case, even with a linear utility function, the problem becomes two-dimensional. Adding a nonlinear utility function to this setting further complicates the solution to the problem.

Modelling the surplus by a Brownian motion with drift, Grandits et al. [Reference Grandits, Hubalek, Schachermayer and Zigo15] applied an exponential utility function to the value of unrestricted discounted dividends. In other words, they considered the expected utility of the present value of dividends and not the expected discounted utility of the dividend rates. In [Reference Grandits, Hubalek, Schachermayer and Zigo15], the existence of the optimal strategy was not shown. We will investigate the related problem where the dividend payments are restricted to a finite rate. Note that using a nonlinear utility function increases the dimension of the problem. Therefore, tackling the problem via the Hamilton–Jacobi–Bellman (HJB) approach in order to find an explicit solution seems to be an unsolvable task. Of course, one can prove the value function to be the unique viscosity solution to the corresponding HJB equation and then try to solve the problem numerically. However, on this path one faces two problems that are not easy to tackle. First, the proof that the value function is a (unique) viscosity solution to the corresponding HJB equation can be very complex, time-consuming, and space-consuming. In particular, if one chooses a nonlinear and non-exponential utility function, the value function will depend on three variables: the time t, the surplus x, and the accumulated dividend payments prior to the starting time t. Using an exponential utility allows one to get rid of the third variable. This is also one of the reasons why an exponential utility is considered in the present paper. Having just two variables to consider allows us to represent the proposed method in a clearer way, avoiding unnecessary details.

Second, if the maximal allowed dividend rate is quite big, then the standard numerical methods such as finite differences and finite elements break down. We discuss some numerical problems in Section 5.

In this paper, we offer a new approach. Instead of proving the value function to be the unique viscosity solution to the corresponding HJB equation, we investigate the ‘goodness’ of suboptimal strategies. In this way, one avoids both problems described above. There is no need to prove that the value function is a classical or a viscosity solution to the HJB equation, and no need to solve the HJB equation numerically. One simply chooses an arbitrary control with an easy-to-calculate return function and compares its performance, or rather an approximation of its performance, against the unknown value function.

The method is based on sharp occupation bounds which we find by a method developed for sharp density bounds in Baños and Krühner [Reference Baños and Krühner8]. This enables us to make an educated guess and to check whether our pick is indeed almost as good as the optimal strategy. This approach differs drastically from the procedures normally used to calculate the value function, in two ways. First, unlike in most numerical schemes, there is no convergence to the value function; i.e. one gets only a bound for the performance of one given strategy, but no straightforward procedure to get better strategies. Second, our criterion has almost no dependence on the dimension of the problem and is consequently directly applicable in higher dimensions.

The paper is organised as follows. In the next section, we motivate the problem and derive some basic properties of the value function. In Section 3, we consider the case of the maximal constant dividend rate strategy, the properties of the corresponding return function, and the goodness of this strategy (a bound for the distance from the return function to the unknown value function). Section 4 investigates the goodness of a constant barrier strategy. In Section 5, we consider examples illustrating the classical and the new approach. Finally, in the appendix we gather technical proofs and establish occupation bounds.

2. Motivation

We consider an insurance company whose surplus is modelled as a Brownian motion with drift

\[X_t=X_0+\mu t+\sigma W_t,\quad t\geq 0,\]

where $\mu,\sigma, X_0 \in\mathbb{R}$ . We will use the Markov property of X. To be exact, we mean that $(\Omega,\mathfrak A)$ is a measurable space, $\mathbb{P}_{(t,x)}$ , $x\in\mathbb{R}$ , $t\geq 0$ is a family of measures, $X,W\,:\,\Omega\times\mathbb{R}_+\rightarrow\mathbb{R}$ are processes with continuous sample paths, and under $\mathbb{P}_{(t,x)}$ we have that $(W_{s+t})_{s\geq 0}$ is a standard Brownian motion with $W_u=0$ for $u\in[0,t]$ , $X_s = x + \mu \max\{s-t,0\}+ \sigma W_s$ , $s\geq 0$ , and $(\mathcal F_t)_{t\geq 0}$ is the right-continuous filtration generated by X. In particular, we have $\mathbb{P}_{(t,x)}(X_t=x)=1$ . Note that the process X is defined for all time points $s\geq 0$ , but we have $\mathbb{P}_{(t,x)}(X_s=x)=1$ for $0\leq s\leq t$ , which basically means that X is constant, equal to its starting value x, before its starting time t. We denote by $\mathbb{E}_{(t,x)}$ the expectation corresponding to $\mathbb{P}_{(t,x)}$ ; also we use the notation $\mathbb{E}_x\,:\!=\,\mathbb{E}_{(0,x)}$ .
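The construction above can be illustrated with a short simulation: before the starting time t the path is frozen at its starting value, and afterwards it evolves as a Brownian motion with drift. This is only a sketch; the helper `simulate_X` and all parameter values below are arbitrary choices, not from the paper.

```python
import numpy as np

# Sketch: simulate X under P_(t,x).  Before the starting time t the path is
# frozen at x; afterwards it is a Brownian motion with drift mu and
# volatility sigma.  All parameter values are arbitrary choices.
def simulate_X(t, x, mu, sigma, horizon, n_steps, n_paths, rng):
    times = np.linspace(0.0, horizon, n_steps + 1)
    dt = times[1] - times[0]
    # Brownian increments are only accumulated strictly after time t
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)) * (times[1:] > t)
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
    return times, x + mu * np.maximum(times - t, 0.0) + sigma * W

rng = np.random.default_rng(0)
times, X = simulate_X(t=1.0, x=2.0, mu=0.3, sigma=0.5,
                      horizon=3.0, n_steps=300, n_paths=2000, rng=rng)
```

Every simulated path equals its starting value x up to time t, matching $\mathbb{P}_{(t,x)}(X_s=x)=1$ for $0\leq s\leq t$.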

Further, we assume that the company has to pay out dividends, characterised by a dividend rate. Denoting the dividend rate process by C, we can write the ex-dividend surplus process as

\[X^C_t=x+\mu t+\sigma W_t-\int_0^t C_s\textrm{d}s.\]

In the present manuscript we only allow dividend rate processes C which are progressively measurable and satisfy $0\le C_s\le \xi$ for some maximal rate $\xi>0$ at any time $s\geq 0$ . We call these strategies admissible. Let $U(x)=\frac1\gamma-\frac1\gamma e^{-\gamma x}$ , $\gamma>0$ , be the underlying utility function and $\tau^C\,:\!=\,\inf\{s\ge t\,:\,X_s^C<0\}$ the ruin time corresponding to the strategy C under the measure $\mathbb{P}_{(t,x)}$ . Our objective is to maximise the expected exponential utility of the discounted dividend payments until ruin. Since we can start our observation at every time point $t\in\mathbb{R}_+$ , the target functional is given by

\[V^C(t,x)\,:\!=\,\mathbb{E}_{(t,x)}\bigg[U\bigg(\int_t^{\tau^C} e^{-\delta s}C_s\textrm{d}s\bigg)\bigg].\]

Here, $\delta>0$ denotes the preference rate of the insurer, helping to transfer the dividend payments to the starting time t. Further, we assume that the dividend payout up to t equals 0; for a rigorous simplification see [Reference Grandits, Hubalek, Schachermayer and Zigo15], or simply note that with already paid dividends $\bar C$ up to time t we have

$$ \mathbb{E}_{(t,x)}\Bigg[U\Bigg(\bar C+ \int_t^{\tau^C} e^{-\delta s}C_s\textrm{d}s\Bigg)\Bigg] = U(\bar C)+e^{-\gamma \bar C}V^C(t,x). $$

The corresponding value function V is defined by

\[V(t,x)\,:\!=\,\sup\limits_{C}\mathbb{E}_{(t,x)}\Bigg[U\Bigg(\int_t^{\tau^C} e^{-\delta s}C_s\textrm{d}s\Bigg)\Bigg],\]

where the supremum is taken over all admissible strategies C. Note that $V(t,0)=0$ , because ruin will happen immediately owing to the oscillation of Brownian motion, i.e. $\tau^C = \min\{s\geq t\,:\, X_s^C=0\}$ for any strategy C under $\mathbb{P}_{(t,x)}$ . The HJB equation corresponding to the problem can be found similarly as in [Reference Grandits, Hubalek, Schachermayer and Zigo15]; for general explanations see for instance [Reference Schmidli19]:

(1) \begin{equation}V_t+\mu V_x+\frac{\sigma^2}{2}V_{xx}+\sup\limits_{0\le y\le\xi}\Big[y\Big({-}V_x+e^{-\delta t}(1-\gamma V)\Big)\Big]=0.\end{equation}

We would like to stress at this point that we show neither that the value function solves the HJB in some sense, nor that a good-enough solution is the value function. In fact, our approach of evaluating the goodness of a given strategy compared to the unknown optimal strategy does not assume any knowledge about the optimal strategy or its existence.

Assuming that the HJB equation has a classical solution (i.e. that it is smooth enough), one would expect that an optimal strategy $C^*$ is the maximiser in the HJB equation at any given point of time, depending on the state of the optimally controlled process, i.e.

\[C^*\big(s,X_s^*\big)=\begin{cases}0 &\textrm{if} \;\; -V_x\big(s,X_s^*\big)+e^{-\delta s}\big(1-\gamma V\big(s,X_s^*\big)\big)<0,\\\in[0,\xi]& \textrm{if}\;\; -V_x\big(s,X_s^*\big)+e^{-\delta s}\big(1-\gamma V\big(s,X_s^*\big)\big)=0,\\\xi & \textrm{if}\;\; -V_x\big(s,X_s^*\big)+e^{-\delta s}\big(1-\gamma V\big(s,X_s^*\big)\big)>0, \;\end{cases}\]

$\mathbb{P}_{(t,x)}$ -almost surely for any $s\geq t$ .

Remark 2.1. For every dividend strategy C it holds that

\begin{equation*}V^C(t,x)= \mathbb{E}_{(t,x)}\Bigg[U\Bigg(\int_t^{\tau^C}C_s e^{-\delta s}\textrm{d}s \Bigg)\Bigg]\le U\bigg(\xi\int_t^{\infty} e^{-\delta s}\textrm{d}s \bigg)= U\bigg(\frac\xi\delta e^{-\delta t} \bigg).\end{equation*}

We conclude that

\[\lim\limits_{x\to\infty}V(t,x)\le U\bigg(\frac\xi\delta e^{-\delta t}\bigg),\]

and V is a bounded function. Consider now a constant strategy $C_t\equiv\xi$ ; i.e. we always pay at the rate $\xi$ . The ex-dividend process becomes a Brownian motion with drift $\mu-\xi$ and volatility $\sigma$ . Define further, for $n\ge 1$ ,

(2) \begin{equation}\eta_n\,:\!=\,\frac{\xi-\mu-\sqrt{(\xi-\mu)^2+2\delta\sigma^2 n}}{\sigma^2}<0, \end{equation}

and let $\tau^\xi\,:\!=\,\inf\{s\ge t\,:\, x+(\mu-\xi) s+\sigma W_s\le 0\}$ ; i.e. $\tau^\xi$ is the ruin time under the strategy $\xi$ . Here and in the following we define

(3) \begin{equation}\Delta\,:\!=\, \xi\gamma/\delta. \end{equation}

With the help of a change-of-measure technique (see for example [Reference Schmidli19, p. 216]), we can calculate the return function $V^\xi$ of the constant strategy $C_t\equiv\xi$ by using the power series representation of the exponential function:

(4) \begin{align}V^\xi(t,x)&= \mathbb{E}_{x}\bigg[U\bigg(\xi\int_t^{\tau^\xi}e^{-\delta s}\textrm{d}s\bigg)\bigg]=\frac 1\gamma-\frac 1\gamma\mathbb{E}_{x}\Bigg[e^{-\Delta \Big(e^{-\delta t}-e^{-\delta\big(t+\tau^\xi\big)}\Big)}\Bigg]\nonumber\\&=\frac 1\gamma-\frac 1\gamma e^{-\Delta e^{-\delta t}}\mathbb{E}_{x}\bigg[e^{\Delta e^{-\delta\big(t+\tau^\xi\big)}}\bigg]=\frac 1\gamma -\frac { e^{-\Delta e^{-\delta t}}}\gamma\sum\limits_{n=0}^\infty \frac{e^{-\delta tn}\Delta^n}{n!}\mathbb{E}_x\Big[e^{-\delta \tau^\xi n}\Big]\nonumber\\&=\frac 1\gamma -\frac { e^{-\Delta e^{-\delta t}}}\gamma-\frac { e^{-\Delta e^{-\delta t}}}\gamma\sum\limits_{n=1}^\infty \frac{e^{-\delta tn}\Delta^n}{n!}e^{\eta_n x}.\end{align}

Since $\eta_n<0$ , in the above power series the limit $\lim\limits_{x\to\infty}$ and the summation can be interchanged, yielding $\lim\limits_{x\to\infty} V^\xi(t,x)=U\bigg(\frac\xi\delta e^{-\delta t}\bigg)$ . In particular, we can now conclude

\[\lim\limits_{x\to\infty}V(t,x)=\frac 1\gamma-\frac 1\gamma\exp\!\big({-}\Delta e^{-\delta t}\big)=U\bigg(\frac\xi\delta e^{-\delta t}\bigg)\]

uniformly in $t\in[0,\infty)$ .
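The series (4) is straightforward to evaluate numerically. A minimal sketch, truncating the sum at N terms; all parameter values are arbitrary choices for illustration:

```python
import numpy as np

# Truncated evaluation of the return function V^xi from (4), with eta_n
# from (2) and Delta from (3).  Parameter values are arbitrary choices.
mu, sigma, delta, gamma_, xi = 0.3, 0.6, 0.5, 1.2, 0.8
Delta = xi * gamma_ / delta                           # (3)

def eta(n):
    # (2): exponent of the ruin-time Laplace transform under the rate xi
    return (xi - mu - np.sqrt((xi - mu) ** 2 + 2.0 * delta * sigma ** 2 * n)) / sigma ** 2

def U(y):
    # exponential utility U(y) = (1 - e^{-gamma y}) / gamma
    return (1.0 - np.exp(-gamma_ * y)) / gamma_

def V_xi(t, x, N=80):
    pref = np.exp(-Delta * np.exp(-delta * t))
    s, term = 0.0, 1.0
    for n in range(1, N + 1):
        term *= Delta * np.exp(-delta * t) / n        # Delta^n e^{-delta t n} / n!
        s += term * np.exp(eta(n) * x)
    return 1.0 / gamma_ - pref / gamma_ - pref * s / gamma_
```

One can check that $V^\xi(t,0)=0$, that $V^\xi$ is increasing in x and decreasing in t, and that $V^\xi(t,x)$ approaches $U\big(\tfrac\xi\delta e^{-\delta t}\big)$ for large x, as derived above.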

Next we show that if the maximal rate $\xi$ is smaller than the drift $\mu$ , then with positive probability the ex-dividend surplus process remains positive up to infinity.

Remark 2.2. Let C be an admissible strategy and let $X^C$ be the surplus process under C. Further, let $X^\xi$ be the process under the constant strategy $\xi$ ; i.e. $X^\xi$ is a Brownian motion with drift $(\mu-\xi)$ and volatility $\sigma$ . Then it is clear that

\[X_s^\xi\le X^C_s.\]

If $\mu>\xi$ , then it holds (see for example [Reference Borodin and Salminen9, p. 223]) that $\mathbb{P}_{(t,x)}[\tau^C=\infty]\ge\mathbb{P}_{(t,x)}[\tau^\xi= \infty]>0$ .

Finally, we give one structural property of the value function, which, however, is not used later.

Theorem 2.1. The value function is Lipschitz-continuous, strictly increasing in x, and decreasing in t.

Proof. • Let $h>0$ , $\varepsilon>0$ be arbitrary but fixed. Further, let C be an $\varepsilon$ -optimal strategy for $(t,x)\in\mathbb{R}_+^2$ , i.e. $V(t,x)\le V^C(t,x)+\varepsilon$ . Define the strategy $\tilde C$ for $(t,x+h)$ in the following way:

\[\tilde C_s=\begin{cases}C_s & \mbox{if $t\leq s<\tau^C$},\\ \xi & \mbox{ otherwise}.\end{cases}\]

Then $\tilde C$ is an admissible strategy; it coincides with the strategy C until the process $X^{\tilde C}$ reaches the level h, and afterwards it pays at the maximal rate until ruin, which happens strictly later: $\tau^C<\tau^{\tilde C}$ . Note that $U(x+y) = U(x)+e^{-\gamma x}U(y)$ , and hence we have

\begin{align*}V(t,x+h)-&V(t,x)\ge V^{\tilde C}(t,x+h)-V^C(t,x)-\varepsilon\\&=\mathbb{E}_{(t,x+h)}\Bigg[U\Bigg(\int_t^{\tau^{\tilde C}}\tilde C_s e^{-\delta s}\textrm{d}s\Bigg)\Bigg]- \mathbb{E}_{(t,x)}\Bigg[U\Bigg(\int_t^{\tau^{ C}} C_s e^{-\delta s}\textrm{d}s\Bigg)\Bigg]-\varepsilon\\&=\mathbb{E}_{(t,x+h)}\Bigg[e^{-\gamma \int_t^{\tau^C} \tilde C_s e^{-\delta s} \textrm{d}s} U\left(\int_{\tau^C}^{\tau^{\tilde C}} \xi e^{-\delta s} \textrm{d}s \right)\Bigg]-\varepsilon\\&\ \ge\ \mathbb{E}_{(t,x+h)}\Bigg[e^{-\gamma \int_t^{\infty} \xi e^{-\delta s} \textrm{d}s} U\left(\int_{\tau^C}^{\tau^{\tilde C}} \xi e^{-\delta s} \textrm{d}s \right)\Bigg]-\varepsilon\\&\ge K_h -\varepsilon,\end{align*}

where $K_h>0$ can be chosen independently of the strategy C. Thus we find that $V(t,x+h)-V(t,x) \geq K_h$ .

• Consider further (t, 0) with $t\in\mathbb{R}_+$ . Let $h,\varepsilon>0$ and let C be an arbitrary admissible strategy. Let $\tau^0$ be the ruin time for the strategy which is constant and zero. Define

(5) \begin{equation}\varrho_n\,:\!=\,\frac{\sqrt{\mu^2+2\sigma^2 \delta n}}{\sigma^2}, \quad\theta_n\,:\!=\,\frac{-\mu}{\sigma^2}+\varrho_n, \quad \mbox{and}\quad\zeta_n\,:\!=\,\frac{-\mu}{\sigma^2}-\varrho_n \;\end{equation}

for any $n\in\mathbb{N}$ . Using $\mathbb{E}_{h}\big[e^{-\delta \tau^0}\big]=e^{\zeta_1 h}$ (see for instance [Reference Borodin and Salminen9, p. 295]), $X^0_s\ge X_s^C$ , and the convexity of the exponential function, $U(x)=\frac{1-e^{-\gamma x}}\gamma\le x$ , it follows that

(6) \begin{align}V^C(t,h)&=\mathbb{E}_{(t,h)}\Bigg[U\Bigg(\int_{t}^{\tau^C}e^{-\delta s}C_{s}\textrm{d}s\Bigg)\Bigg] \le \mathbb{E}_{h}\Bigg[U\Bigg(\xi\int_{t}^{t+\tau^0}e^{-\delta s}\textrm{d}s\Bigg)\Bigg]\nonumber \\&=\mathbb{E}_{h}\bigg[U\bigg(\frac\xi\delta e^{-\delta t}\Big(1-e^{-\delta \tau^0}\Big)\bigg)\bigg]\le \frac\xi\delta e^{-\delta t}\big(1-e^{\zeta_1 h}\big)\le -\frac\xi\delta \zeta_1 h .\end{align}

Let $h\geq 0$ and let $\tau^0$ be the ruin time for the strategy which is constant and zero. Let $(t,x)\in\mathbb{R}_+^2$ be arbitrary, and let C be an admissible strategy which is $\varepsilon$ -optimal for the starting point $(t,x+h)$ , i.e. $V(t,x+h)-V^C(t,x+h)\le \varepsilon$ . Define further $\tilde\tau\,:\!=\,\inf\{s\ge t\,:\, \; X^C_s=h\}$ . Then $X_s^C\geq 0$ for $s\in [t,\tilde\tau]$ under $\mathbb{P}_{(t,x)}$ because $X_s^C\geq h$ for $s\in[t,\tilde\tau]$ under $\mathbb{P}_{(t,x+h)}$ . Hence the strategy C, stopped at $\tilde \tau$ , is an admissible strategy for (t, x) satisfying

\[V^C(t,x)=\mathbb{E}_{(t,x)}\bigg[U\bigg(\int_{t}^{\tilde \tau} e^{-\delta s} C_{s} \textrm{d}s\bigg)\bigg]=\mathbb{E}_{(t,x+h)}\bigg[U\bigg(\int_{t}^{\tilde \tau} e^{-\delta s} C_{s} \textrm{d}s\bigg)\bigg].\]

Note that $X^C_{\tilde\tau}=h$ , and hence we have

\begin{align*} \tau^C-\tilde\tau &= \inf\big\{u\geq 0\,:\, X^C_{u+\tilde\tau} = 0\big\} = \inf\bigg\{u\geq 0\,:\, h+(X_{u+\tilde\tau}-X_{\tilde\tau})-\int_{\tilde\tau}^u C_r\ \textrm{d}r = 0 \bigg\} \\ &\leq \inf\big\{u\geq 0\,:\, h +\big(X_{u+\tilde\tau}-X_{\tilde\tau}\big) = 0 \big\}\, =\!:\,\beta^0,\end{align*}

where $\mathbb{P}_{(t,x+h)}^{\beta^0} = \mathbb{P}_{(t,h)}^{\tau^0}$ . Here, $ \mathbb{P}_{(t,h)}^{\tau^0}$ denotes the law of $\tau^0$ under $\mathbb{P}_{(t,h)}$ ; analogously, $\mathbb{P}_{(t,x+h)}^{\beta^0}$ is the law of $\beta^0$ under $\mathbb{P}_{(t,x+h)}$ . Since U satisfies $U(a+b)\le U(a)+U(b)$ for any $a,b\geq 0$ , we have

\begin{align*}V(t,x+h)&\le V^{C}(t,x+h)+\varepsilon=\mathbb{E}_{(t,x+h)}\bigg [U\bigg(\int_{t}^{\tau^C} e^{-\delta s} C_{s}\ \textrm{d}s\bigg)\bigg]+\varepsilon\\&=\mathbb{E}_{(t,x+h)}\bigg[U\bigg(\int_{t}^{\tilde \tau} e^{-\delta s} C_{s} \textrm{d}s+\int_{\tilde\tau}^{\tau^C} e^{-\delta s} C_{s} \textrm{d}s\bigg)\bigg]+\varepsilon\\&\le \mathbb{E}_{(t,x+h)}\bigg[U\bigg(\int_{t}^{\tilde \tau} e^{-\delta s} C_{s} \textrm{d}s\bigg)\bigg]+\mathbb{E}_{(t,x+h)}\bigg[U\bigg(\int_{\tilde\tau}^{\tau^C} e^{-\delta s} C_{s} \textrm{d}s\bigg)\bigg]+\varepsilon\\&\le V^C(t,x)+\mathbb{E}_{(t,x+h)}\bigg[U\bigg(\frac\xi\delta\Big(e^{-\delta \tilde \tau}-e^{-\delta\tau^C}\Big)\bigg)\bigg]+\varepsilon\\&\le V(t,x)+\mathbb{E}_{(t,x+h)}\bigg[U\bigg(\frac\xi\delta\Big(1-e^{-\delta(\tau^C-\tilde\tau)}\Big)\bigg)\bigg]+\varepsilon\\&\le V(t,x)+ \mathbb{E}_{h}\bigg[U\bigg(\frac\xi\delta\Big(1-e^{-\delta\tau^0}\Big)\bigg)\bigg]+\varepsilon.\end{align*}

Because $\varepsilon$ was arbitrary, and by (6), we find

\[0\le V(t,x+h)-V(t,x)\le -\frac\xi\delta \zeta_1 h.\]

Consequently, V is Lipschitz-continuous in the space variable x with Lipschitz constant at most $-\frac\xi\delta \zeta_1$ .
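The exponents $\theta_n$ and $\zeta_n$ from (5) are the two roots of the quadratic equation $\tfrac{\sigma^2}{2}z^2+\mu z-\delta n=0$, the characteristic equation associated with the uncontrolled process; this is easy to verify numerically. A quick check with arbitrary parameter values:

```python
import numpy as np

# Check that theta_n and zeta_n from (5) solve (sigma^2/2) z^2 + mu z - delta*n = 0.
# Parameter values are arbitrary choices for this sketch.
mu, sigma, delta = 0.3, 0.6, 0.5

def theta_zeta(n):
    # (5): rho_n, theta_n = -mu/sigma^2 + rho_n, zeta_n = -mu/sigma^2 - rho_n
    rho = np.sqrt(mu ** 2 + 2.0 * sigma ** 2 * delta * n) / sigma ** 2
    return -mu / sigma ** 2 + rho, -mu / sigma ** 2 - rho

def char_poly(z, n):
    # characteristic polynomial of the drift-mu, volatility-sigma diffusion
    return 0.5 * sigma ** 2 * z ** 2 + mu * z - delta * n
```

In particular $\zeta_n<0<\theta_n$, which is why $e^{\zeta_1 h}$ appears in the Laplace transform $\mathbb{E}_{h}\big[e^{-\delta\tau^0}\big]$ used above.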

• Next we consider the properties of the value function related to the time variable.

Because $\delta>0$ , it is clear that V is strictly decreasing in t. First we show that the value function is strictly decreasing in time. To this end let $(t,x)\in \mathbb{R}_+^2$ , let $h>0$ , and let C be an admissible strategy which is constant and zero before time $t+h$ , with $\tau$ its ruin time. Since C is measurable with respect to the $\sigma$ -algebra $\sigma(X_s\,:\,s\geq t)$ , we can find a measurable function $c\,:\,\mathbb{R}_+\times C(\mathbb{R}_+,\mathbb{R})\rightarrow \mathbb{R}$ such that $C_{s}(\omega) = c(s-t,(X_{t+u})_{u\geq 0})$ . Defining $\tilde C_s \,:\!=\, c(s-(t+h),(X_{t+h+u})_{u\geq 0}$ , we see that the law of $(X_s,C_s)_{s\geq t}$ under $\mathbb{P}_{(t,x)}$ equals the law of $(X_{s+h},\tilde C_{s+h})_{s\geq t}$ under $\mathbb{P}_{(t+h,x)}$ .

The stopping time $\tilde \tau \,:\!=\, \inf\{s\geq t+h\,:\, X^{\tilde C}_s = 0\}$ is the corresponding ruin time:

\begin{align*} V^C(t,x) &= \mathbb{E}_{(t,x)}\left[ U\bigg(\int_{t}^\tau C_s e^{-\delta s} \textrm{d}s\bigg) \right] \\ &= \mathbb{E}_{(t+h,x)}\left[ U\bigg(\int_{t+h}^{\tilde \tau} \tilde C_s e^{-\delta (s-h)} \textrm{d}s\bigg) \right]. \end{align*}

Taking the supremum over all strategies yields

$$ V(t,x) = \sup_{\tilde C} \mathbb{E}_{(t+h,x)}\left[ U\left(e^{\delta h} \int_{t+h}^{\tilde \tau^{\tilde C}} \tilde C_s e^{-\delta s} \textrm{d}s \right) \right] > V(t+h,x). $$

Further, let $(t,x)\in\mathbb{R}_+^2$ , let $h>0$ , and let C be an admissible strategy. Then the strategy $\tilde C$ with $\tilde C_s\,:\!=\,C_{s-h}1\mkern-5mu{\textrm{I}}_{\{s\geq h\}}$ is admissible. Since U is concave, we have

\begin{align*}V(t+h,x)&\ge V^{\tilde C}(t+h,x)= \mathbb{E}_{(t+h,x)}\Bigg[U\Bigg(\int_{t+h}^{\tau^C+h}e^{-\delta s}C_{s-h}\ \textrm{d}s\Bigg)\Bigg]\\&=\mathbb{E}_{(t,x)}\Bigg[U\Bigg(e^{-\delta h}\int_{t}^{\tau^C}e^{-\delta s}C_{s}\ \textrm{d}s\Bigg)\Bigg]\ge e^{-\delta h}V^C(t,x).\end{align*}

Taking the supremum over all admissible strategies on the right-hand side of the above inequality and using Remark 2.1 yields

\[0\ge V(t+h,x)-V(t,x)\ge V(t,x)(e^{-\delta h}-1)\ge - U\Big(\frac\xi\delta\Big)\delta h;\]

consequently, V is Lipschitz-continuous as a function of t, with constant $\delta U(\xi/\delta)$ .

2.1. Heuristics

Heuristically, our approach to compare a given feedback strategy C with the unknown optimal strategy $C^*$ works as follows:

  1. We start with the performance function $V^C$ corresponding to some feedback strategy $C_t = c\big(t,X^C_t\big)$ . If smooth enough, $V^C$ satisfies

    $$ V^C_t + \mu V^C_x + \frac{\sigma^2}2V^C_{xx} + c\big\{{-}V^C_x + e^{-\delta t}\big(1-\gamma V^C\big)\big \} = 0, \quad V^C(t,0)=0. $$
  2. However, sometimes $V^C$ is not smooth enough or is not known explicitly. In this case one uses a replacement H (simply any $\mathcal C^{1,2}$ function from $\mathbb{R}_+\times\mathbb{R}$ to $\mathbb{R}$ with $H(t,0)=0$ ) and defines the mismatch

    $$ \Psi \,:\!=\, H_t + \mu H_x + \frac{\sigma^2}2H_{xx} + c\big\{{-}H_x + e^{-\delta t}(1-\gamma H)\big\}, $$
    where it is desirable that $\Psi$ be as close to zero as possible.
  3. We consider another strategy $C^*$ and the corresponding controlled process $X^*=X^{C^*}$ , as well as its performance function $V^* \,:\!=\, V^{C^*}$ . Its ruin time is denoted by $\tau$ , and we obtain from Itô’s formula, using $X^*_\tau=0$ and $H(t,0)\equiv 0$ ,

    \begin{align*} 0 &= e^{-\gamma\int_t^{\tau}e^{-\delta u}C^*_u\,\textrm{d}u}\cdot H\big(\tau,X_{\tau}^*\big) \\&= H\big(t,X_t^*\big) + \int_t^\tau e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot H_x \textrm{d}W_s \\ &\quad {}+ \int_t^{\tau} e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot \left(H_t + \mu H_x+\frac{\sigma^2}2H_{xx}+C^*_s\big\{\!{-}H_x-\gamma e^{-\delta s}H\big\}\right) \textrm{d}s \\ &= H\big(t,X_t^*\big) + \int_t^\tau e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot H_x \textrm{d}W_s \\ &\quad {}+ \int_t^{\tau} e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot \left(H_t + \mu H_x+\frac{\sigma^2}2H_{xx}+c({-}H_x + e^{-\delta s}(1-\gamma H))\right) \textrm{d}s \\ &\quad {} + \int_t^{\tau} e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot\big(C^*_s-c\big)\big\{\!{-}H_x + e^{-\delta s}(1-\gamma H)\big\} \textrm{d}s \\ &\quad{}- \int_t^{\tau} e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot e^{-\delta s}C^*_s \textrm{d}s \\ &= H\big(t,X_t^*\big) + \int_t^\tau e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot H_x \textrm{d}W_s + \int_t^{\tau} e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u} \cdot \Psi \textrm{d}s \\ &\quad{}+ \int_t^{\tau} e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\big(C^*_s-c\big)\big\{\!{-}H_x + e^{-\delta s}(1-\gamma H)\big\} \textrm{d}s \\ &\quad {}- U\bigg(\int_t^\tau e^{-\delta u} C^*_u \textrm{d}u\bigg).\end{align*}
  4. Taking $\mathbb{P}_{(t,x)}$ -expectation (assuming the local martingale from the $\textrm{d}W_s$ -integral is a martingale) and bringing the expectation of the U term on the other side yields

    \begin{align*} V^*(t,x) &= H(t,x) + \mathbb{E}_{(t,x)} \left[ \int_t^{\tau} e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot \Psi \textrm{d}s \right] \\ &+ \mathbb{E}_{(t,x)} \left[ \int_t^{\tau}e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u}\cdot\big(C^*_s-c\big)\big\{{-}H_x + e^{-\delta s}(1-\gamma H)\big\}\textrm{d}s \right]. \end{align*}
  5. Up to here, this is all standard. The performance function $V^*$ is expressed in terms of a new function H plus two error terms which could have negative sign. Several other stochastic control problems can lead to similar equations. The first error term corresponds to the usage of a function other than the performance function of our initial feedback control C. The second error term corresponds to the suboptimality of the feedback control C compared to the control $C^*$ , measured relatively by the function H.

  6. Now we need to control the error terms despite the appearance of the unknown optimal control. The first error term is simply bounded by

    $$ \bigg|\mathbb{E}_{(t,x)} \left[ \int_t^{\tau} e^{-\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u} \Psi \textrm{d}s \right]\bigg| \leq \mathbb{E}_{(t,x)} \left[ \int_t^{\tau} |\Psi\big(s,X^*_s\big)| \textrm{d}s\right], $$
    where one has to deal with the unknown process $X^*$ but its control has disappeared. This is the point where occupation bounds as in the appendix yield explicit upper bounds.
  7. The appearance of $C^*$ in the second error term can be suppressed by maximising the integrand over all possible values of $C^*$ . Since $y=c$ is a possible value in the supremum, this maximum is nonnegative, and we obtain

    \begin{align*} \mathbb{E}_{(t,x)} &\left[ \int_t^{\tau} \exp\bigg({-}\gamma\int_t^{s}e^{-\delta u}C^*_u\textrm{d}u\bigg)\big(C^*_s-c\big)\big({-}H_x + e^{-\delta s}(1-\gamma H)\big) \textrm{d}s\right] \\ &\leq \mathbb{E}_{(t,x)} \left[ \int_t^{\tau} \sup_{y\in[0,\xi]}\left((y-c)\big({-}H_x + e^{-\delta s}(1-\gamma H)\big)\right) \textrm{d}s \right]. \end{align*}
  8. Putting these together we obtain

    \begin{align*} V^*(t,x) &\leq H(t,x) + \mathbb{E}_{(t,x)} \left[ \int_t^{\tau} |\Psi\big(s,X^*_s\big)| \textrm{d}s\right] \\ &+ \mathbb{E}_{(t,x)} \left[ \int_t^{\tau} \sup_{y\in[0,\xi]}\left((y-c)\big({-}H_x\big(s,X^*_s\big) + e^{-\delta s}\big(1-\gamma H\big(s,X^*_s\big)\big)\big)\right) \textrm{d}s \right].\end{align*}
  9. If we have a common upper bound $\Upsilon_{t,x}\geq 0$ for $\mathbb{E}_{(t,x)} \left[ \int_t^{\tau} |\Psi\big(s,X^*_s\big)| \textrm{d}s\right]$ , then we may take the supremum over all strategies on the left-hand side and obtain

    \begin{align*} V(t,x) &\leq H(t,x) + \Upsilon_{t,x} \\ &+ \mathbb{E}_{(t,x)} \left[ \int_t^{\tau} \sup_{y\in[0,\xi]}\left((y-c)\big({-}H_x\big(s,X^*_s\big) + e^{-\delta s}\big(1-\gamma H\big(s,X^*_s\big)\big)\big)\right) \textrm{d}s \right].\end{align*}
    To obtain such a common upper bound, we will employ bounds for the expected occupation which are summarised in the appendix. Note that choosing the optimal y, depending on $(s,X^*_s)$ , allows us to employ these common upper bounds for the second summand as well.
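As a sanity check of step 1, the series (4) for the constant strategy $c\equiv\xi$ can be plugged into the performance-function equation and the residual evaluated by central finite differences. A sketch with arbitrary parameter values; the residual should be at the level of the discretisation error:

```python
import numpy as np

# Finite-difference check that the truncated series (4) for V^xi satisfies
# V_t + mu V_x + (sigma^2/2) V_xx + xi(-V_x + e^{-delta t}(1 - gamma V)) = 0,
# i.e. the equation of step 1 with c = xi.  Parameter values are arbitrary.
mu, sigma, delta, gamma_, xi = 0.3, 0.6, 0.5, 1.2, 0.8
Delta = xi * gamma_ / delta

def V_xi(t, x, N=80):
    pref = np.exp(-Delta * np.exp(-delta * t))
    s, term = 0.0, 1.0
    for n in range(1, N + 1):
        eta_n = (xi - mu - np.sqrt((xi - mu) ** 2 + 2.0 * delta * sigma ** 2 * n)) / sigma ** 2
        term *= Delta * np.exp(-delta * t) / n        # Delta^n e^{-delta t n} / n!
        s += term * np.exp(eta_n * x)
    return 1.0 / gamma_ - pref / gamma_ - pref * s / gamma_

def pde_residual(t, x, h=1e-3):
    V = V_xi(t, x)
    Vt = (V_xi(t + h, x) - V_xi(t - h, x)) / (2 * h)
    Vx = (V_xi(t, x + h) - V_xi(t, x - h)) / (2 * h)
    Vxx = (V_xi(t, x + h) - 2 * V + V_xi(t, x - h)) / h ** 2
    return Vt + mu * Vx + 0.5 * sigma ** 2 * Vxx \
        + xi * (-Vx + np.exp(-delta * t) * (1.0 - gamma_ * V))
```

This only verifies the linear equation for the fixed feedback control $c\equiv\xi$; it says nothing about whether $\xi$ attains the supremum in the HJB equation (1).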

Remark 2.3. We note that if $H=V^C$ (i.e. $\Psi=0$ ), and if in the maximisation in Item 8 above the maximum is attained at $y=c$ , then both error terms vanish and we find

$$ V^C \leq V \leq H = V^C,$$

which implies that all of these quantities are the same. This means that if a feedback control C is found such that its performance function $V^C$ satisfies the HJB equation

$$ \sup_{y\in[0,\xi]} \left( V^C_t + \mu V^C_x + \frac{\sigma^2}2V^C_{xx} + y\Big({-}V^C_x + e^{-\delta t}\Big(1-\gamma V^C\Big)\Big) \right) = 0, \quad V^C(t,0)=0, $$

then we have verified heuristically that $V^C=V$ .

3. Payout at the maximal rate

3.1. Could it be optimal to pay at the maximal rate up to ruin?

First we investigate the constant strategy $\xi$ , i.e. the strategy under which the dividends will be paid out at the maximal rate $\xi$ until ruin. In this section we find exact conditions under which this strategy is optimal. We already know from (4) that the corresponding return function is given by

\[V^\xi(t,x)=\frac1\gamma-\frac1\gamma e^{-\Delta e^{-\delta t}}-e^{-\Delta e^{-\delta t}}\sum\limits_{n=1}^\infty \frac{\Delta^n}{\gamma n!}e^{-\delta tn}e^{\eta_n x}.\]

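The truncated series is convenient for numerical work. The following sketch evaluates it in Python; the parameters $\mu=0.15$ , $\sigma=1$ , $\delta=0.05$ , $\gamma=0.2$ are borrowed from Section 5.1, while $\xi=0.5$ and the truncation level are arbitrary illustrative choices, and we use $\Delta=\gamma\xi/\delta$ .

```python
import math

# mu, sigma, delta, gamma as in Section 5.1; xi and the truncation level are arbitrary
MU, SIGMA, DELTA, GAMMA, XI = 0.15, 1.0, 0.05, 0.2, 0.5
D = GAMMA * XI / DELTA  # the constant written as Delta in the text

def eta(n):
    # eta_n as defined in (2)
    return ((XI - MU) - math.sqrt((XI - MU) ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def V_xi(t, x, N=80):
    # truncated series for V^xi(t, x)
    pref = math.exp(-D * math.exp(-DELTA * t))
    tail = sum(D ** n / (GAMMA * math.factorial(n))
               * math.exp(-DELTA * t * n) * math.exp(eta(n) * x)
               for n in range(1, N + 1))
    return 1 / GAMMA - pref / GAMMA - pref * tail
```

At $x=0$ the series telescopes to zero, which is a useful sanity check for any implementation.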
It is obvious that $V^\xi$ is increasing and concave in x and decreasing in t. For further considerations we will need the following remark.

Remark 3.1. Consider $\eta_n$ , defined in (2), as a function of $\xi$ .

  1. Since

    $$\frac{\textrm{d}}{{\textrm{d}}\xi}\eta_n=\frac{-\eta_n}{\sqrt{(\xi-\mu)^2+2\delta\sigma^2 n}},$$
    it is easy to see that $\eta_n(\xi)$ and $\frac{\eta_{n+1}(\xi)n}{\eta_n(\xi)(n+1)}$ are increasing in $\xi$ . Also, we have
    \[\lim\limits_{\xi\to\infty}\frac{\eta_{n+1}(\xi)n}{\eta_n(\xi)(n+1)}=1.\]
    We conclude that $\frac{\eta_{n+1}}{(n+1)}>\frac{\eta_{n}}{n}$ , as $\eta_n,\eta_{n+1}<0$ .
  2. Further, we record that

    \[\lim\limits_{\xi\to\infty}\xi\eta_n(\xi)=-\delta n.\]
  3. Also, we have

    \begin{align*}\frac{{\textrm{d}}}{\textrm{d}\xi}\big(\delta n+\xi\eta_n(\xi)\big)&=\eta_n\Bigg(1-\frac{\xi}{\sqrt{(\xi-\mu)^2+2\delta\sigma^2n}}\Bigg)\begin{cases}<0, & \xi<\frac{\mu^2+2\delta\sigma^2n}{2\mu},\\\ge 0, &\xi\ge\frac{\mu^2+2\delta\sigma^2n}{2\mu}.\end{cases}\end{align*}
    Thus, at $\xi=0$ the function $\xi\mapsto \delta n+\xi\eta_n(\xi)$ attains the value $\delta n>0$ , at its minimum point $\xi^*=\frac{\mu^2+2\delta\sigma^2n}{2\mu}$ we have
    \[\delta n+\xi^*\eta_n(\xi^*)=\delta n-\frac{\mu^2+2\delta\sigma^2n}{2\sigma^2}=-\frac{\mu^2}{2\sigma^2}<0,\]
    and, finally, by Item 2 above, it holds that $\lim\limits_{\xi\to \infty}\big(\delta n+\xi\eta_n(\xi)\big)=0$ . Thus, for every $n\in\mathbb{N}$ the function $\xi \mapsto 1+\frac{\eta_n(\xi)\xi}{\delta n}$ has a unique zero at $\frac{\delta n\sigma^2}{2\mu}$ .

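The identities in Item 3 are easy to confirm numerically. A small sketch (Python; the parameter values $\mu=0.15$ , $\sigma=1$ , $\delta=0.05$ are the illustrative ones from Section 5.1):

```python
import math

MU, SIGMA, DELTA = 0.15, 1.0, 0.05  # illustrative parameters

def eta(n, xi):
    # eta_n as defined in (2), viewed as a function of xi
    return ((xi - MU) - math.sqrt((xi - MU) ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def g(n, xi):
    # the function xi -> delta*n + xi*eta_n(xi) studied in Item 3
    return DELTA * n + xi * eta(n, xi)

for n in (1, 2, 5):
    root = DELTA * n * SIGMA ** 2 / (2 * MU)                  # claimed unique zero
    xmin = (MU ** 2 + 2 * DELTA * SIGMA ** 2 * n) / (2 * MU)  # claimed minimiser
    assert abs(g(n, root)) < 1e-12
    assert abs(g(n, xmin) + MU ** 2 / (2 * SIGMA ** 2)) < 1e-12  # minimum value
    assert abs(g(n, 1e4)) < 1e-4                                 # g(xi) -> 0 as xi grows
```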
Furthermore, it is easy to check that in $V^\xi$ summation and differentiation can be interchanged. Differentiation with respect to x yields

\[V^\xi_x(t,x)=-e^{-\Delta e^{-\delta t}}\sum\limits_{n=1}^{\infty}\frac{\Delta^n}{\gamma n!}e^{-\delta tn}\eta_n e^{\eta_nx}.\]

In order to answer the optimality question, we have to look at the function $-V_x^\xi+e^{-\delta t}\big(1-\gamma V^\xi\big)$ appearing in the crucial condition in the HJB equation (1). If this expression is positive for all $(t,x)\in\mathbb{R}_+^2$ , the function $V^\xi$ becomes a candidate for the value function. For simplicity, we multiply the expression $-V_x^\xi+e^{-\delta t}\big(1-\gamma V^\xi\big)$ by $e^{\delta t}e^{\Delta e^{-\delta t}}$ and define

(7) \begin{align}\psi(t,x)&\,:\!=\,\frac{e^{\Delta t}}t\bigg\{\!{-}V^\xi_x\bigg(\frac{\ln\!(t)}{-\delta},x\bigg)+t\bigg(1-\gamma V^\xi\bigg(\frac{\ln(t)}{-\delta},x\bigg)\bigg)\bigg\}\nonumber\\&=\sum\limits_{n=0}^{\infty}t^{n}\frac{\Delta^n}{n!}\bigg\{\frac{\eta_{n+1}\xi}{\delta (n+1)}e^{\eta_{n+1}x}+e^{\eta_n x}\bigg\}.\end{align}

If $\psi\ge0$ on $[0,1]\times \mathbb{R}_+$ , then $V^\xi$ does solve the HJB equation, and as we will see, it is the value function in that case.

Theorem 3.1. $V^\xi$ is the value function if and only if $\xi\le \frac{\delta\sigma^2}{2\mu}$ . In that case $V^\xi$ is a classical solution to the HJB equation (1), and the constant strategy $\xi$ is optimal.

Proof. Since $\frac{\sigma^2}2\eta_n^2=(\xi-\mu)\eta_n+\delta n$ for all $n\ge 1$ , it is easy to check, using the power series representation of $V^\xi$ , that $V^\xi$ solves the differential equation

\[V^\xi_t+\mu V^\xi_x+\frac{\sigma^2}{2}V^\xi_{xx}+\xi\Big({-}V^\xi_x+e^{-\delta t}\big(1-\gamma V^\xi\big)\Big)=0.\]

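This differential equation can be sanity-checked by applying central finite differences to the truncated series for $V^\xi$ ; the residual should vanish up to discretisation error. A sketch with the illustrative parameters of Section 5.1 and the arbitrary choice $\xi=0.5$ (here $\Delta=\gamma\xi/\delta$ ):

```python
import math

MU, SIGMA, DELTA, GAMMA, XI = 0.15, 1.0, 0.05, 0.2, 0.5  # xi is an arbitrary choice
D = GAMMA * XI / DELTA

def eta(n):
    return ((XI - MU) - math.sqrt((XI - MU) ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def V(t, x, N=80):
    # truncated series for V^xi(t, x)
    pref = math.exp(-D * math.exp(-DELTA * t))
    tail = sum(D ** n / (GAMMA * math.factorial(n))
               * math.exp(-DELTA * t * n) * math.exp(eta(n) * x)
               for n in range(1, N + 1))
    return 1 / GAMMA - pref / GAMMA - pref * tail

def residual(t, x, h=1e-4):
    # V_t + mu V_x + (sigma^2/2) V_xx + xi (-V_x + e^{-delta t}(1 - gamma V))
    vt = (V(t + h, x) - V(t - h, x)) / (2 * h)
    vx = (V(t, x + h) - V(t, x - h)) / (2 * h)
    vxx = (V(t, x + h) - 2 * V(t, x) + V(t, x - h)) / h ** 2
    return vt + MU * vx + 0.5 * SIGMA ** 2 * vxx \
        + XI * (-vx + math.exp(-DELTA * t) * (1 - GAMMA * V(t, x)))
```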
We first assume that $\xi\leq \frac{\delta\sigma^2}{2\mu}$ and show that $V^\xi$ is the value function. Note that $\xi\leq \frac{\delta\sigma^2}{2\mu}$ is equivalent to $\frac{\eta_1\xi}{\delta}+1\ge 0$ . We have $\xi \le n\frac{\delta\sigma^2}{2\mu}$ for any $n\geq 1$ , and Item 1 of Remark 3.1 yields, for all $n\ge 2$ ,

$$ \eta_n\frac{\xi}{\delta n}+1>\eta_1\frac{\xi}{\delta}+1\ge 0. $$

This gives immediately, for all $(t,x)\in (0,1]\times \mathbb{R}_+$ ,

\begin{align*}\psi(t,x)\ge \sum\limits_{n=0}^{\infty}t^{n}\frac{\Delta^n}{n!}\bigg\{\frac{\eta_{n+1}\xi}{\delta(n+1)}+1\bigg\}e^{\eta_n x} \ge 0,\end{align*}

which is equivalent to

\[-V^\xi_x(t,x)+e^{-\delta t}\big(1-\gamma V^\xi(t,x)\big)\ge 0\]

for all $(t,x)\in\mathbb{R}_+^2$ . This means that $V^\xi$ solves the HJB equation (1) if $\xi\le \frac{\delta\sigma^2}{2\mu}$ .

Now let C be an arbitrary admissible strategy, $\tau$ its ruin time, and $\hat X_u\,:\!=\,X^C_u$ . Applying Itô’s formula yields, $\mathbb{P}_{(t,x)}$ -almost surely,

\begin{align*}&e^{-\gamma\int_t^{\tau\wedge s} e^{-\delta u}C_u\textrm{d}u}V^\xi\big(\tau\wedge s,\hat X_{\tau\wedge s}\big)=V^\xi(t,x)+\sigma\int_t^{\tau\wedge s} e^{-\gamma\int_t^{y} e^{-\delta u}C_u\textrm{d}u}V^\xi_x\textrm{d}W_y\\&{}\quad+\int_t^{\tau\wedge s} e^{-\gamma\int_t^{y} e^{-\delta u}C_u\textrm{d}u}\bigg\{V^\xi_t+(\mu-C_y)V^\xi_x+\frac{\sigma^2}2V^\xi_{xx}-\gamma C_y e^{-\delta y} V^\xi\bigg\}\textrm{d}y.\end{align*}

Since $V_x^\xi$ is bounded, the stochastic integral above is a martingale with expectation zero. For the second integral one obtains, using the differential equation for $V^\xi$ ,

\begin{align*}&\int_t^{\tau\wedge s} e^{-\gamma\int_t^{y} e^{-\delta u}C_u\textrm{d}u}\bigg\{V^\xi_t+(\mu-C_y)V^\xi_x+\frac{\sigma^2}2V^\xi_{xx}-\gamma C_y e^{-\delta y} V^\xi\bigg\}\textrm{d}y\\&= \int_t^{\tau\wedge s} e^{-\gamma\int_t^{y} e^{-\delta u}C_u\textrm{d}u}\Big\{\big(C_y-\xi\big)\Big[-V_x^\xi+e^{-\delta y}\big(1-\gamma V^\xi\big)\Big]-C_y e^{-\delta y}\Big\}\textrm{d}y.\end{align*}

Taking expectations on both sides and letting $s\to\infty$ , interchanging limit and expectation (justified by the bounded convergence theorem), we obtain

(8) \begin{align}0&=V^\xi(t,x)\nonumber\\& {}+ \mathbb{E}_{(t,x)}\bigg[\int_t^{\tau} e^{-\gamma\int_t^{y} e^{-\delta u}C_u\textrm{d}u}\big(C_y-\xi\big)\Big\{\!{-}V_x^\xi+e^{-\delta y}\big(1-\gamma V^\xi\big)\Big\}\textrm{d}y\bigg]\end{align}
(9) \begin{align} {}- \mathbb{E}_{(t,x)}\bigg[\int_t^{\tau} e^{-\gamma\int_t^{y} e^{-\delta u}C_u\textrm{d}u}\cdot C_y e^{-\delta y}\textrm{d}y\bigg].\qquad\quad\qquad\qquad\qquad\end{align}

Since $C_u\le \xi$ and $-V_x^\xi\big(y,\hat X_y\big)+e^{-\delta y}\big(1-\gamma V^\xi\big(y,\hat X_y\big)\big)\ge 0$ , the expectation in (8) is non-positive.

For (9) one has

\begin{align*}\mathbb{E}_{(t,x)}\bigg[\int_t^{\tau} e^{-\gamma\int_t^{y} e^{-\delta u}C_u\textrm{d}u}\cdot C_y e^{-\delta y}\textrm{d}y\bigg]&=- \mathbb{E}_{(t,x)}\Big[\int_t^{\tau} \textrm{d}\frac{e^{-\gamma\int_t^{y} e^{-\delta u}C_u\textrm{d}u}}\gamma \Big]\\&=\mathbb{E}_{(t,x)}\bigg[U\bigg(\int_t^{\tau} e^{-\delta u}C_u\textrm{d}u\bigg)\bigg]=V^C(t,x),\end{align*}

giving $V^C(t,x)\le V^\xi(t,x)$ for all admissible strategies C. Therefore, $V^\xi$ is the value function.

Now let $\xi> \frac{\delta\sigma^2}{2\mu}$ , and assume for the sake of contradiction that $V^\xi$ is the value function. Then we have $\psi(0,0)=1+\eta_1 \xi/\delta<0$ , and by continuity $\psi$ is also negative for some $(t,x)\in(0,1]\times\mathbb{R}_+$ . Consequently, $V^\xi$ does not solve the HJB equation (1). However, $V^\xi$ is smooth enough and has a bounded x-derivative, so classical verification results (see, for instance, [Reference Schmidli19, Section 2.5.1]) imply that the value function, which we assumed to be $V^\xi$ , solves the HJB equation. This is a contradiction.

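Under the illustrative parameters of Section 5.1 ( $\mu=0.15$ , $\sigma=1$ , $\delta=0.05$ ), the critical rate is $\delta\sigma^2/(2\mu)=1/6$ , and the sign change of $\psi(0,0)=1+\eta_1\xi/\delta$ at exactly that threshold can be confirmed directly:

```python
import math

MU, SIGMA, DELTA = 0.15, 1.0, 0.05  # parameters from Section 5.1

def eta1(xi):
    # eta_1 from (2)
    return ((xi - MU) - math.sqrt((xi - MU) ** 2 + 2 * DELTA * SIGMA ** 2)) / SIGMA ** 2

def psi00(xi):
    # psi(0, 0) = 1 + eta_1 xi / delta, the decisive sign in Theorem 3.1
    return 1.0 + eta1(xi) * xi / DELTA

threshold = DELTA * SIGMA ** 2 / (2 * MU)  # = 1/6
```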
In the following, we assume $\xi>\frac{\delta\sigma^2}{2\mu}$ .

3.2. The goodness of the strategy $\xi$

We now provide an estimate on the goodness of the constant payout strategy which relies only on the performance of the chosen strategy $\xi$ and on deterministic constants. Recall from (2) and (5) that

\begin{align*}&\eta_n = \frac{(\xi-\mu)-\sqrt{(\xi-\mu)^2+2n\delta\sigma^2}}{\sigma^2}, \\ &\theta_n = \frac{-\mu+\sqrt{\mu^2+2n\delta\sigma^2}}{\sigma^2},\quad \zeta_n = \frac{-\mu-\sqrt{\mu^2+2n\delta\sigma^2}}{\sigma^2}. \end{align*}

We first present the main inequality of this section, then, in the subsequent remark, discuss the finiteness of the sum.

Theorem 3.2. Let $t,x\geq 0$ . Then we have

\begin{align*}V(t,x) &\le V^\xi(t,x)\\&\quad {}+\xi e^{-\delta t}\sum\limits_{n=0}^{\infty} e^{-\delta t n} \frac{\Delta^n}{n!} \int_0^{\infty} \left(\frac{-\eta_{n+1}\xi}{\delta(n+1)}e^{\eta_{n+1}y}-e^{\eta_{n}y}\right)^+f_{n+1}(x,y)\textrm{d}y,\end{align*}

where

\begin{align*} f_n(x,y) &\,:\!=\, \frac{2\!\left(e^{\theta_n(x \wedge y)}-e^{\zeta_n(x\wedge y)}\right)e^{\eta_{n}(x-y)^+}}{\sigma^2\!\left((\theta_n-\eta_n)e^{y\theta_n}-(\zeta_n-\eta_n)e^{y\zeta_n}\right)},\quad y\geq 0. \end{align*}

Proof. We know that the return function $V^\xi\in \mathcal C^{1,2}$ . Let C be an arbitrary admissible strategy. Then, using Itô’s formula for $s>t$ under $\mathbb{P}_{(t,x)}$ , we have

\begin{align*}&e^{-\gamma\int_t^{s\wedge\tau^C} e^{-\delta u}C_u\textrm{d}u}\cdot V^\xi\Big(s\wedge\tau^C,X_{s\wedge\tau^C}^C\Big)\\&=V^\xi(t,x)+\int_t^{s\wedge\tau^C} e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\cdot \bigg\{V^\xi_t+(\mu-C_r) V^\xi_x+\frac{\sigma^2}2 V^\xi_{xx}-\gamma e^{-\delta r}C_rV^\xi\bigg\}\textrm{d}r\\&\quad {}+\sigma\int_t^{s\wedge\tau^C} e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\cdot V^\xi_x\textrm{d}W_r.\end{align*}

Using the differential equation for $V^\xi$ , one obtains, as in the last proof, using the definition of $\psi$ from (7),

\begin{align*}&e^{-\gamma\int_t^{s\wedge\tau^C} e^{-\delta u}C_u\textrm{d}u}\cdot V^\xi\Big(s\wedge\tau^C,X_{s\wedge\tau^C}^C\Big)\\&\quad = V^\xi(t,x)+\int_t^{s\wedge\tau^C} e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\cdot (C_r-\xi)\cdot e^{-\delta r}e^{-\Delta e^{-\delta r}}\psi\big(e^{-\delta r},X^C_r\big)\textrm{d}r\\&\quad\quad {}-\int_t^{s\wedge\tau^C} e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\cdot C_r e^{-\delta r}\textrm{d}r+\sigma\int_t^{s\wedge\tau^C} e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\cdot V^\xi_x\textrm{d}W_r.\end{align*}

Taking $\mathbb{P}_{(t,x)}$ -expectations, letting $s\to\infty$ , and rearranging terms, one has

\begin{align*}V^C(t,x)&=V^\xi(t,x)\\&{}\quad +\mathbb{E}_{(t,x)}\Bigg[\int_t^{\tau^C} e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\cdot (C_r-\xi) \cdot e^{-\delta r}e^{-\Delta e^{-\delta r}}\psi\big(e^{-\delta r},X^C_r\big)\textrm{d}r\Bigg].\end{align*}

Our goal is to find a C-independent estimate for the expectation on the right-hand side above, in order to gain a bound for the difference $V(t,x)-V^\xi(t,x)$ . Since $e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\leq 1$ , $e^{-\Delta e^{-\delta r}} \le 1$ , and $-(C_r-\xi)\le \xi$ , we have

\begin{align*}&\mathbb{E}_{(t,x)}\Bigg[\int_t^{\tau^C} e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\cdot (C_r-\xi)\cdot e^{-\delta r}e^{-\Delta e^{-\delta r}}\psi\big(e^{-\delta r},X^C_r\big)\textrm{d}r\Bigg]\\ &\le -\xi\mathbb{E}_{(t,x)}\Bigg[\int_t^{\tau^C} e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}\cdot e^{-\delta r}e^{-\Delta e^{-\delta r}}\psi\big(e^{-\delta r},X^C_r\big)1\mkern-5mu{\textrm{I}}_{\{\psi\big(e^{-\delta r},X^C_r\big)<0\}}\textrm{d}r\Bigg]\;\\ &\le -\xi\mathbb{E}_{(t,x)}\Bigg[\int_t^{\tau^C} e^{-\delta r}\psi\big(e^{-\delta r},X^C_r\big)1\mkern-5mu{\textrm{I}}_{\{\psi\big(e^{-\delta r},X^C_r\big)<0\}}\textrm{d}r\Bigg]. \end{align*}

Now, inserting the power series representation of $\psi$ from (7), one gets

\begin{align*} & -\xi \mathbb{E}_{(t,x)}\Bigg[\int_t^{\tau^C} e^{-\delta r}\psi\big(e^{-\delta r},X^C_r\big)1\mkern-5mu{\textrm{I}}_{\{\psi\big(e^{-\delta r},X^C_r\big)<0\}}\textrm{d}r\Bigg]\; \\ & \le \xi \sum\limits_{n=0}^{\infty} e^{-\delta t (n+1)} \frac{\Delta^n}{n!}\mathbb{E}_{(t,x)}\Bigg[ \int_t^{\tau^C} e^{-\delta (r-t)(n+1)}\left(\frac{-\eta_{n+1}\xi}{\delta(n+1)}e^{\eta_{n+1}X_r^C}-e^{\eta_{n}X_r^C}\right)^+\textrm{d} r\Bigg]\; \\ & \le \xi e^{-\delta t}\sum\limits_{n=0}^{\infty} e^{-\delta t n} \frac{\Delta^n}{n!} \int_0^{\infty} \left(\frac{-\eta_{n+1}\xi}{\delta(n+1)}e^{y\eta_{n+1}}-e^{\eta_{n}y}\right)^+f_{n+1}(x,y)\textrm{d}y,\end{align*}

where the last inequality follows from Theorem A.1.

Remark 3.2. One might wonder whether the infinite sum appearing on the right-hand side of Theorem 3.2 is finite. In order to verify its finiteness, we look for an upper bound on the integral that is uniform in n. To this end we split the integral into two parts: the part from 0 to x and the remaining part. Since $\theta_n\rightarrow \infty$ while $\eta_n,\zeta_n\rightarrow -\infty$ as $n\rightarrow \infty$ , we have for $0\leq y\leq x$ that

\begin{align*} f_n(x,y) &= \frac{2\!\left(1-e^{(\zeta_n-\theta_n)y}\right)e^{\eta_{n}(x-y)}}{\sigma^2\left((\theta_n-\eta_n)-(\zeta_n-\eta_n)e^{(\zeta_n-\theta_n)y}\right)} \leq \frac{2}{\sigma^2(\theta_n-\eta_n)} \leq \frac{2}{\sigma^2(\theta_1-\eta_1)} \,=\!:\, K_1, \end{align*}

so that $f_n$ is bounded on this region by a constant $K_1>0$ not depending on n and y; here we used that $\theta_n-\eta_n$ is increasing in n. For $0\leq x\leq y$ we find

\begin{align*} f_n(x,y) &= \frac{2\left(e^{\theta_n (x-y)}-e^{\zeta_n x-\theta_n y}\right)}{\sigma^2\left((\theta_n-\eta_n)-(\zeta_n-\eta_n)e^{y(\zeta_n-\theta_n)}\right)} \leq K_2 e^{\theta_n (x-y)} \end{align*}

for some suitable constant $K_2>0$ (not depending on n and y). The bracket appearing inside the integral before $f_{n+1}$ is bounded by some constant $K_3>0$ . We find that

$$ \int_0^\infty \left(\frac{-\eta_{n+1}\xi}{\delta(n+1)}e^{\eta_{n+1}y}-e^{\eta_{n}y}\right)^+f_{n+1}(x,y)\textrm{d}y \leq x K_1K_3 + \frac{K_2K_3}{\theta_n} \leq K_4$$

for some suitable constant $K_4>0$ , depending on x but not on n. Hence, the sum is bounded by

$$ \exp\!\big(\Delta e^{-\delta t}\big) K_4. $$

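For concreteness, the bound of Theorem 3.2 can be evaluated by truncating the sum and using simple quadrature. The sketch below uses the parameters of Section 5.1 with the arbitrary choice $\xi=0.5$ (which exceeds the threshold $\delta\sigma^2/(2\mu)=1/6$ , so the correction term is strictly positive); the truncation level, integration range, and grid are ad hoc:

```python
import math

MU, SIGMA, DELTA, GAMMA, XI = 0.15, 1.0, 0.05, 0.2, 0.5
D = GAMMA * XI / DELTA

def eta(n):
    return ((XI - MU) - math.sqrt((XI - MU) ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def theta(n):
    return (-MU + math.sqrt(MU ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def zeta(n):
    return (-MU - math.sqrt(MU ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def f(n, x, y):
    # the kernel f_n(x, y) from Theorem 3.2
    m = min(x, y)
    num = 2 * (math.exp(theta(n) * m) - math.exp(zeta(n) * m)) \
        * math.exp(eta(n) * max(x - y, 0.0))
    den = SIGMA ** 2 * ((theta(n) - eta(n)) * math.exp(theta(n) * y)
                        - (zeta(n) - eta(n)) * math.exp(zeta(n) * y))
    return num / den

def gap_bound(t, x, N=30, ymax=60.0, steps=1500):
    # right-hand-side correction term of Theorem 3.2, trapezoidal rule in y
    dy = ymax / steps
    total = 0.0
    for n in range(N + 1):
        def integrand(y):
            bracket = -eta(n + 1) * XI / (DELTA * (n + 1)) * math.exp(eta(n + 1) * y) \
                      - math.exp(eta(n) * y)
            return max(bracket, 0.0) * f(n + 1, x, y)
        s = 0.5 * (integrand(0.0) + integrand(ymax)) \
            + sum(integrand(i * dy) for i in range(1, steps))
        total += math.exp(-DELTA * t * n) * D ** n / math.factorial(n) * s * dy
    return XI * math.exp(-DELTA * t) * total
```

The resulting number bounds $V(t,x)-V^\xi(t,x)$ from above; in line with Remark 3.2, it stays finite and decreases in t.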
4. The goodness of constant barrier strategies

Shreve et al. [Reference Shreve, Lehoczky and Gaver20] and Asmussen and Taksar [Reference Asmussen and Taksar5] considered the problem of dividend maximisation for a surplus described by a Brownian motion with drift. The optimal strategy there turned out to be a barrier strategy with a constant barrier.

Let $q\in\mathbb{R}_+$ and let C be given by $C_s=\xi1\mkern-5mu{\textrm{I}}_{\{X^C_s>q\}}$ ; i.e., C is a barrier strategy with a constant barrier q and ruin time $\tau^C=\inf\{s\ge 0\,:\,X^C_s=0\}$ . By the Markov property of $X^C$ , the corresponding return function satisfies

\[V^C(t,x)=\frac1\gamma -\frac 1\gamma \mathbb{E}_{x}\bigg[e^{-\gamma \int_t^{t+\tau^C}e^{-\delta s} C_s\textrm{d}s}\bigg].\]

Note that for every $a>0$ we have

\begin{align*}\mathbb{E}_{x}\bigg[e^{a \int_t^{t+\tau^C}e^{-\delta s} C_s\textrm{d}s}\bigg]&\le e^{a \int_t^{\infty}e^{-\delta s} \xi\textrm{d}s}=e^{\frac{a\xi}\delta e^{-\delta t}}<\infty.\end{align*}

This means that the moment generating function of $\int_t^{t+\tau^C}e^{-\delta s} C_s\,\textrm{d}s$ is finite everywhere; in particular, it is infinitely differentiable and all moments of $\int_t^{t+\tau^C}e^{-\delta s} C_s\textrm{d}s$ exist.

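Before turning to the series representation, note that $V^C$ can also be estimated directly by simulation, which provides a rough cross-check for the formulas derived below. A minimal Euler-scheme sketch (all parameter values, the barrier q, the step size, and the time horizon are illustrative choices; the finite horizon introduces a small truncation bias):

```python
import math
import random

MU, SIGMA, DELTA, GAMMA, XI, Q = 0.15, 1.0, 0.05, 0.2, 0.5, 5.0  # illustrative

def discounted_dividends(x, rng, dt=0.02, horizon=50.0):
    """One path of int_0^tau e^{-delta s} C_s ds under C_s = xi 1{X_s > q}."""
    s, total = 0.0, 0.0
    while x > 0.0 and s < horizon:
        c = XI if x > Q else 0.0
        total += math.exp(-DELTA * s) * c * dt
        x += (MU - c) * dt + SIGMA * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        s += dt
    return total

def V_barrier_mc(x, n_paths=300, seed=7):
    """Monte Carlo estimate of V^C(0, x) = E[U(int e^{-delta s} C_s ds)]."""
    rng = random.Random(seed)
    utilities = [(1.0 - math.exp(-GAMMA * discounted_dividends(x, rng))) / GAMMA
                 for _ in range(n_paths)]
    return sum(utilities) / n_paths
```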
Aiming to find the performance function of a barrier strategy with a constant barrier q, we use the classical ansatz of calculating the performance ‘above the barrier’ and ‘below the barrier’ and putting these two solutions together via the smooth fit at the barrier (in our case a $\mathcal C^{(1,1)}$ fit). We define

\begin{align*}&M_n(q)\,:\!=\,\mathbb{E}_q\bigg[\bigg(\Delta-\gamma \int_0^{\tau^C}e^{-\delta s} C_s\textrm{d}s\bigg)^n\bigg]>0.\end{align*}

Since a barrier strategy depends on the surplus, but not on the time, we pretend to start at time 0, accounting for a different starting time $t>0$ by shifting the corresponding stopping times by t. Starting at $x>q$ , one pays at the maximal rate $\xi$ up to $\tau^{q,\xi}$ and then follows the barrier strategy with the starting value q. Starting at $x<q$ , one does not pay dividends until $\tau^{q,0}$ , i.e. until the surplus hits the level q or ruin occurs. If the level q is hit before ruin, then one follows the barrier strategy starting at q. This means in particular that after the level q is hit, the strategy is exactly the same regardless of whether one starts at $x>q$ or at $x<q$ . We will use this fact to enforce a smooth fit ( $\mathcal C^{(1,1)}$ fit) at the barrier. A $\mathcal C^{(1,2)}$ fit can usually be achieved by just one barrier strategy, which then turns out to be optimal, so that its performance function is the value function. See for instance [Reference Asmussen and Taksar5] and [Reference Schmidli19] for details; further explanations are given in Section 5.1. Figure 1 illustrates the $\mathcal C^{(1,1)}$ fit of the return function corresponding to the 5-barrier. The grey and black areas correspond to the ‘above the barrier’ and ‘below the barrier’ solutions. The right panel shows that the second derivative with respect to x of the performance function is not continuous at the barrier.

Figure 1. The return function corresponding to a 5-barrier strategy and its second derivative with respect to x.

For $F(t,x)\,:\!=\,V^C(t,x)$ , $x>q$ , and for $G(t,x)\,:\!=\,V^C(t,x)$ , $x<q$ , it holds that

(10) \begin{align}F(&t,x)=\frac1\gamma -\frac 1\gamma \mathbb{E}_{x}\bigg[e^{-\gamma\xi\int_t^{t+\tau^{q,\xi}} e^{-\delta s}\textrm{d}s-\gamma \int_{t+\tau^{q,\xi}}^{t+\tau^C}e^{-\delta s} C_s\textrm{d}s}\bigg]\nonumber\\ &= \frac1\gamma -\frac 1\gamma \mathbb{E}_{x}\Bigg[\!\exp\Bigg(e^{-\delta t}\Bigg({-}\Delta \big(1-e^{-\delta \tau^{q,\xi}}\big)-\gamma \int_{\tau^{q,\xi}}^{\tau^C}e^{-\delta s} C_s\textrm{d}s\Bigg)\Bigg)\Bigg]\nonumber\\ &= \frac1\gamma -\frac 1\gamma e^{-\Delta e^{-\delta t}}\mathbb{E}_{x}\Bigg[\exp\Bigg(e^{-\delta t}e^{-\delta \tau^{q,\xi}}(\Delta-\gamma \int_{0}^{\tau^C-\tau^{q,\xi}}e^{-\delta s} C_{s+\tau^{q,\xi}}\textrm{d}s)\Bigg)\Bigg]\nonumber\\ &=\frac 1\gamma-\frac 1\gamma e^{-\Delta e^{-\delta t}}-\frac 1\gamma e^{-\Delta e^{-\delta t}}\sum\limits_{n=1}^\infty \frac{e^{-\delta tn}}{n!}\mathbb{E}_{x}\big[e^{-\delta n\tau^{q,\xi}}\big]\mathbb{E}_q\bigg[\bigg(\Delta-\gamma\int_0^{\tau^C}e^{-\delta s} C_s\textrm{d}s\bigg)^n\bigg]\nonumber\\ &= \frac 1\gamma-\frac 1\gamma e^{-\Delta e^{-\delta t}}-\frac 1\gamma e^{-\Delta e^{-\delta t}}\sum\limits_{n=1}^\infty \frac{e^{-\delta tn}}{n!}e^{\eta_n(x-q)} M_n(q)\nonumber\\ &= -\frac 1\gamma \sum\limits_{n=1}^\infty \frac{e^{-\delta tn}}{n!}\sum\limits_{k=0}^n\binom{n}{k}({-}\Delta)^{n-k} M_k(q)e^{\eta_k(x-q)}, \end{align}
(11) \begin{align} G(t,x) &={\mathbb{E}}_{x}\Big[F\big(t+\tau^{q,0},q\big);\,X^0_{\tau^{q,0}}=q\Big]&\nonumber \\&=-\frac 1\gamma\sum_{n=1}^\infty \frac{e^{-\delta tn}}{n!}\cdot\frac{e^{\theta_nx}-e^{\zeta_n x}}{e^{\theta_nq}-e^{\zeta_n q}}\sum_{k=0}^n\binom{n}{k}({-}\Delta)^{n-k}M_k(q)\;.\end{align}

Here, for the fourth equality, we expand the first exponential function in the expectation into its power series and use the Markov property to see that the $\mathbb{P}_{0,x}$ -law given $\mathcal F_{\tau^{q,\xi}}$ of $\tau^C-\tau^{q,\xi}$ equals the $\mathbb{P}_{0,q}$ -law of $\tau^C$ . Also, for the last equality for G, we insert the formula for F and use the identities given in Borodin and Salminen [Reference Borodin and Salminen9, p. 309, Formula 3.0.5(b)]. The notation used in G means $\mathbb{E}_x[Y_t;\,A]=\mathbb{E}_x[Y_t1\mkern-5mu{\textrm{I}}_{A}]$ for some process Y.

In order to analyse the performance function of a barrier strategy, we will expand the performance function into integer powers of $e^{-\delta t}$ with x-dependent coefficients and truncate at some N. This will result in an approximation for the performance function which is much easier to handle, but which incurs an additional truncation error. Inspecting Equations (10) and (11) motivates the approximations

(12) \begin{align} F^N(t,x) &\,:\!=\, \sum\limits_{n=1}^N e^{-\delta tn} \sum\limits_{k=0}^n A_{n,k} e^{\eta_k (x-q)}, \end{align}
(13) \begin{align}G^N(t,x) &\,:\!=\, \sum\limits_{n=1}^N D_n e^{-\delta tn}\frac{e^{\theta_nx}-e^{\zeta_n x}}{e^{\theta_nq}-e^{\zeta_n q}}, \end{align}

for $x,t\geq 0$ , where $\eta_0\,:\!=\,0$ . In order to achieve a $\mathcal C^{(1,1)}$ fit we choose $D_n \,:\!=\, \sum\limits_{k=0}^nA_{n,k}$ and

$$ A_{n,n} \,:\!=\, \frac{\sum\limits_{k=0}^{n-1}(\nu_n-\eta_k)A_{n,k}}{\eta_n-\nu_n},\quad \nu_n \,:\!=\, \frac{\theta_ne^{\theta_nq}-\zeta_ne^{\zeta_n q}}{e^{\theta_nq}-e^{\zeta_n q}}.$$

This leaves the choice for $A_{n,0},\dots,A_{n,n-1}$ open, which we now motivate by inspecting the dynamics equations for F and G; these should be

\begin{align*}&G_t(t,x)+\mu G_x(t,x)+\frac{\sigma^2}{2}G_{xx}(t,x) = 0, \\&F_t(t,x)+\mu F_x(t,x)+\frac{\sigma^2}{2}F_{xx}(t,x) = \xi\Big(F_x(t,x)+e^{-\delta t}(\gamma F(t,x)-1)\Big),\end{align*}

with boundary condition $G(t,0) = 0$ for $t\geq 0$ .

It is easy to verify that $G^N(t,0) = 0$ and $G^N_t(t,x)+\mu G^N_x(t,x)+\frac{\sigma^2}{2}G^N_{xx}(t,x) = 0$ . However, since $H_k(x)\,:\!=\,e^{\eta_k x}$ solves the equation

$$ \delta k H_k(x) = (\mu-\xi) \partial_xH_k(x)+\frac{\sigma^2}{2}\partial_x^2H_k(x), $$

we find that

\begin{align*}F^N_t(t,x)+(\mu-\xi) F^N_x(t,x)+\frac{\sigma^2}{2}F^N_{xx}(t,x) &= \sum\limits_{n=1}^N e^{-\delta t n} \sum\limits_{k=0}^{n-1} \delta(k-n)A_{n,k}e^{\eta_k (x-q)},\\e^{-\delta t}\xi(\gamma F^N(t,x)-1) &= -e^{-\delta t}\xi + \sum\limits_{n=2}^{N+1} e^{-\delta tn}\sum\limits_{k=0}^{n-1}\gamma\xi A_{n-1,k}e^{\eta_k(x-q)}.\end{align*}

We will treat the term

$$e^{-\delta t(N+1)}\xi\gamma\sum\limits_{k=0}^NA_{N,k}e^{\eta_k(x-q)}$$

as an error term and otherwise equate the two expressions above. This allows us to define the remaining coefficients, which are given by

\begin{align*} A_{n,k} &\,:\!=\, \frac{\gamma\xi A_{n-1,k}}{\delta(k-n)} = \Big({-}\frac{\gamma\xi}{\delta}\Big)^{n-k}\frac{A_{k,k}}{(n-k)!} = ({-}\Delta)^{n-k}\frac{A_{k,k}}{(n-k)!}, \\ A_{n,0} &\,:\!=\, \Big({-}\frac{\gamma\xi}{\delta}\Big)^{n-1}\frac{\xi}{\delta n!} = \frac{({-}\gamma)^{n-1}\xi^n}{\delta^n n!} = \frac{({-}\Delta)^n}{-\gamma n!}\end{align*}

for $n\geq k\geq 1$ ; the last line is also valid for $n=0$ .

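The recursions for the coefficients translate directly into code. The sketch below (illustrative parameters; the barrier q=5 and the truncation N are arbitrary) builds $A_{n,k}$ , $D_n$ , and $\nu_n$ and then checks the $\mathcal C^{(1,1)}$ fit: $G^N(t,0)=0$ , the value match $F^N(t,q)=G^N(t,q)$ granted by the choice of $D_n$ , and the derivative identity $\sum_k\eta_kA_{n,k}=\nu_nD_n$ enforced by the choice of $A_{n,n}$ .

```python
import math

MU, SIGMA, DELTA, GAMMA, XI, Q, N = 0.15, 1.0, 0.05, 0.2, 0.5, 5.0, 8  # illustrative
D_CONST = GAMMA * XI / DELTA  # Delta

def eta(n):
    if n == 0:
        return 0.0  # eta_0 := 0 as in the text
    return ((XI - MU) - math.sqrt((XI - MU) ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def theta(n):
    return (-MU + math.sqrt(MU ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def zeta(n):
    return (-MU - math.sqrt(MU ** 2 + 2 * n * DELTA * SIGMA ** 2)) / SIGMA ** 2

def nu(n):
    et, ze = math.exp(theta(n) * Q), math.exp(zeta(n) * Q)
    return (theta(n) * et - zeta(n) * ze) / (et - ze)

A = {}
for n in range(1, N + 1):
    A[n] = {0: (-D_CONST) ** n / (-GAMMA * math.factorial(n))}
    for k in range(1, n):
        A[n][k] = (-D_CONST) ** (n - k) * A[k][k] / math.factorial(n - k)
    # smooth-fit choice of A_{n,n}
    A[n][n] = sum((nu(n) - eta(k)) * A[n][k] for k in range(n)) / (eta(n) - nu(n))

Dn = {n: sum(A[n].values()) for n in A}

def FN(t, x):
    return sum(math.exp(-DELTA * t * n)
               * sum(A[n][k] * math.exp(eta(k) * (x - Q)) for k in A[n]) for n in A)

def GN(t, x):
    return sum(Dn[n] * math.exp(-DELTA * t * n)
               * (math.exp(theta(n) * x) - math.exp(zeta(n) * x))
               / (math.exp(theta(n) * Q) - math.exp(zeta(n) * Q)) for n in A)
```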
The following lemma shows that $F^N$ solves ‘almost’ the same equation as F is thought to solve. We see an error term which, instead of being zero, is of order $e^{-\delta t(N+1)}$ and thus converges to zero faster than $e^{-\delta tN}$ as time goes to infinity.

Lemma 4.1. We have

\begin{align*} G^N_t(t,x)+\mu G^N_x(t,x)+\frac{\sigma^2}{2}G^N_{xx}(t,x) &= 0, \\ F^N_t(t,x)+\mu F^N_x(t,x)+\frac{\sigma^2}{2}F^N_{xx}(t,x) +\xi\psi^N\big(e^{-\delta t},x\big) &= -e^{-\delta t(N+1)}\xi\gamma\sum\limits_{k=0}^NA_{N,k}e^{\eta_k(x-q)},\end{align*}

for any $t\geq 0$ , $x\geq q$ , where

$$ \psi^N\big(e^{-\delta t},x\big) \,:\!=\, -F_x^N(t,x)+e^{-\delta t}\big(1-\gamma F^N(t,x)\big). $$

Proof. The claim follows from inserting the definitions of $G^N$ and $F^N$ .

We define

(14) \begin{align}&V^N(t,x) \,:\!=\, 1\mkern-5mu{\textrm{I}}_{\{x\geq q\}} F^N(t,x) + 1\mkern-5mu{\textrm{I}}_{\{x<q\}} G^N(t,x),\\&\psi^N\big(e^{-\delta t},x\big) \,:\!=\, -V_x^N(t,x)+e^{-\delta t}\big(1-\gamma V^N(t,x)\big)\nonumber\end{align}

for any $t,x\geq 0$ . We now want to compare the approximate performance function $V^N$ for the barrier strategy with level q to the unknown value function. We proceed by first bounding $\psi^N$ in terms of a double power series in $e^{-\delta t}$ and x-dependent exponentials.

Lemma 4.2. With the preceding definitions, for $x\geq q$ we have

\begin{align*} &{}-\psi^N\big(e^{-\delta t},x\big)1\mkern-5mu{\textrm{I}}_{\{\psi^N\big(e^{-\delta t},x\big)<0\}} \\ &\leq \sum_{n=1}^{N+1} e^{-\delta tn}\left(\sum_{k=0}^{n}e^{\eta_k(x-q)}\big\{1\mkern-5mu{\textrm{I}}_{\{n=1,k=0\}}-1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}}\eta_k A_{n,k}-1\mkern-5mu{\textrm{I}}_{\{n\neq 1,k\neq n\}}\gamma A_{n-1,k}\big\} \right)^+, \end{align*}

and for $0\leq x< q$ we have

\begin{align*} \psi^N\big(e^{-\delta t},x\big)1\mkern-5mu{\textrm{I}}_{\{\psi^N\big(e^{-\delta t},x\big)>0\}} \leq \sum_{n=1}^{N+1}e^{-\delta tn}\Big( 1\mkern-5mu{\textrm{I}}_{\{n=1\}}&-D_nh^{\prime}_n(x)1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}} \\&{}-\gamma D_{n-1}h_{n-1}(x) 1\mkern-5mu{\textrm{I}}_{\{n\neq 1\}} \Big)^+, \end{align*}

where

$$h_n(x) \,:\!=\, \frac{e^{\theta_n x}-e^{\zeta_n x}}{e^{\theta_n q}-e^{\zeta_n q}}.$$

Proof. Inserting the definition of $\psi^N$ and the definitions of $F^N$ and $G^N$ found in Equations (12) and (13) respectively, for $x\geq q$ (with $\eta_0=0$ ) we obtain

\begin{align*} \psi^N&\big(e^{-\delta t},x\big) \\ &= \sum_{n=1}^{N+1} e^{-\delta tn}\sum_{k=0}^{n}e^{\eta_k(x-q)}\left(1\mkern-5mu{\textrm{I}}_{\{n=1,k=0\}}-1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}}\eta_k A_{n,k}-1\mkern-5mu{\textrm{I}}_{\{n\neq 1,k\neq n\}}\gamma A_{n-1,k} \right), \end{align*}

and for $0\leq x < q$ we obtain

\begin{align*} \psi^N\big(e^{-\delta t},x\big) &= \sum_{n=1}^{N+1}e^{-\delta tn}\left( 1\mkern-5mu{\textrm{I}}_{\{n=1\}}-D_nh^{\prime}_n(x)1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}}-\gamma D_{n-1}h_{n-1}(x) 1\mkern-5mu{\textrm{I}}_{\{n\neq 1\}} \right). \end{align*}

Using the inequality $\left(\sum_{n=1}^N e^{-\delta t n}c_n\right)^+ \leq \sum_{n=1}^Ne^{-\delta t n}(c_n)^+$ for $c\in\mathbb{R}^N$ , for $x\geq q$ we obtain

\begin{align*} &{}-\psi^N\big(e^{-\delta t},x\big)1\mkern-5mu{\textrm{I}}_{\{\psi^N\big(e^{-\delta t},x\big)<0\}} \\ &\leq \sum_{n=1}^{N+1} e^{-\delta tn}\left(\sum_{k=0}^{n}e^{\eta_k(x-q)}\Big\{1\mkern-5mu{\textrm{I}}_{\{n=1,k=0\}}-1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}}\eta_k A_{n,k}-1\mkern-5mu{\textrm{I}}_{\{n\neq 1,k\neq n\}}\gamma A_{n-1,k} \Big\}\right)^+ \end{align*}

and for $0\leq x< q$ we obtain

\begin{align*} \psi^N\big(e^{-\delta t},x\big)1\mkern-5mu{\textrm{I}}_{\{\psi^N\big(e^{-\delta t},x\big)>0\}} \leq \sum_{n=1}^{N+1}e^{-\delta tn}\Big( 1\mkern-5mu{\textrm{I}}_{\{n=1\}}&-D_nh^{\prime}_n(x)1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}} \\&{}-\gamma D_{n-1}h_{n-1}(x) 1\mkern-5mu{\textrm{I}}_{\{n\neq 1\}} \Big)^+, \end{align*}

as claimed.

We will employ the same method as in Section 3.2 and rely on the occupation bounds from Theorem A.1. We have in mind that $V^N \approx V^C \leq V$ . The three error terms appearing on the right-hand side of the following proposition are, in order, the error for behaving suboptimally above the barrier, the error for behaving suboptimally below the barrier, and the approximation error.

Proposition 4.1. We have

\begin{align*}&V(t,x) \le V^N(t,x)+ \sum\limits_{n=1}^{N+1} e^{-\delta tn} \xi\Bigg[\\&\Bigg(\sum_{k=0}^n\Big\{1\mkern-5mu{\textrm{I}}_{\{n=1,k=0\}}-1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}}\eta_k A_{n,k}-1\mkern-5mu{\textrm{I}}_{\{n\neq 1,k\neq n\}}\gamma A_{n-1,k} \Big\}\cdot\int_q^\infty e^{\eta_k(y-q)} f_n(x,y) \;\textrm{d}y\Bigg)^+ \\ & {}+ \int_0^q \Bigg(1\mkern-5mu{\textrm{I}}_{\{n=1\}}-1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}}D_n\frac{\theta_ne^{\theta_n y}-\zeta_ne^{\zeta_ny}}{e^{\theta_n q}-e^{\zeta_nq}}-1\mkern-5mu{\textrm{I}}_{\{n\neq 1\}}\gamma D_{n-1}\frac{e^{\theta_{n-1} y}-e^{\zeta_{n-1}y}}{e^{\theta_{n-1} q}-e^{\zeta_{n-1}q}}\Bigg)^+ f_n(x,y)\; \textrm{d}y \Bigg]\\ &{} + e^{-\delta t(N+1)}\xi\gamma \int_0^\infty \sum\limits_{k=0}^N|A_{N,k}|e^{\eta_k(y-q)} f_{N+1}(x,y)\;\textrm{d}y\end{align*}

for any $t,x\ge 0$ , where the $f_k$ are defined in Theorem 3.2.

Proof. Observe that $V^N$ is analytic outside the barrier q and $\mathcal C^{(1,1)}$ on $\mathbb{R}_+\times\mathbb{R}_+$ , and the second space derivative is a bounded function. Thus, we can apply the change-of-variables formula; see [Reference Peskir17].

Choose an arbitrary strategy $\bar C$ and denote its ruin time by $\tau$ . Following the heuristics from Section 2.1 up to Step 4 with $H=V^N$ for the strategy $\bar C$ yields

\begin{align*}&V^{\bar C}(t,x) = V^N(t,x) + \mathbb{E}_{(t,x)}\bigg[\int_t^{\tau} e^{-\gamma\int_t^{r} e^{-\delta u}\bar C_u\textrm{d}u}\bigg(\bar C_r-\xi1\mkern-5mu{\textrm{I}}_{\big\{X^{\bar C}_r>q\big\}}\bigg)\psi^N\big(e^{-\delta r},X^{\bar C}_r\big)\;\textrm{d}r\bigg]\\&{}- \xi\gamma\mathbb{E}_{(t,x)}\Bigg[\int_t^\tau e^{-\gamma\int_t^{r} e^{-\delta u}\bar C_u\textrm{d}u}e^{-\delta r(N+1)}1\mkern-5mu{\textrm{I}}_{\big\{X^{\bar C}_r>q\big\}}\sum\limits_{k=0}^NA_{N,k}e^{\eta_k\big(X^{\bar C}_r-q\big)}\;\textrm{d}r\Bigg]\\&\le V^N(t,x)+ \xi\gamma\mathbb{E}_{(t,x)}\Bigg[\int_t^\tau e^{-\delta r(N+1)}1\mkern-5mu{\textrm{I}}_{\big\{X^{\bar C}_r>q\big\}}\sum\limits_{k=0}^N|A_{N,k}|e^{\eta_k\big(X^{\bar C}_r-q\big)}\;\textrm{d}r\Bigg]\\& {}+ \mathbb{E}_{(t,x)}\bigg[\int_t^{\tau} \bigg({-}\xi1\mkern-5mu{\textrm{I}}_{\big\{X^{\bar C}_r>q,\psi^N\big(e^{-\delta r},X^{\bar C}_r\big)<0\big\}} + \xi1\mkern-5mu{\textrm{I}}_{\big\{X^{\bar C}_r<q,\psi^N\big(e^{-\delta r},X^{\bar C}_r\big)>0\big\}}\bigg)\psi^N\big(e^{-\delta r},X^{\bar C}_r\big)\;\textrm{d}r\bigg],\end{align*}

where we used that $0\le \bar C_r\le \xi$ . Applying Lemma 4.2 to the last summand, pulling out the sum, and applying Theorem A.1 yields

\begin{align*}&V^{\bar C}(t,x) \le V^N(t,x)\\&\quad {}+ \sum\limits_{n=1}^{N+1} e^{-\delta tn} \xi\Bigg[ \Bigg(\sum_{k=0}^n\Big\{1\mkern-5mu{\textrm{I}}_{\{n=1,k=0\}}-1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}}\eta_k A_{n,k}-1\mkern-5mu{\textrm{I}}_{\{n\neq 1,k\neq n\}}\gamma A_{n-1,k} \Big\}\\&\hspace{7.5cm}\times\int_q^\infty e^{\eta_k(y-q)} f_n(x,y) \textrm{d}y \Bigg)^+\\&\quad {}+\int_0^q \left( 1\mkern-5mu{\textrm{I}}_{\{n=1\}}-D_nh^{\prime}_n(y)1\mkern-5mu{\textrm{I}}_{\{n\neq N+1\}}-\gamma D_{n-1}h_{n-1}(y) 1\mkern-5mu{\textrm{I}}_{\{n\neq 1\}} \right)^+ f_n(x,y) \textrm{d}y \Bigg]\\ &\quad{}+ e^{-\delta t(N+1)}\xi\gamma \int_q^\infty \sum\limits_{k=0}^N|A_{N,k}|e^{\eta_k(y-q)} f_{N+1}(x,y)\textrm{d}y,\end{align*}

where

$$h_n(y) \,:\!=\, \frac{e^{\theta_n y}-e^{\zeta_n y}}{e^{\theta_n q}-e^{\zeta_n q}}.$$

Since $\bar C$ was an arbitrary strategy and the right-hand side does not depend on $\bar C$ , the claim follows.

Now we quantify the notion $V^N\approx V^C$ . Here, we see a single error term which corresponds to the approximation error (third summand) in Proposition 4.1.

Lemma 4.3. Let $t,x\geq 0$ . Then we have

\[|V^N(t,x)-V^C(t,x)| \le e^{-\delta t(N+1)} \xi\gamma \int_q^\infty \sum\limits_{k=0}^N|A_{N,k}|e^{\eta_k(y-q)} f_{N+1}(x,y)\textrm{d}y.\]

Proof. By following the lines of the proof of Proposition 4.1 with the specific strategy $\bar C_t=C_t = \xi 1\mkern-5mu{\textrm{I}}_{\{X_t^C>q\}}$ until estimates are used, we obtain

\begin{align*}V^C(t,x) &= V^N(t,x) + \mathbb{E}_{(t,x)}\Bigg[\int_t^{\tau} e^{-\gamma\int_t^{r} e^{-\delta u}C_u \textrm{d}u}\Big( C_r-\xi1\mkern-5mu{\textrm{I}}_{\big\{X^{ C}_r>q\big\}}\Big)\psi^N\big(r,X^{ C}_r\big)\textrm{d}r\Bigg] \\ &\quad - \xi\gamma\mathbb{E}_{(t,x)}\Bigg[\int_t^\tau e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}e^{-\delta r(N+1)}1\mkern-5mu{\textrm{I}}_{\big\{X^{ C}_r>q\big\}}\sum\limits_{k=0}^NA_{N,k}e^{\eta_k\big(X^C_r-q\big)}\textrm{d}r\Bigg] \\ &= V^N(t,x) \\&\quad {}- \xi\gamma\mathbb{E}_{(t,x)}\Bigg[\int_t^\tau e^{-\gamma\int_t^{r} e^{-\delta u}C_u\textrm{d}u}e^{-\delta r(N+1)}1\mkern-5mu{\textrm{I}}_{\big\{X^{ C}_r>q\big\}}\sum\limits_{k=0}^NA_{N,k}e^{\eta_k\big(X^C_r-q\big)}\textrm{d}r\Bigg].\end{align*}

Hence, we find

\begin{align*} |V^C(t,x)-V^N(t,x)| & \leq \xi\gamma\mathbb{E}_{(t,x)}\bigg[\int_t^\tau e^{-\delta r(N+1)}1\mkern-5mu{\textrm{I}}_{\big\{X^{ C}_r>q\big\}}\sum\limits_{k=0}^N|A_{N,k}|e^{\eta_k\big(X^C_r-q\big)}\textrm{d}r\bigg] \\ & \le \xi\gamma e^{-\delta t(N+1)} \int_q^{\infty} \sum\limits_{k=0}^N|A_{N,k}|e^{\eta_k(y-q)} f_{N+1}(x,y) \textrm{d}y\end{align*}

by Theorem A.1.

5. Examples

Here, we consider two examples. The first one illustrates how the value function and the optimal strategy can be calculated using a straightforward approach under various unproven assumptions. In fact, we will assume (without proof) that the value function is smooth enough, that the optimal strategy is of barrier type, and that the barrier, the value function above the barrier, and the value function below the barrier have suitable power series representations. In [Reference Grandits, Hubalek, Schachermayer and Zigo15] it has been observed that similar power series—if they exist—have very large coefficients for certain parameter choices. This could mean that the power series do not converge, or that evaluating them exceeds the available computing power.

In the second subsection, we will illustrate the new approach and calculate the distance from the performance function of a constant barrier strategy to the value function. The key advantages of this approach are that we do not rely on properties of the value function, nor do we need to know what it looks like. From a practical perspective, if the value function cannot be found, one should simply choose any strategy with an easy-to-calculate return function. Then it is good to know how far its performance is from that of the optimal strategy.

5.1. The straightforward approach

In this example we let $\mu = 0.15$ , $\delta = 0.05$ , $\gamma = 0.2$ , and $\sigma=1$ . We attempt to find the value function numerically. However, we do not know whether the assumptions we make below actually hold for all possible parameters, or even for the parameters chosen here.

We conjecture and assume that the optimal strategy is of a barrier type where the barrier is given by a time-dependent curve, say $\alpha$ ; the value function V(t, x) is assumed to be a $\mathcal C^{1,2}(\mathbb{R}_+^2)$ function, and we define

\begin{align*}h(t,x) &\,:\!=\, V(t,x),\quad t\geq 0,\ x\in[\alpha(t),\infty), \\g(t,x) &\,:\!=\, V(t,x),\quad t\geq 0,\ x\in[0,\alpha(t)].\end{align*}

This means we assume that h solves the HJB equation (1) on $\mathbb{R}_+\times [\alpha(t),\infty)$ and that g solves (1) on $\mathbb{R}_+\times [0,\alpha(t)]$ . In particular, the functions h and g satisfy

\begin{align*}&h_t+(\mu-\xi)h_x+\frac{\sigma^2}2h_{xx}+\xi e^{-\delta t}\big(1-\gamma h\big)=0,&& \lim\limits_{x\to\infty}h(t,x)=U\bigg(\frac{\xi e^{-\delta t}}{\delta}\bigg),\\& g_t+\mu g_x+\frac{\sigma^2}2g_{xx}=0, && g(t,0)\equiv 0.\end{align*}

Similarly to the derivations of the functions F and G in Section 4, we assume that

\begin{align*}&h(t,x)\,:\!=\,\frac 1\gamma-\frac 1\gamma e^{-\Delta e^{-\delta t}}+ e^{-\Delta e^{-\delta t}}\sum\limits_{n=1}^\infty J_n e^{-\delta tn} e^{\eta_n x},\\&g(t,x)\,:\!=\,\sum\limits_{n=1}^\infty L_ne^{-\delta tn} \big(e^{\theta_n x}-e^{\zeta_n x}\big),\\&\alpha(t)\,:\!=\,\sum\limits_{n=0}^\infty \frac{a_n}{n!}e^{-\delta tn}\end{align*}

for some coefficients. Note that we do not investigate the question of whether the functions h, g, and $\alpha$ have power series representations. We define further auxiliary coefficients $b_{k,n}$ , $p_{k,n}$ , and $q_{k,n}$ :

\begin{align*}e^{\eta_n \alpha(t)}\,=\!:\,\sum\limits_{k=0}^\infty\frac{b_{k,n}}{k!}e^{-\delta tk},\quad\quad e^{\theta_n \alpha(t)}\,=\!:\,\sum\limits_{k=0}^\infty\frac{p_{k,n}}{k!}e^{-\delta tk},\quad\quad e^{\zeta_n \alpha(t)}\,=\!:\,\sum\limits_{k=0}^\infty\frac{q_{k,n}}{k!}e^{-\delta tk}.\end{align*}

Since we assume that the value function is twice continuously differentiable with respect to x, we have, using smooth fit,

(15) \begin{align} &h(t,\alpha(t))=g(t,\alpha(t)),\quad g_x(t,\alpha(t))=h_x(t,\alpha(t)), \quad g_{xx}(t,\alpha(t))=h_{xx}(t,\alpha(t)).\end{align}

Note that differentiating the first equality in (15) along the curve $\alpha$ and using the second equality yields $h_t(t,\alpha(t))=g_t(t,\alpha(t))$ . Subtracting the two differential equations for h and g, we can therefore conclude that $h_x(t,\alpha(t))=e^{-\delta t}(1-\gamma h(t,\alpha(t)))$ . As an alternative to (15), one can consider at $(t,\alpha(t))$ the equations

(16) \begin{align}-h_x+e^{-\delta t}(1-\gamma h)=0,\quad -g_x+e^{-\delta t}(1-\gamma g)=0,\quad h=g.\end{align}

Thus, we can find the coefficients $a_n$ , $J_n$ , and $L_n$ from the three equations (16).

First, we calculate the coefficients of the power series resulting from the functions $e^{\eta_n \alpha(t)}$ , $e^{\theta_n \alpha(t)}$ , $e^{\zeta_n \alpha(t)}$ . This is done using the general Leibniz rule:

\begin{align*}&b_{k+1,n}=\eta_n\sum\limits_{j=0}^k\binom{k}{j}a_{k-j+1}b_{j,n},\quad b_{0,n}=e^{\eta_n a_0},\\&p_{k+1,n}=\theta_n\sum\limits_{j=0}^k\binom{k}{j}a_{k-j+1}p_{j,n},\quad p_{0,n}=e^{\theta_n a_0},\\&q_{k+1,n}=\zeta_n\sum\limits_{j=0}^k\binom{k}{j}a_{k-j+1}q_{j,n},\quad q_{0,n}=e^{\zeta_n a_0}.\end{align*}
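These recursions are immediate to implement. In the sketch below (Python), `a` is the list $[a_0,a_1,\dots]$ of barrier coefficients, and the base case is $e^{\eta a_0}$, the value of the series at $e^{-\delta t}=0$; the same routine yields the $p_{k,n}$ and $q_{k,n}$ by passing $\theta_n$ or $\zeta_n$ instead of $\eta_n$.

```python
from math import comb, exp

def exp_series_coeffs(eta, a, K):
    # Coefficients b_k with e^{eta * alpha(t)} = sum_k b_k / k! * e^{-delta t k},
    # where alpha(t) = sum_n a_n / n! * e^{-delta t n}; the general Leibniz rule
    # gives the recursion b_{k+1} = eta * sum_{j<=k} C(k, j) * a_{k-j+1} * b_j.
    b = [exp(eta * a[0])]
    for k in range(K):
        b.append(eta * sum(comb(k, j) * a[k - j + 1] * b[j] for j in range(k + 1)))
    return b
```

For instance, with $a_1=1$ and all other $a_n=0$ one has $\alpha(t)=e^{-\delta t}$, and the routine reproduces $b_k=\eta^k$, the coefficients of $e^{\eta e^{-\delta t}}$.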

Now, in order to calculate the coefficients in the power series representation of $h(t,\alpha(t))$ and $g(t,\alpha(t))$ and their derivatives, we define auxiliary coefficients for $m\in\{1,2\}$ :

\begin{align*}&X_{m,j}\,:\!=\,\sum\limits_{n=1}^jJ_n\eta_n^{m-1}\frac{b_{j-n,n}}{(j-n)!}, && Z_{m,k}\,:\!=\,\sum\limits_{j=1}^k \frac{\Delta^{k-j}}{(k-j)!}X_{m,j},\\&W_{m,k,j}\,:\!=\,L_j\big(\theta_j^{m-1}p_{k,j}-\zeta_j^{m-1}q_{k,j}\big), && Y_{m,k}\,:\!=\,\sum\limits_{n=1}^k \frac{W_{m,k-n,n}}{(k-n)!}.\end{align*}

Then we can write the functions g and h along with their derivatives as power series:

\begin{align*}&g(t,\alpha(t))=\sum\limits_{k=1}^\infty e^{-\delta tk}Y_{1,k}, && h(t,\alpha(t))=\sum\limits_{k=1}^\infty e^{-\delta tk} Z_{1,k}-\frac1\gamma\sum\limits_{k=1}^\infty ({-}\Delta)^k\frac{e^{-\delta tk}}{k!},\\&g_x(t,\alpha(t))=\sum\limits_{k=1}^\infty e^{-\delta tk}Y_{2,k}, && h_x(t,\alpha(t))=\sum\limits_{k=1}^\infty e^{-\delta tk}Z_{2,k}.\end{align*}
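The auxiliary coefficients translate one-to-one into code. In the sketch below (Python), the list layout is an assumption of this illustration: `J`, `L`, `eta`, `theta`, `zeta` are 1-based lists (index 0 unused), and `b[n][k]` $=b_{k,n}$, `p[n][k]` $=p_{k,n}$, `q[n][k]` $=q_{k,n}$ are the tables produced by the Leibniz recursions.

```python
from math import factorial

def aux_coefficients(m, K, J, L, eta, theta, zeta, b, p, q, Delta):
    # X_{m,j}, Z_{m,k}, Y_{m,k} for m in {1, 2}, following the definitions
    # in the text; W_{m,k-n,n} is inlined in the sum for Y.
    X = [0.0] * (K + 1)
    Z = [0.0] * (K + 1)
    Y = [0.0] * (K + 1)
    for j in range(1, K + 1):
        X[j] = sum(J[n] * eta[n] ** (m - 1) * b[n][j - n] / factorial(j - n)
                   for n in range(1, j + 1))
    for k in range(1, K + 1):
        Z[k] = sum(Delta ** (k - j) / factorial(k - j) * X[j]
                   for j in range(1, k + 1))
        Y[k] = sum(L[n] * (theta[n] ** (m - 1) * p[n][k - n]
                           - zeta[n] ** (m - 1) * q[n][k - n]) / factorial(k - n)
                   for n in range(1, k + 1))
    return X, Z, Y
```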

Equating coefficients yields

$$a_0 =\frac{\log\Big(\frac{\eta_1 - \zeta_1}{\eta_1 -\theta_1}\cdot \frac{\zeta_1}{\theta_1}\Big)}{\theta_1 - \zeta_1}, \qquad L_1 = \frac 1{\theta_1 e^{\theta_1 a_0} - \zeta_1e^{\zeta_1 a_0}}, \qquad J_1 = \frac{e^{-\eta_1 a_0}}{\eta_1},$$

and for $k\ge 2$ ,

(17) \begin{align} X_{2,k}=-\gamma X_{1,k-1},\quad\quad\quad Y_{2,k}=-\gamma Y_{1,k-1},\quad\quad\quad Y_{1,k}=Z_{1,k}-\frac{({-}\Delta)^k}{\gamma k!}. \end{align}

Note that the equations in (17) specify $L_k$ , $J_k$ , and $a_{k-1}$ at the kth step. Because of the recursive structure of the coefficients, the method presented turns out to be very time- and memory-consuming. Numerical calculations show that the above procedure yields well-defined power series for relatively small values of $\xi$ (see Figure 2). For large $\xi$ , however, the coefficients explode, which makes the calculations unstable and imprecise, especially for t close to zero.
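The first step of the scheme is explicit and easy to check. In the sketch below (Python), the exponents $\eta_1$, $\theta_1$, $\zeta_1$ are assumed to be supplied from (2) and (5); the test values used afterwards are purely illustrative.

```python
from math import exp, log

def initial_coefficients(eta1, theta1, zeta1):
    # a_0, L_1, J_1 from equating the lowest-order coefficients; eta1, theta1,
    # zeta1 are the exponents from (2) and (5), assumed computed elsewhere.
    a0 = log((eta1 - zeta1) / (eta1 - theta1) * zeta1 / theta1) / (theta1 - zeta1)
    L1 = 1.0 / (theta1 * exp(theta1 * a0) - zeta1 * exp(zeta1 * a0))
    J1 = exp(-eta1 * a0) / eta1
    return a0, L1, J1
```

For admissible exponents, one can verify that these values satisfy the lowest-order smooth-fit conditions, in particular $L_1\big(\theta_1^2e^{\theta_1 a_0}-\zeta_1^2e^{\zeta_1 a_0}\big)=\eta_1$, which follows from the second-derivative matching in (15).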

In Figure 3 we see the functions h (black) and g (grey) meeting at the barrier $\alpha(t)$ in the left panel. The right panel illustrates the crucial functions $-h_x+e^{-\delta t}(1-\gamma h)$ (black) and $-g_x+e^{-\delta t}(1-\gamma g)$ (grey), along with the zero-plane (white). One sees that the zero-plane cuts $-h_x+e^{-\delta t}(1-\gamma h)$ and $-g_x+e^{-\delta t}(1-\gamma g)$ exactly along the curve $\alpha$ .

Figure 2. The optimal strategies for different values of $\xi$ . The dashed line corresponds to the Asmussen–Taksar strategy [Reference Asmussen and Taksar5] (unrestricted dividend case).

Figure 3. Left, the functions h(t, x) (black) and g(t, x) (grey); right, the functions $-h_x+e^{-\delta t}(1-\gamma h)$ (black), $-g_x+e^{-\delta t}(1-\gamma g)$ (grey), and 0 (white), for $\xi=1$ .

Note that the numerical procedure used here works well only for small values of $\xi$ : because of the recursive structure of the coefficients, larger values of $\xi$ cause the coefficients to explode and enforce an early truncation of the power series representations.

It should be stressed once again that the functions h and g obtained here have not been shown to represent the value function. Nor can the optimal strategy yet be claimed to be of barrier type with the barrier given by $\alpha$ ; first, one would have to prove a verification theorem.

5.2. The distance to the value function

We use the same parameters as in the previous section, i.e. $\mu = 0.15$ , $\delta = 0.05$ , $\gamma = 0.2$ , and $\sigma=1$ . We illustrate the error bound given by Proposition 4.1 for $N=20$ summands and four different values of $\xi$ , namely $0.15$ , $0.17$ , $0.32$ , and 1. We will compare the unknown value function to the performance of the barrier strategy with barrier at

(18) \begin{align} q = \bigg(\frac{\log({-}\zeta_1)+\log(\zeta_1+\eta_1)-\log(\theta_1)-\log(\theta_1-\eta_1)}{\theta_1-\zeta_1}\bigg)^+; \end{align}

i.e. we employ the strategy $C_s = \xi1\mkern-5mu{\textrm{I}}_{\{X_s^C\geq q\}}$ . Recall the definition of $\eta_1$ , $\theta_1$ , and $\zeta_1$ from (2) and (5).
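Given the exponents, (18) is a one-liner. The sketch below (Python) assumes $\zeta_1<0<\theta_1$, $\zeta_1+\eta_1>0$, and $\theta_1>\eta_1$, so that all logarithms are defined; the numeric values in the test are purely illustrative and not derived from the model parameters.

```python
from math import log

def barrier_level(eta1, theta1, zeta1):
    # The constant barrier q of (18); the positive part means we return 0
    # whenever the expression in parentheses is negative.
    val = (log(-zeta1) + log(zeta1 + eta1)
           - log(theta1) - log(theta1 - eta1)) / (theta1 - zeta1)
    return max(val, 0.0)
```

A returned value of 0 corresponds to paying dividends at the maximal rate at all times.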

Mathematica code for the calculation of the coefficients ${\boldsymbol{J}}_{{\boldsymbol{n}}}$ , ${\boldsymbol{L}}_{{\boldsymbol{n}}}$ , and ${\boldsymbol{a}}_{{\boldsymbol{n}}}$ .

The barrier strategy with the barrier q has been shown to be optimal if no utility function is applied; see [Reference Schmidli19, p. 97]. In the case of $\xi=0.15$ one finds $q=0$ , i.e. we pay out at the maximal rate all the time, which is optimal by Proposition 3.1. Therefore, this case carries only an approximation error. For the other values of $\xi$ , it is not optimal to follow a barrier strategy, and hence we do have a substantial error which cannot disappear in the limit. The corresponding panels in Figure 5 show this error, as for $N=20$ summands the approximation error is already several orders of magnitude smaller than the error incurred by following a suboptimal strategy.

Figure 4. The difference of the value function and an approximation of the performance function corresponding to a constant barrier strategy at $t=0$ : $V(0,x)-V^N(0,x)$ for $\xi=1$ with $V^N$ given in (14) and the barrier q given in (18).

Figure 5. The plots show numerical evaluations of the error bounds given in Proposition 4.1 for the barrier strategy with parameters $\xi = 0.15$ , $\xi=0.17$ , $\xi=0.32$ , and $\xi=1$ respectively, as indicated at the side of each panel. The error bound is shown at time $t=0$ , where it is largest, across several values of x.

Figure 4 illustrates for $\xi=1$ the difference between the value function V(x) and the approximation $V^N$ , given in (14), of the performance function corresponding to the barrier strategy with the barrier q given in (18) at $t=0$ . Note that the difference $V(0,x)-V^N(0,x)$ consists of three subfunctions:

\[V(0,x)-V^N(0,x)=\begin{cases}F(0,x)-F^N(0,x),&\mbox{ $x\ge q$, grey line in Figure}\ {4},\\F(0,x)-G^N(0,x),&\mbox{ $x\in[\alpha(0),q]$, dashed line in Figure}\ {4}, \\G(0,x)-G^N(0,x),&\mbox{ $x\in[0,\alpha(0)]$, solid black line in Figure}\ {4}.\end{cases}\]

For any fixed x, the maximal difference $V(t,x)-V^N(t,x)$ is attained at $t=0$ : the curve $\alpha$ is increasing and converges to q as $t\to\infty$ , so the difference $q-\alpha(t)$ attains its maximum at $t=0$ , which is where the two performance functions differ most.

Appendix

In this appendix we provide deterministic upper bounds for the expected discounted occupation of a process whose drift is not precisely known. This allows us to derive an upper bound for the expected discounted cumulative value of a positive functional of the process. These bounds are summarised in Theorem A.1.

Let $a,b\in\mathbb{R}$ with $a\leq b$ , $I\,:\!=\,[a,b]$ , $\sigma>0$ , $\delta\geq 0$ , W a standard Brownian motion, and consider the process

$$ \textrm{d}X_t = C_t \textrm{d}t + \sigma \textrm{d}W_t $$

where C is some I-valued progressively measurable process. We recall that we denote by $\mathbb{P}_x$ a measure with $\mathbb{P}_x[X_0=x]=1$ . The local time of X at level y and time t is denoted by $L_t^y$ , and $\tau\,:\!=\,\inf\{t\geq 0\,:\, X_t = 0\}$ . Furthermore, for $x,y\geq 0$ we define

\begin{align*}&\alpha \,:\!=\, \frac{a+\sqrt{a^2+2\delta\sigma^2}}{\sigma^2}, \quad\quad \beta_+ \,:\!=\, \frac{\sqrt{b^2+2\delta\sigma^2}-b}{\sigma^2}, \quad\quad \beta_- \,:\!=\, \frac{-\sqrt{b^2+2\delta\sigma^2}-b}{\sigma^2}, \\ &f(x,y) \,:\!=\, \frac{2\left(e^{\beta_+(x \wedge y)}-e^{\beta_-(x\wedge y)}\right)e^{-\alpha(x-y)^+}}{\sigma^2\left((\beta_++\alpha)e^{y\beta_+}-(\beta_-+\alpha)e^{y\beta_-}\right)}. \end{align*}

Theorem A.1. We have $\mathbb{E}_x\left[ \int_0^\tau e^{-\delta s} \textrm{d}L_{s}^y\right] \le \sigma^2 f(x,y)$ . In particular, for any measurable function $\psi\,:\,\mathbb{R}_+\rightarrow\mathbb{R}_+$ we have

$$ \mathbb{E}_x\left[\int_0^\tau e^{-\delta s}\psi(X_s) \textrm{d}s \right] \leq \int_0^\infty \psi(y)f(x,y) \textrm{d}y. $$

The proof is given at the end of this section.
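The quantities of Theorem A.1 are straightforward to evaluate. The sketch below (Python) implements f and checks numerically, via central differences, the generator identity of Lemma A.1: below the level y the drift b attains the supremum, above it the drift a. The parameter values in the test are illustrative only.

```python
import math

def f(x, y, a, b, sigma, delta):
    # The function f(x, y) defined before Theorem A.1.
    s2 = sigma ** 2
    alpha = (a + math.sqrt(a * a + 2 * delta * s2)) / s2
    bp = (math.sqrt(b * b + 2 * delta * s2) - b) / s2
    bm = (-math.sqrt(b * b + 2 * delta * s2) - b) / s2
    num = 2 * (math.exp(bp * min(x, y)) - math.exp(bm * min(x, y))) \
        * math.exp(-alpha * max(x - y, 0.0))
    den = s2 * ((bp + alpha) * math.exp(y * bp) - (bm + alpha) * math.exp(y * bm))
    return num / den

def generator_check(x, y, u, a, b, sigma, delta, h=1e-4):
    # Central-difference evaluation of sigma^2/2 f_xx + u f_x - delta f at x;
    # this vanishes for u = b when x < y and for u = a when x > y.
    fm, f0, fp = (f(x + s, y, a, b, sigma, delta) for s in (-h, 0.0, h))
    fx = (fp - fm) / (2 * h)
    fxx = (fp - 2 * f0 + fm) / (h * h)
    return 0.5 * sigma ** 2 * fxx + u * fx - delta * f0
```

One also sees directly that $f(0,y)=0$, consistent with absorption of the process at 0, and that $f\geq 0$.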

Lemma A.1. The function f is absolutely continuous in its first variable, with derivative

\begin{align*} f_x(x,y) &\,:\!=\, \begin{cases} \frac{2\left(\beta_+ e^{x\beta_+}-\beta_-e^{x\beta_-}\right)}{\sigma^2\left((\beta_++\alpha)e^{y\beta_+}-(\beta_-+\alpha)e^{y\beta_-}\right)}, & x \le y, \\ \frac{2\left({-}\alpha e^{y\beta_+}+\alpha e^{y\beta_-}\right)e^{-\alpha(x-y)}}{\sigma^2\left((\beta_++\alpha)e^{y\beta_+}-(\beta_-+\alpha)e^{y\beta_-}\right)}, & x>y. \end{cases} \end{align*}

For any $y\geq 0$ , the function $f_x(\cdot,y)$ is of finite variation, and

\begin{align*} \textrm{d}f_x(x,y) &= -\frac{2}{\sigma^2}\delta_y(\textrm{d} x) + \left(\frac{2\delta}{\sigma^2}f(x,y)-\frac{2(b1\mkern-5mu{\textrm{I}}_{\{x<y\}}+a1\mkern-5mu{\textrm{I}}_{\{x>y\}})}{\sigma^2}f_x(x,y)\right)\textrm{d}x, \end{align*}

where $\delta_y$ denotes the Dirac measure in y. Moreover, if we denote by $f_{xx}(x,y)$ the second derivative of f with respect to the first variable for $x\neq y$ , then we get

$$ \sup_{u\in[a,b]}\left(\frac{\sigma^2}{2}f_{xx}(x,y) + uf_x(x,y)-\delta f(x,y)\right) = 0,\quad x\neq y. $$

Proof. Obtaining the derivative and the associated measure is straightforward. If $\delta = 0$ , then the statement of the lemma is immediate, since in that case $\beta_+=0$ , $\beta_-=-\frac{2b}{\sigma^2}$ , and $\alpha=\frac{2a}{\sigma^2}$ ; the function f then satisfies $f_x(x,y)>0$ if $x<y$ and $f_x(x,y)<0$ if $x>y$ .

Now, assume that $\delta >0$ . We have $\alpha,\beta_+>0>\beta_-$ , which immediately yields $f_x(x,y)>0$ for $x<y$ and $f_x(x,y)<0$ for $x>y$ . The last equality follows.

Lemma A.2. Let $y\geq 0$ and assume that $C_t = a1\mkern-5mu{\textrm{I}}_{\{X_t>y\}} + b1\mkern-5mu{\textrm{I}}_{\{X_t\leq y\}}$ . Then

$$ \mathbb{E}_x\Big[\int_0^\tau e^{-\delta s}\textrm{d}L_{s}^y\Big] = \sigma^2f(x,y). $$

Proof. The Itô–Tanaka formula together with the occupation time formula yields

\begin{align*} f(X_{t\wedge \tau},y) &= f(x,y) + \int_0^t \sigma f_x(X_{s\wedge \tau},y) \textrm{d}W_s - \frac{1}{\sigma^2}L_{t\wedge\tau}^y \\&\quad+ \int_0^t C_sf_{x}(X_{s\wedge \tau},y) + \frac{\sigma^2}{2}f_{xx}(X_{s\wedge \tau},y)\textrm{d}s \\ &= f(x,y) + \int_0^t \sigma f_x(X_{s\wedge \tau},y) \textrm{d}W_s - \frac{1}{\sigma^2}L_{t\wedge\tau}^y + \delta \int_0^t f(X_{s\wedge \tau},y)\textrm{d}s. \end{align*}

Using the product formula yields

$$ e^{-\delta t} f(X_{t\wedge \tau},y) = f(x,y) + \int_0^t \sigma e^{-\delta s} f_x(X_{s\wedge \tau},y) \textrm{d}W_s -\frac1{\sigma^2} \int_0^{t\wedge \tau} e^{-\delta s} \textrm{d}L_{s}^y. $$

Since $f_x(\cdot,y)$ is bounded we see that the second summand is a martingale. If $\delta >0$ , then we find that

$$ \lim\limits_{t\rightarrow\infty} \mathbb{E}_x[ e^{-\delta t} f(X_{t\wedge\tau},y)] = 0. $$

If $\delta = 0$ and $a\leq 0$ , then $\tau < \infty$ $\mathbb{P}$ -almost surely, and the boundedness of f yields

$$ \lim_{t\rightarrow\infty} \mathbb{E}_x[ f(X_{t\wedge \tau},y)] = 0. $$

If $\delta = 0$ and $a > 0$ , then $\lim\limits_{t\to\infty}X_{t\wedge\tau}$ takes values in $\{0,\infty\}$ and $\lim\limits_{x\rightarrow\infty} f(x,y)=0$ ; thus the boundedness of f again yields

$$ \lim_{t\rightarrow\infty} \mathbb{E}_x[ f(X_{t\wedge \tau},y)] = 0. $$

Thus, we find by monotone convergence that

$$ 0 = f(x,y) -\frac1{\sigma^2} \lim_{t\rightarrow\infty} \mathbb{E}_x\left[\int_0^{t\wedge \tau} e^{-\delta s} \textrm{d}L_{s}^y\right] = f(x,y) -\frac1{\sigma^2} \mathbb{E}_x\left[\int_0^{\tau} e^{-\delta s} \textrm{d}L_{s}^y\right].$$

The next lemma is a simple variation of the occupation times formula.

Lemma A.3. Let $g\,:\,\mathbb{R}_+\rightarrow \mathbb{R}_+$ be continuous, let $\tau$ be a random time, and let $\psi\,:\,\mathbb{R}\rightarrow \mathbb{R}_+$ be Borel measurable. Then

$$ \int_0^\tau g(s) \psi(X_s) \sigma^2 \textrm{d}s = \int_\mathbb{R} \psi(y) Z^y \textrm{d}y, $$

where $Z^y \,:\!=\, \int_0^\tau g(s) \textrm{d}L^y_s$ .

Proof. If the claim is proved for bounded stopping times, then an arbitrary stopping time $\tau$ can be approximated by bounded stopping times via $\tau = \lim_{N\rightarrow \infty} \min\{N,\tau\}$ and monotone convergence yields the claim. For the remainder of the proof we assume that $\tau$ is a bounded stopping time. Additionally, we start with bounded and Lebesgue-integrable $\psi$ . Once the claim is proved for bounded and Lebesgue-integrable $\psi$ , it follows for the remaining $\psi$ by monotone convergence.

Let $\epsilon>0$ . Since g is continuous, it is uniformly continuous on $[0,\tau]$ . Hence, there is $\rho>0$ such that $|g(x)-g(y)|<\epsilon$ for any $x,y\in[0,\tau]$ with $|x-y|<\rho$ . For an integer $N>\tau/ \rho$ we find, with

$$F_N \,:\!=\, \sum_{k=1}^N \int_{(k-1)\frac\tau N}^{k\frac\tau N}\left(g(s) - g((k-1)\tau /N)\right) \psi(X_s) \sigma^2 \textrm{d}s,$$

that

\begin{align*} \int_0^\tau g(s) \psi(X_s) \sigma^2 \textrm{d}s &= \sum_{k=1}^N \int_{(k-1)\frac\tau N}^{k\frac\tau N}g((k-1)\tau /N) \psi(X_s) \sigma^2 \textrm{d}s \\ &{} + \sum_{k=1}^N \int_{(k-1)\frac\tau N}^{k\frac\tau N}\left(g(s) - g((k-1)\tau /N)\right) \psi(X_s) \sigma^2 \textrm{d}s \\ &= \int_{\mathbb{R}} \psi(y) \sum_{k=1}^N g((k-1)\tau /N)(L^y_{k\frac\tau N}-L^y_{(k-1)\frac\tau N}) \textrm{d}y + F_N, \end{align*}

where we use [Reference Revuz and Yor18, Corollary VI.1.6] for the second equality. We have $|F_N| \leq \epsilon \int_0^\tau \psi(X_s)\sigma^2 \textrm{d}s$ by the choice of $\rho$ , and

$$ \sum_{k=1}^N g((k-1)\tau /N)(L^y_{k\frac\tau N}-L^y_{(k-1)\frac\tau N}) \rightarrow \int_0^\tau g(s) \textrm{d}L^y_s,\quad N\rightarrow \infty. $$

It holds that

$$\left|\sum_{k=1}^N g((k-1)\tau /N)(L^y_{k\frac\tau N}-L^y_{(k-1)\frac\tau N}) \right| \le \sup_{a\in[0,\tau]} g(a) \cdot L^y_\tau, $$

and hence, Lebesgue’s dominated convergence result yields

$$ \int_{\mathbb{R}} \psi(y) \sum_{k=1}^N g((k-1)\tau /N)(L^y_{k\frac\tau N}-L^y_{(k-1)\frac\tau N}) \textrm{d}y \rightarrow \int_{\mathbb{R}} \psi(y) Z^y \textrm{d}y$$

as required.

Proof of Theorem A.1. Fix $y\geq 0$ . For any progressively measurable process $\eta$ with values in I we define

\begin{align*}Y^\eta_t \,:\!=\, X_0 + \int_0^t \eta_s \textrm{d}s + \sigma W_t \quad\mbox{and}\quad V(x) \,:\!=\, \sup_{\eta} \mathbb{E}_x\left[ \int_0^{\tau^\eta} e^{-\delta s} \textrm{d}L^{y,\eta}_{s}\right],\end{align*}

where $\tau^{\eta} \,:\!=\, \inf\{t\geq 0\,:\, Y^\eta_t = 0\}$ and $L^{\cdot,\eta}$ denotes a continuous version of the local time of $Y^{\eta}$ . Clearly, we have

$$ \mathbb{E}_x\left[ \int_0^{\tau^\eta} e^{-\delta s} \textrm{d}L^{y,\eta}_{s} \right] \le V(x). $$

Moreover, the previous two lemmas yield that $Y^{\eta^*}$ with

$$\eta_t^* = a1\mkern-5mu{\textrm{I}}_{\{Y^{\eta^*}_t>y\}} + b1\mkern-5mu{\textrm{I}}_{\{Y^{\eta^*}_t\leq y\}}$$

is the optimally controlled process, and we get $V(x) = \sigma^2f(x,y)$ . (The process $\eta^*$ exists because the corresponding stochastic differential equation admits pathwise uniqueness; see [Reference Revuz and Yor18, Theorem IX.3.5].) This proves the inequality for the local time. The second inequality now follows from Lemma A.3.

Acknowledgement

Both authors would like to thank the anonymous referees for their helpful comments and suggestions.

Funding information

The research of J. Eisenberg was funded by the Austrian Science Fund (FWF), Project No. V 603-N35.

Competing interests

There are no competing interests to declare.

References

Albrecher, H., Azcue, P. and Muler, N. (2020). Optimal ratcheting of dividends in insurance. SIAM J. Control Optimization 58, 1822–1845.
Albrecher, H., Bäuerle, N. and Bladt, M. (2018). Dividends: from refracting to ratcheting. Insurance Math. Econom. 83, 47–58.
Albrecher, H. and Thonhauser, S. (2007). Dividend maximization under consideration of the time value of ruin. Insurance Math. Econom. 41, 163–184.
Albrecher, H. and Thonhauser, S. (2009). Optimality results for dividend problems in insurance. RACSAM 103, 295–320.
Asmussen, S. and Taksar, M. (1997). Controlled diffusion models for optimal dividend pay-out. Insurance Math. Econom. 20, 1–15.
Avanzi, B. (2009). Strategies for dividend distribution: a review. N. Amer. Actuarial J. 13, 217–251.
Azcue, P. and Muler, N. (2005). Optimal reinsurance and dividend distribution policies in the Cramér–Lundberg model. Math. Finance 15, 261–308.
Baños, D. and Krühner, P. (2016). Optimal density bounds for marginals of Itô processes. Commun. Stoch. Anal. 10, 131–150.
Borodin, A. N. and Salminen, P. (2002). Handbook of Brownian Motion—Facts and Formulae, 2nd edn. Birkhäuser, Basel.
Bühlmann, H. (1970). Mathematical Methods in Risk Theory. Springer, New York.
De Finetti, B. (1957). Su un’impostazione alternativa della teoria collettiva del rischio. In Transactions of the XVth International Congress of Actuaries, Vol. II, Actuarial Society of America, New York, pp. 433–443.
Fleming, W. H. and Soner, H. M. (1993). Controlled Markov Processes and Viscosity Solutions, 1st edn. Springer, New York.
Gerber, H. U. (1969). Entscheidungskriterien für den zusammengesetzten Poisson-Prozess. Mitt. Verein. Schweiz. Versicherungsmath. 69, 185–228.
Gould, J. (2008). Munich Re pledges stable dividend after profit drop. Available at https://www.reuters.com/article/sppage012-l5116107-oisbn-idUSL511610720080806.
Grandits, P., Hubalek, F., Schachermayer, W. and Zigo, M. (2007). Optimal expected exponential utility of dividend payments in a Brownian risk model. Scand. Actuarial J. 2, 73–107.
Hubalek, F. and Schachermayer, W. (2004). Optimizing expected utility of dividend payments for a Brownian risk process and a peculiar nonlinear ODE. Insurance Math. Econom. 34, 193–225.
Peskir, G. (2005). A change-of-variable formula with local time on curves. J. Theoret. Prob. 18, 499–535.
Revuz, D. and Yor, M. (2005). Continuous Martingales and Brownian Motion, 3rd edn. Springer, Berlin, Heidelberg.
Schmidli, H. (2008). Stochastic Control in Insurance. Springer, London.
Shreve, S. E., Lehoczky, J. P. and Gaver, D. P. (1984). Optimal consumption for general diffusions with absorbing and reflecting barriers. SIAM J. Control Optimization 22, 55–75.
Figure 1. The return function corresponding to a 5-barrier strategy and its second derivative with respect to x.
