1. Introduction
Problems of optimal stopping involving a Brownian bridge have a long history, dating back to the early days of modern optimal stopping theory. The first results were obtained by Dvoretzky [Reference Dvoretzky9] and Shepp [Reference Shepp24]. Both authors considered stopping of a Brownian bridge to maximise its expected value. Dvoretzky proved the existence of an optimal stopping time and Shepp provided an explicit solution in terms of the first time the Brownian bridge (pinned at zero at time $T=1$ ) exceeds a boundary of the form $t\mapsto a\sqrt{1-t}$ , for $t\in[0, 1]$ and a suitable $a>0$ .
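Shepp's rule is simple enough to test by simulation. The sketch below (our illustration, not taken from the paper) applies the stopping rule $\tau\coloneqq\inf\{t\,:\,X_t\ge a\sqrt{1-t}\}$ to simulated bridge paths, using the approximate value $a\approx 0.84$ of Shepp's optimal constant; the stopped value is either on the boundary or equal to the pinned value $X_1=0$, hence always non-negative.

```python
import numpy as np

# Monte Carlo sketch of Shepp's stopping rule for a Brownian bridge pinned
# at zero at time T = 1: stop at the first grid time with X_t >= a*sqrt(1-t).
# The constant a ~ 0.84 is an approximation of Shepp's optimal constant.
rng = np.random.default_rng(0)
a = 0.84
n_paths, n_steps = 4000, 1000
t = np.linspace(0.0, 1.0, n_steps + 1)
dt = 1.0 / n_steps

payoffs = np.empty(n_paths)
for i in range(n_paths):
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))))
    x = w - t * w[-1]                      # bridge representation X_t = W_t - t*W_1
    hit = np.nonzero(x >= a * np.sqrt(1.0 - t))[0]
    payoffs[i] = x[hit[0]] if hit.size else x[-1]   # x[-1] = X_1 = 0 if never hit

print(round(payoffs.mean(), 2))            # strictly positive expected reward
```

Refining the time grid moves the estimate up slightly, since discrete monitoring can miss boundary crossings near $t=1$.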
A few years later, Föllmer [Reference Föllmer13] extended the study to the case of a Brownian bridge whose pinning point is random with normal distribution. He showed that the optimal stopping time is the first time the process crosses a time-dependent boundary, and the stopping set may lie either above or below the boundary, depending on the variance of the pinning point’s distribution.
More recently, Ekström and Wanntorp [Reference Ekström and Wanntorp11] studied optimal stopping of a Brownian bridge via the solution of associated free boundary problems. They recovered results by Shepp and extended the analysis by finding explicit solutions to some examples with more general gain functions than the linear case.
Optimal stopping of a Brownian bridge with random pinning point or random pinning time was also studied in [Reference Ekström and Vaicenavicius10] and [Reference Glover15], respectively. In [Reference Ekström and Vaicenavicius10], the authors considered more general versions of the problem addressed in [Reference Föllmer13] and, among other things, they gave general sufficient conditions for optimal stopping rules in the form of a hitting time to a one-sided stopping region. In [Reference Glover15], the author provided sufficient conditions for a one-sided stopping set and was able to solve the problem in closed form for some choices of the pinning time’s distribution.
Problems of optimal stopping for a Brownian bridge have attracted significant attention from the mathematical finance community thanks to their application to trading. Already in 1970, Boyce [Reference Boyce3] had proposed applications of Shepp’s results to bond trading. In that context the pinning effect of the Brownian bridge captures the well-known pull-to-par mechanism of bonds. Many other applications to finance have appeared in recent years, motivated by phenomena of stock pinning (see, e.g., [Reference Avellaneda and Lipkin1] and [Reference Jeannin, Iori and Samuel17] among many others). Explicit results for some problems of optimal double stopping of a Brownian bridge, also inspired by finance, were obtained in [Reference Baurdoux, Chen, Surya and Yamazaki2].
In our paper we study a problem that was posed by Ernst and Shepp in Section 3 of [Reference Ernst and Shepp12]. In particular, we are interested in finding the optimal stopping rule that maximises the expected value of the exponential of a Brownian bridge which is constrained to be equal to zero at time $T=1$ . Besides the pure mathematical interest, this problem is better suited to model bond/stock trading situations than its predecessors with linear gain function. Indeed, the exponential structure avoids the unpleasant feature of negative asset prices, whilst retaining the pinning effect discussed above. Questions concerning stopping the exponential of a Brownian bridge were also considered in [Reference Leung, Li and Li20] in a model inspired by financial applications. In fact, in [Reference Leung, Li and Li20] the authors considered a more general model than ours and allowed a random pinning point. However, the complexity of the model is such that the analysis was carried out mostly from a numerical point of view.
In this work we prove that the optimal stopping time for our problem is the first time the Brownian bridge exceeds a time-dependent optimal boundary $t\mapsto b(t)$ , which is non-negative, continuous, and non-increasing on [0, 1]. The boundary can be computed numerically as the unique solution to a suitable integral equation of Volterra type (see Subsection 5.1). The full analysis that we perform relies on four equivalent formulations of the problem (see (7), (11), (12) and (20)), which are of interest in their own right, and offer different points of view on the problem.
Our study reveals interesting features of the value function v. Indeed, we can prove that v is continuously differentiable on $[0, 1)\times{\mathbb{R}}$ , with respect to both time and space, with a second-order spatial derivative which is continuous up to the optimal boundary (notice that this regularity goes beyond the standard smooth-fit condition in optimal stopping). However, the value function fails to be continuous at $\{1\}\times(-\infty,0)$ , due to the pinning behaviour of the Brownian bridge as $t\to 1$ .
We extend the existing literature in several directions. The exponential structure of the gain function makes it impossible to use scaling properties that are central in all the papers where explicit solutions were obtained (see, e.g., [Reference Baurdoux, Chen, Surya and Yamazaki2, Reference Ekström and Wanntorp11, Reference Glover15, Reference Shepp24]). For this reason we must deal directly with a stopping problem for a time-inhomogeneous diffusion. Optimal boundaries for such problems are hard to come by in the literature and, in order to prove monotonicity of the boundary (which is the key to the subsequent analysis), we have developed a method based on pathwise properties of the Brownian bridge and martingale theory (see Theorem 1). The task is challenging because there is no obvious comparison principle for sample paths of Brownian bridges $X^{t,x}$ and $X^{t',x}$ starting from a point $x\in{\mathbb{R}}$ at different instants of time $t\neq t'$ . Hence, our approach could be used in other optimal stopping problems involving time-inhomogeneous diffusions.
It is worth noticing that, in Section 5 of [Reference Ekström and Vaicenavicius10], the authors also obtained a characterisation of the optimal boundary via integral equations. However, in that case a time change of the Brownian bridge and linearity of the gain function were used to infer monotonicity of the boundary.
The paper is organised as follows. In Section 2 we provide some background notions on the Brownian bridge and formulate the stopping problem. In Section 3 we prove continuity of the value function and the existence of an optimal boundary. In Section 4 we prove that the boundary is monotonic non-increasing, continuous, and bounded on [0, 1], and find its limit at time $t=1$ . In Section 5 we find $C^1$ regularity for the value function and we derive the integral equation that uniquely characterises the optimal boundary. In Section 6 we solve the integral equation numerically using Picard’s iteration scheme, and we provide plots of the optimal boundary and of the value function. We also illustrate numerically the convergence of the algorithm for the boundary, and the dependence of the boundary on the pinning time of the Brownian bridge.
2. Problem formulation
We consider a complete filtered probability space $(\Omega, {\mathcal{F}}, ({\mathcal{F}}_t)_{t\ge 0}, \textrm{P})$ , equipped with a standard Brownian motion $W\coloneqq(W_t)_{t\ge 0}$ . With no loss of generality we assume that $({\mathcal{F}}_t)_{t\ge 0}$ is the filtration generated by W and augmented with $\textrm{P}$ -null sets. Further, we denote by $X\coloneqq(X_t)_{t\in[0, 1]}$ a Brownian bridge pinned at zero at time $T=1$ , i.e. such that $X_1=0$ . If the Brownian bridge starts at time $t\in[0, 1)$ from a point $x\in{\mathbb{R}}$ , we sometimes denote it by $(X^{t,x}_{s})_{s\in[t,1]}$ in order to keep track of the initial condition.
It is well known that, given an initial condition $X_t=x$ at time $t\in[0, 1)$ , the dynamics of X can be described by the following stochastic differential equation (SDE):
$$dX_s=-\frac{X_s}{1-s}\,ds+dW_s,\qquad s\in[t,1),\qquad X_t=x. \tag{1}$$
The unique strong solution of the SDE (1) is given by
$$X^{t,x}_s=\frac{1-s}{1-t}\,x+(1-s)\int_t^s\frac{dW_u}{1-u},\qquad s\in[t,1]. \tag{2}$$
The expression in (2) allows us to identify (in law) the process $X^{t,x}$ with the process $Z^{t,x}\coloneqq(Z^{t,x}_s)_{s\in[t,1]}$ given by
$$Z^{t,x}_s\coloneqq\frac{1-s}{1-t}\,x+(1-s)\,W_{\frac{s-t}{(1-t)(1-s)}},\qquad s\in[t,1). \tag{3}$$
That is, we have
$$\mathsf{Law}\big(X^{t,x}_s,\,s\in[t,1]\big)=\mathsf{Law}\big(Z^{t,x}_s,\,s\in[t,1]\big) \tag{4}$$
for any initial condition $(t,x)\in[0, 1]\times{\mathbb{R}}$ . In the rest of the paper we will often use the notations $\textrm{E}_{t,x}[\!\cdot\!]=\textrm{E}[\,\cdot \mid X_t=x]$ and, equivalently, $\textrm{E}_{t,x}[\!\cdot\!]=\textrm{E}[\,\cdot \mid Z_t=x]$ .
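These identities can be sanity-checked numerically. The following minimal sketch (assuming the standard bridge drift $-X_s/(1-s)$ ) runs an Euler scheme from $(t,x)=(0,0)$ and compares the sample variance at $s=1/2$ with the known bridge value $\mathrm{Var}\big(X^{0,0}_s\big)=s(1-s)$ :

```python
import numpy as np

# Euler simulation of the bridge SDE dX = -X/(1-s) ds + dW from (t, x) = (0, 0),
# checked against the known marginal variance Var(X_s) = s(1 - s) at s = 0.5.
rng = np.random.default_rng(1)
n_paths, n_steps, s_target = 20000, 500, 0.5
dt = s_target / n_steps

x = np.zeros(n_paths)
s = 0.0
for _ in range(n_steps):
    x += -x / (1.0 - s) * dt + rng.normal(0.0, np.sqrt(dt), n_paths)
    s += dt

print(round(x.var(), 3))    # theoretical value: 0.5 * (1 - 0.5) = 0.25
```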
Using the abovementioned identity in law of X and Z, along with well-known distributional properties of the Brownian motion, it can be easily checked that
where $S_1\coloneqq\sup_{0\le s\le 1}|W_s|$ . The random variable $S_1$ will be used several times in what follows, and we denote
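Moments of $S_1$ such as $\textrm{E}\big[{\textrm{e}}^{S_1}\big]$ (which we take here, as an assumption for illustration, to be the type of constant involved) are finite and easy to estimate by Monte Carlo:

```python
import numpy as np

# Monte Carlo estimate of E[exp(S_1)], S_1 = sup_{0<=s<=1} |W_s|.  The grid
# supremum slightly underestimates the true one; the estimate is illustrative.
rng = np.random.default_rng(2)
n_paths, n_steps = 5000, 1000
dt = 1.0 / n_steps

w = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
s1 = np.abs(w).max(axis=1)
c1_est = float(np.exp(s1).mean())
print(round(c1_est, 2))
```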
2.1. The stopping problem
Our objective is to study the optimal stopping problem
$$v(t,x)\coloneqq\sup_{0\le\tau\le 1-t}\textrm{E}_{t,x}\big[{\textrm{e}}^{X_{t+\tau}}\big], \tag{7}$$
where $\tau$ is a random time such that $t+\tau$ is an $({\mathcal{F}}_s)_{s\geq t}$ -stopping time (in what follows we simply say that $\tau$ is an $({\mathcal{F}}_s)_{s\geq t}$ -stopping time, as no confusion shall arise). Thanks to (5), we can rely upon standard optimal stopping theory to give some initial results. In particular, we split the state space $[0, 1]\times{\mathbb{R}}$ into a continuation region ${\mathcal{C}}$ and a stopping region ${\mathcal{D}}$ , respectively given by
$${\mathcal{C}}\coloneqq\big\{(t,x)\in[0, 1]\times{\mathbb{R}}\,:\,v(t,x)>{\textrm{e}}^x\big\},\qquad {\mathcal{D}}\coloneqq\big\{(t,x)\in[0, 1]\times{\mathbb{R}}\,:\,v(t,x)={\textrm{e}}^x\big\}.$$
Then, for any $(t,x)\in[0, 1]\times{\mathbb{R}}$ , the smallest optimal stopping time for problem (7) is given by (see, e.g., [Reference Karatzas and Shreve18, Theorem D.12, Appendix D])
$$\tau^*\coloneqq\inf\{s\in[0, 1-t]\,:\,(t+s,X_{t+s})\in{\mathcal{D}}\}. \tag{10}$$
We will sometimes use the notation $\tau^*_{t,x}$ to keep track of the initial condition of the time-space process (t, X).
Moreover, standard theory on the Snell envelope also guarantees (see, e.g., [Reference Karatzas and Shreve18, Theorem D.9, Appendix D]) that the process $V\coloneqq(V_t)_{t\in[0, 1]}$ defined by $V_t\coloneqq v(t,X_t)$ is a right-continuous, non-negative, $\textrm{P}$ -super-martingale and that $V^*\coloneqq(V_{t\wedge\tau^*})_{t\in[0, 1]}$ is a right-continuous, non-negative, $\textrm{P}$ -martingale.
To conclude this section, we show two further formulations of problem (7) that will become useful in our analysis. The former uses (4) and the fact that, thanks to the above discussion, we only need to look for optimal stopping times in the class of entry times to measurable sets. Hence, we have
$$v(t,x)=\sup_{0\le\tau\le 1-t}\textrm{E}\big[\exp\big(Z^{t,x}_{t+\tau}\big)\big]. \tag{11}$$
The second formulation instead uses ideas originally contained in [Reference Jaillet, Lamberton and Lapeyre16]. In particular, for any fixed $t\in[0, 1]$ and any $({\mathcal{F}}_s)_{s\ge t}$ -stopping time $\tau\in[0, 1-t]$ , we can define an $(\hat{{\mathcal{F}}}_s)_{0\leq s\leq 1}$ -stopping time $\theta\in[0, 1]$ such that $\tau=\theta(1-t)$ and $\hat{{\mathcal{F}}}_s={\mathcal{F}}_{t+s(1-t)}$ . In addition to this, notice that
Therefore, problem (11) (hence problem (7)) can be rewritten as
$$v(t,x)=\sup_{0\le\theta\le 1}\textrm{E}\Big[\exp\Big((1-\theta)x+\sqrt{1-t}\,(1-\theta)B_{\frac{\theta}{1-\theta}}\Big)\Big], \tag{12}$$
where B is a standard Brownian motion.
This last formulation of the problem has the advantage that the domain of admissible stopping times $\theta$ is independent of the initial time t.
Remark 1. There is no loss of generality in our choice of a pinning time $T=1$ and a pinning point $\alpha=0$ . We could equivalently choose a generic pinning time $T>t\geq 0$ and a generic pinning point $\alpha\in{\mathbb{R}}$ and consider the dynamics
$$dX_s=\frac{\alpha-X_s}{T-s}\,ds+dW_s,\qquad s\in[t,T),\qquad X_t=x.$$
Then, the analysis in the next sections would remain valid up to obvious tweaks.
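A quick simulation, assuming the generic-bridge dynamics $dX_s=\frac{\alpha-X_s}{T-s}\,ds+dW_s$ suggested by the remark, confirms the pinning at $\alpha$ as $s\to T$ :

```python
import numpy as np

# Euler simulation of the generic bridge dX = (alpha - X)/(T - s) ds + dW,
# stopped one step before the singularity at s = T; X concentrates at alpha.
rng = np.random.default_rng(3)
T, alpha = 2.0, 1.5
n_paths, n_steps = 5000, 2000
dt = T / n_steps

x = np.zeros(n_paths)
s = 0.0
for _ in range(n_steps - 1):
    x += (alpha - x) / (T - s) * dt + rng.normal(0.0, np.sqrt(dt), n_paths)
    s += dt

print(round(float(x.mean()), 2), round(float(x.std()), 3))
```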
3. Continuity of the value function and existence of a boundary
In this section we prove some properties of the value function, including its continuity, and derive the existence of a unique optimal stopping boundary. It follows immediately from (5) that the value function is non-negative and uniformly bounded on compact sets. In particular, we have
$$0\le v(t,x)\le c_1\,{\textrm{e}}^{|x|},\qquad (t,x)\in[0, 1]\times{\mathbb{R}}, \tag{13}$$
where $c_1>0$ is given by (6).
Proposition 1. The map $x\mapsto v(t,x)$ is convex and non-decreasing. Moreover, for any compact set $K\subset {\mathbb{R}}$ there exists $L_K>0$ such that
Proof. The convexity of $x\mapsto v(t,x)$ follows from the linearity of $x\mapsto Z^{t,x}_s$ (see (3)), the convexity of the map $x\mapsto {\textrm{e}}^x$ , and the well-known inequality $\sup(a+b)\le \sup a+\sup b$ .
Monotonicity can be easily deduced from, e.g., the explicit dependence of (12) on $x\in{\mathbb{R}}$ . As for the Lipschitz continuity, the claim is trivial for $t=1$ since $v(1,x)=\textrm{e}^x$ . For the remaining cases, fix $t\in[0, 1)$ and $y\geq x$ . Denote $\tau_y\coloneqq\tau^*_{t,y}$ ; then, by the monotonicity of $v(t,\,\cdot\,)$ , the fact that $\tau_y$ is sub-optimal for v(t, x), and simple estimates, we obtain
Hence, the claim follows with $L_K\coloneqq c_1 \max_{x\in K}\textrm{e}^{|x|}$ . □
Next, we show that the value function is locally Lipschitz in time on $[0, 1)\times{\mathbb{R}}$ . However, it fails to be continuous at $\{1\}\times(-\infty,0)$ .
Proposition 2. For any $T<1$ and any $0\le t_1<t_2\le T$ , we have
with $c_2>0$ as in (6). Moreover,
$$\lim_{t\to 1}v(t,x)={\textrm{e}}^x=v(1,x)\qquad\text{for }x\ge 0, \tag{15}$$
and
$$\liminf_{t\to 1}v(t,x)\ge 1>{\textrm{e}}^x=v(1,x)\qquad\text{for }x<0. \tag{16}$$
Proof. For the proof of (14) we will refer to the problem formulation in (12). Fix $0\leq t_1<t_2\leq T<1$ and let $\theta_2\coloneqq\theta^*_{t_2,x}$ be the optimal stopping time for $v(t_2,x)$ . Then, given that $\theta_2$ is admissible and sub-optimal for the problem with value $v(t_1,x)$ , we have
Now, setting $\theta_1\coloneqq\theta^*_{t_1,x}$ we notice that $\theta_1$ is admissible and sub-optimal for the problem with value $v(t_2,x)$ . Then, arguments as above give
which, combined with (17), implies (14).
Finally, we show (15) and (16). Notice first that $v(1,x)=\textrm{e}^x$ and $v(t,x)\geq {\textrm{e}}^x$ for $t\in[0, 1)$ . Pick $x\geq 0$ , then by (11) we have ${\textrm{e}}^x\leq v(t,x)\leq {\textrm{e}}^x\textrm{E}\left[e^{S_{1-t}}\right]$ , which implies (15) by dominated convergence and using that $S_{1-t}\to0$ as $t\to 1$ . If $x<0$ instead, the sub-optimal strategy $\tau=1-t$ gives $v(t,x)\geq 1$ . Hence, $\liminf_{t\to 1} v(t,x)\geq 1> {\textrm{e}}^x=v(1,x)$ as in (16). □
As a corollary of the two propositions just stated, we have that ${\mathcal{C}}$ is an open set. Combining this fact with the martingale property (in ${\mathcal{C}}$ ) of the value function, we obtain that $v\in C^{1,2}({\mathcal{C}})$ and it solves the free boundary problem (see, e.g., arguments as in the proof of Theorem 7.7 in Chapter 2, Section 7 of [Reference Karatzas and Shreve18])
$$\partial_t v(t,x)-\frac{x}{1-t}\,\partial_x v(t,x)+\frac{1}{2}\,\partial_{xx} v(t,x)=0\quad\text{for }(t,x)\in{\mathcal{C}},\qquad v(t,x)={\textrm{e}}^x\quad\text{for }(t,x)\in{\mathcal{D}}, \tag{18}$$
where $\partial_t$ , $\partial_x$ , and $\partial_{xx}$ denote the time derivative, the first spatial derivative, and the second spatial derivative, respectively.
For future reference, we also denote by ${\mathcal{L}}$ the second-order differential operator associated with X. That is,
$$({\mathcal{L}} f)(t,x)\coloneqq-\frac{x}{1-t}\,\partial_x f(t,x)+\frac{1}{2}\,\partial_{xx} f(t,x).$$
3.1. Existence of an optimal boundary
In order to prove the existence of an optimal boundary it is convenient to perform a change of measure in our problem formulation (7). In particular, using the integral form of (1) (upon setting $B_\tau\coloneqq W_{t+\tau}-W_t$ ), we have
where
defines a new equivalent probability measure ${\widetilde{\textrm{P}}}$ on ${(\Omega,{\mathcal{F}})}$ , with associated expectation ${\widetilde{\textrm{E}}}$ . Under ${\widetilde{\textrm{P}}}$ , we have
$$dX^{t,x}_s=\Big(1-\frac{X^{t,x}_s}{1-s}\Big)\,ds+d\widetilde W_s,\qquad s\in[t,1),$$
with $X^{t,x}_t=x$ , and with $\widetilde W_t\coloneqq W_t-t$ defining a ${\widetilde{\textrm{P}}}$ -Brownian motion by Girsanov’s theorem.
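Assuming the change of measure is implemented by the standard exponential martingale ${\textrm{e}}^{W_1-1/2}$ associated with a unit drift (an assumption of this sketch, consistent with $\widetilde W_t=W_t-t$ ), its effect can be checked by reweighting simulated paths:

```python
import numpy as np

# Girsanov sanity check: reweighting paths by exp(W_1 - 1/2) (the assumed
# density of P-tilde w.r.t. P) gives W_t mean t, i.e. W_t - t is centred.
rng = np.random.default_rng(4)
n_paths, n_steps = 50000, 100
dt = 1.0 / n_steps

w = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
density = np.exp(w[:, -1] - 0.5)            # mean 1 under P
w_half = w[:, n_steps // 2 - 1]             # W at t = 0.5
tilted_mean = float(np.mean(density * w_half))
print(round(float(density.mean()), 2), round(tilted_mean, 2))   # ~1.0 and ~0.5
```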
Thanks to this transformation of the expected payoff, it is clear that solving problem (7) is equivalent to solving
$$\tilde{v}(t,x)\coloneqq\sup_{0\le\tau\le 1-t}{\widetilde{\textrm{E}}}_{t,x}\bigg[\exp\bigg(\int_0^\tau\Big(\frac{1}{2}-\frac{X_{t+s}}{1-t-s}\Big)\,ds\bigg)\bigg]. \tag{20}$$
Notice that, indeed, $v(t,x)={\textrm{e}}^x\tilde{v}(t,x)$ implies that
Moreover, since V is a $\textrm{P}$ -super-martingale and $V^*$ is a $\textrm{P}$ -martingale then, as a consequence of Girsanov’s theorem, the process $\widetilde{V}\coloneqq(\widetilde{V}_t)_{t\in[0, 1]}$ defined as
is a ${\widetilde{\textrm{P}}}$ -super-martingale and $\widetilde V^*\coloneqq(\widetilde{V}_{t\wedge\tau^*})_{t\in[0, 1]}$ is a ${\widetilde{\textrm{P}}}$ -martingale, with $\tau^*$ as in (10).
Using this formulation, we can easily obtain the next result.
Proposition 3. There exists a function $b\,:\,[0, 1]\to{\mathbb{R}}_{+} {\cup \{+\infty\}}$ such that
$${\mathcal{D}}=\{(t,x)\in[0, 1]\times{\mathbb{R}}\,:\,x\ge b(t)\}. \tag{22}$$
Proof. Thanks to the pathwise uniqueness of the Brownian bridge, it is clear that for any $x\leq x'$ we have, $\textrm{P}$ -a.s. (hence also ${\widetilde{\textrm{P}}}$ -a.s.),
$$X^{t,x}_s\le X^{t,x'}_s\qquad\text{for all }s\in[t,1].$$
Using such a comparison principle and (20), it is easy to show that $x\mapsto \tilde{v}(t,x)$ is non-increasing. This means, in particular, that if $(t,x)\in{\mathcal{D}}$ , then $(t,x')\in{\mathcal{D}}$ for all $x'\ge x$ . Then, we define
$$b(t)\coloneqq\inf\{x\in{\mathbb{R}}\,:\,v(t,x)={\textrm{e}}^x\},\qquad t\in[0, 1], \tag{23}$$
and (22) holds by continuity of the value function. For future reference, notice that (23) and (15)–(16) also give $b(1)=0$ .
It remains to show that $b(t)\ge 0$ for all $t\in[0, 1]$ . By choosing the stopping rule $\tau=1-t$ , one has $v(t,x)\geq 1>{\textrm{e}}^x$ for $x<0$ and any $t\in[0, 1)$ . Hence,
and the claim follows. □
As a straightforward consequence of the proposition above and (10), we have
$$\tau^*_{t,x}=\inf\{s\in[0, 1-t]\,:\,X^{t,x}_{t+s}\ge b(t+s)\}. \tag{24}$$
4. Regularity of the optimal boundary
In this section we show that the optimal boundary is monotonic, continuous, and bounded. We will then, in the next section, use these properties to derive smoothness of the value function across the optimal boundary.
By an application of Dynkin’s formula we know that, given any initial condition $(t,x)\in[0, 1)\times{\mathbb{R}}$ , any stopping time $\tau\in[0, 1-t]$ , and a small $\delta>0$ we have
This shows that immediate stopping can never be optimal inside the set
$${\mathcal{Q}}\coloneqq\Big\{(t,x)\in[0, 1)\times{\mathbb{R}}\,:\,x<\frac{1-t}{2}\Big\}, \tag{25}$$
and so ${\mathcal{Q}}\subseteq{\mathcal{C}}$ .
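For completeness, the computation behind this claim is one line: applying the operator ${\mathcal{L}}$ to the gain function gives
$${\mathcal{L}}\,{\textrm{e}}^x=-\frac{x}{1-t}\,{\textrm{e}}^x+\frac{1}{2}\,{\textrm{e}}^x={\textrm{e}}^x\Big(\frac{1}{2}-\frac{x}{1-t}\Big),$$
which is strictly positive precisely when $x<(1-t)/2$ . Since $\partial_t{\textrm{e}}^x=0$ , Dynkin's formula then shows that waiting a short time $\delta$ strictly improves on the immediate gain ${\textrm{e}}^x$ at such points.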
The next result, concerning monotonicity of the optimal boundary, is crucial for the subsequent analysis of the stopping set and of the value function. Monotonicity of optimal boundaries is relatively easy to establish in optimal stopping problems when the underlying diffusion is time homogeneous and the gain function is independent of time. In our case, the latter condition holds but our diffusion is time dependent, hence new ideas are needed in the proof of the theorem below. We also remark that, while in some stopping problems of a Brownian bridge (see, e.g., [Reference Ekström and Wanntorp11]) it is possible to rely upon a time change in order to formulate an auxiliary equivalent stopping problem for a time-homogeneous diffusion (see [Reference Pedersen and Peskir21]), this is not the case here, due to the exponential nature of the gain function.
Theorem 1. The optimal boundary $t\mapsto b(t)$ is non-increasing on [0, 1].
Proof. It is sufficient to show that, for any fixed $x\in{\mathbb{R}}$ , the map $t\mapsto v(t,x)$ is non-increasing on [0, 1). Indeed, the latter implies monotonicity of the boundary on [0, 1] by definition (23) and using that $b(t)\geq 0$ for all $t\in[0, 1)$ and $b(1)=0$ .
Recalling (18) and using the convexity of $x\mapsto v(t,x)$ , we obtain
and, in particular,
thanks to the fact that ${\mathcal{Q}}\subseteq {\mathcal{C}}$ (see (25)) and $\partial_x v\ge 0$ in ${\mathcal{C}}$ (Proposition 1).
Notice that if $(t,x)\in {\mathcal{D}}\setminus\partial{\mathcal{C}}$ then $v(t,x)={\textrm{e}}^x$ and $\partial_t v(t,x)=0$ . Since $t\mapsto v(t,x)$ is continuous on [0, 1), it only remains to prove that $\partial_t v(t,x)\leq 0$ for $(t,x)\in {\mathcal{C}}$ with $x>0$ . For that, we proceed in two steps.
Step 1. (Property of $t\mapsto X^{t,x}$ ). Consider $(t,x)\in {\mathcal{C}}$ with $x>0$ and $0<{\varepsilon}\leq t<1$ , for some ${\varepsilon}>0$ . For $s\in[0, 1-t]$ we denote
Since (t, x) is fixed, we simplify the notation and set $Y^{\varepsilon}_{t+s}\coloneqq Y^{t,x;{\varepsilon}}_{t+s}$ , for $s\in[0, 1-t]$ . Next, for some small $\delta>0$ , we let $t_\delta\coloneqq(1-t-\delta)>0$ and $\rho_\delta\coloneqq t_\delta\wedge\tau^0$ , where $\tau^0\coloneqq\tau^0_{t,x}\coloneqq\inf\{u\in[0, 1-t]\,:\,X_{t+u}^{t,x}\leq 0\}$ . Then, using the integral form of (1), for an arbitrary $s\in[0, 1-t]$ we have, $\textrm{P}$ -a.s.,
Let $[x]^+\coloneqq\max\{0,x\}$ . Since $Y^{\varepsilon}$ is a continuous process of bounded variation and $Y^{\varepsilon}_0=0$ , we have
where the final inequality follows from (27), upon observing that $X^{t,x}_{t+u}\geq 0$ for all $u\leq\rho_\delta$ . Then, $Y^{{\varepsilon}}_{t+s\wedge \rho_\delta}\leq 0$ for all $s\in[0, 1-t]$ . Furthermore, letting $\delta\to0$ , we obtain, by continuity of paths,
Hence, the process $X^{t,x}$ hits zero before the process $X^{t-{\varepsilon},x}$ does.
Step 2. ( $\partial_t v(t,x)\le 0$ ). Fix $(t,x)\in{\mathcal{C}}$ with $x>0$ . Using the same notation as in Step 1 above, let $\sigma\coloneqq\tau^*_{t,x}\wedge \tau^0_{t,x}$ . By the (super-)martingale property of the value function, noticing that $\tau^*$ is optimal in v(t, x) and sub-optimal in $v(t-{\varepsilon},x)$ , we have
Recalling (28), on the event $\{\tau^*\leq\tau^0\}\cap\{\tau^*<1-t\}$ we have $X^{t-{\varepsilon},x}_{t-{\varepsilon}+\tau^*}\geq X^{t,x}_{t+\tau^*}$ , and on the event $\{\sigma=1-t\}$ we have that $X^{t-{\varepsilon},x}_{1-{\varepsilon}}\geq X^{t,x}_{1}$ . Moreover, $x\mapsto v(t,x)$ is non-decreasing (Proposition 1). Thus, combining these facts with (29), we obtain
where the final inequality uses (26) and the fact that $\tau^0<1-t$ .
Finally, dividing both sides of (30) by ${\varepsilon}$ and letting ${\varepsilon}\to 0$ , we obtain $\partial_t v(t,x)\le 0$ as needed. □
It is well known in optimal stopping theory that monotonicity of the boundary leads to its right continuity (or left continuity). In our case we have a simple corollary.
Corollary 1. The boundary is right continuous, whenever finite.
Proof. Let $t\in[0, 1)$ be such that $b(t)<+\infty$ . Consider a sequence $(t_n)_{n\in {\mathbb{N}}}$ such that $t_n\downarrow t$ as $n\to\infty$ . By monotonicity of b and (22), we have that $b(t_n)<\infty$ and $(t_n,b(t_n))\in {\mathcal{D}}$ for all $n\in {\mathbb{N}}$ . Since ${\mathcal{D}}$ is a closed set and $(t_n,b(t_n))\to (t,b(t+))$ , then also $(t,b(t+))\in {\mathcal{D}}$ (the right limit $b(t+)$ exists by monotonicity). Hence, $b(t+)\geq b(t)$ (see (22)). However, by monotonicity $b(t)\geq b(t+)$ , which leads to $b(t)=b(t+)$ . □
We can now show that the optimal boundary is continuous and bounded on [0, 1].
Proposition 4. The optimal boundary $t\mapsto b(t)$ is continuous on [0, 1] and we have
$$\sup_{t\in[0, 1]}b(t)=b(0)<+\infty. \tag{31}$$
The proof of Proposition 4 relies upon four lemmas. First we state and prove those lemmas and then we prove the proposition.
Lemma 1. For any $t\in[0, 1)$ we have ${\mathcal{D}}\cap ([t,1)\times {\mathbb{R}})\neq\varnothing$ .
Proof. Suppose by contradiction that this is not true and there exists $t\in[0, 1)$ such that ${\mathcal{D}}\cap ([t,1)\times {\mathbb{R}})=\varnothing$ . Then $\tau^*_{t',x}=1-t'$ , $\textrm{P}$ -a.s. for all $(t',x)\in[t,1)\times{\mathbb{R}}$ , which implies $v(t',x)=1$ . This, however, leads to a contradiction since immediate stopping gives $v(t',x)\ge {\textrm{e}}^x>1$ for $x>0$ and any $t'\in[t,1)$ . □
Notice that the lemma above implies that for any $t_1\in[0, 1)$ there exists $t_2\in(t_1,1)$ such that $b(t_2)<+\infty$ . This fact will be used in the next lemma.
Lemma 2. The boundary satisfies $b(t)<+\infty$ for all $t\in(0, 1]$ .
Proof. By contradiction, let us assume that there is $t\in(0, 1)$ such that $b(t)=+\infty$ . Then, thanks to Lemma 1 and Corollary 1 we can find $t'\in(t,1)$ such that $0\leq b(t') \eqqcolon b_0<+\infty$ and $(t,t')\times{\mathbb{R}}\subseteq{\mathcal{C}}$ . Let $\sigma_0\coloneqq\inf\{s\in [0, 1-t)\,:\,X_{t+s}^{t,x}\leq b_0\}\wedge (t'-t)$ ; then, recalling $\tau^*_{t,x}$ as in (24), we immediately see that $\textrm{P}(\tau^*_{t,x}\ge \sigma_0)=1$ . Using the martingale property of the value function (see (21)), we obtain
where we have used continuity of paths and the fact that on $\{\sigma_0=t'-t\}$ it must be $X_{t'}\ge b(t')=b_0$ , ${\widetilde{\textrm{P}}}_{t,x}$ -a.s.
Moreover, since $X_{t+s}^{t,x}\geq b_0$ for $s\leq \sigma_0$ , we have
where in the second inequality we have used (13) and $\tilde{v}(t,x)={\textrm{e}}^{-x}v(t,x)$ . Now, we let $x\to\infty$ and notice that
so that the first term on the right-hand side of (32) goes to zero. Similarly, given that $\lim_{x\to\infty} X^{t,x}_{t+s}=+\infty$ for any $s\in[0,t'-t]$ , the second term goes to zero as well by the reverse Fatou lemma. Then, recalling that $\tilde v\ge 1$ , we reach the contradiction
It follows that $b(t)<+\infty$ for all $t\in(0, 1]$ since, by definition, $b(1)=0$ . □
Lemma 3. We have $b(0)<+\infty$ .
Proof. Consider an auxiliary problem where the Brownian bridge is pinned at time $1+h$ , for some $h>0$ , and the time horizon of the optimisation is $1+h$ . That is, let us set
where $\widetilde{X}$ is a Brownian bridge (2) pinned at time $1+h$ .
By the same argument as in Section 2, it follows that $\mathsf{Law}(\widetilde{X}^{t,x})=\mathsf{Law}(\widetilde{Z}^{t,x})$ , where
Thus,
and, since $\mathsf{Law}(Z^{t,x}_s,\,s\in[t,1])=\mathsf{Law}(\widetilde{Z}^{t+h,x}_{s+h},\,s\in[t,1])$ (compare (11) with (33)), we also have that
By the same arguments as for the original problem, we obtain that there exists a non-increasing, right-continuous optimal boundary $t\mapsto b^h(t)$ such that
Moreover, since the gain function ${\textrm{e}}^x$ does not depend on time, using (34) we obtain
In particular, $b(0)=b^h(h)$ and $b^h(h)<+\infty$ by applying the result in Lemma 2 to the auxiliary problem. □
Using ideas as in [Reference De Angelis4], we can also prove left continuity of the optimal boundary.
Lemma 4. The optimal boundary $t\mapsto b(t)$ is left continuous.
Proof. We first prove that the boundary is left continuous for all $t\in(0, 1)$ and then that its left limit at $t=1$ is zero, that is $b(1-)=0=b(1)$ .
Suppose, by contradiction, that there exists $t_0\in(0, 1)$ such that $b(t_0-)>b(t_0)$ and consider an interval $[x_1,x_2]\subset (b(t_0),b(t_0-))$ . By monotonicity of b, we have that $[0,t_0)\times[x_1,x_2]\subset {\mathcal{C}}$ . Now, pick an arbitrary, non-negative $\varphi\in{\mathcal{C}}_c^{\infty}([x_1,x_2])$ . Since (18) holds in $[0,t_0)\times[x_1,x_2]$ , then, for any $t<t_0$ , we have
where for the inequality we have used $\partial_{t}v\le 0$ (see the proof of Theorem 1) and in the final equality we have applied integration by parts and used the adjoint operator
Taking limits as $t\to t_0$ and using the dominated convergence theorem, we obtain
where we have used that $v(t_0,y)={\textrm{e}}^y$ and integration by parts in the final equality.
Finally, recalling that $x_1\geq b(t_0)>\frac{1-t_0}{2}$ , we see that (36) leads to a contradiction, because its right-hand side is strictly negative (recall that $\varphi$ is an arbitrary non-negative test function).
In order to prove that $b(1-)=b(1)=0$ , we need a slight modification of the argument above. In particular, suppose by contradiction that $b(1-)>0$ and consider an interval $[x_1,x_2]\subset (0,b(1-))$ . Then, replacing $\varphi$ in (35) with $\tilde{\varphi}(t,x)\coloneqq(1-t)\varphi(x)$ , and using the same arguments with $t_0=1$ , we reach a contradiction, i.e.
We are now able to prove Proposition 4.
Proof of Proposition 4. The proof of (31) follows immediately from Lemmas 2 and 3. Right continuity of the boundary follows from Corollary 1 and (31), whereas left continuity follows from Lemma 4. Thus, the optimal boundary is bounded and continuous on [0, 1]. □
5. Regularity of the value function and integral equations
Thanks to the monotonicity of the optimal boundary and to the law of iterated logarithm (combined with (3)), it is easy to see that, for any $(t,x)\in[0, 1)\times{\mathbb{R}}$ , we have $\tau^*_{t,x}=\tau^{\prime}_{t,x}$ , $\textrm{P}$ -a.s., where
$$\tau^{\prime}_{t,x}\coloneqq\inf\{s\in[0, 1-t]\,:\,X^{t,x}_{t+s}>b(t+s)\}. \tag{37}$$
That is, the first time the process reaches the optimal boundary it also goes strictly above it. (A proof of this claim can be found, e.g., in Lemma 5.1 of [Reference De Angelis and Ekström5]).
Moreover, combining (37) with continuity of the optimal boundary, we deduce that
where $\text{int}({\mathcal{D}})={\mathcal{D}}\setminus\partial{\mathcal{C}}$ is the interior of the stopping set. In particular, since $\tau_{t,x}^*=0$ $\textrm{P}$ -a.s. for any $(t,x)\in\partial{\mathcal{C}}$ , by its definition (10), this implies that $\tau^{\prime}_{t,x}=0$ $\textrm{P}$ -a.s. as well for $(t,x)\in\partial{\mathcal{C}}$ . This means that the boundary $\partial{\mathcal{C}}$ is regular for the interior of the stopping set in the sense of diffusions (see, e.g., [Reference De Angelis and Peskir7]).
It is therefore possible to prove (see, e.g., Corollary 6 in [Reference De Angelis and Peskir7] and Proposition 5.2 in [Reference De Angelis and Ekström5]) that for any $(t_0,x_0)\in\partial{\mathcal{C}}$ (i.e. $x_0 =b(t_0)$ ) and any sequence $(t_n,x_n)_{n\ge 1}\subseteq{\mathcal{C}}$ converging to $(t_0,x_0)$ as $n\to\infty$ , we have
Now we can use this property of the optimal stopping time and some related ideas from [Reference De Angelis and Peskir7] to establish $C^1$ regularity of the value function.
First, we give a lemma concerning the spatial derivative of v.
Lemma 5. For all $(t,x)\in([0, 1]\times{\mathbb{R}})\setminus\partial{\mathcal{C}}$ we have
$$\partial_x v(t,x)=\textrm{E}\bigg[\frac{1-t-\tau^*_{t,x}}{1-t}\,\exp\big(Z^{t,x}_{t+\tau^*_{t,x}}\big)\bigg]. \tag{39}$$
Hence, we also have that
$$0\le\partial_x v(t,x)\le v(t,x). \tag{40}$$
Proof. Recall that $v\in C^{1,2}({\mathcal{C}})$ (see the comment before (18)). Moreover, $v(t,x)={\textrm{e}}^x$ on ${\mathcal{D}}$ and $\partial_x v(t,x)={\textrm{e}}^x$ on ${\mathcal{D}}\setminus\partial{\mathcal{C}}$ as needed in (39). It remains to show that (39) holds for all $(t,x)\in {\mathcal{C}}$ .
Fix $(t,x)\in {\mathcal{C}}$ and take $\varepsilon>0$ . Recall the problem formulation in (11) with the explicit expression for Z (see (3)) and recall that we use the notation $\tau^*\coloneqq\tau^*_{t,x}$ (as in (10)). Since $\tau^*$ is admissible but sub-optimal for the problem with value $v(t,x+{\varepsilon})$ , we have
Hence, by the dominated convergence theorem and recalling that v is differentiable at $(t,x)\in{\mathcal{C}}$ , we obtain
By the same arguments, we also have that
which implies that
Combining (41) and (42) we obtain (39).
Now, the inequality in (40) follows easily by comparison of (39) and (11). □
Theorem 2. $v\in C^1([0, 1)\times{\mathbb{R}})$ .
Proof. We know from (18) that $\partial_x v$ and $\partial_t v$ exist and are continuous in ${\mathcal{C}}$ . Moreover, $v(t,x)={\textrm{e}}^x$ on ${\mathcal{D}}$ implies $\partial_x v(t,x)={\textrm{e}}^x$ and $\partial_t v(t,x)=0$ for $(t,x)\in{\mathcal{D}}\setminus\partial{\mathcal{C}}$ . Then, it remains to prove that $\partial_x v$ and $\partial_t v$ are continuous across the boundary $\partial{\mathcal{C}}$ . We do this in two steps.
Step 1. (Continuity of $\partial_x v$ ). Fix $(t_0,x_0)\in\partial{\mathcal{C}}$ with $t_0<1$ and recall (39). Then, for any sequence $(t_n,x_n)_{n\ge 1}\subseteq{\mathcal{C}}$ converging to $(t_0,x_0)$ as $n\to\infty$ , we can use the dominated convergence theorem, continuity of paths, and (38) to obtain
Step 2. (Continuity of $\partial_t v$ ). Let $(t,x)\in{\mathcal{C}}$ and $0<\varepsilon<1-t$ . Then, repeating the arguments used in (17) and recalling that $t\mapsto v(t,x)$ is non-increasing on [0, 1) (see the proof of Theorem 1), we obtain
where $\theta^*\coloneqq\theta^*_{t,x}$ is the optimal stopping time for v(t, x) (see (12)).
Dividing all terms above by ${\varepsilon}$ and letting ${\varepsilon}\to 0$ , we find that
The inequalities in (43) hold if we replace (t, x) by $(t_n,x_n)$ and $\theta^*$ by $\theta^*_n\coloneqq\theta^*_{t_n,x_n}$ , where the sequence $(t_n,x_n)_{n\ge 1}\subseteq{\mathcal{C}}$ converges to $(t_0,x_0)\in\partial{\mathcal{C}}$ as $n\to\infty$ .
Now we aim at letting $n\to\infty$ . Notice that (38) and the definition of $\theta$ in (12) imply that
Thus, using the dominated convergence theorem, we obtain
Theorem 2 has a simple corollary which shows the regularity of $\partial_{xx} v$ across the boundary. In particular, $\partial_{xx}v$ is continuous except for a (possible) jump across the optimal boundary.
Corollary 2. The second derivative $\partial_{xx}v$ is continuous on $([0, 1]\times{\mathbb{R}})\setminus\partial{\mathcal{C}}$ . Moreover, for any $(t_0,x_0)\in\partial{\mathcal{C}}$ with $t_0<1$ and any sequence $(t_n,x_n)_{n\ge 1}\subseteq{\mathcal{C}}$ converging to $(t_0,x_0)$ as $n\to\infty$ , we have
Proof. Since $v(t,x)={\textrm{e}}^x$ in ${\mathcal{D}}$ , we have $\partial_{xx}v(t,x)={\textrm{e}}^x$ in ${\mathcal{D}}\setminus\partial{\mathcal{C}}$ , which is continuous. Moreover, $\partial_{xx}v \in C\big({\mathcal{C}}\big)$ , and so $\partial_{xx}v$ is continuous on $([0, 1]\times{\mathbb{R}})\setminus\partial{\mathcal{C}}$ .
To show (44), it is sufficient to take limits in (18), that is,
where we used Theorem 2 to arrive at the final expression. The inequality in (44) follows from the fact that ${\mathcal{Q}}\subseteq{\mathcal{C}}$ (see (25)). □
5.1 Integral equation for the optimal boundary
The regularity of the value function proved in the previous section allows us to derive an integral equation for the optimal boundary. This follows well-known steps (see, e.g., [Reference Peskir and Shiryaev23]) which we repeat briefly below.
Theorem 3. For all $(t,x)\in[0, 1)\times{\mathbb{R}}$ , the value function has the representation
Moreover, the optimal boundary $t\mapsto b(t)$ is the unique continuous solution of the following non-linear integral equation for all $t\in[0, 1]$ :
with $b(1)=0$ and $b(t)\ge (1-t)/2$ .
Proof. Thanks to Theorem 2 and Corollary 2, we can find a mollifying sequence $(v_n)_{n\geq 0}\subseteq C^\infty([0, 1)\times{\mathbb{R}})$ for v such that (see Section 7.2 in [Reference Gilbarg and Trudinger14])
as $n\to\infty$ , uniformly on compact sets, and
We let $(K_m)_{m\geq 0}$ be a sequence of compact sets increasing to $[0, 1-\varepsilon]\times{\mathbb{R}}$ , and for $t<1-\varepsilon$ we define
By an application of Itô’s formula to $v_n$ and noticing that $\textrm{P}(X^{t,x}_{t+s}=b(t+s))=0$ for $s\in[0, 1-t)$ , we obtain
Now, since $(t+s,X_{t+s})_{s\le\tau_m}$ lives in a compact set, letting $n\to \infty$ and applying the dominated convergence theorem, by (47) and (48) we obtain
where in the second equality we have used (18) and the fact that $v(t,x)={\textrm{e}}^x$ in ${\mathcal{D}}$ .
Notice that $\tau_m\to 1-t-\varepsilon$ as $m\to \infty$ and the integrand on the right-hand side of the above expression is non-negative. Recalling (13) and letting $m\to\infty$ , we can apply the dominated convergence theorem and the monotone convergence theorem (for the integral term) in order to obtain
By the same arguments, letting $\varepsilon\to 0$ we obtain (45), i.e.
where in the second line we have used that, for $t_n<1$ ,
which follows from the problem formulation in (11) with $\tau_n^*\coloneqq\tau^*_{t_n,x_n}$ .
Now the integral equation (46) is obtained by setting $(t,x)=(t,b(t))$ in (45). The uniqueness of the solution to such an equation follows a standard proof in four steps that was originally developed in [Reference Peskir22]. The same proof has since been repeated in numerous examples, some of which are available in [Reference Peskir and Shiryaev23]. Therefore, here we only give a brief sketch of the key arguments of the proof.
Suppose there exists another continuous function $c\,:\,[0, 1]\to{\mathbb{R}}_+$ with $c(1)=0$ and such that, for all $t\in[0, 1]$ , $c(t)\geq (1-t)/2$ and
Then, define the function
i.e. the analogue of (45) but replacing $b(\!\cdot\!)$ therein with $c(\!\cdot\!)$ . Since $c(\!\cdot\!)$ is assumed continuous and the Brownian bridge admits a continuous transition density, it is not hard to show that $v^c$ is continuous on $[0, 1)\times {\mathbb{R}}$ . Moreover, it is clear that $v^c(1,x)=1$ for $x\in{\mathbb{R}}$ and, by (49), $v^c(t,c(t))={\textrm{e}}^{c(t)}$ for $t\in[0, 1]$ .
The main observation in the proof is that the process
is a $\textrm{P}_{t,x}$ -martingale for any $(t,x)\in[0, 1)\times {\mathbb{R}}$ and, moreover, it is a continuous martingale for $s\in[0, 1-t)$ . Using this martingale property and following [Reference Peskir22], one obtains, in order: (i) $v^c(t,x)={\textrm{e}}^x$ for all $x\geq c(t)$ with $t\in[0, 1]$ , and (ii) $v(t,x)\geq v^c(t,x)$ for all $(t,x)\in[0, 1]\times{\mathbb{R}}$ . Using (i) and (ii), the continuity of $b(\!\cdot\!)$ and $c(\!\cdot\!)$ , and, again, the martingale property of (50), one also obtains: (iii) $c(t)\leq b(t)$ for all $t\in[0, 1]$ , and (iv) $c(t)\geq b(t)$ for all $t\in[0, 1]$ . Hence, $c(t)=b(t)$ for all $t\in[0, 1]$ . (It is shown in [Reference De Angelis and Kitapbayev6] that continuity of the boundaries can be relaxed to right/left continuity.)□
6. Numerical results
In order to solve the non-linear Volterra integral equation (46) numerically, we apply a Picard scheme that we learned from [Reference Detemple, Kitapbayev and Zhang8].
First, notice that equation (46) can be rewritten as
where $p(t,x,t+s,y)\coloneqq\partial_y\textrm{P}(X^{t,x}_{t+s}\le y)$ is the transition density of the Brownian bridge.
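This density is Gaussian: under $\textrm{P}_{t,x}$ , the bridge value $X^{t,x}_{t+s}$ (pinned at zero at time 1) has mean $x(1-t-s)/(1-t)$ and variance $s(1-t-s)/(1-t)$ , consistent with the quantities $\beta$ and $\alpha$ introduced below. A minimal Python sketch, with a numerical sanity check (function names are ours, not from any library):

```python
import math

def bridge_mean_var(t, x, s):
    """Mean and variance of the Brownian bridge (pinned at 0 at time 1)
    at time t+s, given X_t = x: these are beta(x,t,s) and alpha(t,s)."""
    beta = x * (1.0 - t - s) / (1.0 - t)
    alpha = s * (1.0 - t - s) / (1.0 - t)
    return beta, alpha

def p(t, x, s, y):
    """Transition density p(t, x, t+s, y) of the Brownian bridge."""
    beta, alpha = bridge_mean_var(t, x, s)
    return math.exp(-(y - beta) ** 2 / (2.0 * alpha)) / math.sqrt(2.0 * math.pi * alpha)

# sanity check: the density integrates to one (simple Riemann sum)
h = 12.0 / 20000
total = sum(p(0.2, 0.5, 0.3, -6.0 + k * h) for k in range(20001)) * h
print(total)  # close to 1
```
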
Let $\Pi\coloneqq\{0=t_0<t_1<\cdots<t_n=1\}$ be an equispaced partition of [0, 1] with mesh $h=1/n$ . The algorithm is initialised by setting $b^{(0)}(t_j)\coloneqq 0$ for all $j=0, 1,\ldots, n$ . Now, let $b^{(k)}(t_j)$ denote, for $j=0, 1,\ldots, n$ , the values of the boundary obtained after the kth iteration. Then, the values for the $(k+1)$ th iteration are computed, for all $j=0,\ldots, n$ , as
In particular, the inner integral with respect to ${\textrm{d}} y$ can be computed explicitly. Indeed, noticing that
with $\beta(x,t,s)\coloneqq x(1-t-s)/(1-t)$ and $\alpha(t,s)\coloneqq s(1-t-s)/(1-t)$ , we can now substitute this expression inside the integral. Then, tedious but straightforward algebra allows us to reduce the exponent of ${\textrm{e}}^yp(t_j,b^{(k)}(t_j),t_j\!+\!s,y)$ to a perfect square plus a term independent of y. Thus, properties of the Gaussian distribution give
where $\Phi$ is the cumulative distribution function of a standard normal distribution, and
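The closed form rests on the standard Gaussian identity obtained by completing the square: for $Y\sim N(\beta,\alpha)$ one has $\textrm{E}\big[{\textrm{e}}^{Y}1_{\{Y\ge c\}}\big]={\textrm{e}}^{\beta+\alpha/2}\,\Phi\big((\beta+\alpha-c)/\sqrt{\alpha}\big)$ . The following Python sketch verifies this generic identity numerically (illustrative parameter values only; the paper's specific arguments of $\Phi$ are in the display above and are not reproduced here):

```python
import math

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_exp_moment(beta, alpha, c):
    """E[e^Y 1_{Y >= c}] for Y ~ N(beta, alpha), by completing the square:
    equals e^{beta + alpha/2} * Phi((beta + alpha - c) / sqrt(alpha))."""
    return math.exp(beta + alpha / 2.0) * Phi((beta + alpha - c) / math.sqrt(alpha))

# numerical check by midpoint quadrature (illustrative parameter values)
beta, alpha, c = 0.3, 0.2, 0.1
n, hi = 200000, 8.0
h = (hi - c) / n
riemann = sum(
    math.exp(y) * math.exp(-(y - beta) ** 2 / (2.0 * alpha))
    / math.sqrt(2.0 * math.pi * alpha)
    for y in (c + (k + 0.5) * h for k in range(n))
) * h
print(riemann, truncated_exp_moment(beta, alpha, c))
```

The two printed values agree to high accuracy, which is the mechanism behind the explicit evaluation of the inner integral in ${\textrm{d}} y$ .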
The integral with respect to the time variable (i.e. the one in ${\textrm{d}} s$ ) is computed by a standard quadrature method. Hence, (51) reduces to
where each $b^{(k)}(t_j+mh+\tfrac{h}{2})$ is computed by interpolation and we use the convention $\sum_{m=0}^{-1}=0$ for $j=n$ . Finally,
The algorithm stops when the numerical error $e_k\coloneqq \max_{j=0,\ldots,n} |b^{(k)}(t_j)-b^{(k-1)}(t_j)|$ satisfies the tolerance condition $e_k<{\varepsilon}$ , for some ${\varepsilon}>0$ . A numerical approximation of the optimal boundary is presented in Figure 1.
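Since the right-hand side of (51) appears only in the displays above, the Python sketch below illustrates just the structure of the scheme: the zero initialisation, the midpoint-quadrature update with linear interpolation at the points $t_j+mh+h/2$ , and the stopping rule $e_k<\varepsilon$ . The functional F_toy is a stand-in for (51) — a toy Volterra equation with known solution $b(t)=(1-t)/2$ — and is not the paper's equation:

```python
import math

def picard_boundary(F, n=100, tol=1e-9, max_iter=200):
    """Picard scheme on the equispaced grid t_j = j*h, h = 1/n.
    F(j, b, h) is the discretised right-hand side of the integral
    equation, evaluated at node j for the current iterate b."""
    h = 1.0 / n
    b = [0.0] * (n + 1)                      # initialisation b^(0) = 0
    for _ in range(max_iter):
        b_new = [F(j, b, h) for j in range(n + 1)]
        e_k = max(abs(u - w) for u, w in zip(b_new, b))
        b = b_new
        if e_k < tol:                        # tolerance condition e_k < eps
            break
    return b

def F_toy(j, b, h):
    """Toy stand-in for (51): b(t) = (1-t)/4 + (1/(1-t)) * int_t^1 b(s) ds,
    whose solution is b(t) = (1-t)/2.  The integral is computed by midpoint
    quadrature, with b at the midpoints t_j + m*h + h/2 obtained by linear
    interpolation of the grid values (as described in the text)."""
    n = len(b) - 1
    t = j * h
    if j == n:
        return 0.0                           # b(1) = 0; empty-sum convention
    integral = sum(0.5 * (b[j + m] + b[j + m + 1]) * h for m in range(n - j))
    return (1.0 - t) / 4.0 + integral / (1.0 - t)

b = picard_boundary(F_toy, n=50)
print(max(abs(b[j] - (1.0 - j / 50) / 2.0) for j in range(51)))  # small
```

For this toy equation the iterates converge geometrically to the exact solution, mirroring the monotone decay of $e_k$ observed in Figure 2 for the actual scheme.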
While a rigorous proof of the convergence of the scheme seems difficult and falls outside the scope of this work, in Figure 2 we show that the numerical error $e_k$ converges to zero as the number of iterations increases. Moreover, the convergence is monotone, which results in good stability of the scheme.
Finally, in Figure 3 we plot the value function as a surface in the (t, x)-plane using (45). It is interesting to observe that, as predicted by the theory, the value function exhibits a jump at $\{1\}\times(-\infty,0)$ .
Remark 2. As noted in Remark 1, we could consider a Brownian bridge with a generic pinning time $T>t$ and nothing would change in our analysis. However, it may be interesting to observe that as $T\to\infty$ the Brownian bridge converges (in law) to a Brownian motion W. Thus, we also expect that the stopping problem (7) converges to the problem of stopping the exponential of a Brownian motion over an infinite time horizon. Since $t\mapsto \exp(x+W_t)$ is a sub-martingale, the optimal stopping rule is to never stop. This heuristic is confirmed by Figure 4, where we observe numerically that the continuation set expands as T increases and, in the limit as $T\to +\infty$ , the stopping set disappears.
Remark 3. Traditionally, integral equations as in (46) are solved by discretisation of the integral with respect to time and a backward procedure, starting from the terminal time (see, e.g., [Reference Peskir and Shiryaev23, Chapter VII, Section 27, pp. 432–433] for details in the case of the Asian option or [Reference Peskir and Shiryaev23, Chapter VIII, Section 30, p. 475] for another example; this method was developed in the seminal paper [Reference Kim19] and later extended). To the best of our knowledge, a rigorous proof of convergence for this ‘traditional’ numerical scheme is not available.
At each time step, that scheme must find the root of a highly non-linear algebraic equation, making the procedure slower than the Picard scheme that we implement, which requires no root finding (see Figure 5 for a comparison).
Another possibility is to use finite difference methods to solve the free boundary problem in (18) and (19) directly. The finite difference method, however, requires discretisation of both time and space (whereas we only discretise time), which introduces discretisation errors in both variables and generally leads to slower convergence. Moreover, in our case the coefficient associated with the first-order partial derivative $\partial_x v$ is discontinuous at $t=1$ , which causes additional difficulties.
Remark 4. After numerically solving equation (46) and plotting the boundary function as in Figure 1, we investigated whether a suitable fit for the boundary could be found. It is known that in the stopping problem with linear payoff $\textrm{E}[X_\tau]$ and pinning time $T=1$ , the optimal boundary can be found explicitly and takes the form $b(t)=B\sqrt{1-t}$ (see [Reference Shepp24]). Motivated by this result, we considered candidate boundaries of the form $b_T(t)=A_T(1-\exp(B_T\sqrt{T-t}))$ , where T is the pinning time of the Brownian bridge and $A_T$ and $B_T$ are parameters to be determined.
Using Matlab's Curve Fitting Toolbox to fit our ansatz to the boundaries obtained from the integral equations, we obtain an excellent fit for $T=1,5,10$ . The quality of the fit deteriorates slightly for larger T (e.g. for $T=20$ ). The results are illustrated in Figure 6. While these tests suggest that our problem might be amenable to an explicit solution, the question is more complex than in the linear payoff case (and its extensions in [Reference Ekström and Wanntorp11]) and remains open. The key difficulties are that (i) we must determine two parameters rather than one, and (ii) we do not have a good guess for the value function that would allow us to transform the free boundary problem in (18) and (19) into a solvable ordinary differential equation. Indeed, in the linear case one uses the identity in law
with $\widetilde{Z}^{x}_s\coloneqq (x+\sqrt{1-t}W_{s})/(1+s)$ , and obtains
With the exponential payoff, the same identity does not provide any useful insight.
A different approach based on finding parameters for which the guessed boundary $b_T(t)$ solves the integral equation (46) seems even harder.
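To illustrate the fitting step outside Matlab, the pure-Python sketch below fits the ansatz $b_T(t)=A_T(1-\exp(B_T\sqrt{T-t}))$ by least squares. Since the paper's computed boundary is not reproduced here, the data are synthetic, generated from the ansatz itself with hypothetical parameter values; the point is the method: $A_T$ enters linearly and can be profiled out, leaving a one-dimensional search over $B_T$ :

```python
import math

T = 1.0
A_true, B_true = 1.2, -0.8                  # hypothetical "true" parameters

def g(t, B):
    """Shape function: the ansatz is b_T(t) = A * g(t, B)."""
    return 1.0 - math.exp(B * math.sqrt(T - t))

ts = [j / 50 for j in range(50)]            # grid on [0, T)
data = [A_true * g(t, B_true) for t in ts]  # synthetic boundary values

def best_A(B):
    """Closed-form least-squares A for fixed B (the model is linear in A)."""
    gs = [g(t, B) for t in ts]
    return sum(d * gi for d, gi in zip(data, gs)) / sum(gi * gi for gi in gs)

def profiled_sse(B):
    A = best_A(B)
    return sum((A * g(t, B) - d) ** 2 for t, d in zip(ts, data))

# ternary search over B (the profiled SSE is unimodal for this family)
lo, hi = -2.0, -0.1
for _ in range(100):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if profiled_sse(m1) < profiled_sse(m2):
        hi = m2
    else:
        lo = m1
B_fit = 0.5 * (lo + hi)
A_fit = best_A(B_fit)
print(A_fit, B_fit)
```

Profiling out the linear parameter reduces the two-parameter fit to a scalar search, which is one way to mitigate difficulty (i) above, although it does not address the lack of a candidate value function in (ii).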
Acknowledgements
T. De Angelis gratefully acknowledges support via EPSRC grant EP/R021201/1, ‘A probabilistic toolkit to study regularity of free boundaries in stochastic optimal control’. A. Milazzo gratefully acknowledges support from Imperial College’s Doctoral Training Centre in Stochastic Analysis and Mathematical Finance. Parts of this work were carried out while A. Milazzo was visiting the School of Mathematics at the University of Leeds. Both authors thank the University of Leeds for their hospitality. Finally, we thank an anonymous referee whose useful suggestions improved the quality of the paper, and in particular Section 6.