
Steady-state solutions for a reaction–diffusion equation with Robin boundary conditions: Application to the control of dengue vectors

Published online by Cambridge University Press:  18 September 2023

Luis Almeida
Affiliation:
Laboratory Jacques-Louis Lions UMR7598, Sorbonne University, CNRS, Paris, 75005, France
Pierre-Alexandre Bliman
Affiliation:
INRIA, Laboratory Jacques-Louis Lions UMR7598, Sorbonne University, CNRS, Paris, 75005, France
Nga Nguyen*
Affiliation:
INRIA, Laboratory Jacques-Louis Lions UMR7598, Sorbonne University, CNRS, Paris, 75005, France LAGA, CNRS UMR 7539, Institut Galilee, University Sorbonne Paris Nord, Villetaneuse, 93430, France
Nicolas Vauchelet
Affiliation:
LAGA, CNRS UMR 7539, Institut Galilee, University Sorbonne Paris Nord, Villetaneuse, 93430, France
*
Corresponding author: Nga Nguyen; Email: thiquynhnga.nguyen@math.univ-paris13.fr

Abstract

In this paper, we investigate an initial-boundary value problem of a reaction–diffusion equation in a bounded domain with a Robin boundary condition and introduce some particular parameters to consider the non-zero flux on the boundary. This problem arises in the study of mosquito populations under the intervention of the population replacement method, where the boundary condition takes into account the inflow and outflow of individuals through the boundary. Using phase plane analysis, the present paper studies the existence and properties of non-constant steady-state solutions depending on several parameters. Then, we prove some sufficient conditions for their stability. We show that the long-time efficiency of this control method depends strongly on the size of the treated zone and the migration rate. To illustrate these theoretical results, we provide some numerical simulations in the framework of mosquito population control.

Type
Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

The study of scalar reaction–diffusion equations $\partial _t p - \Delta p = f(p)$ with a given non-linearity $f$ has a long history. For suitable choices of $f$ , this equation can be used to model some phenomena in biology such as population dynamics (see e.g. [Reference Fife4, Reference Murray16, Reference Smoller and Wasserman25]). To investigate the structure of the steady-state solutions, the semilinear elliptic equation $\Delta p + f(p) = 0$ has been studied extensively.

Many results are known about the multiplicity of positive solutions for the parametrised version $\Delta p + \lambda f(p) = 0$ in a bounded domain. Here, $\lambda$ is a positive parameter. Various works investigated the number of solutions and the global bifurcation diagrams of this equation for different classes of the non-linearity $f$ and boundary conditions. For Dirichlet problems, in [Reference Lions15], Lions used many ‘bifurcation diagrams’ to describe the solution set of this equation with several kinds of non-linearities $f$ and gave nearly optimal multiplicity results in each case. The exact number of solutions and the precise bifurcation diagrams with cubic-like non-linearities $f$ were given in the works of Korman et al. [Reference Korman, Li and Ouyang13, Reference Korman, Li and Ouyang14], Ouyang and Shi [Reference Ouyang and Shi18], and references therein. In these works, the authors developed a global bifurcation approach to obtain the exact multiplicity of positive solutions. For the one-dimensional case with a two-point boundary, Korman gave a survey of this approach in [Reference Korman12]. Another approach was given by Smoller and Wasserman in [Reference Smoller24] using phase plane analysis and the time-mapping method. This method was later refined and applied in the works of Wang [Reference Wang28, Reference Wang and Kazarinoff29]. While the bifurcation approach is convenient for more general cubic non-linearities $f$ , the phase plane method is more intuitive and computationally simpler.

Although many results have been obtained on the number of solutions for Dirichlet problems, relatively little seems to be known for other kinds of boundary conditions. For the Neumann problem, the works of Smoller and Wasserman [Reference Smoller24], Schaaf [Reference Schaaf21], and Korman [Reference Korman11] dealt with cubic-like non-linearities $f$ in one dimension. Recently, more work has been done on Robin boundary conditions (see e.g. [Reference Daners3, Reference Shi and Li22, Reference Zhang, Li and Xue33]), Neumann–Robin boundary conditions (see e.g. [Reference Tsai, Wang and Huang27]), and even non-linear boundary conditions (see e.g. [Reference Goddard and Shivaji6, Reference Gordon, Ko and Shivaji7] and references therein). However, those works focused only on other types of non-linearities, such as positive or monotone $f$ . An analogous problem with an advection term was studied in [Reference Wang, Shi and Wang31, Reference Wang and Shi30] for cubic-like non-linearities, but these works used a homogeneous non-symmetric Robin boundary condition to characterise an open or closed environment boundary. To the best of our knowledge, the study of inhomogeneous symmetric Robin problems with cubic-like non-linearities remains quite open.

In this paper, we study the steady-state solutions with values in $[0,1]$ of a reaction–diffusion equation in one dimension with inhomogeneous Robin boundary conditions:

(1.1a) \begin{align} \partial _t p^0 - \partial _{xx} p^0 = f(p^0), & \hspace{0.5 cm} & (t,x) \in (0,T) \times \Omega, \end{align}
(1.1b) \begin{align} \frac{\partial p^0}{\partial \nu } = -D(p^0 - p^{\mathrm{ext}}), & \hspace{0.5 cm}& (t,x) \in (0,T) \times \partial \Omega, \end{align}
(1.1c) \begin{align} p^0(0,x) = p^{\mathrm{init}}(x), &\hspace{0.5 cm} & x \in \Omega, \end{align}

where $\Omega = (\!-\!L,L)$ is a bounded domain in $\mathbb{R}$ and $T \gt 0$ is a given final time. The steady-state solutions satisfy the following elliptic boundary value problem:

(1.2a) \begin{align} -p''(x) = f(p(x)), & \hspace{0.5 cm} & x \in (\!-\!L,L), \end{align}
(1.2b) \begin{align} p'(L) = -D(p(L) - p^{\mathrm{ext}}), & \hspace{0.5 cm}& \end{align}
(1.2c) \begin{align} -p'(\!-\!L) = -D(p(\!-\!L) - p^{\mathrm{ext}}), & \hspace{0.5 cm}& \end{align}

where $L \gt 0$ , $D \gt 0$ and $p^{\mathrm{ext}} \in (0,1)$ are constants. The reaction term $f \;:\; [0,1] \rightarrow \mathbb{R}$ is of class $\mathcal{C}^1$ , with three roots $\{0, \theta, 1\}$ where $0 \lt \theta \lt 1$ (see Figure 1(a)). The dynamics of (1.1) are determined by the structure of the steady-state solutions, which satisfy (1.2). Note that the change of variable $y = x/L$ turns (1.2) into $p''(y) + L^2 f(p(y)) = 0$ on $(\!-\!1,1)$ with parameter $L^2$ . Thus, we study problem (1.2) with three parameters $L\gt 0, D \gt 0$ and $p^{\mathrm{ext}} \in (0,1)$ .
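To make this rescaling explicit, set $q(y) \;:\!=\; p(Ly)$ for $y \in (\!-\!1,1)$ ; a routine computation (spelled out here for convenience) gives

\begin{align*} q''(y) = L^2\, p''(Ly) = -L^2 f(q(y)), \qquad q'(\pm 1) = L\, p'(\pm L), \end{align*}

so that the boundary conditions (1.2b)–(1.2c) become $q'(1) = -LD(q(1) - p^{\mathrm{ext}})$ and $-q'(\!-\!1) = -LD(q(\!-\!1) - p^{\mathrm{ext}})$ . The interior equation thus carries the parameter $L^2$ , while the Robin coefficient rescales to $LD$ .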

Figure 1. Sketch of the functions $f$ and $F$ .

The Robin boundary condition considered in (1.1) and (1.2) means that the flow across the boundary points is proportional to the difference between the surrounding density and the density just inside the interval. Here, we assume that $p^{\mathrm{ext}}$ depends neither on the space variable $x$ nor on the time variable $t$ .

The existence of classical solutions for such problems was studied widely in the theory of elliptic and parabolic differential equations (see, e.g. [Reference Pao19]). In our problem, due to difficulties caused by the inhomogeneous Robin boundary condition and the variety of parameters, we cannot obtain the exact multiplicity of solutions. However, our main results in Theorems 2.2 and 2.3 show how the existence of solutions and their ‘shapes’ depend on the parameters $D, p^{\mathrm{ext}}$ and $L$ . The ideas of phase plane analysis and the time-mapping method from [Reference Smoller24] are extended to prove these results.

Since the solutions of (1.2) are equilibria of (1.1), their stability and instability are the next problems that we want to investigate. The stability analysis of the non-constant steady-state solutions is a delicate problem, especially when the system under consideration has multiple steady-state solutions. In Theorem 2.5, we use the principle of linearised stability to give some sufficient conditions for stability. Finally, as a consequence of these theorems, we obtain Corollary 2.1 which provides a comprehensive result about existence and stability of the steady-state solutions when the size $L$ is small.

The main biological application of our results is the control of dengue vectors. Aedes mosquitoes are vectors of many vector-borne diseases, including dengue. Recently, a biological control method using an endosymbiotic bacterium called Wolbachia has attracted a lot of attention. Wolbachia helps reduce the vectorial capacity of mosquitoes and can be passed to the next generation. Massive release of mosquitoes carrying this bacterium in the field is thus considered as a possible method to replace wild mosquitoes and prevent dengue epidemics. Reaction–diffusion equations have been used in previous works to model this replacement strategy (see [Reference Barton and Turelli1, Reference Chan and Kim2, Reference Strugarek and Vauchelet26]). In this work, we introduce the Robin boundary condition to describe the migration of mosquitoes through the boundary. Since inflows of wild mosquitoes and outflows of mosquitoes carrying Wolbachia may affect the efficiency of the method, the study of existence and stability of steady-state solutions depending on the parameters $D, p^{\mathrm{ext}}$ and $L$ as in (1.1) and (1.2) will provide the information necessary to maintain the success of the control method using Wolbachia under the effects of migration.

Problem (1.1) arises often in the study of population dynamics, where $p^0$ is usually considered as the relative proportion of one population when two populations are in competition. This is why we focus only on solutions with values in the interval $[0,1]$ . Problem (1.1) is derived from the idea in [Reference Strugarek and Vauchelet26], where the authors reduce a reaction–diffusion system modelling the competition between two populations $n_1$ and $n_2$ to a scalar equation on the proportion $p = \frac{n_1}{n_1 + n_2}$ . More precisely, they consider two populations with a very high fecundity rate scaled by a parameter $\epsilon \gt 0$ and propose the following system depending on $\epsilon$ for $t \gt 0, x \in \mathbb{R}^d$ :

(1.3a) \begin{align} \partial _t n_1^\epsilon - \Delta n_1^\epsilon = n_1^\epsilon\; f_1(n_1^\epsilon,n_2^\epsilon ), \end{align}
(1.3b) \begin{align} \partial _t n_2^\epsilon - \Delta n_2^\epsilon = n_2^\epsilon\; f_2(n_1^\epsilon,n_2^\epsilon ). \end{align}

The authors showed that, under appropriate conditions, the proportion $p^\epsilon = \frac{n_1^\epsilon }{n_1^\epsilon + n_2^\epsilon }$ converges strongly in $L^2(0,T;\;L^2(\mathbb{R}^d))$ , and weakly in $L^2(0,T;\;H^1(\mathbb{R}^d))$ , to the solution $p^0$ of the scalar reaction–diffusion equation $\partial _t p^0 - \Delta p^0 = f(p^0)$ when $\epsilon \rightarrow 0$ , where $f$ can be given explicitly in terms of $f_1, f_2$ . Now, in order to describe and study the migration phenomenon, we consider system (1.3) in a bounded domain $\Omega$ and introduce boundary conditions characterising the inflow and outflow of individuals as follows:

(1.4a) \begin{align} \frac{\partial n_1^\epsilon }{\partial \nu } = -D(n_1^\epsilon - n_1^{\mathrm{ext},\epsilon }), & \hspace{0.5cm} & (t,x) \in (0,T) \times \partial \Omega, \end{align}
(1.4b) \begin{align} \frac{\partial n_2^\epsilon }{\partial \nu } = -D(n_2^\epsilon - n_2^{\mathrm{ext},\epsilon }), & \hspace{0.5cm} & (t,x) \in (0,T) \times \partial \Omega, \end{align}

where $n_1^{\mathrm{ext},\epsilon }, n_2^{\mathrm{ext},\epsilon }$ depend on $\epsilon$ but do not depend on time $t$ and position $x$ . Equation (1.4) models the tendency of the population to cross the boundary, with rates proportional to the difference between the surrounding density and the density just inside $\Omega$ . Reusing the idea in [Reference Strugarek and Vauchelet26], we prove in Section A that the proportion $p^\epsilon = \frac{n_1^\epsilon }{n_1^\epsilon + n_2^\epsilon }$ converges on any bounded time domain to the solution of (1.1) when $\epsilon$ goes to zero. Hence, we can reduce the system (1.3) and (1.4) to a simpler setting as in (1.1). The proof is based on a relative compactness argument that was also used in previous works about singular limits (e.g. [Reference Hilhorst, Iida, Mimura and Ninomiya8, Reference Hilhorst, Martin and Mimura9, Reference Strugarek and Vauchelet26]), but here, the use of the trace theorem is necessary to prove the limit on the boundary.

The outline of this work is the following. In the next section, we present the setting of the problem and the main results. In Section 3, we provide detailed proof of these results. Section 4 is devoted to an application to the biological control of mosquitoes. We also present numerical simulations to illustrate the theoretical results we obtained. Section A is devoted to proving the asymptotic limit of a 2-by-2 reaction–diffusion system when the reaction rate goes to infinity. Finally, we end this article with a conclusion and perspectives section.

2. Results on the steady-state solutions

2.1 Setting of the problem

In one-dimensional space, consider the system (1.1) in a bounded domain $\Omega = (\!-\!L,L) \subset \mathbb{R}$ . Let $D \gt 0$ and $p^{\mathrm{ext}} \in (0,1)$ be constants, and let $p^{\mathrm{init}}(x) \in [0,1]$ for all $x \in (\!-\!L,L)$ . The reaction term $f$ satisfies the following assumptions:

Assumption 2.1 (bistability). Function $f \;:\; [0,1] \rightarrow \mathbb{R}$ is of class $\mathcal{C}^1([0,1])$ and $f(0) = f(\theta ) = f(1) = 0$ with $\theta \in (0,1)$ , $f(q) \lt 0$ for all $q \in (0,\theta )$ , and $f(q) \gt 0$ for all $q \in (\theta,1)$ . Moreover, $\displaystyle \int _{0}^{1} f(s)ds \gt 0$ .

Assumption 2.2 (convexity). There exist $\alpha _1 \in (0,\theta )$ and $\alpha _2 \in (\theta,1)$ such that $f'(\alpha _1) = f'(\alpha _2) = 0$ , $f'(q) \lt 0$ for any $q \in [0,\alpha _1) \cup (\alpha _2,1]$ , and $f'(q) \gt 0$ for $q \in (\alpha _1,\alpha _2)$ . Moreover, $f$ is convex on $(0,\alpha _1)$ and concave on $(\alpha _2,1)$ .

A function $f$ satisfying Assumptions 2.1 and 2.2 is illustrated in Figure 1(a).

Remark 2.1.

  1. Due to Assumption 2.1 and the fact that $p^{\mathrm{ext}} \in (0,1), p^{\mathrm{init}}(x) \in [0,1]$ for any $x$ , one has that $0$ and $1$ are respectively sub- and super-solutions of problem (1.1). Since $f$ is Lipschitz continuous on $(0,1)$ , by Theorem 4.1, Section 2.4 in [Reference Pao19], problem (1.1) has a unique solution $p^0$ in $\mathcal{C}^{1,2}((0,T]\times \Omega )$ with $0 \leq p^0(t,x) \leq 1$ for all $x \in (\!-\!L,L), t \gt 0$ .

  2. Again by Assumption 2.1, $0$ and $1$ are respectively sub- and super-solutions of (1.2). For fixed values of $D, p^{\mathrm{ext}}$ and $L$ , the same method as in [Reference Pao19] yields the existence of a $\mathcal{C}^2$ solution of (1.2) with values in $[0,1]$ . However, Assumptions 2.1 and 2.2 on $f$ are not enough to conclude uniqueness. In the following section, we prove that the stationary problem (1.2) may have multiple solutions and that their existence depends on the values of the parameters.

The following proposition shows that solutions of system (1.2) always have at least one extreme value in $(\!-\!L, L)$ .

Proposition 2.1. For any $p^{\mathrm{ext}} \in (0,1)$ with $p^{\mathrm{ext}} \neq \theta$ , system (1.2) does not have any non-constant monotone solution on the whole interval $(\!-\!L, L)$ .

Proof. Assume that (1.2) admits an increasing solution $p$ on $(\!-\!L,L)$ (the case when $p$ is decreasing on $(\!-\!L,L)$ is analogous). Thus, we have $p'(x) \geq 0$ for all $x \in [\!-\!L,L]$ and $p(L) \gt p(\!-\!L)$ . So thanks to the boundary condition of (1.2), one has

\begin{equation*} D p^{\mathrm {ext}} = p'(L) + Dp(L) \geq Dp(L) \gt Dp(\!-\!L) \geq -p'(\!-\!L) + Dp(\!-\!L) = D p^{\mathrm {ext}}, \end{equation*}

which is impossible. Therefore, we can deduce that the solutions of system (1.2) always admit at least one local extremum on the open interval $(\!-\!L,L)$ .

To study system (1.2), we define function $F$ (see Figure 1(b)) as follows:

(2.1) \begin{equation} F(q) = \displaystyle \int _{0}^{q} f(s)ds, \end{equation}

then $F'(q) = f(q)$ and $F(0) = 0$ . From Assumption 2.1, $F$ attains its minimum at $q = \theta$ and local maxima at $q = 0$ and $q = 1$ . Since $\displaystyle \int _0^1 f(s) ds \gt 0$ , we have $F(1) \gt F(0)$ , which implies that $F(1) = \displaystyle \max _{[0,1]} F$ and $F(\theta ) = \displaystyle \min _{[0,1]}F$ . Moreover, $F(\theta ) \lt F(0)$ and $F$ is monotone on $(\theta,1)$ (since $F'(q) = f(q) \gt 0$ for any $q \in (\theta,1)$ ). Thus, there exists a unique value $\beta \in (\theta,1)$ such that

(2.2) \begin{equation} F(\beta ) = F(0) = 0. \end{equation}
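As a concrete illustration (not taken from the paper), $\beta$ can be computed numerically once $f$ is specified. The sketch below assumes the classical bistable cubic $f(q) = q(1-q)(q-\theta )$ with $\theta = 0.3$ , which satisfies Assumptions 2.1 and 2.2, and locates the root of $F$ in $(\theta,1)$ by bisection.

```python
# Locate beta in (theta, 1) with F(beta) = F(0) = 0, for an assumed
# model bistable cubic f(q) = q(1-q)(q-theta).
theta = 0.3  # theta < 1/2 guarantees int_0^1 f(s) ds = (1-2*theta)/12 > 0

def f(q):
    return q * (1.0 - q) * (q - theta)

def F(q):
    # Antiderivative of f with F(0) = 0:
    # F(q) = -q^4/4 + (1+theta) q^3/3 - theta q^2/2
    return -q**4 / 4.0 + (1.0 + theta) * q**3 / 3.0 - theta * q**2 / 2.0

def bisect(g, lo, hi, iters=200):
    # Standard bisection; assumes g(lo) and g(hi) have opposite signs.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# F < 0 just above theta and F(1) > 0, so the root beta lies in between.
beta = bisect(F, theta + 1e-9, 1.0)
```

For this cubic, $F(\beta ) = 0$ with $\beta \neq 0$ reduces to the quadratic $3\beta ^2 - 5.2\beta + 1.8 = 0$ , whose root in $(\theta,1)$ is $\beta \approx 0.478$ .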

The main results of the present work concern existence and stability of steady-state solutions of (1.1), that is, solutions of (1.2).

2.2 Existence of steady-state solutions

We first focus on two types of steady-state solutions, defined as follows:

Definition 2.1. Consider a steady-state solution $p(x)$ ,

$p$ is called a symmetric-decreasing (SD) solution when $p$ is symmetric on $(\!-\!L,L)$ with values in $[0,1]$ , decreasing on $(0,L)$ and $p'(0) = 0$ (see Figure 2(a)).

Figure 2. Sketch of the symmetric steady-state solutions $p$ .

Similarly, $p$ is called a symmetric-increasing (SI) solution when $p$ is symmetric on $(\!-\!L,L)$ with values in $[0,1]$ , increasing on $(0,L)$ and $p'(0) = 0$ (see Figure 2(b)).

Any solution which is either SD or SI is called a symmetric-monotone (SM) solution.

The following theorems present the main results on the existence of SM solutions depending on the parameters. For each value of $p^{\mathrm{ext}} \in (0,1)$ and $D \gt 0$ , we find the critical values of $L$ for which (1.2) admits solutions.

Theorem 2.2. In a bounded domain $\Omega = (\!-\!L,L) \subset \mathbb{R}$ , consider the stationary problem (1.2). Assume that the reaction term $f$ satisfies Assumptions 2.1 and 2.2. Then, there exist two functions:

(2.3) \begin{equation} \begin{array}{c@{\quad}r@{\quad}c@{\quad}l} M_{d}, M_i: & (0,1) \times (0,+\infty ) & \longrightarrow & [0,+\infty ], \\[5pt] & (p^{\mathrm{ext}},D) & \longmapsto & M_d(p^{\mathrm{ext}},D), M_i(p^{\mathrm{ext}},D), \end{array} \end{equation}

such that for any $p^{\mathrm{ext}} \in (0,1), D \gt 0$ , problem (1.2) admits at least one SD solution (resp., SI solution) if and only if $L \gt M_{d}(p^{\mathrm{ext}},D)$ (resp., $L \gt M_{i}(p^{\mathrm{ext}},D)$ ), and the values of these solutions are in $[p^{\mathrm{ext}}, 1]$ (resp., $[0,p^{\mathrm{ext}}]$ ). More precisely,

  1. If $0 \lt p^{\mathrm{ext}} \lt \theta$ , then for any $D \gt 0$ , $M_{i}(p^{\mathrm{ext}},D) = 0$ and $M_{d}(p^{\mathrm{ext}},D) \in (0,+\infty )$ . Moreover, if $p^{\mathrm{ext}} \leq \alpha _1$ , the SI solution is unique.

  2. If $\theta \lt p^{\mathrm{ext}} \lt 1$ , then for any $D \gt 0$ , $M_d(p^{\mathrm{ext}},D) = 0$ . If $\alpha _2 \leq p^{\mathrm{ext}}$ , the SD solution is unique. Moreover, consider $\beta$ as in (2.2),

    • if $p^{\mathrm{ext}} \leq \beta$ , then $M_i(p^{\mathrm{ext}},D) \in (0,+\infty )$ for any $D \gt 0$ ;

    • if $p^{\mathrm{ext}} \gt \beta$ , then there exists a constant $D_* \gt 0$ such that $M_i(p^{\mathrm{ext}},D) \in (0,+\infty )$ for any $D \lt D_*$ , and $M_i(p^{\mathrm{ext}}, D) = +\infty$ for $D \geq D_*$ .

  3. If $p^{\mathrm{ext}} = \theta$ , then $M_d(\theta, D) = M_i(\theta, D)= 0$ . Moreover, there exists a constant solution $p \equiv \theta$ .

In the statement above, $M_i = 0$ means that for any $L \gt 0$ , (1.2) always admits an SI solution, while $M_i = +\infty$ means that there is no SI solution even when $L$ is large. The same interpretation applies to $M_d$ .

Besides, problem (1.2) can also admit solutions that are neither SD nor SI. The following theorem provides an existence result for those solutions.

Table 1. The existence of steady-state solutions corresponding to values of parameters

Theorem 2.3. In a bounded domain $\Omega = (\!-\!L,L) \subset \mathbb{R}$ , consider the stationary problem (1.2). Assume that the reaction term $f$ satisfies Assumptions 2.1 and 2.2. Then, there exists a function:

(2.4) \begin{equation} \begin{array}{c@{\quad}r@{\quad}c@{\quad}l} M_*: & (0,1) \times (0,+\infty ) & \longrightarrow & [0,+\infty ], \\[5pt] & (p^{\mathrm{ext}},D) & \longmapsto & M_*(p^{\mathrm{ext}},D), \end{array} \end{equation}

such that for any $p^{\mathrm{ext}} \in (0,1), D \gt 0$ , problem (1.2) admits at least one solution which is not SM if and only if $L \geq M_{*}(p^{\mathrm{ext}},D)$ . Moreover,

  • If $p^{\mathrm{ext}} \leq \beta$ , then for any $D \gt 0$ , one has

    (2.5) \begin{equation} 0 \lt M_i(p^{\mathrm{ext}},D) + M_d(p^{\mathrm{ext}},D) \lt M_*(p^{\mathrm{ext}},D) \lt +\infty. \end{equation}
  • If $p^{\mathrm{ext}} \gt \beta$ , then for any $D \lt D_*$ , one has $0 \lt M_i(p^{\mathrm{ext}},D) \lt M_*(p^{\mathrm{ext}},D) \lt +\infty$ . Otherwise, for $D \geq D_*$ , $M_*(p^{\mathrm{ext}},D) = +\infty$ . Here, $D_*$ was defined in Theorem 2.2.

The construction of $M_i, M_d, M_*$ will be done in the proof in Section 3. The idea of the proof is based on a careful study of the phase portrait of (1.2). To make the results more reader-friendly, we present the types of steady-state solutions corresponding to different parameters in Table 1.

In the next section, we present a result about the stability and instability of steady-state solutions of (1.2).

2.3 Stability of steady-state solutions

The definitions of stability and instability used in the present work come from Lyapunov stability theory.

Definition 2.4. A steady-state solution $p(x)$ of (1.1) is called stable if for any constant $\epsilon \gt 0$ , there exists a constant $\delta \gt 0$ such that when $||p^{\mathrm{init}} - p||_{ \infty } \lt \delta$ , one has

(2.6) \begin{equation} ||p^0(t,\cdot ) - p||_{ \infty } \lt \epsilon, \quad \text{ for all } t \gt 0 \end{equation}

where $p^0(t,x)$ is the unique solution of (1.1). If, in addition,

(2.7) \begin{equation} \displaystyle \lim _{t \rightarrow \infty }||p^0(t,\cdot ) - p||_{ \infty } = 0, \end{equation}

then $p$ is called asymptotically stable. The steady-state solution $p$ is called unstable if it is not stable.

The following theorem provides sufficient conditions for the stability of steady-state solutions given in Section 2.2.

Theorem 2.5. In the bounded domain $\Omega = (\!-\!L,L) \subset \mathbb{R}$ , consider the problem (1.1) with the reaction term satisfying Assumptions 2.1 and 2.2. There exists a constant $\lambda _1 \in \left (0,\frac{\pi ^2}{4L^2} \right )$ such that for any steady-state solution $p$ of (1.1),

  • If $f'(p(x)) \gt \lambda _1$ for all $x \in (\!-\!L,L)$ , then $p$ is unstable.

  • If $f'(p(x)) \lt \lambda _1$ for all $x \in (\!-\!L,L)$ , then $p$ is asymptotically stable.

More precisely, $\lambda _1$ is the principal eigenvalue of the linear problem (2.8):

(2.8a) \begin{align} -\phi ''(x) = \lambda \phi (x), & \hspace{0.5 cm} & x \in (\!-\!L,L), \end{align}
(2.8b) \begin{align} \phi '(L) = -D\phi (L), & \hspace{0.5 cm}& \end{align}
(2.8c) \begin{align} \phi '(\!-\!L) = D\phi (\!-\!L), & \hspace{0.5 cm}& \end{align}

where $\lambda$ is an eigenvalue with associated eigenfunction $\phi$ . It may be proved that $\lambda _1$ is the smallest positive solution of the equation $\sqrt{\lambda }\tan{\left (L\sqrt{\lambda }\right )} = D$ (see more details in Section 3).
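As a numerical sketch (our own illustration, with assumed values $L = 1$ and $D = 1$ ), the characteristic equation $\sqrt{\lambda }\tan (L\sqrt{\lambda }) = D$ can be solved by bisection on $\left (0, \frac{\pi ^2}{4L^2}\right )$ , where its left-hand side increases from $0$ to $+\infty$ :

```python
import math

def principal_eigenvalue(L, D, iters=200):
    # Smallest positive root of sqrt(lam) * tan(L * sqrt(lam)) = D.
    # It lies in (0, (pi/(2L))^2): the left-hand side is increasing there,
    # vanishing at lam = 0 and blowing up as L*sqrt(lam) approaches pi/2.
    def g(lam):
        s = math.sqrt(lam)
        return s * math.tan(L * s) - D

    lo = 1e-12
    hi = (math.pi / (2.0 * L)) ** 2 * (1.0 - 1e-12)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam1 = principal_eigenvalue(L=1.0, D=1.0)
```

For $L = D = 1$ this gives $\lambda _1 \approx 0.7402$ , well inside $\left (0, \frac{\pi ^2}{4}\right ) \approx (0, 2.467)$ .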

Note that the first statement cannot apply if $\displaystyle \sup _{q \in (0,1)} f'(q) \leq \lambda _1$ . However, since $\lambda _1 \in \left (0,\frac{\pi ^2}{4L^2}\right )$ , the value of $\lambda _1$ approaches zero as $L$ grows, and the condition in the first statement becomes easier to satisfy.

Remark 2.2. By Assumption 2.2, $f'(q) \leq 0 \lt \lambda _1$ for all $q \in [0,\alpha _1] \cup [\alpha _2,1]$ , so the steady-state solutions with values smaller than $\alpha _1$ or larger than $\alpha _2$ are asymptotically stable.

As a consequence of Theorems 2.2, 2.3 and 2.5, the following important result provides complete information about the existence and stability of steady-state solutions in some special cases.

Corollary 2.1. In the bounded domain $\Omega = (\!-\!L, L) \subset \mathbb{R}$ , consider the problem (1.1) with the reaction term satisfying Assumptions 2.1 and 2.2. Then for any $D \gt 0$ , we have

  • If $p^{\mathrm{ext}} \leq \alpha _1$ , for any $L \gt 0$ , there exists exactly one SI steady-state solution $p$ and it is asymptotically stable. Moreover, if $L \lt M_d(p^{\mathrm{ext}},D)$ , then $p$ is the unique steady-state solution of (1.1).

  • If $p^{\mathrm{ext}} \geq \alpha _2$ , for any $L \gt 0$ , there exists exactly one SD steady-state solution $p$ and it is asymptotically stable. Moreover, if $L \lt M_i(p^{\mathrm{ext}},D)$ , then $p$ is the unique steady-state solution of (1.1).

Remark 2.3. This corollary gives us a comprehensive view of the long-time behaviour of solutions of (1.1) when the size $L$ of the domain is small. In this case, the unique steady-state solution $p$ is symmetric, monotone on each half of $\Omega$ and asymptotically stable. Its values will be close to $0$ if $p^{\mathrm{ext}}$ is small and close to $1$ if $p^{\mathrm{ext}}$ is large. We discuss an essential application of this result in Section 4.

3. Proof of the theorems

3.1 Proof of existence

In this section, we use phase plane analysis to prove the existence of both SM and non-SM steady-state solutions depending on the parameters. The studies of SD and SI solutions will be presented, respectively, in Sections 3.1.1 and 3.1.2. Then, using these results, we prove Theorem 2.2. The proof of Theorem 2.3 will be presented after that using the same technique.

First, we introduce the following function:

(3.1) \begin{equation} E(p,p') = \frac{(p')^2}{2} + F(p). \end{equation}

Since $\frac{d}{dx} E(p,p') = p'(p'' + f(p)) = 0$ , the quantity $E(p,p')$ is constant along any orbit of (1.2). From Proposition 2.1, there exists $x_0 \in (\!-\!L,L)$ such that $p'(x_0) = 0$ , thus one has

(3.2) \begin{equation} E(p(x_0),0) = E(p(x),p'(x)), \end{equation}

for all $x \in (\!-\!L,L)$ . Therefore, the relation between $p'$ and $p$ is as follows:

(3.3) \begin{equation} p' = \pm \sqrt{2F(p(x_0)) - 2F(p)}. \end{equation}

According to this relation, one has a phase plane as in Figure 3(a), in which the curves illustrate the relation between $p'(x)$ and $p(x)$ in (3.3) for different values of $p(x_0)$ . We can see that some curves do not end on the axis $p=0$ but wrap around the point $(\theta,0)$ . This is due to the fact that for any $p_1 \in [\theta,\beta ]$ , there exists a value $p_2 \in [0,\theta ]$ such that $F(p_1) = F(p_2)$ . Thus, if the curve passes through the point $(p_1,0)$ , it also passes through the point $(p_2,0)$ on the axis $p'=0$ . Moreover, such curves exist only if their intersection with the axis $p'=0$ has $p$ -coordinate less than or equal to $\beta$ . Besides, the two straight lines show the relation between $p'$ and $p$ at the boundary points. Solutions of (1.2) correspond to those orbits that connect the intersection of the curves with the line $p' = D(p-p^{\mathrm{ext}})$ to the intersection of the curves with the line $p' = -D(p-p^{\mathrm{ext}})$ .
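The conservation property underlying this phase portrait is easy to check numerically. The following sketch (our own illustration, with the assumed model cubic $f(q) = q(1-q)(q-\theta )$ , $\theta = 0.3$ ) integrates $p'' = -f(p)$ by a fixed-step RK4 scheme from a point $(p(x_0), 0)$ on a closed orbit around $(\theta, 0)$ and verifies that $E(p,p') = \frac{(p')^2}{2} + F(p)$ stays constant, in agreement with (3.2) and (3.3).

```python
# Check numerically that E(p, p') = (p')^2/2 + F(p) is conserved along
# orbits of p'' = -f(p), for an assumed model cubic f.
theta = 0.3

def f(q):
    return q * (1.0 - q) * (q - theta)

def F(q):
    # Antiderivative of f with F(0) = 0.
    return -q**4 / 4.0 + (1.0 + theta) * q**3 / 3.0 - theta * q**2 / 2.0

def rhs(p, v):
    # First-order system: p' = v, v' = -f(p).
    return v, -f(p)

# Start on the p-axis at (p(x0), 0), inside the region of closed orbits
# around the centre (theta, 0) (here F(0.4) < 0 = F(0)).
p, v = 0.4, 0.0
E0 = F(p)  # E(p(x0), 0)
h, steps = 0.005, 4000  # a few oscillation periods
max_drift = 0.0
for _ in range(steps):
    # Classical fourth-order Runge-Kutta step.
    k1p, k1v = rhs(p, v)
    k2p, k2v = rhs(p + 0.5 * h * k1p, v + 0.5 * h * k1v)
    k3p, k3v = rhs(p + 0.5 * h * k2p, v + 0.5 * h * k2v)
    k4p, k4v = rhs(p + h * k3p, v + h * k3v)
    p += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    max_drift = max(max_drift, abs(0.5 * v * v + F(p) - E0))
```

Along the whole trajectory, $|v|$ agrees with $\sqrt{2F(p(x_0)) - 2F(p)}$ up to the integrator's tolerance, which is exactly relation (3.3).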

Figure 3. Phase portraits of (1.2): straight lines illustrate the boundary conditions, and solid curves show relations between $p'$ and $p$ . Figure (a): curves $T_1, \ T_2$ and $T_3$ correspond to orbits of SD, SI and non-SM solutions, respectively. Figure (b): curve $T_4$ corresponds to an orbit of a non-SM solution.

In the phase plane in Figure 3(a), orbit $T_1$ describes an SD solution, while orbit $T_2$ corresponds to an SI solution. On the other hand, the solid curve $T_3$ shows the orbit of a steady-state solution that is not SM.

Remark 3.1. (Graphical interpretation of $D_*$ ) The SI solutions (see Figure 2(b)) have orbits like $T_2$ in Figure 3(a). This type of orbit exists only when the lines $p' = \pm D(p - p^{\mathrm{ext}})$ intersect the curves wrapping around the point $(\theta,0)$ . In the case when $p^{\mathrm{ext}} \gt \beta$ , the constant $D_* \gt 0$ in Theorem 2.2 is the slope of the tangent line to the curve passing through $(\beta,0)$ as in Figure 3(b). Hence, if $D \geq D_*$ , there exists no SI solution. We construct the value of $D_*$ explicitly in Proposition 3.2 below.

Next, we establish some relations between the solution $p$ and the parameters based on the phase portrait above. For any $x \gt x_0$ , if $p$ is monotone on $(x_0,x)$ , we can invert $x \mapsto p(x)$ into a function $p \mapsto X(p)$ satisfying $X'(p) = \frac{\pm 1}{\sqrt{2F(p(x_0)) - 2F(p)}}$ . Integrating this equation, we obtain

(3.4) \begin{equation} x - x_0 = \displaystyle \int _{p(x_0)}^{p(x)} \frac{ (\!-\!1)^k ds}{\sqrt{2F(p(x_0)) - 2F(s)}}, \end{equation}

where $k = 1$ if $p$ is decreasing and $k = 2$ if $p$ is increasing on $(x_0,x)$ . We can obtain the analogous formula for $x \lt x_0$ .

We first focus on SM solutions, for which $p'(0) = 0$ , and analyse the integral in (3.4) with $x = L, x_0 = 0$ . For any $p^{\mathrm{ext}} \in (0,1)$ , using (3.3), we have

(3.5) \begin{equation} F(p(0)) = F(p(L)) + \frac{1}{2} D^2 \left ( p(L) - p^{\mathrm{ext}}\right )^2 = G(p(L)), \end{equation}

for $F$ defined in (2.1) and

(3.6) \begin{equation} G(q) \;:\!=\; F(q) + \frac{1}{2} D^2 (q - p^{\mathrm{ext}} )^2, \end{equation}

and from (3.4) with $x=L, x_0 = 0$ , we have

(3.7) \begin{equation} L = \displaystyle \int _{p(0)}^{p(L)} \frac{ (\!-\!1)^k ds}{\sqrt{2F(p(0)) - 2F(s)}}, \end{equation}

where $k = 1$ if $p$ is decreasing on $(0,L)$ , $k = 2$ if $p$ is increasing on $(0,L)$ .

Thus, an SM solution of (1.2) exists if there exist values $p(L)$ and $p(0)$ satisfying (3.5) and (3.7). When such values exist, the value of $p(x)$ for any $x \in (\!-\!L,L)$ can be recovered from (3.4).
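This procedure can be carried out numerically. In the sketch below (our own computation, with the assumed model cubic $f(q) = q(1-q)(q-\theta )$ , $\theta = 0.3$ , and assumed values $p^{\mathrm{ext}} = 0.8$ , $D = 1$ ), we prescribe a trial boundary value $a = p(L) \in (p^{\mathrm{ext}},1)$ for an SD solution, solve $F(p(0)) = G(a)$ for $p(0) \in (a,1)$ as in (3.5), and evaluate the time-map integral (3.7) after a substitution that removes the inverse-square-root singularity at $p(0)$ .

```python
# For an SD solution with prescribed boundary value a = p(L), solve
# F(p0) = G(a) for p0 = p(0) in (a, 1), then evaluate the time map (3.7).
theta, p_ext, D = 0.3, 0.8, 1.0  # assumed example parameters
a = 0.85                         # trial boundary value p(L) in (p_ext, 1)

def f(q):
    return q * (1.0 - q) * (q - theta)

def F(q):
    # Antiderivative of f with F(0) = 0.
    return -q**4 / 4.0 + (1.0 + theta) * q**3 / 3.0 - theta * q**2 / 2.0

def G(q):
    # G(q) = F(q) + (D^2 / 2) (q - p_ext)^2, as in (3.6).
    return F(q) + 0.5 * D**2 * (q - p_ext) ** 2

# Solve F(p0) = G(a) for p0 in (a, 1) by bisection; F increases on (theta, 1).
target = G(a)
lo, hi = a, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if F(mid) < target:
        lo = mid
    else:
        hi = mid
p0 = 0.5 * (lo + hi)

# Time map (3.7): L = int_a^{p0} ds / sqrt(2 F(p0) - 2 F(s)).
# The substitution s = p0 - (p0 - a) w^2 removes the singularity at s = p0;
# the midpoint rule below never evaluates the endpoints.
n = 20000
Lval = 0.0
for i in range(n):
    w = (i + 0.5) / n
    s = p0 - (p0 - a) * w * w
    Lval += 2.0 * (p0 - a) * w / (2.0 * (F(p0) - F(s))) ** 0.5 / n
```

By construction, the resulting profile satisfies the Robin condition $p'(L) = -D(p(L) - p^{\mathrm{ext}})$ exactly, and the computed value $L$ is the half-length of the domain on which this SD solution exists.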

Before proving the existence of such values of $p(0)$ and $p(L)$ , we establish some useful properties of the function $G$ defined in (3.6): it is continuous on $[0,1]$ and $G(q)\geq F(q)$ for all $q \in [0,1]$ . Moreover, the following lemma shows that $G$ has a unique minimum point.

Lemma 3.1. For any $p^{\mathrm{ext}} \in (0,1)$ , there exists a unique value $\overline{q} \in (0,1)$ such that $G'(\overline{q}) = 0$ , $G'(q) \lt 0$ for all $q \in [0,\overline{q})$ and $G'(q) \gt 0$ for all $q \in (\overline{q},1]$ . Particularly, $G(\overline{q}) = \displaystyle \min _{[0,1]} G$ .

Proof. We have $ G'(q) = f(q) + D^2(q-p^{\mathrm{ext}})$ . We consider the following cases.

Case 1: When $p^{\mathrm{ext}} = \theta$ , we have $G'(p^{\mathrm{ext}}) = G'(\theta ) = f(\theta ) = 0, G'(q) \lt 0$ for all $q \in (0,\theta )$ and $G'(q) \gt 0$ for all $q \in (\theta,1)$ . Thus $\overline{q} = \theta = p^{\mathrm{ext}}$ .

Case 2: When $p^{\mathrm{ext}} \lt \theta$ , we have $G'(q) \lt 0$ for all $q \in [0,p^{\mathrm{ext}}]$ and $G'(q) \gt 0$ for all $q \in [\theta,1]$ . So there exists at least one value $\overline{q} \in (p^{\mathrm{ext}},\theta )$ such that $G'(\overline{q}) = 0$ .

For any $\overline{q} \in (p^{\mathrm{ext}},\theta )$ such that $G'(\overline{q}) = 0$ , we have $f(\overline{q}) + D^2 (\overline{q} - p^{\mathrm{ext}}) = 0$ so that $D^2 = -\frac{f(\overline{q})}{\overline{q}- p^{\mathrm{ext}}}$ . We can prove that $G''(\overline{q})$ is strictly positive. Indeed, from Assumption 2.2, we have that $\alpha _1$ is the unique value in $(0,\theta )$ such that $f'(\alpha _1) = 0$ , thus $f(\alpha _1) = \displaystyle \min _{[0,\theta ]} f \lt 0$ .

If $\alpha _1 \leq \overline{q} \lt \theta$ then $f'(\overline{q}) \geq 0$ . One has $ G''(\overline{q}) = f'(\overline{q}) + D^2 \gt 0$ .

If $p^{\mathrm{ext}} \lt \overline{q} \lt \alpha _1$ , due to the fact that $f$ is convex in $(0,\alpha _1)$ one has $f'(\overline{q}) \geq \frac{f(\overline{q}) - f(p^{\mathrm{ext}})}{\overline{q} - p^{\mathrm{ext}}}$ . Since $f(p^{\mathrm{ext}}) \lt 0$ , one has $G''(\overline{q}) = f'(\overline{q}) + D^2 = f'(\overline{q}) - \frac{f(\overline{q})}{\overline{q} - p^{\mathrm{ext}}} \gt f'(\overline{q}) + \frac{f(p^{\mathrm{ext}}) -f(\overline{q})}{\overline{q} - p^{\mathrm{ext}}} \geq 0.$ One can deduce that $\overline{q}$ is the unique value in $(0,1)$ such that $G'(\overline{q}) = 0$ and $G(\overline{q}) = \displaystyle \min _{[0,1]} G$ , so it satisfies Lemma 3.1.

Case 3: When $p^{\mathrm{ext}} \gt \theta$ , the proof is analogous to case 2 but using the concavity of $f$ in $(\alpha _2,1)$ . We obtain that there exists a unique value $\overline{q}$ in $(\theta,p^{\mathrm{ext}})$ that satisfies Lemma 3.1.
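Lemma 3.1 can be sanity-checked numerically: on a fine grid, $G'(q) = f(q) + D^2(q - p^{\mathrm{ext}})$ should change sign exactly once in $[0,1]$. The sketch below again uses the hypothetical cubic $f(p) = p(1-p)(p-\theta)$ with illustrative parameter values.

```python
# Illustrative parameters for a hypothetical cubic f(p) = p(1-p)(p-theta)
theta, D, p_ext = 0.3, 0.2, 0.5

def G_prime(q):
    # G'(q) = f(q) + D^2 (q - p_ext), as in the proof of Lemma 3.1
    return q * (1 - q) * (q - theta) + D**2 * (q - p_ext)

n = 10000
vals = [G_prime(i / n) for i in range(n + 1)]
# count sign changes of G' on [0, 1]; Lemma 3.1 predicts exactly one
sign_changes = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
```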

When $p^{\mathrm{ext}} = \theta$, it is easy to check that $p\equiv \theta$ is a solution of (1.2). We now analyse the two types of SM solutions (see Figure 2) in the following subsections.

3.1.1 Existence of SD solutions

In this part, the solution $p$ we study is symmetric on $(\!-\!L,L)$ and decreasing on $(0,L)$ (see Figure 2(a)), so that $ p(L) \lt p(x) \lt p(0)$ for any $x \in (0,L)$. From (3.3), we have $F(p(x)) \leq F(p(0))$, so $F'(p(0)) \geq 0$. This implies that $p(0) \in [\theta,1]$. We study the existence of SD solutions in two steps:

Step 1: Rewriting as a non-linear equation on $p(L)$

For any $q \in (\theta,1)$ , we have $F'(q) = f(q) \gt 0$ so $F|_{(\theta,1)} \;:\; (\theta, 1) \longrightarrow \left (F(\theta ), F(1)\right )$ is invertible. Define $F_1^{-1} \;:\!=\; (F|_{(\theta,1)})^{-1} \;:\; \left ( F(\theta ), F(1)\right ) \longrightarrow (\theta,1)$ , and $F_1^{-1}(F(\theta )) = \theta, F^{-1}_1(F(1)) = 1$ . Then, $F^{-1}_1$ is continuous in $[F(\theta ),F(1)]$ . For any $y \in \left ( F(\theta ),F(1)\right )$ , one has $\left ( F^{-1}_1\right )'(y) = \frac{1}{F'\left ( F^{-1}_1(y)\right )} = \frac{1}{f\left ( F^{-1}_1(y)\right )} \gt 0$ , so $F^{-1}_1$ is an increasing function in $\left ( F(\theta ), F(1)\right )$ . From (3.5) and (3.7), since $p$ is decreasing in $(0,L)$ , we have $L = \displaystyle \int _{p(L)}^{p(0)} \frac{ ds}{\sqrt{2G(p(L)) - 2F(s)}}$ . Denote

(3.8) \begin{equation} \mathcal{F}_1(q) \;:\!=\; \displaystyle \int _{q}^{F_1^{-1}(G(q))}\frac{ds}{\sqrt{2G(q) - 2F(s)}}. \end{equation}

Hence, an SD solution $p$ of system (1.2) has $p(0) = F^{-1}_1(G(p(L)))$ , and $p(L)$ satisfies

(3.9) \begin{equation} L = \mathcal{F}_1(p(L)). \end{equation}

Moreover, one has $p'(x) \leq 0$ for all $x \in (0,L)$ thus $-D(p(L) - p^{\mathrm{ext}}) = p'(L) \leq 0$ . One can deduce that

(3.10) \begin{equation} p(L) \geq p^{\mathrm{ext}}. \end{equation}

Step 2: Solving (3.9) in $[p^{\mathrm{ext}},1]$

The following proposition states the existence of a solution of (3.9).

Proposition 3.1. For any $D \gt 0, p^{\mathrm{ext}} \in (0,1)$ , we have

  1. If $0 \lt p^{\mathrm{ext}} \lt \theta$, then there exists a constant $M_1 \gt 0$ such that equation (3.9) has at least one solution $p(L) \geq p^{\mathrm{ext}}$ if and only if $L \geq M_1$.

  2. If $ \theta \leq p^{\mathrm{ext}} \lt 1$, then equation (3.9) admits at least one solution $p(L) \geq p^{\mathrm{ext}}$ for all $L \gt 0$. If $p^{\mathrm{ext}} \geq \alpha _2$, then this solution is unique.

Proof. Since $F_1^{-1}$ is only defined in $[F(\theta ),F(1)]$ , we need to find $p(L) \in [p^{\mathrm{ext}},1]$ such that $G(p(L)) \in [F(\theta ),F(1)]$ .

For all $q \in (0,1)$ , we have $G(q) \geq F(q) \geq F(\theta )$ and from Lemma 3.1, there exists a value $\overline{q} \in (0,1)$ such that $\displaystyle \min _{[0,1]} G = G(\overline{q}) \leq G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \lt \max _{[0,1]} F = F(1)$ . Moreover, one has $G(1) \gt F(1)$ ; thus, there exists a value $p^* \in (p^{\mathrm{ext}},1)$ such that $G(p^*) = F(1)$ . Then, for all $q \in [p^{\mathrm{ext}},p^*], G(q) \in [F(\theta ),F(1)]$ and we will find $p(L)$ in $[p^{\mathrm{ext}},p^*]$ . Since $F^{-1}_1$ increases in $(F(\theta ),F(1))$ , then $p(0) = F^{-1}_1(G(p(L))) \geq F^{-1}_1(F(p(L))) \geq p(L).$

Function $\mathcal{F}_1$ in (3.8) is well defined and continuous in $[p^{\mathrm{ext}},p^*)$, and $\mathcal{F}_1 \geq 0$ in $[p^{\mathrm{ext}},p^*)$. Moreover, since $F'(1) = 0$, one has $\displaystyle \lim _{p \rightarrow p^*}\mathcal{F}_1(p) = \displaystyle \int _{p^*}^{1} \frac{ds}{\sqrt{2F(1) - 2F(s)}} = +\infty$.

Case 1: If $0 \lt p^{\mathrm{ext}} \lt \theta$, we will prove that $\mathcal{F}_1$ is strictly positive in $[p^{\mathrm{ext}},p^*)$. Indeed, for any $y \in [0,1]$, if $y \lt \theta$, by the definition of $F^{-1}_1$, we have $F^{-1}_1(G(y)) \in [\theta,1]$ so $F^{-1}_1(G(y)) \gt y$. If $y \geq \theta \gt p^{\mathrm{ext}}$, then $G(y) = F(y) + \frac{1}{2} D^2(y - p^{\mathrm{ext}})^2 \gt F(y)$ so again $F^{-1}_1(G(y)) \gt y$. Hence, $\mathcal{F}_1(y) \gt 0$ for all $y \in [p^{\mathrm{ext}},p^*)$. Since $\mathcal{F}_1(p) \rightarrow +\infty$ when $p \rightarrow p^*$, there exists $p \in [p^{\mathrm{ext}},p^*)$ such that $M_1\;:\!=\; \mathcal{F}_1(p) = \displaystyle \min _{[p^{\mathrm{ext}},p^*]} \mathcal{F}_1 \gt 0$, and equation (3.9) admits at least one solution if and only if $L \geq M_1$.

Case 2: If $\theta \leq p^{\mathrm{ext}} \lt 1$ , one has $G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}})$ , then $F^{-1}_1(G(p^{\mathrm{ext}})) = p^{\mathrm{ext}}$ so $\mathcal{F}_1 (p^{\mathrm{ext}}) = 0$ . On the other hand, $\mathcal{F}_1(p) \rightarrow +\infty$ when $p \rightarrow p^*$ . Thus, for any $L \gt 0$ , there always exists at least one value $p(L) \in (p^{\mathrm{ext}},p^*)$ such that $\mathcal{F}_1(p(L)) = L$ .

Proof of uniqueness: When $p^{\mathrm{ext}} \geq \alpha _2$ , we can prove that $\mathcal{F}_1' \gt 0$ on $(p^{\mathrm{ext}},p^*)$ . Indeed, denoting $\gamma (q) = F_1^{-1}(G(q))$ and changing the variable from $s$ to $t$ such that $s = t\gamma (q) + (1-t)q$ , one has

\begin{equation*} \mathcal {F}_1(q) = \displaystyle \int _{0}^{1} \frac {[\gamma (q) - q]dt}{\sqrt {2F(\gamma (q)) - 2F(t\gamma (q) + (1-t)q)}}. \end{equation*}

To simplify, denote $s(q)= t\gamma (q) + (1-t)q$ . For any $t \in (0,1)$ , one has $q \lt s(q) \lt \gamma (q)$ . Let us define $\Delta F = F(\gamma (q)) - F(s(q))$ , then one has

$ \begin{array}{r l} \sqrt{2}\mathcal{F}_1'(q) & = \displaystyle \int _0^1 (\gamma '(q)-1) (\Delta F)^{-1/2} dt - \frac{1}{2} \int _0^1 (\Delta F)^{-3/2}(\gamma (q)-q)\frac{d\Delta F}{dq} dt\\[5pt] & = \displaystyle \int _0^1 (\Delta F)^{-3/2} \left [ (\gamma '(q)-1) \Delta F - \frac{1}{2} (\gamma (q)-q) (f(\gamma (q)) \gamma '(q) - f(s(q))s'(q)) \right ] dt. \end{array}$

Let $P$ denote the expression in the brackets; then

$ \begin{array}{r l} P & = (\gamma '-1) \Delta F - \frac{1}{2} (\gamma -q) \left [f(\gamma ) \gamma ' - f(s)(t\gamma ' + 1 - t)\right ]\\[5pt] & = (\gamma '-1) \left [\Delta F -\frac{1}{2}(\gamma -q)f(\gamma ) + \frac{1}{2}(s-q)f(s)\right ] - \frac{1}{2}(\gamma - q)(f(\gamma ) - f(s)). \end{array}$

Define $\psi (y) \;:\!=\; F(y) - \frac{1}{2}f(y)(y - q)$ for any $y \in [q,\gamma (q)]$ , then one has $\psi '(y) = \frac{1}{2}[f(y) - f'(y)(y-q)] \geq \frac{f(q)}{2}\gt 0$ since $y \geq q \gt p^{\mathrm{ext}} \geq \alpha _2$ and $f$ is concave in $(\alpha _2,1)$ , $f(q) \gt 0$ . Moreover, $f$ is decreasing on $(\alpha _2,1)$ so $0 \lt f(\gamma (q)) \lt f(s(q)) \lt f(q)$ , and $\gamma '(q) = \frac{G'(q)}{f(F_1^{-1}(G(q)))} = \frac{f(q) + D^2 (q - p^{\mathrm{ext}})}{f(\gamma (q))} \gt 1$ . Hence, we can deduce that $P = (\gamma '-1)(\psi (\gamma ) - \psi (s)) - \frac{1}{2}(\gamma - q)(f(\gamma ) - f(s)) \gt 0$ for any $t \in (0,1)$ . This proves that function $\mathcal{F}_1$ is increasing on $(p^{\mathrm{ext}},p^*)$ , so the solution of equation (3.9) is unique.
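The uniqueness case of Proposition 3.1 lends itself to a numerical sketch. Assuming the hypothetical cubic $f(p) = p(1-p)(p-\theta)$ and illustrative parameters with $p^{\mathrm{ext}} \geq \alpha_2$ (here $\alpha_2 \approx 0.73$), the time map $\mathcal{F}_1$ of (3.8) is increasing, so $L = \mathcal{F}_1(p(L))$ can be solved by bisection; the substitution $s = \gamma(q) - (\gamma(q)-q)u^2$ from the proof removes the square-root singularity at the upper limit.

```python
import math

# Hypothetical cubic f and illustrative parameters (p_ext >= alpha_2 ~ 0.73 here)
theta, D, p_ext, L = 0.3, 0.5, 0.75, 1.5

def F(q):
    # closed-form antiderivative of f(q) = q(1-q)(q-theta)
    return -q**4 / 4 + (1 + theta) * q**3 / 3 - theta * q**2 / 2

def G(q):
    return F(q) + 0.5 * D**2 * (q - p_ext) ** 2

def F1_inv(y):
    # inverse of F restricted to [theta, 1] (F is increasing there)
    lo, hi = theta, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def time_map(q, n=4000):
    # F_1(q) from (3.8); substitution s = gamma - (gamma - q) u^2 removes the
    # 1/sqrt singularity at s = gamma, then midpoint quadrature
    gam = F1_inv(G(q))
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        s = gam - (gam - q) * u * u
        total += 2 * u * (gam - q) / math.sqrt(2 * (G(q) - F(s))) / n
    return total

# p* solves G(p*) = F(1); G is increasing on (p_ext, 1) for these parameters
lo, hi = p_ext, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if G(mid) < F(1) else (lo, mid)
p_star = 0.5 * (lo + hi)

# bisection for F_1(p(L)) = L on (p_ext, p*): F_1 vanishes at p_ext and
# blows up at p*
lo, hi = p_ext + 1e-6, p_star - 1e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if time_map(mid) < L else (lo, mid)
pL = 0.5 * (lo + hi)
```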

3.1.2 Existence of SI solutions

In this case, the technique used to prove the existence of SI solutions is analogous to that for SD solutions, except when $p^{\mathrm{ext}} \gt \beta$ (case 3 below). Since the proof is not straightforward, it is worth re-establishing this technique for SI solutions in the following two steps:

Step 1: Rewriting as a non-linear equation on $p(L)$

Since now $p$ is symmetric on $(\!-\!L,L)$ and increasing in $(0,L)$ (see Figure 2(b)), we have $ p(0) \lt p(x) \lt p(L)$ for any $x \in (0,L)$. From (3.3), we have $F(p(x)) \leq F(p(0))$, so $F'(p(0)) \leq 0$. This implies that $p(0) \in [0,\theta ]$.

For any $q \in (0,\theta )$ , we have $F'(q) = f(q) \lt 0$ so $F|_{(0,\theta )} \;:\; (0,\theta ) \longrightarrow \left (F(\theta ), F(0)\right )$ is invertible. Define $F_2^{-1} \;:\!=\; (F|_{(0,\theta )})^{-1} \;:\; \left ( F(\theta ), F(0)\right ) \longrightarrow (0,\theta )$ , $F_2^{-1}(F(\theta )) = \theta, F^{-1}_2(F(0)) = 0$ , and $F^{-1}_2$ is continuous in $[F(\theta ),F(0)]$ . For any $y \in \left ( F(\theta ), F(0)\right )$ , $\left ( F^{-1}_2\right )'(y) = \frac{1}{F'\left ( F^{-1}_2(y)\right )} = \frac{1}{f\left ( F^{-1}_2(y)\right )} \lt 0$ , so $F^{-1}_2$ is a decreasing function in $\left ( F(\theta ), F(0)\right )$ . From (3.5) and (3.7), we have $L = \displaystyle \int _{p(0)}^{p(L)} \frac{ds}{\sqrt{2G(p(L)) - 2F(s)}}$ . Denote

(3.11) \begin{equation} \mathcal{F}_2(q) \;:\!=\; \displaystyle \int _{F^{-1}_2(G(q))}^{q} \frac{ds}{\sqrt{2G(q) - 2F(s)}}. \end{equation}

Hence, an SI solution of system (1.2) has $p(0) = F^{-1}_2(G(p(L)))$ , and $p(L)$ satisfies

(3.12) \begin{equation} L = \mathcal{F}_2(p(L)), \end{equation}

and in this case, one needs to find $p(L)$ in $[0,p^{\mathrm{ext}}]$ .

Step 2: Solving (3.12) in $[0,p^{\mathrm{ext}}]$

The following proposition states the existence of a solution of (3.12).

Proposition 3.2. For any $p^{\mathrm{ext}} \in (0,1)$ , considering the value $\beta$ as in (2.2), we have

  1. If $ 0 \lt p^{\mathrm{ext}} \leq \theta$, then equation (3.12) admits at least one solution $p$ with $p(L) \leq p^{\mathrm{ext}}$ for all $L \gt 0, D \gt 0$. If $p^{\mathrm{ext}} \leq \alpha _1$, this solution is unique.

  2. If $\theta \lt p^{\mathrm{ext}} \leq \beta$, then for all $D \gt 0$, there exists a constant $M_2 \gt 0$ such that equation (3.12) has at least one solution $p$ with $p(L) \leq p^{\mathrm{ext}}$ if and only if $L \geq M_2$.

  3. If $\beta \lt p^{\mathrm{ext}} \lt 1$, then there exists a constant $D_* \gt 0$ such that when $D \geq D_*$, equation (3.12) has no solution. Otherwise, there exists a constant $M_3 \gt 0$ such that equation (3.12) has at least one solution $p$ with $p(L) \leq p^{\mathrm{ext}}$ if and only if $L \geq M_3$.

Proof. Since we assume that $F(0) \lt F(1)$ and $F(\theta ) \lt F(0)$, the continuity of $F$ implies that there exists a value $\beta \in (\theta,1)$ such that $F(\beta ) = F(0) = 0$.

Since $F_2^{-1}$ is only defined in $[F(\theta ),F(0)]$, we need to find $p(L) \in [0,p^{\mathrm{ext}}]$ such that $G(p(L)) \in [F(\theta ),F(0)]$. For all $q \in (0,1)$, we have $G(q) \geq F(q) \geq F(\theta )$, thus equation (3.12) has solutions if and only if $\displaystyle \min _{[0,1]} G \lt F(0)$. Indeed, even in the limit case $\displaystyle \min _{[0,1]} G = G(\overline{q}) = F(0)$, there is no solution, since $\mathcal{F}_2 (\overline{q}) = +\infty$.

One has the following cases:

Case 1: $0 \lt p^{\mathrm{ext}} \leq \theta$ :

We have $\displaystyle \min _{[0,1]} G = G(\overline{q}) \leq G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \lt \max _{[0,\theta ]}F = F(0)$, and $G(0) \gt F(0)$, so there is a value $p_* \in (0,p^{\mathrm{ext}})$ such that $G(p_*) = F(0)$. Moreover, $F'(0) = 0$, hence $\displaystyle \lim _{p \rightarrow p_*}\mathcal{F}_2(p) = +\infty$. Thus, the function $\mathcal{F}_2$ is only well defined and continuous in $(p_*,p^{\mathrm{ext}}]$.

When $0 \lt p^{\mathrm{ext}} \leq \theta$ , $F^{-1}_2(G(p^{\mathrm{ext}})) = F^{-1}_2(F(p^{\mathrm{ext}})) = p^{\mathrm{ext}}$ so $\mathcal{F}_2 (p^{\mathrm{ext}}) = 0$ . We can deduce that for any $L \gt 0$ , there always exists at least one value $p(L) \in (p_*,p^{\mathrm{ext}})$ such that $\mathcal{F}_2(p(L)) = L$ . When $p^{\mathrm{ext}} \leq \alpha _1$ , arguing analogously to the second case of Proposition 3.1, one has $\mathcal{F}_2' \lt 0$ on $(p_*,p^{\mathrm{ext}})$ , thus the solution is unique.

Case 2: $\theta \lt p^{\mathrm{ext}} \leq \beta$ :

Since $F$ increases on $(\theta,1)$, we have $\displaystyle \min _{[0,1]} G = G(\overline{q}) \lt G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \leq F(\beta ) = F(0)$. Analogously to the previous case, $\mathcal{F}_2$ is well defined and continuous in $(p_*,p^{\mathrm{ext}}]$, $\displaystyle \lim _{p \rightarrow p_*}\mathcal{F}_2(p) = +\infty$, and $\mathcal{F}_2$ is strictly positive in $(p_*,p^{\mathrm{ext}}]$. Therefore, there exists $p \in (p_*,p^{\mathrm{ext}}]$ such that

(3.13) \begin{equation} M_2\;:\!=\; \mathcal{F}_2(p) = \min _{[p_*,p^{\mathrm{ext}}]} \mathcal{F}_2 \gt 0, \end{equation}

and equation (3.12) admits at least one solution if and only if $L \geq M_2$.

Case 3: $\beta \lt p^{\mathrm{ext}} \lt 1$ :

Consider the function $H(q) = F(q) + \frac{1}{2} f(q)(p^{\mathrm{ext}} - q)$ defined on the interval $[\theta,p^{\mathrm{ext}}]$. For any $\theta \lt q \lt p^{\mathrm{ext}}$, one can prove that $H'(q) \gt 0$.

Indeed, if $q \leq \alpha _2$ , then $f'(q) \geq 0$ , and $f(q) \gt 0$ . One has $H'(q) = \frac{1}{2} f(q) + \frac{1}{2}f'(q)(p^{\mathrm{ext}} - q) \gt 0$ . If $q \gt \alpha _2$ , from Assumption 2.2, the function $f$ is concave in $(\alpha _2,1)$ , and hence $f'(q)(p^{\mathrm{ext}} - q) \geq f(p^{\mathrm{ext}}) - f(q)$ . Thus,

\begin{equation*} H'(q) = \frac {1}{2} (p^{\mathrm {ext}} - q) \left (f'(q) + \frac {f(q)}{p^{\mathrm {ext}} - q}\right ) \gt \frac {1}{2} (p^{\mathrm {ext}} - q) \left (f'(q) + \frac {f(q) - f(p^{\mathrm {ext}})}{p^{\mathrm {ext}} - q}\right ) \geq 0.\end{equation*}

Therefore, function $H$ increases in $(\theta,p^{\mathrm{ext}})$ . Moreover, $H(\theta ) = F(\theta ) \lt F(0)$ and $H(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \gt F(\beta ) = F(0)$ , and so there exists a unique value $\overline{p}_* \in (\theta,p^{\mathrm{ext}})$ such that $H(\overline{p}_*) = F(0)$ . Take $D_* \gt 0$ such that $D_*^2 = \frac{f(\overline{p}_*)}{p^{\mathrm{ext}} - \overline{p}_*}$ . Then, for any $D \gt 0$ , from Lemma 3.1, there is a unique value $\overline{q} \in (\theta,p^{\mathrm{ext}})$ such that $G'(\overline{q}) = 0$ , $G(\overline{q}) = \displaystyle \min _{[0,1]}G$ , and $D^2 = \frac{f(\overline{q})}{p^{\mathrm{ext}} - \overline{q}}$ . If $D \lt D_*$ , then $\frac{f(\overline{q})}{p^{\mathrm{ext}} - \overline{q}} \lt \frac{f(\overline{p}_*)}{p^{\mathrm{ext}} - \overline{p}_*}$ .

Let $h(q) = \frac{f(q)}{p^{\mathrm{ext}} - q}$ , then $h'(q) = \frac{1}{p^{\mathrm{ext}} - q} \left ( f'(q) + \frac{f(q)}{p^{\mathrm{ext}} - q} \right ) \gt 0$ for $q \in (\theta, p^{\mathrm{ext}})$ . So function $h$ is increasing in $(\theta, p^{\mathrm{ext}})$ , and we can deduce that $\overline{q} \lt \overline{p}_*$ . Hence, $\displaystyle \min _{[0,1]}G = G(\overline{q}) = F(\overline{q}) + \frac{1}{2}D^2(p^{\mathrm{ext}} - \overline{q})^2 = F(\overline{q}) + \frac{1}{2}f(\overline{q})(p^{\mathrm{ext}}- \overline{q}) = H(\overline{q}) \lt H(\overline{p}_*) = F(0)$ .

Moreover, $G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \gt F(\beta ) = F(0)$, $G(0) \gt F(0)$. Thus, there exists a maximal interval $(q_*,q^*) \subset [0,p^{\mathrm{ext}}]$ such that $G(q) \in (F(\theta ),F(0))$ for all $q \in (q_*,q^*)$. We have $0 \lt q_* \lt \overline{q} \lt q^* \lt p^{\mathrm{ext}}$ and $G(q_*) = G(q^*) = F(0)$. Therefore, $\mathcal{F}_2$ is well defined and continuous in $(q_*,q^*)$, and $ \displaystyle \lim _{p \rightarrow q^*}\mathcal{F}_2(p) = \lim _{p \rightarrow q_*} \mathcal{F}_2(p) = +\infty$. Reasoning as in the previous case, (3.12) admits a solution if and only if $L \geq M_3$, where

(3.14) \begin{equation} M_3 \;:\!=\; \displaystyle \min _{[q_*,q^*]} \mathcal{F}_2 \gt 0. \end{equation}

On the other hand, if $D \geq D_*$ , $\displaystyle \min _{[0,1]} G \geq F(0)$ , and equation (3.12) has no solution.
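The threshold $D_*$ of case 3 can be computed numerically. The sketch below again assumes the hypothetical cubic $f(p) = p(1-p)(p-\theta)$, for which $\beta \approx 0.48$, and an illustrative $p^{\mathrm{ext}} \gt \beta$; it finds $\overline{p}_*$ from $H(\overline{p}_*) = F(0) = 0$ and checks that $\min_{[0,1]} G \lt F(0)$ precisely when $D \lt D_*$.

```python
# Hypothetical cubic f(p) = p(1-p)(p-theta); illustrative p_ext > beta ~ 0.48
theta, p_ext = 0.3, 0.6

def f(q):
    return q * (1 - q) * (q - theta)

def F(q):
    # closed-form antiderivative of f
    return -q**4 / 4 + (1 + theta) * q**3 / 3 - theta * q**2 / 2

def H(q):
    # H from case 3 of the proof of Proposition 3.2
    return F(q) + 0.5 * f(q) * (p_ext - q)

# H increases on (theta, p_ext); solve H(p_bar) = F(0) = 0 by bisection
lo, hi = theta, p_ext
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if H(mid) < 0 else (lo, mid)
p_bar = 0.5 * (lo + hi)
D_star = (f(p_bar) / (p_ext - p_bar)) ** 0.5

def min_G(D, n=20000):
    # grid minimum of G over [0, 1]
    return min(F(i / n) + 0.5 * D**2 * (i / n - p_ext) ** 2 for i in range(n + 1))
```

Below the threshold, $\min_{[0,1]} G \lt F(0) = 0$ and SI solutions exist for $L$ large; above it, the minimum exceeds $F(0)$ and no SI solution exists, in line with Proposition 3.2.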

Proof of Theorem 2.2. As we showed in Section 3.1.1, the SD steady-state solution $p$ of (1.2) has $p(L)$ satisfying equation (3.9). From Proposition 3.1, we can deduce that for fixed $p^{\mathrm{ext}} \in (0,1), D \gt 0$ , $M_d(p^{\mathrm{ext}}, D) = \displaystyle \min _q \mathcal{F}_1(q)$ . Thus, we obtain the results for SD steady-state solutions of (1.2) in Theorem 2.2.

Similarly, Proposition 3.2 provides that for fixed $p^{\mathrm{ext}} \in (0,1), D \gt 0$ , we have $M_i(p^{\mathrm{ext}}, D) = \displaystyle \min _q \mathcal{F}_2(q)$ when $p^{\mathrm{ext}} \leq \beta$ or $D \lt D_*$ . Otherwise, $M_i(p^{\mathrm{ext}}, D) = +\infty$ .

3.1.3 Existence of non-SM solutions

As we can see in the phase portrait in Figure 3, there exist some solutions of (1.2) which are neither SD nor SI. These solutions can be non-symmetric or can have more than one (local) extremum. By studying these cases, we prove Theorem 2.3 as follows.

Proof of Theorem 2.3. We can see from Figure 3(a) that for fixed $p^{\mathrm{ext}} \leq \beta, D \gt 0$, the non-SM solutions $p$ of (1.2) have more than one (local) extremum because their orbits have at least two intersections with the axis $p' = 0$ (see e.g. $T_3$ ). Those solutions share the same local minimum value, denoted $p_{\mathrm{min}}$, and the same local maximum value, denoted $p_{\mathrm{max}}$. Moreover, we have $p_{\mathrm{min}} \lt \theta \lt p_{\mathrm{max}}$, and $F(p_{\mathrm{min}}) = F(p_{\mathrm{max}})$.

Since the orbits make a round trip of length $2L$, the more extrema a solution has, the larger $L$ is. Hence, to find the minimal value $M_*$, we study the case when $p$ has one local minimum and one local maximum, with orbit $T_3$ in Figure 3(a). Then, we have

(3.15) \begin{equation} G(p(\!-\!L)) = G(p(L)) = F(p_{\mathrm{min}}) = F(p_{\mathrm{max}}), \end{equation}

and by using (3.4), we obtain

\begin{equation*} 2L = \mathcal {F}_1(p(\!-\!L)) + \displaystyle \int _{p_{\mathrm {min}}}^{p_{\mathrm {max}}} \frac {ds}{\sqrt {2F(p_{\mathrm {min}})- 2F(s)}} + \mathcal {F}_2(p(L)) = 2\left [\mathcal {F}_1(p(\!-\!L)) + \mathcal {F}_2(p(L))\right ] + \displaystyle \int _{p(L)}^{p(\!-\!L)} \frac {ds}{\sqrt {2G(p(L))- 2F(s)}}. \end{equation*}

Using the same idea as above, we can show that $L$ depends continuously on $p(L)$ . Moreover, we know that $M_d = \min \mathcal{F}_1,$ $M_i = \min \mathcal{F}_2$ ; therefore, there exists a constant $M_*$ such that (1.2) admits at least one non-SM solution $p$ if and only if $L \geq M_* \gt M_d + M_i$ .

On the other hand, for fixed $p^{\mathrm{ext}} \gt \beta, D \lt D_*$, it is possible that (1.2) admits a non-symmetric solution with only one minimum. The orbit of this solution is like $T_4$ in Figure 3(b). In this case, we have $G(p(L)) = G(p(\!-\!L)) = F(p_{\mathrm{min}})$ with $p(\!-\!L) \lt p(L)$ and

\begin{equation*} 2L = \mathcal {F}_2(p(\!-\!L)) + \mathcal {F}_2(p(L)) \gt 2M_i. \end{equation*}

Hence, in this case we only obtain $M_* \gt M_i$.

3.2 Stability analysis

We first study the principal eigenvalue and eigenfunction of the linear problem. Then, using these eigenelements, we construct super- and sub-solutions of (1.1) and prove the stability or instability corresponding to each case in Theorem 2.5.

Proof of Theorem 2.5. Consider the corresponding linear eigenvalue problem (2.8). We can see that $\phi = \cos{\left (\sqrt{\lambda }x\right )}$ is an eigenfunction if and only if $\sqrt{\lambda }\tan{\left (L\sqrt{\lambda }\right )} = D$. Denote by $\lambda _1$ the smallest positive value of $\lambda$ satisfying this equality; thus $L\sqrt{\lambda _1} \in \left ( 0, \frac{\pi }{2}\right )$ and hence $\lambda _1 \in \left (0, \frac{\pi ^2}{4L^2} \right )$. Moreover, for any $x \in (\!-\!L,L)$, the corresponding eigenfunction $\phi _1(x) = \cos{\left ( \sqrt{\lambda _1}x \right )}$ takes values in $(0,1)$.
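The characteristic equation $\sqrt{\lambda}\tan(L\sqrt{\lambda}) = D$ is easy to solve numerically: with $\mu = L\sqrt{\lambda}$, the map $\mu \mapsto \mu\tan\mu$ increases from $0$ to $+\infty$ on $(0,\pi/2)$, so bisection applies. The values of $L$ and $D$ below are illustrative choices.

```python
import math

# Illustrative domain half-length and migration rate (not values from the paper)
L, D = 1.0, 1.0

def g(mu):
    # mu*tan(mu) - D*L, increasing from -D*L to +infinity on (0, pi/2)
    return mu * math.tan(mu) - D * L

lo, hi = 1e-12, math.pi / 2 - 1e-12
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
mu = 0.5 * (lo + hi)
lam1 = (mu / L) ** 2   # principal eigenvalue, lies in (0, pi^2 / (4 L^2))
```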

Proof of stability: Now let $p$ be a steady-state solution of (1.1) governed by (1.2). First, we prove that if $f'(p(x)) \lt \lambda _1$ for any $x \in (\!-\!L,L)$ , then $p$ is asymptotically stable. Indeed, since $f'(p(x)) \lt \lambda _1$ , there exist positive constants $\delta, \gamma$ with $\gamma \lt \lambda _1$ such that for any $\eta \in [0,\delta ]$ ,

(3.16) \begin{equation} f(p+\eta ) - f(p) \leq (\lambda _1 - \gamma ) \eta, \qquad f(p) - f(p-\eta ) \leq (\lambda _1 - \gamma ) \eta, \end{equation}

on $(\!-\!L,L)$ . Now consider

\begin{equation*} \overline {p}(t,x) = p(x) + \delta e^{-\gamma t} \phi _1(x), \qquad \underline {p}(t,x) = p(x) - \delta e^{-\gamma t} \phi _1(x). \end{equation*}

Assume that $p^{\mathrm{init}}(x) \leq p(x) + \delta \phi _1(x)$ . Then by (3.16), we have that $\overline{p}$ is a super-solution of (1.1) because

\begin{equation*} \partial _t\overline {p} - \partial _{xx}\overline {p} = (\lambda _1 - \gamma ) \delta e^{-\gamma t} \phi _1(x) + f(p) \geq f(p + \delta e^{-\gamma t} \phi _1(x)) = f(\overline {p}), \end{equation*}

due to the fact that $ 0 \lt \delta e^{-\gamma t} \phi _1(x) \lt \delta$ for any $t \gt 0$ , $x \in (\!-\!L,L)$ . Moreover, at the boundary points one has $\frac{\partial \overline{p}}{\partial \nu } + D(\overline{p} - p^{\mathrm{ext}}) = \frac{\partial p}{\partial \nu } + D(p - p^{\mathrm{ext}}) = 0.$

Similarly, if $p^{\mathrm{init}}(x) \geq p(x) - \delta \phi _1(x)$, then $\underline{p}$ is a sub-solution of (1.1). Then, by the method of super- and sub-solutions (see e.g. [Reference Pao19]), the solution $p^0$ of (1.1) satisfies $\underline{p} \leq p^0 \leq \overline{p}$. Hence, $ |p^0(t,x) - p(x)| \leq \delta e^{-\gamma t}\phi _1(x)$. Therefore, we can conclude that, whenever $|p^{\mathrm{init}}(x) - p(x)| \leq \delta \phi _1(x)$ for any $x \in (\!-\!L,L)$, the solution $p^0$ of (1.1) converges to the steady state $p$ as $t \rightarrow +\infty$. This shows the stability of $p$.
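The convergence asserted by the stability argument can also be observed by direct simulation. The sketch below integrates (1.1) by explicit finite differences, assuming the hypothetical cubic $f(p) = p(1-p)(p-\theta)$ and illustrative parameter values, and encodes the Robin condition through ghost points; the residual $\max_i |p_i^{n+1} - p_i^n|/\Delta t$ decays as the solution approaches a steady state.

```python
# Explicit finite-difference sketch of (1.1) with Robin boundary conditions;
# hypothetical cubic f and illustrative parameters throughout
theta, D, p_ext, L = 0.3, 1.0, 0.5, 1.0

N = 41               # grid points on [-L, L]
h = 2 * L / (N - 1)
dt = 0.4 * h * h     # explicit Euler needs dt <= h^2/2

def f(q):
    return q * (1 - q) * (q - theta)

def step(p):
    # ghost values encode dp/dnu + D (p - p_ext) = 0 at x = -L and x = L
    left = p[1] - 2 * h * D * (p[0] - p_ext)
    right = p[N - 2] - 2 * h * D * (p[N - 1] - p_ext)
    q = [0.0] * N
    for i in range(N):
        pm = left if i == 0 else p[i - 1]
        pp = right if i == N - 1 else p[i + 1]
        q[i] = p[i] + dt * ((pm - 2 * p[i] + pp) / (h * h) + f(p[i]))
    return q

p = [p_ext] * N      # start from the constant exterior proportion
res0 = res = None
for _ in range(int(20 / dt)):
    q = step(p)
    res = max(abs(q[i] - p[i]) for i in range(N)) / dt
    if res0 is None:
        res0 = res
    p = q
```

The comparison principle keeps the discrete solution between the constant sub- and super-solutions $0$ and $1$, and the decreasing residual reflects the exponential convergence rate $\gamma$ of the proof.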

Proof of instability: In the case when $f'(p(x)) \gt \lambda _1$ , there exist positive constants $\delta, \gamma$ , with $\gamma \lt \lambda _1$ , such that for any $\eta \in [0,\delta ]$ ,

(3.17) \begin{equation} f(p+\eta ) - f(p) \geq (\lambda _1 + \gamma ) \eta, \end{equation}

on $(\!-\!L,L)$ .

For any $p^{\mathrm{init}} \gt p$ , there exists a positive constant $\sigma \lt 1$ such that $p^{\mathrm{init}} \geq p + \delta (1- \sigma )$ . Then $\widetilde{p}(t,x) = p(x) + \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x)$ , with $\gamma ' \lt \gamma$ small enough, is a sub-solution of (1.1). Indeed, by applying (3.17) with $\eta = \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x) \in [0,\delta ]$ for any $x \in (\!-\!L,L)$ , we have

\begin{equation*} \partial _t \widetilde {p} - \partial _{xx} \widetilde {p} = \gamma ' \delta \sigma e^{-\gamma ' t} \phi _1(x) + \lambda _1 \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x) + f(p) \leq f(p + \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x)) \end{equation*}

if $\gamma \geq \frac{\gamma '\sigma e^{-\gamma ' t}}{1 - \sigma e^{-\gamma ' t}} = \frac{\gamma ' \sigma }{e^{\gamma ' t} - \sigma }$ for any $t \geq 0$ . This inequality holds when we choose $\gamma ' \leq \frac{\gamma (1-\sigma )}{\sigma }$ . Now, we have that $\widetilde{p}$ is a sub-solution of (1.1), thus for any $t \geq 0, x \in (\!-\!L,L)$ , the corresponding solution $p^0$ satisfies

\begin{equation*} p^0(t,x) - p(x) \geq \tilde p(t,x)-p(x) \geq \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x). \end{equation*}

Hence, for a given positive $\epsilon \lt \delta \displaystyle \min _{x} \phi _1(x)$